OpenStack in the Fast Lane: How to get high performance from your OpenStack cloud?

It’s been a while since I blogged, and a lot has changed since my last post. I’ve gotten married, moved to a different state, and travelled quite a bit for work and pleasure. In August, I had the opportunity to travel to Bangalore and present at Devconf India, the second edition of the hugely popular and free open source conference that started out in the Czech Republic and has now expanded to India. I proposed a talk titled OpenStack in the Fast Lane. The idea behind the talk was to introduce the audience to a set of configuration options hidden in the gazillion OpenStack service configuration files that they can simply toggle to get better performance out of the box. Wouldn’t it be cool if you knew you could change one simple option in a file to get a cloud that performs better? The inspiration for the talk came from Assaf Muller’s blog post (by the way, you should totally check out his blog) about how developers sometimes pass the complexity of architectural choices down to users, at the cost of documenting and testing every possible configuration. I wanted to approach this topic from a performance perspective. Also, not everyone has the expertise of a performance engineer, so a knob that gets you better performance and scale for your cloud is an operator’s dream come true.



The presentation was divided into several sections, each going over a particular component or service in OpenStack, along with a brief introduction to the tooling used. I started by talking about TripleO, aka Director, a feature-rich and extensible installation and lifecycle management tool for OpenStack. For those of you who are aware of the TripleO architecture, there is an undercloud, an all-in-one OpenStack machine used as a jumphost to drive the installation of the overcloud, which is the actual cloud people refer to when they say OpenStack. Since creating the overcloud from the undercloud involves creating several resources through heat, keystone is used extensively for authentication when these resources are created. Hence, one of the configuration options that can be tuned for faster overcloud deployments is the keystone process count on the undercloud. While the number of threads can also be tweaked, python is not really a multi-threaded language thanks to the GIL. With OpenStack being written in python, one is better off tuning the process count as a rule of thumb for most services. Be warned, though, that every tuning comes with a cost (more resource consumption in this case, due to more processes) and you have to deal with the tradeoffs. In a subsequent slide, I also talk about reducing ansible memory consumption on the undercloud by limiting the number of forks that ceph-ansible, driven by TripleO, spawns for Ceph configuration. A default of 25 is recommended in this case (details on how exactly to configure this are in the slides linked).
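On recent TripleO releases keystone runs under Apache mod_wsgi, so the process count lives in the WSGI daemon definition rather than in keystone.conf. A hedged sketch of the relevant directive (the file path and process count below are illustrative; TripleO generates this file via puppet, so check your release):

```apache
# /etc/httpd/conf.d/10-keystone_wsgi_main.conf (generated; path may vary by release)
# processes= is the knob worth raising; threads=1 stays put because of the GIL
WSGIDaemonProcess keystone_main processes=8 threads=1 user=keystone group=keystone
```

Raising processes trades memory for concurrency, so watch undercloud RAM after bumping it.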


I also talked quite a bit about neutron tuning, since I have a special place for SDN in my heart 🙂 Some decent performance gains can be had by using distributed routing, aka DVR, and by using the openvswitch firewall driver instead of the default iptables_hybrid driver, since the former natively implements security groups as OpenFlow flows in OVS, as opposed to routing traffic through a linux bridge with iptables.
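Concretely, both knobs are plain ini settings; the file paths below are the usual defaults, so adjust them for your deployment:

```ini
# /etc/neutron/neutron.conf on the controllers:
# make newly created routers distributed (DVR)
[DEFAULT]
router_distributed = true

# /etc/neutron/plugins/ml2/openvswitch_agent.ini on the compute nodes:
# implement security groups as OpenFlow flows in OVS instead of
# iptables rules on an intermediate linux bridge
[securitygroup]
firewall_driver = openvswitch
```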


One of the most interesting and engaging things about proposing a new talk for a conference is that you get to explore and learn various things along the way as you prepare the content. In preparation for this talk, as I was combing through several OpenStack configuration files, I found one promising option in nova.conf that I had not previously explored: preallocate_images, which defaults to none, meaning no space is allocated upfront for the VM. By toggling this option to space, all of the storage for the VM is allocated upfront, as noted in the comment in the configuration file. With preallocate_images=space, fallocate is called to provision file system blocks efficiently when the VM is initially provisioned, which means improved I/O performance. But it is one thing to say something and a totally different thing to get the data to back it up. In the process of gathering performance data to support this argument, I got some much needed and long overdue experience with fio. You can pass a job file to fio, or call fio on the command line with a set of options based on how much you want to stress the system and in what pattern. The command I used from inside the VM after installing fio was
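For reference, the toggle itself is a one-liner in nova.conf on the compute nodes (valid values are none and space):

```ini
# /etc/nova/nova.conf
[DEFAULT]
# none (default): allocate instance disk space lazily, on first write
# space: fallocate the full disk up front for better, more predictable I/O
preallocate_images = space
```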

fio --name randwrite --rw=randwrite --size=2G --direct=1

The above command instructs fio to run one job with the randwrite I/O pattern, for a total size of 2G with the default block size. The direct=1 flag is of particular interest, as it instructs fio to bypass the OS buffer cache and stress the I/O devices more directly. I ran the exact same test from inside the VM for each of the two scenarios. This was all good learning for me. There are tons of other cool fio options, like iodepth, that you can use to vary the stress and pattern of your storage tests. Using data from fio, I was able to show that setting preallocate_images to space instead of none did indeed improve performance significantly (1500 IOPS vs 900 IOPS).
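The same run can also be expressed as a fio job file, which is easier to version and extend. Here is a sketch that adds iodepth; the ioengine and iodepth values are my own additions for illustration, not part of the original test:

```ini
; randwrite.fio -- run with: fio randwrite.fio
[randwrite]
rw=randwrite        ; random-write I/O pattern
size=2G             ; total amount of I/O for the job
direct=1            ; bypass the page cache (O_DIRECT)
ioengine=libaio     ; async engine, so iodepth actually queues I/Os
iodepth=16          ; keep up to 16 I/Os in flight
```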


The later sections of the talk went into tuning some OVS parameters for scaling OpenDaylight-based OpenStack, getting better performance from the Telemetry services, etc. The last thing I talked about was putting the Swift service on a disk separate from the root disk on the OpenStack controller nodes, if deploying swift along with the controller services (which is the default way TripleO/Director deploys). The reason is that swift services like the auditor do periodic reads of the disk to ensure data integrity, which can severely bottleneck the controllers if an I/O-heavy operation, like launching hundreds of guest VMs, is happening concurrently.
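With TripleO, one way to express this is the SwiftRawDisks parameter in a custom environment file, which dedicates raw block devices on the controllers to Swift object storage. The device name below is an example; double-check the parameter against the TripleO docs for your release:

```yaml
# swift-disks.yaml -- pass with: openstack overcloud deploy ... -e swift-disks.yaml
parameter_defaults:
  SwiftRawDisks:
    sdb: {}   # dedicate /dev/sdb on the controllers to Swift
```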


Overall, the conference was well attended, with 1323 attendees showing up from various companies including several startups (truly happy about the startup situation in Bangalore). There were some decent Linux performance tracing/tuning talks that I went to, delivered by some folks from IBM. In addition to this talk, I did an RDO Birds of a Feather (BOF) session about OpenStack deployment best practices. I was honestly impressed by the level of attention the audience paid, as I got several questions both during the BOF session and during my talk, on topics ranging from nova scheduler tuning to neutron firewall drivers.


Link to slides



