OpenStack Summit is back in Austin - Day 2

This is where it began, 6 years ago, on Rackspace's home turf, with the first summit: 75 participants met in a hotel conference room back then. I have been to 4 summits – my first was San Francisco (the Grizzly design summit), and I was actually overwhelmed there by seeing ~1000 attendees and giving a keynote in front of them about our OpenStack-based hosting platform for Business Marketplace. This can't possibly grow much more, the hype must blow over, I thought. How wrong I was – growth has been steady, and this time we see ~7500 attendees from all over the world in Austin. And it's not a marketing show – there are thousands of developers and operators here, and hundreds of them present in one of the many tracks. I have been to large shows before (such as LWE in Moscone), but the developer-centric events in the Linux space (which is where I come from) have been a lot smaller.
Both Jonathan and Mark gave a piece of the answer to this when leading through the keynotes on Monday and Tuesday: OpenStack is a collaborative open source project that embraces the diverse needs and cultures that people bring into it. No single dominant paradigm for using the cloud has emerged (the way LAMP dominated the pre-cloud world), and probably none will.
Many people use containers with Kubernetes or Mesos, or use PaaS frameworks such as CloudFoundry and OpenShift. But it's a diverse universe. Jonathan and Mark have an important point – the goals and technology may morph over time, but the OpenStack Umbrella (or should I say the Big OpenStack Tent?) facilitates that happening inside the project.
It's hard to do justice to the program here – I can just highlight a few general observations and some of the keynotes, and touch on some of the sessions that Clemens Hardewig, Götz Reinhäckel, Andreas Falkner or I were able to attend.
I hope to spark some curiosity to have a look at the videos: the talks have been taped and the videos are available on openstack.org/. And of course the developer sessions have their usual etherpads.
There have been two really cool demos:
The citycloud project had a number of sensors installed in the Austin convention center; the data is collected by a Raspberry Pi here and then sent over to a container cloud in the Czech Republic. We were able to query the real-time status and look at the history.
The GIFEE (Google Infrastructure For Everyone Else) project (CoreOS) – they put the OpenStack nodes in containers and let the container management engine take care of handling failures and deploying new hosts – killing containers on stage and watching new ones come up to replace the old ones. A self-healing control plane, really cool. The presenter also demonstrated rolling out a change along the way.
I hate to write it, but one thing that is hard not to notice is that the US is really ahead of Europe w.r.t. the adoption of OpenStack and scale-out cloud in general. Talking to companies in the marketplace about their plans for Europe, comparing the state of cloud transformation, and looking at the number of OpenStack experts in Europe (this hurts us currently!), it's fair to say that we have a lot of ground to cover in Europe.
I definitely want to help make that happen. We saw a video about an OpenStack hackathon in Taiwan – hundreds of engineers creating cool solutions on OpenStack – the winners won an invitation to the OpenStack summit. We should do something like this in Germany! Stay posted! Anyway, next up are the "Deutsche OpenStack Tage" in Cologne (Jun 21+22), and I'm looking forward to it. I hope to see some new faces and new OpenStack experts there – and maybe even someone willing to be recruited by me :-).
I participated in some design summit developer sessions – they have not changed. The moderators (typically core people from the relevant project) prepare the topics and questions in an etherpad and then help make sure the topics are discussed and covered in a structured way. The goal is to reach agreement with the active people in the room, record it, and plan the activities and priorities for the next cycle (6 months). This is engineering collaboration at work and fun to participate in.
So how do you get thousands of attendees into one party? The short answer is that you don't – with the exception of Paris, where there was one huge party (in addition to several smaller ones, of course). Instead there is a good set of sponsored smaller parties that you need to register for (and that always sell out). Some folks spend considerable effort finding out which ones are the coolest and then do the land grab for tickets. I am not one of them and most of the time had to rely on people I know to get me into one event or another anyway.
But this time the social event evening was done differently. The foundation blocked off a street with a dozen bars (each sponsored by a different sponsor) and you signed up for the overall event. And it was very nice: places where you could sit outside, have a drink and some food, and discuss life and technology with the people you ran into. No segmentation by party preference. And taking advantage of the warm weather – really relaxing!
I think Donna has a very good point – many companies have various IT solutions in place, and the requirements may differ significantly: innovation and agility may be the most important design criteria for one project, while for another, working reliably at least 99.99% of the time may be the paramount objective.
There are tradeoffs; you can't have everything. You should treat these modes differently, and you need different processes, structures and mindsets to be successful. The latter is what makes coexistence hard. I think Donna could have focused a bit more on what this means for the IT infrastructure and platforms supporting it. While it's obvious that a highly agile development model does not fit static classical enterprise IT well, the analogous conclusion for highly reliable applications on a dynamic (scale-out cloud) infrastructure is not so simple.
Sure, if you just put a classical application on a scale-out (public) cloud platform, you won't achieve your 4+ nines. The cloud does not invest in all the expensive complexity needed to give every virtual component the illusion of being highly available. That's why it's significantly cheaper and more flexible than Enterprise IT, after all.
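The arithmetic behind those nines is worth making concrete. A minimal sketch – the 99% per-instance figure is an assumed illustration, not any provider's real SLA: if any one of a few independent replicas is enough to keep serving, the combined availability rises steeply above what a single (even pampered) instance can offer.

```python
def yearly_downtime_hours(availability):
    """Expected downtime per year, in hours, for a given availability."""
    return (1 - availability) * 365 * 24

single = 0.99                   # assumed: one cloud instance is up 99% of the time
trio = 1 - (1 - single) ** 3    # any one of three independent replicas suffices

print(yearly_downtime_hours(single))  # ~87.6 hours/year for the single instance
print(yearly_downtime_hours(trio))    # ~0.009 hours/year (six nines) for the trio
```

The caveat, of course, is the independence assumption – correlated failures (a shared switch, a bad deploy) are what eat your nines in practice.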
But if your application is designed to deal with failures gracefully by automating the reactions to failed virtual infrastructure (or just to higher load), you can achieve even higher availability than in the classical model. Read up on Pets vs. Cattle (Randy Bias), the Chaos Monkey, and Cloud-Native Applications to learn how to do this. Transforming applications this way is hard work, but the benefits are significant – a lot more flexibility (if you want it) and often multiple times lower infrastructure and operations cost.
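The "automate the reaction to failure" idea boils down to a tiny supervisor loop. A hedged sketch – `check_health` and `replace_instance` here are hypothetical stand-ins for whatever your cloud API (e.g. the OpenStack SDK) provides, not real library calls:

```python
def heal(instances, check_health, replace_instance):
    """Return the fleet with every failed instance replaced (cattle, not pets)."""
    return [inst if check_health(inst) else replace_instance(inst)
            for inst in instances]

# In production this would run periodically, e.g.:
#   while True:
#       fleet = heal(fleet, check_health, replace_instance)
#       time.sleep(30)

# Tiny simulation with stand-in functions:
fleet = ["vm-1", "vm-2-failed", "vm-3"]
healthy = lambda inst: "failed" not in inst         # pretend health check
respawn = lambda inst: inst.replace("-failed", "-new")  # pretend re-provisioning
print(heal(fleet, healthy, respawn))  # ['vm-1', 'vm-2-new', 'vm-3']
```

This is essentially what the CoreOS demo above showed at the control-plane level: nobody nurses a sick instance back to health; it is shot and a fresh one takes its place.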