
10 Deploys A Day

30 March 2014

This post was inspired by Gene Kim's talk on continuous delivery and his eye opening book The Phoenix Project.

What is continuous delivery? It's a methodology that lets you get new features and capabilities to market quickly and reliably. It's a practice that reduces work in progress and allows for rapid feedback on your new features or improvements. It's automation of many moving parts, including testing, creation of environments, and one-button-push deployments. It's a magical unicorn that allows companies to deploy 10 times a day without much effort or risk of breaking stuff!

The reason his story strikes a chord with me is that I strongly believe automation can streamline software development, make employees happy, and help organizations become high performers. And automation is a big part of the continuous delivery methodology. Throughout my career, I've participated in many painful deployments that lasted more than a day and usually stretched through the weekend. Nobody should have to be part of that, because there is a better way within reach of most companies. With it, developers can focus on work that matters, companies can deliver new features much more rapidly, and stakeholders can see their ideas spring to life very quickly.

Average Horse

First, let's dive deeper into the software delivery practices of an average company.

  • Product managers come up with arbitrary due dates without doing technical capacity planning, making promises that cannot be kept. As a result, when the due date arrives, the product is rushed out with many shortcuts taken, which means lower quality and more fragile applications, which in turn means more technical debt and unplanned work later.
  • Security is not even in the picture, because new features are already not getting to market quickly enough.
  • Technical debt continues to be stockpiled and is rarely paid off. Like financial interest, it grows over time until most of the team's time is spent just paying off the interest, in the form of unplanned work.
  • Deployments are not automated, and manual deploys take a long time, so deployments happen far less frequently. That means a huge number of features get deployed at once, finished work doesn't make it into production for months or sometimes years (scary), and there is no rapid feedback on performance, adoption, or business value. Compare that to a manufacturing plant: work piles up at the bottleneck station, and everything has to stop just to catch up. At that point feedback comes too late because the other stations have already finished their work, it's very costly to change what's already been made (unplanned work), and the result is a lower quality product (technical debt).
  • Due to a lack of automated testing, companies have to deploy even less frequently, since it takes an army of QA engineers to regression test the entire application, and the cycle grows even longer as more features pile up.
  • Failed features don't get pruned; they are left to rot and accumulate more security, technical, and maintenance debt.

Unplanned work is not free; it's actually really expensive. Another side effect is that when you spend all your time firefighting, no time or energy is left for planning. When all you do is react, there's not enough time to do the hard mental work of figuring out whether you can accept new work. Which means more shortcuts, more multitasking. - The Phoenix Project

How Unicorns Work

The main idea behind continuous delivery is to reduce work in progress (WIP), which allows for quicker feedback on what goes into production. For example, if you work on a 9-month project, it will take you longer than 9 months to see your code in production, and if something has a problem, it will be very expensive to go back and change earlier design decisions. There is a good chance the fix will be a hack rather than a proper solution, meaning more technical debt accumulating and more problems later. And don't forget that after a 9-month project, it will take a whole weekend and a huge amount of agony to release it.

Gene's IT equivalent of WIP is "lead time", which measures how long it takes to go from code committed to code successfully running in production. Until code is in production, it has absolutely no value; it's just sitting there. Focusing on fast flow improves quality, customer satisfaction, return on investment, and employee happiness.

You might think it must be crazy to release that often. Isn't it dangerous to make all those changes? It's actually a lot less risky to deploy small changes incrementally, because you get rapid feedback. And even if there is a problem, it's much easier to fix a small problem now than a big one later, when you are forced to take shortcuts.

To get there, you have to "improve at the bottlenecks"; any improvements made away from the bottlenecks are a waste of time. If your deployment process is a bottleneck, automate it until it becomes a one-button-push deployment. There is absolutely no good reason why a developer cannot push a button to create a QA environment that exactly matches production with the code deployed, and, if it passes automated functional tests, push it into production with another button. If regression testing is the bottleneck, pay off that debt by writing automated functional tests or end-to-end system tests.

"Automated tests transform fear into boredom." --Eran Messeri, Google

To become a high performer, you will also need to add monitoring hooks to your applications, so that any developer can add his or her metrics at will. When you release often, you get rapid feedback on performance, adoption, and value, so you can make informed decisions and roll back if necessary. It should be extremely easy for a developer to add any kind of monitoring metric to code, and the data must be accessible from production.
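A minimal sketch of what "easy to add a metric" can look like; the metric names and the `metrics` helper are my own invention, not from any particular library:

```javascript
// Hypothetical in-app metrics hook: any developer can increment a named
// counter from anywhere, and a snapshot can be exposed from production.
const metrics = {
  counters: new Map(),
  increment(name, by = 1) {
    this.counters.set(name, (this.counters.get(name) || 0) + by);
  },
  snapshot() {
    return Object.fromEntries(this.counters);
  },
};

// e.g. inside a newly released feature:
metrics.increment('checkout.new_flow.used');
metrics.increment('checkout.new_flow.used');
metrics.increment('checkout.new_flow.error');

console.log(metrics.snapshot());
```

In a real system the snapshot would be pushed to something like statsd or exposed over an admin endpoint, but the one-line `increment` call is the part that makes developers actually instrument their features.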

Gene proposes spending 20% of the time on non-functional improvements, or non-feature work, and I think any organization that adopted that would be on its way to becoming a high-performing unicorn. I honestly don't think it's much to invest compared to the opportunity cost of features not making it out for long periods, when only around 10% of features are successful. How can you test anything when you can only release a couple of times a year?

Finally, you should be deploying to production with your features turned off. That way releases are decoupled from deployments, and turning features on and off is a simple button click.
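The feature-flag idea fits in a few lines. This is a toy sketch (the flag name and functions are hypothetical; real systems keep flags in a config service, not an in-memory map):

```javascript
// Hypothetical feature-flag sketch: code ships dark, and "release"
// is just flipping a flag, independent of the deployment.
const flags = new Map([['new-checkout', false]]); // deployed turned off

function isEnabled(name) {
  return flags.get(name) === true;
}

function setFlag(name, on) { // the "button click"
  flags.set(name, on);
}

function checkoutPage() {
  return isEnabled('new-checkout') ? 'new checkout flow' : 'old checkout flow';
}

console.log(checkoutPage()); // old checkout flow
setFlag('new-checkout', true);
console.log(checkoutPage()); // new checkout flow
```

Rollback becomes the same button click in reverse, with no redeployment needed.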

It's not art, it's production. If you can't out-experiment and beat your competitors in time to market and agility, you are sunk! Features are always a gamble. If you are lucky, ten percent will get the desired benefits. So the faster you get those features to market and test them, the better you will be. - The Phoenix Project

And if you think 10 deploys a day is crazy, take Amazon, with a mean time between deployments of 11.6 seconds (insane). And it's not just them: companies like Intuit, Target, Google, Etsy, Flickr, Kansas State University, and many others have embraced continuous delivery.

It Does Not Have To Be Radical. Small Steps Are Just Fine.

In a perfect world, a company with problems would stop everything to fix the production line and pay off its technical debt. Some companies, like eBay, had to do exactly that to escape the vicious cycle. I don't think it has to be so drastic for an average company. If you adopt a culture of continuous improvement and focus on the bottlenecks first, small changes can bring a lot of improvement. For example, if your deployment is a manual process, start by automatically creating packages and writing a script that deploys them. If a release requires database changes, add the scripts to the package so your deployment script applies the database changes automatically; there is no reason a DBA has to compile and execute scripts by hand when it can be automated. If you need many QA engineers to regression test the site, why not spend some of their time writing automated tests? I'm sure they would be happier finding new bugs rather than mindlessly retesting the same things.

Final Thoughts

Gene urges us to create a culture of genuine learning and experimentation; that's how the best companies get even better. In addition, here's a great quote for anyone who thinks this isn't relevant to them:

Most of the money spent these days is on IT projects, and even when companies say it's not their core competency, it's not true. Everyone must learn, or you risk irrelevance in 10 years. - Gene Kim

Good luck and see you in the world of unicorns! :)

continuous delivery

Getting Serious About JavaScript

17 March 2014

Why Should I Care About JavaScript?

If you want to build next-generation platforms or web applications, you have to get serious about JavaScript. Rich user interfaces, or Single Page Applications, provide a much better user experience and give products an edge over the competition. New applications like Dropbox, Trello, Windows Azure, and many others are great examples of amazing user experience. In addition, JavaScript is already widely used on the server side as well. It's incredibly fast, asynchronous out of the box, and a perfect backend for your single page applications. Finally, it has a huge ecosystem, with npm offering almost as many packages as the largest platforms and catching up fast. As we've seen before, where the community goes is where the next most widely used platform will be.

And it's not only startups that choose Node.js; recently PayPal announced that it chose Node.js as its application platform. Some of the benefits they have reported so far:

  • Built almost twice as fast with fewer people
  • Written in 33% fewer lines of code
  • Constructed with 40% fewer files

From my personal experience, I can also say that developing in JavaScript is real fun: it's fast, and I enjoy learning its functional nature. Paul Graham chose Lisp because it allowed his team at Viaweb to ship code faster than the competition, and I feel like JavaScript has that advantage now as well. It's functional, and it has a large number of open source projects you can pick and choose from, so you don't have to reinvent the wheel. So why wouldn't you use it if you had to choose a platform?

It's time to get serious about learning JavaScript

Writing large applications will require developers to actually pick up a few books and finally learn the language. You can no longer get away with hacking together a bunch of spaghetti code with global variables. The good news is that there are far more good resources now than there were a few years ago. A couple of great books that I had the pleasure of reading provide invaluable deep knowledge of the language. Another super resource for learning JavaScript is reading existing open source libraries; you can extract a large amount of knowledge for free if you are willing to roll up your sleeves and get uncomfortable. Finally, like any serious developers, we want to write our components with the help of unit tests. I've also found that testability of JavaScript is extremely important: browser debugging is not very efficient, and you have to jump between different places due to JavaScript's asynchronous, event-loop nature. It's easier to test components individually than to debug the entire application.

So here are the books that I've really enjoyed and will get you up to speed if you are a seasoned pro in other languages like C# or Java.

Great Books

JavaScript Patterns: I found this book very helpful in breaking down different patterns of object creation, function types, and overall best practices. It packs a lot of great stuff into about 200 pages and gives you a quick intro without going too deep into the language itself. After I read this book, I was finally able to understand the different object creation patterns and why you would use one over another. If you don't want to go too deep into the language but want to write clean code and be able to read existing libraries, I think you can get away with just this book. Because it was written in 2010, most of the examples follow the ECMAScript 3 standard, which is great if you want to support old browsers.
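To give a flavor of what "different object creation patterns" means, here are two of the classics side by side, written ES3-style to match the book's era (the `Counter` example itself is mine, not from the book):

```javascript
// Constructor pattern: methods live on the prototype and are shared
// by every instance created with `new`.
function Counter(start) {
  this.count = start;
}
Counter.prototype.increment = function () {
  return ++this.count;
};

// Module pattern: `count` is private inside the closure; only the
// returned object can touch it.
function createCounter(start) {
  var count = start;
  return {
    increment: function () { return ++count; }
  };
}

var a = new Counter(0);
var b = createCounter(10);
console.log(a.increment());   // 1
console.log(b.increment());   // 11
console.log(typeof b.count);  // "undefined" -- the state is hidden
```

The trade-off is roughly memory sharing (constructor) versus true privacy (module pattern), which is the kind of "why one over another" question the book walks through.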

Async JavaScript: This is an awesome book to pick up next. It explains asynchronous functions, async error handling, and the event loop very well; this knowledge is a must for any serious JavaScript programmer. Every page of this 80-page book is densely packed with information, and you'll want to read it slowly and enjoy every bit of it. It also dives into how to make your callbacks cleaner with promises and deferreds. Finally, it looks at existing async libraries that can make your life a lot easier when dealing with multiple asynchronous functions. My biggest "aha" moment was understanding how the event loop queue works and why some libraries execute functions with setTimeout.
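That "aha" about the event loop queue fits in a few lines. This small example (mine, not the book's) shows that `setTimeout(fn, 0)` does not run `fn` immediately; it queues it to run after the current call stack finishes:

```javascript
// setTimeout(fn, 0) queues the callback on the event loop; it can only
// run once the currently executing code has finished.
var order = [];

order.push('start');
setTimeout(function () {
  order.push('timeout');
  console.log(order); // [ 'start', 'end', 'timeout' ]
}, 0);
order.push('end');
```

This is also why libraries use `setTimeout` to force a callback to be asynchronous in all cases: callers can rely on it never firing in the middle of their own synchronous code.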

Effective JavaScript: From the legendary Effective series, this book lives up to its standard. It goes deep into the language in a series of 68 topics. It's very good at breaking down and explaining concepts and contains a wealth of knowledge: intricate details about JavaScript's semicolon insertion, implicit coercions, and a lot of other goodness like the implicit binding of "this". Many subjects are covered in other books, but this one explains them in more detail and explains why things are the way they are. It's not overly complex, but it is very dense with information and a joy to read. This is a must-read for any serious developer who wants a deeper understanding of the JavaScript language.
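Two of those gotchas in miniature (my examples, not the book's): implicit coercion with `+`, and `this` being bound by the call site rather than where the function was defined:

```javascript
// Implicit coercion: `+` concatenates when either operand is a string,
// while `-` always coerces both operands to numbers.
console.log(1 + '2');   // "12"
console.log('3' - 1);   // 2

// Implicit binding of `this`: it depends on how the function is called.
var obj = {
  name: 'obj',
  whoAmI: function () { return this && this.name; }
};
console.log(obj.whoAmI());   // "obj" -- called on obj, so `this` is obj
var detached = obj.whoAmI;
console.log(detached());     // usually undefined -- `this` no longer refers to obj
```

The book's value is in explaining the rules behind results like these, so they stop being surprises.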

Great JavaScript Libraries To Read

Ghost is a new, simple blogging platform built on top of Node.js, with Express on the backend and Backbone on the client. If you are looking to build full-stack applications with JavaScript, this project will get you going. I found it a great way to get up to speed on how configuration, modules, and data access are set up. Among other things, you can learn about authentication and middleware, and see extensive use of deferreds.

Backbone: What I like about Backbone is that it has an extensive suite of unit tests, the library itself is about 1,700 lines of code, and it has very good comments. It's pretty incredible that such a small library is the most widely used SPA library out there. I start by reading the unit tests to understand the specifications; once I have a general understanding, I dive into the pieces I find most interesting.

Express: Whether you are looking to build a RESTful API or a traditional web application, Express is a great minimalist framework on top of Node.js. It's simple, even ingenious, and it also has a pretty small code base.

What other JavaScript libraries or books did you enjoy? Send me a line @mercury2269.

javascript learning book-recommendations

Changes In MSBuild with Visual Studio 2013

04 March 2014

After I uninstalled Visual Studio 2012, the deployment package creation script that builds and publishes a project using MSBuild started throwing a lovely exception:

.csproj(795,3): error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk.

From the error, I can tell that MSBuild is using the wrong Visual Studio version. My first thought was to tell MSBuild to use v12 to build the target by adding the VisualStudioVersion property /p:VisualStudioVersion=12.0, but that resulted in a new error that was a little more confusing:

C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Transform\Microsoft.Web.Publishing.AspNetCompileMerge.targets(132,5): error : Can't find the valid AspnetMergePath

I guessed something had changed, and indeed, after searching I found that, yes, MSBuild now ships as part of Visual Studio.

So rather than shipping as a component of the .NET Framework, MSBuild is now a standalone package that comes with Visual Studio, and each version corresponds to a Visual Studio version with its own toolsets. The new MSBuild will be under:

On 32-bit machines they can be found in: C:\Program Files\MSBuild\12.0\bin

On 64-bit machines the 32-bit tools will be under: C:\Program Files (x86)\MSBuild\12.0\bin

So if you build your project using C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe, it will no longer work without Visual Studio 2012 installed, and you need to switch to the version-specific MSBuild 12.0 at C:\Program Files (x86)\MSBuild\12.0\Bin\MSBuild.exe.

That's it, hopefully this post will save someone some time.

visual-studio-2013 msbuild