Continuous Delivery

Jez Humble, David Farley

Mentioned 24

The step-by-step guide to going live with new software releases faster, reducing risk and delivering more value sooner.

  • Fast, simple, repeatable techniques for deploying working code to production in hours or days, not months.
  • Crafting custom processes that get developers from idea to value faster than ever.
  • Best practices for everything from source code control to dependency management and in-production tracing.
  • Common obstacles to rapid release, and pragmatic solutions.

In too many organizations, build, testing, and deployment processes can take six months or more. That's simply far too long for today's businesses. But it doesn't have to be that way. It's possible to deploy working code to production within hours or days of development work being complete, and Continuous Delivery presents comprehensive processes and techniques for doing so. Written by two of the world's most experienced software project leaders, this book demonstrates how to dramatically increase speed while reducing risk and improving code quality at the same time. The authors cover all facets of build, testing, and deployment, including configuration management, source code control, release planning, auditing, compliance, integration, build automation, and more. They introduce a wide range of advanced techniques, including in-production monitoring and tracing, dependency management, and the effective use of virtualization. For each area, they explain the issues, show how to mitigate the risks, and present best practices. Throughout, the book focuses on powerful opportunities for individual improvement, clearly and simply explaining skills and techniques so they can be used every day on real projects. With this book's help, any development organization can move from idea to release faster and deliver far more value, far more rapidly.


Mentioned in questions and answers.

What is the best branching strategy to use when you want to do continuous integration?

  1. Release Branching: develop on trunk, keep a branch for each release.
  2. Feature Branching: develop each feature in a separate branch, only merge once stable.

Does it make sense to use both of these strategies together? As in, you branch for each release but you also branch for large features? Does one of these strategies mesh better with continuous integration? Would using continuous integration even make sense when using an unstable trunk?

I think either strategy can be used with continuous integration, provided you remember one of the key principles: every developer commits to trunk/mainline every day.

http://martinfowler.com/articles/continuousIntegration.html#EveryoneCommitsToTheMainlineEveryDay

EDIT

I've been reading this book on CI, and the authors suggest that branching by release is their preferred branching strategy. I have to agree. Branching by feature makes no sense to me when using CI.

I'll try to explain why I think this way. Say three developers each take a branch to work on a feature. Each feature will take several days or weeks to finish. To ensure the team is continuously integrating, they must commit to the main branch at least once a day. As soon as they start doing this, they lose the benefit of creating a feature branch: their changes are no longer separate from all the other developers' changes. That being the case, why bother creating feature branches in the first place?

Using branching by release requires much less merging between branches (always a good thing), ensures that all changes get integrated as soon as possible, and (if done correctly) ensures your code base is always ready to release. The downside to branching by release is that you have to be considerably more careful with changes. For example, large refactorings must be done incrementally, and if you've already integrated a new feature that you don't want in the next release, it must be hidden using some kind of feature toggling mechanism.
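For illustration, here is a minimal sketch of what such a feature toggling mechanism might look like. It is written in Java, and the class name, the properties file name and the flag name are all invented for the example; the point is simply that an unfinished feature can live on trunk but stay switched off in a release.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical toggle helper: reads feature flags from a properties file so an
// unfinished feature can be merged to trunk daily but kept dark in a release.
public final class FeatureToggles {
    private static final Properties FLAGS = new Properties();

    static {
        try (FileInputStream in = new FileInputStream("features.properties")) {
            FLAGS.load(in);
        } catch (IOException e) {
            // No file found: every feature defaults to off.
        }
    }

    private FeatureToggles() {}

    public static boolean isEnabled(String feature) {
        return Boolean.parseBoolean(FLAGS.getProperty(feature, "false"));
    }
}

The call site then branches on FeatureToggles.isEnabled("new-checkout-flow"), and the flag is flipped per environment rather than by merging or reverting code.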

ANOTHER EDIT

There is more than one opinion on this subject. Here is a blog post which argues for feature branching with CI:

http://jamesmckay.net/2011/07/why-does-martin-fowler-not-understand-feature-branches/

Does anyone use AccuRev for source control management? We are switching (eventually) from StarTeam to AccuRev.

My initial impression is that the GUI tool is severely lacking; however, the underlying engine and the branches-as-streams concept are incredible.

The biggest difficulty we are facing is assessing our own DIY tools that interfaced with StarTeam, and either replacing them with new DIY tools or finding and purchasing appropriate replacements.

Additionally, is anyone using the AccuWork component for issue management? StarTeam had a very nice change request system, and AccuWork does not come close to matching it. We are evaluating either using AccuWork or buying a third-party package such as JIRA.

Opinions?

Note: I am an AccuRev user and I like it very much. I have already upvoted a few answers here, and would like to add:

I've just recently stumbled over this "review" of AccuRev in the book Continuous Delivery by Jez Humble and David Farley:

[Chapter 14, p 385]

Commercial Version Control Systems

(...) the only commercial VCSs that we are able to wholeheartedly recommend are:

  • (...)
  • AccuRev. Offers ClearCase-like ability to do stream-based development without the crippling administrative overhead and poor performance associated with ClearCase.
  • (...)

To which I might add that I never have used ClearCase, but I am the AccuRev admin around here, and it is indeed very little work to administer. (WRT performance, this question might give more insight.)

Let's say I write a jQuery plugin and add it to my repository (Mercurial in my case). It's a single file, say jquery.plugin.js. I'm using BitBucket to manage this repository, and one of its features is a Downloads page. So, I add jquery.plugin.js as one of the downloads.

Now I want to make available a minified version of my plugin, but I'm not sure what the best practice is. I know that it should be available on the Downloads page as jquery.plugin.min.js, but should I also version control it each time I update it to reflect the unminified version?

The most obvious problem I see with version controlling the minified version is that I might forget to update it each time I make a change to the unminified version.

So, should I version control the minified file?

No, you should not need to keep generated minimized versions under source control.

We have had problems when adding generated files into source control (TFS), because of the way TFS sets local files to be read-only. Tools that generate files as part of the build process then have write access problems (this is probably not a problem with other version control systems).

But importantly, all the:

  • tools
  • scripts
  • source code
  • resources
  • third party libraries

and anything else you need to build, test and deploy your product should be under version control.

You should be able to check out a specific version from source control (by tag or revision number or the equivalent) and recreate the software exactly as it was at that point in time. Even on a 'fresh' machine.

The build should not be dependent on anything which is not under source control.

Scripts: build scripts, whether Ant, make, MSBuild command files or whatever you are using, and any deployment scripts you may have, need to be under version control - not just on the build machine.

Tools: this means the compilers, minimizers, test frameworks - everything you need for your build, test and deployment scripts to work - should be under source control. You need the exact version of those tools to be available in order to recreate the build at a given point in time.

The book 'Continuous Delivery' taught me this lesson - I highly recommend it.

Although I believe this is a great idea - and stick to it as well as possible - there are some areas where I am not 100% sure. For example: the operating system, the Java JDK, and the Continuous Integration tool (we are using Jenkins).

Do you practice Continuous Integration? It's a good way to test that you have all the above under control. If you have to do any manual installation on the Continuous Integration machine before it can build the software, something is probably wrong.

At the moment we are deploying our whole application chain to production together, all at once, because of the many dependencies between the systems.

Our Scrum teams are business-theme based in order to ensure real business value at the end of each Sprint with every user story, so it often happens that user stories need changes in several applications.

And we have several Scrum teams working on the same systems. Logically, we end up acceptance testing everything in one huge acceptance and (semi-automated) regression test.

But doing a big-bang roll-out to production is very time-consuming, error-prone and no longer scalable... (or is it?) With continuous deployment we would like to enable the teams to self-service a roll-out to production, so the business rolls out features when it wants to, not based on an IT schedule.

But how do we manage to roll out changes (code, DB scripts) that are distributed over several code bases, and find a strategy to deal with the dependencies between applications?

What's the strategy to have scalable continuous deployment? And how do you transition to this point?

What do you think?

(That is quite a few questions inside one big question.)

But I would refer you to the Continuous Delivery book: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912/

Edit: (As commented, you have already read this book.) Some suggestions which you may already follow, but which may help others with a similar issue:

But I have no solid solution to the inter-dependency auto-deploy strategy you actually ask for :|

I am thinking about a deployment pipeline using SVN, Jenkins and Maven. At the moment I'm stuck at the point where I usually would call mvn release:perform on a working copy.

When thinking in terms of deployment pipelines, I want to create a pipeline where every commit could be used to release the software to test/production. Let's say I have 5 builds, and I decide to release build 3 (with revision 3) to production. There will already be 2 new commits to trunk (which is now at revision 5).

Is it possible to use the maven-release-plugin to checkout/build/tag/commit a release at revision 3? When the maven-release-plugin finishes the release it usually commits the modified POMs to trunk.

I'm happy about any kind of information or advice here, so feel free to point me to books (like http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912), blog posts, Jenkins documentation... Maybe I'm completely on the wrong track.

Maybe this is just a madman's dream, but...

In my company we have a big C# .NET project, with ~25 solutions (very old) and ~3.5 million lines of code. The problem I'm facing is build times that are too slow: right now a build takes 7 minutes on SSDs (dev machines) and 15+ minutes in a VM with normal hard drives (which would be the TeamCity build system I'd like to get deployed). I know the build system should be the fastest, but that's nothing I can change in the short term.

I want to shorten the commit-build-unit-test feedback loop for the devs (preferably on the TeamCity machine right now) by compiling only the project(s) touched by the last commit, taking all other assemblies from e.g. a local NuGet server (the TeamCity server itself, with version 7.0).

Now that would immensely cut down the feedback loop (15 minutes to less than a minute, given real unit tests) for small commits.

I know the problem with such a partial compile is the possibility of missing compile errors (mismatching interfaces could go unnoticed), but that would be mitigated by running a second (TeamCity?) build server instance that runs the whole enchilada in parallel. But getting first feedback immediately is very important to me.

Now my question: is there any build system/continuous integration system that can handle this task? Or would I have to write my own commit-aware background service? That would be a bit nasty, as we use FinalBuilder scripts, and that format doesn't seem to be readable by any API (but I haven't dug deep enough into that).

P.S.: Also, I would like to run only the unit tests for the projects which were changed by the last commit, or at least prioritize them. But that's an afterthought.

Most of the available CI engines adopt a deployment pipeline process, which is specifically designed to reduce feedback time in the development loop. It works like this, with an immediate FAIL status if any of the steps goes wrong:

It is suggested (by this book) that the first four steps should take less than 2-5 minutes even for the most complicated projects; if it takes longer than that, there is a problem with your configuration or with the way you use the CI process.

Code commit
 triggers --->

 Step 1. Automatic checkout on CI side
 Step 2. Compile code, ideally 1-2 mins  
 Step 3. Save binaries to the artifact repository
 Step 4. Unit test, ideally 1-2 mins
 Step 5. Deploy to staging
 Step 6. Automated integration testing
 Step 7. Automated acceptance testing
 ------------------------------------
 Manual testing
 Manual deploy to production

Specifically to Step 2 you can:

a. Split the large solution into separate tiers. Each tier would have its own Visual Studio solution containing only the projects relevant to that tier; in a sense you decentralise the initial bulky solution. At step 5 you would know how to assemble the tiers into a usable application.

b. In the TeamCity configuration you can specify whether to perform a clean checkout or use the source already available (which can save time on step 1). Check that the MSBuild target is set to Build, which will pick up only the source files that have changed since the last build (saving time on step 2).

From personal experience, option a) is the best one, but if needed you can also use both a) and b), which however may bring some latent bugs, with old files being kept longer than needed. TeamCity also supports several agents (up to three in the free edition), which will allow you to run tasks in parallel.

I'm currently starting to read the book Continuous Delivery by Humble/Farley, and while a lot of the material makes sense, there is one thing that's nagging me:

It seems the authors are solely targeting server-based (single-client?) applications (like web apps) in their treatment of what to do and what to avoid with regard to automating the build process, testing, and deployment.

Looking at the questions tagged continuous-deployment it does also seem the term is only used in context of server-based applications.

So, I was wondering: does automating anything after "the setup" (speaking of a Windows app) for a desktop app has been created even make sense? The "deployment" of a desktop app is always user-driven, so what kind of sense would it make to automate anything here - and, really, what could be automated that would make any sense?

Oh, and btw. I'm entirely unsure whether this question would be better put on programmers.SE, so feel free to move it there if you think it would.

Please check the link: http://timothyfitz.wordpress.com/2009/03/09/cd-for-client-software/ which talks about Continuous Deployment for Downloadable Software.

I recently joined a company as Release Engineer where a large number of development teams develop numerous services, applications, web-apps in various languages with various inter-dependencies among them.

I am trying to find a way to simplify and preferably automate releases. Currently the release team is doing the following to "release" the software:

CURRENT PROCESS OF RELEASE

  1. Diff the latest revision from SCM between QA and INTEGRATION branches.
  2. Manually copy/paste "relevant" changes between those branches.
  3. Copy the latest binaries to the right location (this is automated using a .cmd script).
  4. Restart any services

MY QUESTION

I am hoping to avoid steps 1. and 2. altogether (obviously), but am running into issues where differences between the environments are causing the config files to differ per environment (e.g. QA vs. INTEGRATION). Here is a sample:

IN THE QA ENVIRONMENT:

<setting name="ServiceUri" serializeAs="String">
        <value>https://servicepoint.QA.domain.net/</value>
</setting>

IN THE INTEGRATION ENVIRONMENT:

<setting name="ServiceUri" serializeAs="String">
        <value>https://servicepoint.integration.domain.net/</value>
</setting>

If you look closely then the only difference between the two <setting> tags above is the URL in the <value> tag. This is because the QA and INTEGRATION environments are in different data-centers and are ever so slightly not in sync (with them growing apart as development gets faster/better/stronger). Changes such as this where the URL/endpoint is different are TO BE IGNORED during "release" (i.e. these are not "relevant" changes to merge from QA to INTEGRATION).

Even in a regular release (about once a week) I have to deal with a dozen config file changes that have to be released from QA to INTEGRATION, and I have to manually go through each config file and copy/paste the non-URL-related changes between the files. I can't simply take an entire package that the CI tool spits out from QA (or after QA), since the URLs/endpoints are different.

Since there are multiple programming languages in use, the config file example above could be C#, C++ or Java. So I am hoping any solution would be language-agnostic.

SUMMARY OF ENVIRONMENTS/PROGRAMMING LANGUAGES/OS/ETC.

  1. Multiple programming languages - C#, C++, Java, Ruby. Management is aware of this as one of the problems, since the Release team has to be a king-of-all-trades, and is addressing it.
  2. Multiple OS - Windows 2003/2008/2012, CentOS, Red Hat, HP-UX. Management is addressing this too - starting to consolidate and limit to Windows 2012 and CentOS.
  3. SCM - Perforce, TFS. Management is trying to move everyone to a single tool (likely TFS)
  4. CI is being advocated, though not mandatory - Management is pushing change through but is taking time.
  5. I have given the example of QA and INTEGRATION, but in reality there are QA (managed by developers+testers), INTEGRATION (managed by my team), STABLE (releases to STABLE by my team but supported by Production Ops), and PRODUCTION (supported by Production Ops). These are the official environments - others are currently unofficial, but dev or test teams have a few more. I would eventually want to start standardizing/consolidating these unofficial environments too, since devs+testers should not have to worry about doing this kind of stuff.
  6. There is a lot of work being done to standardize how the binaries are being deployed using tools like DeployIT (http://www.xebialabs.com/products) which may provide some way to simplify these config changes.
  7. The devs teams are agile and release often, but that just means more work diffing config files.

SOLUTIONS SUGGESTED BY TEAM MEMBERS:

  1. The current mind-set is to use a load balancer and standardize names across different environments, but I am not sure if "a process" such as this is the right solution. There must be a better way that starts with how devs write configs and goes all the way to how release environments meet dependencies.
  2. Alternatively, some team members are working on install scripts (InstallShield / MSI) to automate find/replace of URLs/endpoints between environments. I am hoping this is not the solution, but it is doable.

If I have missed anything or should provide more information, please let me know.

Thanks

[Update] References:

  1. Managing complex Web.Config files between deployment environments - C# web.config specific, though a very good start.
  2. http://www.hanselman.com/blog/ManagingMultipleConfigurationFileEnvironmentsWithPreBuildEvents.aspx - OK, though at first look this seems rather rudimentary and may break easily.

I'm working through the process of creating a "deployment pipeline" for a web application at the moment and am sifting my way through similar problems. Your environment sounds more complicated than ours, but I've got some thoughts.

First, read this book. I'm two-thirds of the way through it, and it's answering every question I ever had about software delivery, and many that I never thought to ask: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912/ref=sr_1_1?s=books&ie=UTF8&qid=1371099379&sr=1-1

Version Control Systems are your best friend. Absolutely everything required to build a deployable package should be retrievable from your VCS.

Use a Continuous Integration server, we use TeamCity and are pretty happy with it so far.

The CI server builds packages that are totally agnostic to the eventual target environment. We still have a lot of code that "knows" about the target environments, which of course means that if we add a new environment, we have to modify all such code to make sure it will cope and then re-test it to make sure we didn't break anything in the process. I now see that this is error-prone and completely avoidable.

Tools like Visual Studio support config file transformation, which we looked at briefly but quickly realized depends on environment-specific config files being prepared alongside the code by the developers in order to be added to the package. Instead, break out any settings that are specific to a particular environment into their own config mechanism (e.g. another XML file) and have your deployment tool apply this to the package as it deploys. Keep these files in VCS, but use a separate repository so that revisions to config don't trigger new builds and cause the build number to get falsely inflated.
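As a rough illustration of that idea, here is a small deploy-time substitution sketch. The actual tooling in this answer is PowerShell and Web Deploy parameterization; this Java sketch, with made-up file names and keys, only shows the shape of the technique: a template in the package, a small per-environment value file kept separately, and a deploy step that combines the two.

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

// Hypothetical deploy-time step: the package ships one config template with
// placeholders, and the small per-environment value files live in their own
// repository. Nothing environment-specific is baked into the package itself.
public class ApplyEnvironmentConfig {
    public static void main(String[] args) throws IOException {
        Path template = Paths.get(args[0]);   // e.g. app.config.template
        Path envValues = Paths.get(args[1]);  // e.g. qa.properties
        Path output = Paths.get(args[2]);     // e.g. app.config

        Properties env = new Properties();
        try (InputStream in = Files.newInputStream(envValues)) {
            env.load(in);
        }

        String content = Files.readString(template, StandardCharsets.UTF_8);
        for (String key : env.stringPropertyNames()) {
            // Replace tokens such as ${ServiceUri} with this environment's value.
            content = content.replace("${" + key + "}", env.getProperty(key));
        }
        Files.writeString(output, content, StandardCharsets.UTF_8);
    }
}

The package stays identical for every environment; only the small per-environment properties file differs.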

This way, your environment-specific config files only contain things that change on a per-environment basis, and only if that environment needs something different to the default. Contrary to @gbjbaanb's recommendation, we are planning to do whatever is necessary to keep the package "pure" and the environment-specific config separate, even if it requires custom scripting etc. so I guess we're heading down the path of madness. :-)

For us, Powershell, XML and Web Deploy parameterization will be instrumental.

I'm also planning to be quite aggressive about refactoring the config files so that the same information isn't repeated several times in various places.

Good luck!

In my company we have a system written in C# that uses a SQL Server database. Whenever we want to push our code changes to the production environment, we take the system offline for a few hours. I was wondering if there is a way to do it without taking the system down.

What are the big companies doing? I can't imagine Amazon.com going offline for 3 hours when they change the code or the database.

Thanks.

It really does vary.

There are a few good articles about what some of the big players do:

  • Netflix - Deploy to a few 'canary' machines, then replace all the running machines with a new image.
  • Etsy - Continuous deployment using feature flags to switch new features off until they are ready to deploy

Where I work (Red Gate) we use one of our own tools (Deployment Manager) to deploy web applications and the supporting databases. It's free to use for 5 projects and servers if you want to give it a try.

The mechanism it uses under the hood to deploy is designed to reduce downtime.

For web applications, it packages the latest version of your app and copies it to a new folder on each web server. Then it reconfigures IIS to point the existing website configuration at the new directory. This means that the cut-over is very quick, which minimises downtime. You can add PowerShell scripts that run either pre- or post-deployment to further customise the deployment. For example, as @jamesakadamingo mentions, you could use these to remove and then re-add machines from a load balancer, or put up a temporary maintenance page.

The database deployment is handled by generating diffs between the current and target schemas. However, it's up to the user when they want to make database deployments in the workflow. We usually follow an approach of releasing database changes before the application code, as in general the 'old' code will run against the 'new' database. If this isn't the case, then we use an abstraction approach where we launch a version of the code before we change the DB. This version will be written to handle both the old and new database. Then we deploy the DB, and later still we deploy the application code again to remove any code that is no longer needed in the old app.
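To make that abstraction approach concrete, here is a hedged sketch (Java/JDBC, with invented table and column names; the technique is the same in any language) of code written to run against both the old and the new schema during the transition:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical transitional data access: the same binary runs against both
// the old and the new schema, so the database change can be released
// independently of the application code.
public class CustomerEmailDao {

    public String findEmail(Connection conn, long customerId) throws SQLException {
        // The new schema keeps the address in its own column; the old one does not.
        String sql = newColumnExists(conn)
                ? "SELECT email FROM customer WHERE id = ?"
                : "SELECT contact_info FROM customer WHERE id = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setLong(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    // Case sensitivity of metadata lookups varies by database; good enough for a sketch.
    private boolean newColumnExists(Connection conn) throws SQLException {
        try (ResultSet rs = conn.getMetaData()
                .getColumns(null, null, "customer", "email")) {
            return rs.next();
        }
    }
}

Once every environment has the new schema and the old code path is retired, the fallback branch and the old column can be removed in a later, separate deployment.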

There's some great advice in this book about automating deployments and minimising downtime. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and David Farley

I wonder if there is any best practice, or at least a more practical way, to deploy C/C++ executables to Linux-based production servers.

I have Jenkins up and running as a CI server, and have created a main SVN module which contains multiple svn:externals. This module mainly serves as a pipeline of related C++ applications. (Perhaps I should post another question on whether svn:externals is the correct way to do this.)

So the main question concerns the deployment steps. I am planning to make all production servers Jenkins slaves with parameterized configs, for the purpose of building from SVN tags, and to use some scripts to copy all executables to, e.g., /opt/mytools/bin on multiple production servers.

Any recommendations?

I'm working through some of the same questions, and I'm finding that Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Humble and Farley has been a good (technology agnostic) starting point - not perfect but it's pointed me in the right direction when I had no idea what to do next.

The Continuous Delivery book recommends setting up 'build pipelines' in which you run progressively more automated tests, with only the final manual tests and the deploy/rollback steps being triggered by a real person.

My company is moving into a Service Oriented and modular development structure. So far it is looking great and working well.

One problem we can see starting to happen is keeping track of which programs are referencing various DLLs and (WCF) services. As DLLs and services get updated, there will be applications using the older versions.

We are finding that keeping a plain text list of dependencies and product manifests (lists of components in a project) works, but it is not flexible.

A plain text list cannot be queried and manipulated to pull out needed information.

So, it leaves me wondering if there are any tools to track versions of files and the dependencies between them?

If there are several tools, I would (of course) prefer a free/open source solution.

In case it matters, we use TFS for our source control and auto builds.

This topic is quite complicated, and there is no general or standard approach available. I can highly recommend reading the book Continuous Delivery; there you will also find a list of available tools and a proper explanation of how to do things correctly. Keep in mind that this will take a significant amount of time, but on the other hand it will make your life so much easier.

In a nutshell, you will need to have:

  1. List of all products (binaries, msi, dlls, etc) available within your company
  2. A list of all possible configurations for the products
  3. Versions available of any product
  4. List of environments (staging, UAT testing, production, etc)

You will also need a place where you can select a particular version of a product and its configuration, and deploy it with one button click to the selected environment. This user interface should also let you see which product (version and configuration) is deployed to which environment. Behind the scenes, the tool will just call the scripts that you wrote to perform the custom deployment.

In terms of tools, there are quite a few things that you will need:

  1. CI server (Go, TeamCity, TFS)
  2. Build management scripts, such as MSBuild, NAnt, etc.
  3. Tools like Puppet or a CMDB for configuration management
  4. Deployment scripts, like Powershell

Background:

My team has been using Jenkins to run our continuous integration (CI) for our Grails applications. We are trying to move closer to Continuous Delivery by setting up a deployment pipeline and having push button deployments to multiple environments (Dev, Itg, Prod). We have tried to use the Jenkins Tomcat plugin to deploy our code but have run into occasional PermGen issues on Tomcat and have to manually restart it after the deployment.

Questions:

  1. Is Jenkins the right tool to use for automated deployments with Grails?
  2. How can we automate the deployment to Tomcat without having to manually restart it afterwords?
  1. I don't think anyone can say if Jenkins is the "right" tool, but it is a good one.
  2. When you hot-deploy to Tomcat, its PermGen will almost inevitably grow. A restart is the easiest way to handle this. See other questions like What makes hot deployment a "hard problem"? for more information. You can use the Post Build Task to run a shell script on the Jenkins server that deploys the war and restarts Tomcat.

Time and again I am faced with the issue of having multiple environments that must be configured individually for an application that runs in all of them (e.g. QA, regional production environments, dev, staging, etc.), and I am wondering what the best way to organize the different configurations would be.

Would it be in the database? Different configuration files per environment? Or maybe the same file with different sections/XML tags? How would these then be deployed? Embedded within the app? Or put in manually after installation to be modified in place?

This question is not technology-specific - I've worked with .net and Java, web-apps and desktop apps and this issue comes up time and again. I'm looking to learn different approaches to maybe adapt a hybrid to address this.

EDIT: There's one caveat that I must point out - when configuration is part of the deployed solution, it is generally installed under the root user on the host. In large organizations developers usually don't have root access to production hosts, so any change to the configuration requires a new build and deployment. Needless to say this isn't the nicest approach - especially at organizations that have a very strict release process involving multiple teams and approval levels... (sigh, I know!)

Borrowing from Jez Humble and David Farley's book "Continuous Delivery" (page 41):

  • Your build scripts can pull configuration in and incorporate it into your binaries at build time.
  • Your packaging software can inject configuration at packaging time, such as when creating assemblies, ears, or gems.
  • Your deployment scripts or installers can fetch the necessary information or ask the user for it and pass it to your application at deployment time as part of the installation process.
  • Your application itself can fetch configuration at startup time or run time.

They consider it bad practice to inject configuration files at build and compile time, because you should be able to deploy the same binary file to every environment.

My experience is that you can bake the configuration files for every environment (except sensitive information) into your deployment file (war, jar, zip, etc.), and design your application to take an extra parameter when it starts, to pick up the right set of configuration files (from your extracted deployment file, or from the local/remote file system if they are sensitive, or from a database) at startup time.
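A minimal sketch of that startup-time selection, assuming the environment name is passed as a JVM system property and the per-environment files are bundled under /config (all names here are illustrative, not from the book):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical startup-time configuration lookup: the same artifact is
// deployed everywhere, and an environment name passed at launch (for example
// -Denv=qa) selects which bundled property set to load.
public class AppConfig {

    public static Properties load() throws IOException {
        String env = System.getProperty("env", "dev");
        String resource = "/config/" + env + ".properties"; // e.g. /config/qa.properties
        Properties props = new Properties();
        try (InputStream in = AppConfig.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("No configuration bundled for environment: " + env);
            }
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        Properties config = load();
        System.out.println("ServiceUri = " + config.getProperty("ServiceUri"));
    }
}

Started with -Denv=qa, the same artifact picks up qa.properties; nothing environment-specific is baked in at build time.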

We are going to use a Hudson/Jenkins build server to both build our server applications (just calling Maven) and run integration tests against them. We are going to prepare three Hudson/Jenkins jobs: build, deploy and run integration tests, which call each other in that order. All these jobs (build, deploy, integration tests) will run nightly.

The integration tests are written with JUnit and are invoked by mvn test (which will in turn be invoked by the "test" Hudson/Jenkins job). Since they require the server to be up and running, we have to run that "deploy" job first.

Does it make sense? Is there any special server to deploy the application and run tests, or is Hudson/Jenkins OK for that?

It definitely makes sense; basically you are describing a build pipeline. There is a Jenkins plugin to help visualize the upstream/downstream projects (you create a new pipeline view in Jenkins).

As for the deployment of the server component, this depends on what technology/stack you are running on. For instance, you could write a script that deploys the application to a test environment using a post-build step in Jenkins.

Another option is to use a Maven plugin to deploy the application. You can separate the deployment step into a profile, and run only the deploy goal in the deploy step, etc.

Basically there are a lot of options, but the idea of a build pipeline makes a lot of sense. To read up on build pipelines and related topics I would suggest taking a look at Continuous Deployment.

For more information related to Jenkins, have a look at this video.

Does it make sense? Is there any special server to deploy the application and run tests, or is Hudson/Jenkins OK for that?

You can run the application on the same server as Jenkins, but whether that makes sense depends on the application. If it depends heavily on a specific server setup, a better choice may be to run the server in a VM and put the configuration in source control. There are plenty of tools to help automate this; off the top of my head you have Puppet, Chef and Vagrant.

I'm currently reading Continuous Delivery, and in the book the authors say that it is crucial to build the binaries only once, and then use the same binaries for every deployment. What I'm having trouble understanding is how this can be done in practice. For example, in order to run the mocked unit tests, would there be a special build? What I'm referring to is the scope tag in Maven.

I've deployed my application to Elastic Beanstalk, and configured it with the Rolling Updates feature. This sounds pretty cool because it keeps a certain number of servers in the farm at all times while others are swapped out for upgrades - so it seems an outage-less upgrade is achievable.

But by my reasoning this just isn't going to work - at some point during a rolling update there will be some servers with v1 on and some servers with v2 on. If the JS or CSS is different between v1 and v2 then there could be a v1 page loaded that sends a request to the load balancer and gets the v2 JS/CSS and vice versa.

I can't think of an easy way to avoid this, and so I'm struggling to see the point of Rolling Upgrades at all.

Am I missing something? Is there a better way to achieve an outage-less upgrade for a website? I was thinking that I could also set up a complete parallel Elastic Beanstalk environment with v2 on it, and then switch them over in one go - but that seems so much more time consuming.

As you described, to use rolling deployments and have continuous deployments on the same environment, you need to guarantee that version N is compatible with version N+1. "Compatible" meaning they can run simultaneously, which can be challenging in cases such as differing static files and database schema changes.

A popular workaround to that is Blue-Green Deployments, where you deploy to a different environment and then redirect users. The Swap URL feature helps to implement that using AWS Elastic Beanstalk.

Progressive rollouts and blue-green are not mutually exclusive. You can use progressive rollouts for small changes and blue-green for bigger ones. For more patterns and techniques on continuous delivery, I'd recommend you check Jez Humble's book.

My technique for deploying an ASP.NET webapp into production is as follows:

Client:

  • Select 'Release' mode and then right-click to publish.
  • Go manually to the publish folder and zip contents.
  • Now transfer to server by FTP.

Server:

  • Unzip folder contents.
  • Stop IIS.
  • Deploy new folder contents for web app.
  • Start IIS.

I don't stop the database or run any additional tools to promote to production. It's a small company, and this seems fine. What's wrong with this technique in your opinion?

The only thing wrong with your approach is the manual intervention needed. I strongly encourage you to read Continuous Delivery.

We have multiple features that need to be developed, but management decides which features go into Live. This requires us to have a script for each User Story/Change. But how can I link a DB schema change to a User Story in TFS?

What we have now:

TFS with User Stories/Tasks, and a CC.net build server.

I've done research on SSDT, and it looks awesome! But how can I link this with TFS?

thanks for reading,

Andy.

There can be a lot of complexity associated with making changes to a system independently of each other. The best solution I've found involves creating "feature toggles" so that you can enable or disable a feature after it has been deployed. Take a look at continuous delivery as a topic. Jez Humble wrote a great book on the subject.

Database schema changes can be more complex in some cases than just enabling or disabling a feature. I would suggest using an expand / contract model. You would add any new structure to the database in advance and get that deployed into production in a non-breaking way. Then when you enable the feature that has a dependency on that structure, it's already there. If you need to clean up the database schema after you've removed something then you could do the "contract" cycle out of band with other software changes to reduce the surface area of test.

How do startups like, say, Feedly or Buffer make changes to their code, adding a new feature and at the same time testing it out without breaking things? Is there a framework for sandboxing, does it have to be built in-house, or is it just cloning the Git repo on localhost and playing around?

There are many specific techniques which make it possible to add features to software and deploy it to a running site without breaking things. It's a big topic, but here are some current best practices:

  • Plan features to be easy to introduce: as small and independent of one another as possible, given business requirements, and thus easy to assess and change or revert if necessary.

  • Choose architecture that minimizes the scope and impact of each deployment. If an application consists of independent services accessed by defined APIs, a deployment can update and restart a single service, reducing the impact on the entire application.

  • Plan implementation of large features to be easily deployable: implement in pieces, all but the last invisible to users and deployable over multiple deployments, thus reducing the risk in any one deployment.

  • Since the site is running when you deploy, it's necessary that consecutive releases are compatible. For example, changing the database may mean three production changes: first change the software to run on both the old and new versions of the database, then make the database change, then remove support for the old database version from the software. You might also need to ensure that the current version of a browser app is compatible with successive releases of the server-side software.

  • Feature flags, aka feature toggles, remove some of the risk from deployment by leaving features hidden until an administrator activates them at runtime through a UI which is part of the application. Feature flags also allow rolling back a feature without rolling back the entire application if the feature has a bug or negatively affects the business. Wikipedia lists a few feature flag frameworks and you'll find more if you search.

  • Thorough automated testing and continuous integration prevent unexpected breakages. Manual regression is too slow, unreliable and expensive for any web business, and especially for a startup. Continuous integration makes sure that the tests are run every time a change is made.

  • Automated deployment prevents manual errors during deployment and makes it easier to deploy more often, meaning that each deployment contains less changes and therefore is lower risk. (Continuous deployment, aka continuous delivery, is simply completely automated deployment, meaning that no-one even has to press a button.)

  • Canary deployments allow changes to be exposed to a subset of users in production before committing to a full rollout.

  • Monitoring is essential to catch bugs that do get deployed to production. Monitor both low-level metrics, e.g. server and service uptime, and high-level metrics, e.g. credit card charges per minute.

  • Automated rollback minimizes risk and downtime when a bug does get deployed to production. At minimum one needs to be able to roll back to the previous release with no more than a button press. For bonus points, roll back completely automatically when a critical metric drops below a threshold. Blue-green deployment provides a fast way of rolling back.

Humble and Farley's Continuous Delivery covers many of these topics. I realize you didn't ask about continuous deployment, but many of the current best practices that support rapid iteration lead up to or fit well with continuous deployment, so it tends to be all the same discussion.

I could not find any other forum to ask this question. I would like to put a release sign-off process in place in my small company, where I do the releases. The releases always break when they are deployed to the Production environment, and the question everyone asks is "How did QA not test this?" After a couple of days, people forget about it and get busy with the next release.

I would like to put a process in place where QA and the person who releases the software (a developer at the moment) sign off properly before it gets released.

Can anyone point me to a place where I can get some sample templates covering pre-release tests, etc.?

Thank you

Regards, SHM

For Release Management processes, here is some good documentation.

The following book is a must-read if releasing is a problem at your company: the Continuous Delivery book by Jez Humble and David Farley.

If using Microsoft tools, here is a best practices document from Microsoft related to Continuous Delivery: Testing for Continuous Delivery with Visual Studio 2012

I've been googling a lot lately trying to find articles, books, or even the correct search terms on more 'agile' web application infrastructures/setups, but I don't think I'm currently searching for the right terms!

I'm looking for an overview of best practices that will take me through how I should set things up with regard to things like automating builds, automating deployment to staging and production, continuous integration, versioning, testing, etc.

I'm working on a pretty complex online store using .NET and have so far started getting to grips with using MSBuild to control my builds, with TeamCity running builds after commits to GitHub.

I have been working through the 'Inside MSBuild' book which is pretty cool and also a book on brownfield applications which is actually equally useful for a fresh project.

So I'm getting to grips with the individual pieces, but really want some concrete processes to follow.

Any help is greatly appreciated, as I'm fed up with aimlessly googling!

Sam : )

You're on the right track with TeamCity in my opinion; we tried CruiseControl.NET first and found it required more XML-editing.

There is a book on Continuous Integration in .NET; I have not read it.

There is also Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation - I have not read that either, but Fowler-approved books are generally excellent. There's also an older book in the series on Continuous Integration.

If TeamCity is working for you, I'd suggest studying testing first. One of the major values in continuous integration is automated test-running. I can recommend a book on that: The Art of Unit Testing with Examples in .NET.

My personal opinion is that MSBuild scripts are best left to Visual Studio. If having TeamCity run solutions and NUnit/xUnit tests proves insufficient, you might take a look at NAnt. While it is XML-based, I find it easier to understand than MSBuild.

I've got

Expert .NET Delivery using NAnt & CruiseControl.net (Expert's Voice in .NET)

It's old, but covered everything I needed to get up and running last year.

I work for a very large organization that is looking for new ideas. Currently there is a large project in place that is supposed to provide a common architecture for a wide variety of applications. We are organized as many "shared" components that get used to create a "deployable unit" or DU; a DU is the final application.

Without going into too much boring (and sensitive) detail, the way we are doing it now is not working. We have 30+ applications and you can wait more than 2 years before getting permission to do anything more than the most desperate emergency fix. I am looking for suggestions that cover organization and testing.

If you are interested, my idea so far is as follows:

  1. Create teams based on specialized skills. For example, a GUI team, a JSP team, a database team, an HTML/CSS team, etc. When these teams get requests for solutions, they are ideally placed to see reuse opportunities, which should speed development. The fact that each team is composed of subject matter experts also means that the quality of the code produced should be better as well. These teams will produce custom solutions tailored to the request, containing only what is needed to provide the solution - nothing else.
  2. Create a team that works with the business client who will gather requirements and then go to the appropriate specialized team for solutions. This team will then be responsible for integrating the various solutions into the final application and performing unit testing.
  3. As much as possible, make use of automated testing tools (e.g. JUnit). Also, in order to maximize the number of applications that can go through user acceptance testing in production-like environments, establish short, fixed testing durations. If you are not ready at the end of that time frame, you go back to development and unit testing - no exceptions! In other words, be damn sure you are ready for user acceptance/production before you request it.

What you end up with is two groups of teams. One group is application-centric, concerned with providing big-picture solutions to business clients. The other group is tech-centric; it doesn't really know or care about the big picture, just its specialty.

Is there anything even remotely like what I have just described already out there?

Consider implementing the ideas in this book: Continuous Delivery

We are developing a new WPF application that interacts with a server. The application sends a request to server and gets a response. The response is shown in different ways in different views (i.e. single model with multiple views).

Now we want to automate the testing of the WPF application. I have the following test-automation needs:

  1. Validate the request that is sent to server with user-entered parameters.
  2. Validate the response that is received from the server with the data displayed in multiple views.

Please let me know how to achieve the above using any of the test automation tools.

The feature you described is called "Record and Playback", and as you already mentioned, it is quite limited to simple UI interaction and can become difficult to maintain.

As soon as your interaction logic gets more complex, you will have to implement the main parts of your test case logic manually, using a more layered architecture. One possible architecture could have the following layers (some of the ideas here are taken from the book Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation):

  • The lowest layer would implement the access to the UI controls itself (for example by using one of the UI Test APIs you mentioned.)

  • Application driver layer, which describes the functionality of your application. Here you could have methods like LoginForm.LoginUser(userName, passWord). Internally this method could, for example, handle the complete user input, press all the necessary buttons and even do some additional validation if needed (for example, if the password has expired and must be retyped). The methods in this layer access the different UI controls through the lower layer. In general, this layer is an abstraction of your application under test.

  • Use case / Test case layer. Here you define the different test steps by calling your application layer.

So in your concrete case, you could have a class called ClientSoftware in your application driver layer, and this class could have methods like ValidateUserInput or SendRequestToServer. These methods would then implement the necessary UI interaction to execute the desired behavior. In your test case itself you would then set up an instance of ClientSoftware and call the required methods to implement your test case.
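A bare-bones sketch of such an application driver class, shown in Java for brevity (the same shape applies to a WPF/C# test project); the UiAutomation wrapper, control ids and method names are all invented for illustration:

// Thin wrapper around whichever concrete UI test API is actually in use.
interface UiAutomation {
    void setText(String controlId, String value);
    void click(String controlId);
    String readLabel(String controlId);
}

// Application driver layer: tests talk to this class, never to raw controls.
public class ClientSoftware {
    private final UiAutomation ui;

    public ClientSoftware(UiAutomation ui) {
        this.ui = ui;
    }

    // Drives the UI to submit a request built from user-entered parameters.
    public void sendRequestToServer(String accountId, String amount) {
        ui.setText("AccountId", accountId);
        ui.setText("Amount", amount);
        ui.click("Send");
    }

    // Checks that every view renders the same response data.
    public boolean allViewsShow(String expected) {
        return expected.equals(ui.readLabel("SummaryView"))
            && expected.equals(ui.readLabel("DetailView"));
    }
}

The test case layer then only talks to ClientSoftware, so UI changes are absorbed in one place rather than in every test.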

What does an extraordinary release process look like?

I ask because we keep having release-related production problems. For example:

  • Code that was not supposed to be released is released
  • Code that runs on the test server does not run on the production server
  • Ops generates an alert for production, but everyone ignores it

Pointers to books, articles, blogs, etc. on this topic would be helpful.

I thought the development process we use on my current project was pretty good until I started reading Continuous Delivery by Jez Humble and David Farley. This book explains how even the most complicated systems can be deployed at the push of a button using version control, continuous integration, configuration management, environment management and automated unit/acceptance/capacity testing. It is a great read.