Sunday 6 October 2019

Getting to Production


If you're a server-side developer, the fruition of your efforts comes when your code reaches production. For a team to be effective and efficient it's important that this final stage of releasing is not a scary proposition. Delivering value into production is the whole reason for your team to exist, so it should be a natural and unhindered consequence of your efforts.

Achieving this productive release chain requires a deployment strategy that inspires confidence: one that is slick, repeatable and comes with a get-out-of-jail option for when things don't work out quite as expected.

There is no one correct strategy; the right choice will depend on the makeup of your team, the code you are writing and the nature of your production environment. Each possibility has its advantages and disadvantages, and it will be up to you to decide what works best for your situation.

Redeploy

Perhaps the simplest strategy is to redeploy code to your existing servers. This will usually involve momentarily blocking incoming traffic whilst the new code is deployed and verified.

The advantage of this strategy is that it doesn't involve standing up any additional infrastructure to support the release and is a straightforward process to follow. However, it does have disadvantages.

Firstly, it involves service downtime: you are not able to serve requests whilst the release is in progress. Additionally, since the release must be tested after deployment, whilst traffic is still blocked, the testing is conducted under time pressure that is not necessarily conducive to thoroughness.

The other major disadvantage is the inability to roll back the release should a problem develop. You must either fix forward by deploying updated code or redeploy the previous release, and both of these mean repeating the original release process of blocking traffic, deploying and testing.
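
As a rough sketch, the whole process fits in a few lines. The helper functions here are stand-in stubs for whatever your load balancer and deployment tooling actually provide:

    # Minimal in-place redeploy sketch. The helpers below are stubs standing
    # in for your own load balancer and deployment tooling.
    def block_traffic():       # e.g. mark the node as down at the load balancer
        print("blocking incoming traffic")

    def unblock_traffic():
        print("unblocking incoming traffic")

    def deploy(version):       # e.g. copy artefacts and restart the service
        print(f"deploying {version}")

    def smoke_test():          # e.g. hit the health check and key endpoints
        print("running smoke tests")
        return True

    def redeploy(new_version, last_good_version):
        block_traffic()        # downtime starts: requests are refused or queued
        deploy(new_version)    # push the new build onto the existing servers
        if not smoke_test():   # verification happens while traffic is still blocked
            # There is no separate rollback path: restoring the previous release
            # means running the same deploy-and-test steps again.
            deploy(last_good_version)
            smoke_test()
        unblock_traffic()      # downtime ends once verification passes

    redeploy("v1.2.0", "v1.1.9")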

Blue Green

Using this strategy you maintain two identical production stacks, one blue and one green. At any point in time one is the active stack serving production traffic and the other is inactive, serving only internal traffic or stood down entirely. The release process involves deploying to the inactive stack, conducting appropriate testing and then switching production traffic to be served from the newly deployed code.

The advantage of this strategy is that it involves virtually no downtime and allows thorough testing to be completed, on the infrastructure that will serve the traffic, prior to the release and without impacting users. It also has the substantial benefit that, should the worst happen and you need to roll back the release, this is simply achieved by switching traffic back to the previously active stack, which has not been modified since it last served production traffic.
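
To make that concrete, if each stack sits behind its own target group on an AWS Application Load Balancer, the switch (and the roll back) can be a single listener update. A minimal sketch using boto3, with placeholder ARNs:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Placeholder ARNs: one listener on the production load balancer and
    # one target group per stack (blue and green).
    LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/prod/..."
    BLUE_TARGET_GROUP = "arn:aws:elasticloadbalancing:...:targetgroup/blue/..."
    GREEN_TARGET_GROUP = "arn:aws:elasticloadbalancing:...:targetgroup/green/..."

    def switch_to(target_group_arn):
        """Point all production traffic at the given stack."""
        elbv2.modify_listener(
            ListenerArn=LISTENER_ARN,
            DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
        )

    switch_to(GREEN_TARGET_GROUP)  # release the newly deployed and tested stack
    # Rolling back is the same call in reverse: switch_to(BLUE_TARGET_GROUP)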

The main disadvantage of this approach is the cost of maintaining two production stacks. However, taking advantage of modern cloud techniques means your inactive stack only needs to be up and running in the build-up to a release, reducing the additional cost.

A second disadvantage appears if your application is required to maintain a large amount of state: the transition of this data between the two stacks needs to be managed in order to avoid disruption to users.

Canary

Canary deployments can be viewed as a variant of the Blue Green approach. Whereas a Blue Green deployment moves all production traffic onto the new code at once, a Canary deployment does this in a more gradual, phased manner.

The exact nature of the switch will depend on your technology stack. When deploying directly to servers it will likely take the form of applying a gradually increasing traffic weighting between the old and new stacks. As traffic is moved onto the new code, error rates and diagnostics can be monitored for issues and the weighting reversed if necessary.
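
A phased weight shift might look something like the sketch below. The weighting and monitoring helpers are stubs for whatever your own stack exposes, and the step sizes, soak time and error threshold are purely illustrative:

    import random
    import time

    # Stubs standing in for your load balancer weighting and monitoring APIs.
    def set_canary_weight(percent):
        print(f"routing {percent}% of traffic to the new code")

    def current_error_rate():
        return random.random() * 0.005   # pretend metric; replace with a real query

    ERROR_THRESHOLD = 0.01       # reverse the weighting if more than 1% of requests fail
    STEPS = [5, 25, 50, 100]     # share of traffic moved to the new code at each phase

    for weight in STEPS:
        set_canary_weight(weight)
        time.sleep(300)          # let the new weight soak before judging it
        if current_error_rate() > ERROR_THRESHOLD:
            set_canary_weight(0) # all traffic back to the old, trusted code
            raise SystemExit(f"canary rolled back at {weight}% traffic")

    print("canary promoted to 100% of traffic")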

This approach is particularly well suited to a containerised deployment, where the population of containers can be gradually migrated to the new code via the normal mechanism of adding and removing containers from the environment.
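
For instance, if the old and new code run as separate Kubernetes Deployments (the names, namespace and sizes below are placeholders), the migration is just a series of scale operations using the official Python client:

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    def set_replicas(name, replicas, namespace="prod"):
        """Resize one Deployment; the scheduler adds or removes containers to match."""
        apps.patch_namespaced_deployment_scale(
            name=name, namespace=namespace, body={"spec": {"replicas": replicas}}
        )

    # Migrate the population one container at a time: new version up, old version down.
    # Assumes the old Deployment starts with four replicas and the new one with zero.
    for step in range(1, 5):
        set_replicas("checkout-v2", step)      # Deployment running the new code
        set_replicas("checkout-v1", 4 - step)  # Deployment running the old code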

The advantage of this approach is that it makes it far easier to trial new implementations without having to commit to routing all production traffic to the new code. Sophisticated techniques can be applied to the traffic weighting to route particular use cases or users to the new code whilst allowing the majority of traffic to be served from the tried and tested deployment.

However, there can be disadvantages. The deployment of new code, and any subsequent roll back, are naturally slower, although this can depend on your technology stack. Depending on the nature of your codebase and functionality, having multiple active versions in production can also come with its own problems in terms of backwards compatibility and state management.

As with most decisions in software engineering, whether any particular solution is right or wrong can be a grey area. The nature of your environment and code will influence the choice of approach; the important thing to consider is the properties that any solution should offer. These are things like the ability to quickly roll back to a working state and the ability to effectively monitor the impact of a release, along with factors such as speed and repeatability.

Keep these factors in mind when choosing your solution and you will enjoy many happy years of deploying to production.

