Not on MDMA use in colourful birds, unfortunately - this is an attempt to organise my knowledge of commonly used deployment strategies. Mind you, my primary focus is backend development and delivery to the cloud, so my ramblings here lean in that direction.
In some teams deployment is a dreaded word, while in others it's hardly ever mentioned. I believe the latter results from automating the process, removing all possible obstacles from it and basically forgetting it ever happens. What strategies can we use to achieve this dream state of the release pipeline? And, more importantly, why do we need a strategy in the first place?
First of all, you may want an extra layer of safety that lets you know whether the version being deployed actually works before the whole system / user base starts using it. You would not want hidden bugs to hinder the experience people get while using your applications / services.
Second, you want to eliminate the uncertainty that comes with any new delivery. Neither you nor your team wants to be on edge every time a deployment takes place. Being sure you have a process to roll back or address potential issues is quite reassuring - if you choose a good strategy for your environment, you will significantly limit the possibility of deployment outages.
You may also want partial adoption, exposing a new version of a feature to a selected group of users (think alpha stages or experimental features). Admittedly this is easier to do and manage with feature flags, but some of the strategies will let you use / expose two versions of your product at the same time.
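As a quick aside, a feature flag for that kind of partial exposure can be as simple as an allow-list check. A minimal sketch (the flag store, user ids and feature names here are all made up for illustration):

```python
# Hypothetical allow-list of users who should see the experimental feature.
ALPHA_TESTERS = {"user-7", "user-42"}

def experimental_search_enabled(user_id: str) -> bool:
    """Return True if this user is opted into the new feature."""
    return user_id in ALPHA_TESTERS

def search(user_id: str, query: str) -> str:
    # Both implementations live in the same deployed version; the flag
    # decides which one a given user gets.
    if experimental_search_enabled(user_id):
        return f"v2 results for {query!r}"  # experimental implementation
    return f"v1 results for {query!r}"      # stable implementation

print(search("user-7", "parrots"))  # alpha tester sees the new behaviour
print(search("user-1", "parrots"))  # everyone else stays on the old one
```

In a real system the allow-list would live in a config service or a dedicated feature-flag tool rather than a hard-coded set.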
There is a bunch of clever strategies that let you do just that; below are the three I believe are most popular.
IMPORTANT: Each of the services / nodes in every example represents the same replicated application - these strategies are mostly applicable in systems that are meant to scale and have more than one node serving traffic (with the exception of Blue Green Deployments).
Rolling Updates
Incrementally updating each instance of your application, making it easy to roll back. A rollback starts if a single node running the new version fails its health check. The issue (as with many of the strategies presented) is that the consumers should support both versions / the new version should be backward compatible, so as not to break anything for the clients that haven't migrated to the new one yet. Besides that, having to do incremental updates prolongs the whole process, causes temporary inconsistencies during the release and causes confusion if it's not properly communicated to the business stakeholders.
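The node-by-node update with a health-check-triggered rollback can be sketched as a toy simulation (the version strings and health check are illustrative stand-ins, not a real orchestrator):

```python
# Toy rolling update: replace instances one at a time; if any new instance
# fails its health check, restore it and every node updated before it.
def rolling_update(nodes, new_version, health_check):
    updated = []  # indices of nodes already moved to the new version
    for i in range(len(nodes)):
        old = nodes[i]
        nodes[i] = new_version
        if not health_check(nodes[i]):
            # Roll back this node and all previously updated ones.
            nodes[i] = old
            for j in updated:
                nodes[j] = old
            return False
        updated.append(i)
    return True

cluster = ["v1", "v1", "v1"]
ok = rolling_update(cluster, "v2", health_check=lambda v: v == "v2")
print(ok, cluster)  # True ['v2', 'v2', 'v2']
```

Real orchestrators (e.g. a Kubernetes Deployment) add refinements like surge capacity and update batches, but the core loop - update, check, proceed or roll back - is the same.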
Canary Deployments
Allow you to expose two versions of your application at the same time. This makes it easier to compare them, gather feedback for the new one and quickly roll back if anything goes wrong. It may be cheaper than the Blue Green approach, since you are exposing only a subset of your users to the newer version of the application (meaning the resources spawned for the new version can be significantly smaller, or you could even reuse part of the production systems).
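The "subset of users" part usually comes down to a routing rule. A minimal sketch of deterministic canary routing (the hashing scheme and version labels are my own illustration, not any particular load balancer's behaviour):

```python
import hashlib

# Toy canary router: a configurable share of users is sent to the new
# version, the rest stay on the stable one.
def route(user_id: str, canary_percent: int) -> str:
    """Deterministically bucket a user into canary (v2) or stable (v1)."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"

# Hashing the user id means the same user always lands in the same bucket,
# so their experience stays consistent while you gradually raise the
# canary share (e.g. 1% -> 10% -> 50% -> 100%).
print(route("user-42", canary_percent=10))
```

If the canary's error rates look bad, rollback is just setting the percentage back to zero.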
Blue Green Deployments
Assumes you are keeping two similar deployments active. Testing happens on the blue deployment - where the new version lives. The proper traffic (the consumers / clients of the service) always goes to the green deployment. This approach may seem similar to the usual test -> production environment separation, but to make use of the strategy you have to route the traffic to the right instance. As in: only once the blue environment passes the acceptance tests is the traffic routed there.
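That routing switch is the whole trick: live traffic follows a single pointer, and the pointer flips only after the idle environment passes its tests. A toy sketch (the environment contents and the stand-in acceptance test are made up):

```python
# Toy blue-green switch: two environments exist side by side; production
# traffic follows the `live` pointer.
environments = {"green": "v1", "blue": "v2"}
live = "green"  # all production traffic currently hits green

def acceptance_tests_pass(env: str) -> bool:
    # Stand-in for a real acceptance test suite run against `env`.
    return environments[env] == "v2"

def handle_request() -> str:
    return environments[live]

if acceptance_tests_pass("blue"):
    live = "blue"  # the cutover is a single routing change

print(handle_request())  # requests now hit v2
# Rolling back is equally cheap: flip the pointer back with live = "green".
```

In practice the pointer is a load balancer target, a DNS record or a router config, but the shape of the operation is the same: test in isolation, then switch atomically.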
The difference between rolling updates & canary deployments is still quite elusive to me, and I will appreciate anyone who wants to set my thinking straight!
Heavily inspired by:
https://martinfowler.com/bliki/BlueGreenDeployment.html - this one sent me on the rabbit hunt
and the ghost impersonating servers of: