Kubernetes: A case study of how Spotify solved its challenges using K8s
Ever since the latest technological revolution, industries have adopted whatever tactics help them grow their services and their reliability in the market. Today we are going to talk about one well-known technology, “Kubernetes,” which helped Spotify, an audio streaming platform, build a reliable service and a strong community.
What is Kubernetes?
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Let’s get into the case study: “How Kubernetes helped Spotify identify and solve various challenges!”
Challenges Spotify faced before adopting K8s:
Jai Chakrabarti, Director of Engineering, Infrastructure and Operations, set out a vision: “To empower creators and enable a really immersive listening experience for all of the consumers that we have today — and hopefully the consumers we’ll have in the future.”
An early adopter of container technology (“Docker”) and microservices, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios.
Jai Chakrabarti added, “By late 2017, it became clear that having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community.”
Spotify saw the amazing community that had grown up around Kubernetes and wanted to be part of it, and Kubernetes was more feature-rich than Helios. Plus, Spotify wanted to benefit from added velocity and reduced cost, and to align with the rest of the industry on best practices and tools. At the same time, the team wanted to contribute its expertise and influence to the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because Kubernetes fit very nicely as a complement, and later a replacement, to Helios.
What impact did Spotify observe?
The biggest service currently running on Kubernetes handles about 10 million requests per second in aggregate and benefits greatly from autoscaling. Previously, teams would have to wait an hour to create a new service and get an operational host to run it in production; with Kubernetes, they can do that in seconds or minutes. In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
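To see why bin-packing lifts CPU utilization, here is a toy simulation, not Spotify’s scheduler or the real kube-scheduler algorithm, of packing many small CPU requests onto shared 4-core nodes instead of giving each service its own VM. All the numbers are made up for illustration:

```python
def first_fit_decreasing(requests, node_capacity):
    """Pack CPU requests (in cores) onto nodes using first-fit decreasing,
    a classic bin-packing heuristic. Returns the remaining free capacity
    on each node that was opened."""
    nodes = []  # free capacity remaining on each node
    for req in sorted(requests, reverse=True):
        for i, free in enumerate(nodes):
            if req <= free:
                nodes[i] -= req  # workload fits on an existing node
                break
        else:
            nodes.append(node_capacity - req)  # open a new node
    return nodes

# Ten hypothetical microservices, each requesting a fraction of a 4-core node.
requests = [0.5, 1.0, 0.25, 2.0, 0.5, 1.5, 0.75, 0.5, 1.0, 0.25]
nodes = first_fit_decreasing(requests, node_capacity=4.0)

used = sum(requests)                      # 8.25 cores actually requested
shared = used / (4.0 * len(nodes))        # utilization on shared nodes
dedicated = used / (4.0 * len(requests))  # utilization with one VM each

print(f"nodes needed: {len(nodes)}")      # → 3
print(f"shared utilization:    {shared:.0%}")     # → 69%
print(f"dedicated utilization: {dedicated:.0%}")  # → 21%
```

With these made-up numbers, packing the ten workloads onto three shared nodes yields roughly a threefold utilization improvement over one VM per service, which is the same mechanism behind the two- to threefold gain cited above.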
Experiences shared by Spotify’s key engineers and directors:
During the migration, services run on both systems, so the team doesn’t have to put all of its eggs in one basket until it can validate Kubernetes under a variety of load and stress circumstances.
Spotify’s experiences so far with Kubernetes bear this out. The community has been extremely helpful in getting them through all the technology much faster and much more easily. It has been surprisingly easy to get in touch with anybody they wanted to for expertise on anything they were working with, and that has helped them validate everything they’re doing.
Here I conclude. I hope you liked it!