Spooky Halloween stories told by DevOps engineers

To celebrate this day, we bring you the darkest new nightmares of every DevOps engineer 😱 - don't they look 'spook-tastic'? 🎃 😎 👻


Welcome to DevOps Nightmare

Once upon a time, a system administrator decided to become a DevOps engineer. He diligently engaged in self-education and played in a sandbox with Docker and Python, trying to pick up the basic knowledge needed to land his dream job.

The day came, and he posted his resume on all the HR platforms. Oh, miracle! One company noticed him and invited him for an interview. The system administrator dressed up nicely, repeated everything he had learned, smiled, and went to the interview.

On the way to the appointed place, he imagined himself already working as a DevOps engineer: using IaC daily to the fullest and running production in Kubernetes, because automation is everything. But much to his dismay, this was not a fairytale.

So there he sits in a meeting room, talking to the interviewer and answering the following questions:

I: Do you know Docker?

SA: I have theoretical knowledge and have done "this" and "that". I want to develop further in this direction, in a professional way.

I: No one will teach you here. After work, you can stay and study for another 3-4 hours.

SA: Oooooh. - answered the system admin.

I: Your duties will generally include office support and printer cartridge refilling.

SA: Are these really DevOps responsibilities?

I: Well, yes. All DevOps engineers do things like this.

After 20 minutes of this interview, the system administrator, so full of hope earlier, ran away from that place faster than he had ever run in his life.

[funny illustration]

Take a break for Lunch during Deployment if You Dare, Foolish Mortals

It happened once, on a typical sunny day. While working on one of the projects, the DevOps team decided to create basic umbrella Helm charts that would deploy all the necessary sub-charts. Two charts were produced - System and Monitoring. The first chart deployed significant applications such as Istio, External Secrets, and various CRDs, and created a namespace for the Monitoring chart. The second chart deployed the ELK stack.
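
As an illustration of that kind of setup (the chart names, versions, and repositories below are invented for the example, not taken from the project), an umbrella chart simply declares its sub-charts as dependencies in Chart.yaml:

```yaml
# Chart.yaml of a hypothetical "system" umbrella chart - names, versions,
# and repositories are illustrative only
apiVersion: v2
name: system
version: 0.1.0
dependencies:
  - name: istio-base
    version: "1.17.2"
    repository: https://istio-release.storage.googleapis.com/charts
  - name: external-secrets
    version: "0.9.5"
    repository: https://charts.external-secrets.io
```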

Since it was an ordinary, unremarkable day, everything was going well. The charts were rolled out to all clusters and, from time to time, were updated with new functions and variables.

At a certain point, during the magical lunchtime, one of the guys noticed that the name of the variable for creating the namespace was misspelled. He rolled out a quick fix before his lunch had time to cool but, in his rush to get back to the shared office lunch, forgot to rename the variable in the values.yaml files.

Here is what happened over the next few minutes, while he enjoyed the divine taste of lunch and an interesting conversation with colleagues. The subsequent helm upgrade saw that creteNamespace != createNamespace, which meant createNamespace = false, and completely removed the namespace for the Monitoring chart, where Elasticsearch was quietly spinning.
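
To make the failure mode concrete (the key and namespace names here are a reconstruction for illustration, not the project's actual files): the namespace template was gated on createNamespace, but the quick fix left the old misspelled key in values.yaml, so the flag silently evaluated to false and Helm dropped the namespace on the next upgrade.

```yaml
# values.yaml after the "quick fix" - still carries the old, misspelled key
creteNamespace: true                # the template no longer reads this key

# templates/namespace.yaml
{{- if .Values.createNamespace }}   # undefined -> falsy -> resource removed on upgrade
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
{{- end }}
```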

His hair stood on end when the DevOps engineer returned to his workplace. He noticed the resources deployed from the Monitoring chart beginning to terminate and disappear. In a hurry, he started to correct the situation.

In the end, he fixed everything in a matter of minutes, but never again did he roll out a deployment and then wander off to lunch…

Knock-Knock

Once, on a project that actively used AWS SSO, the following conversation took place between the client and the DevOps engineer:

D: Knock-knock.

C: Who's there?

D: EKS

C: EKS who?

D: Connections to all EKS clusters rely on hardcoded SSO roles in the aws-auth ConfigMap.

C: Didn't hear you. Do we need to restructure SSO and recreate the SSO roles? I will do it perfectly!

After this re-creation, no one ever saw correctly created roles again - only changed suffixes in the names. And access… access to all the clusters was gone, because the hardcoded role ARNs in aws-auth no longer matched anything, roughly like the sketch below. Nobody saw that DevOps engineer after that…
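
For readers who haven't hit this one yet: AWS SSO provisions IAM roles whose names end in a generated suffix, so hardcoding the full ARN in aws-auth breaks the moment the permission set is recreated. A rough sketch of the kind of mapping involved (the account ID, role name, and suffix are all made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # hardcoded SSO role - the "abcd1234ef567890" suffix is generated by AWS SSO
    # and changes if the permission set is recreated, silently breaking access
    - rolearn: arn:aws:iam::111122223333:role/AWSReservedSSO_AdministratorAccess_abcd1234ef567890
      username: sso-admin:{{SessionName}}
      groups:
        - system:masters
```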

Maaanual changes 

Once upon a time, there lived a trainee DevOps engineer. He was very fond of using IaC to simplify his work. Once, while working on a project, he noticed that he had to write a lot of environment variables to deploy a new environment.

It took a little time, but he came up with an idea: since he deploys the infrastructure with Terraform, it would be good to create all the secrets in the AWS Parameter Store with Terraform as well, then pull them into the cluster with External Secrets and substitute the resulting Kubernetes Secrets into the pods.
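
A minimal sketch of that flow on the cluster side, using the External Secrets Operator (all names, the store reference, and the Parameter Store path are hypothetical, assumed for the example):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-env
  namespace: demo
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-parameter-store     # a SecretStore configured for AWS SSM Parameter Store
    kind: SecretStore
  target:
    name: app-env                 # the Kubernetes Secret that pods will consume
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: /demo/app/DATABASE_URL   # parameter created by Terraform
```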

Everything was going well, but there were too many of these secrets. Most of them were not created by Terraform but were handed over in plain text. The trainee DevOps engineer thought that keeping secrets in plain text and pushing them to GitHub was terrible.

He explained the problem to the client. The client listened and came up with an idea that didn't work well: "Come on, first we create all the secrets with the placeholder text <FILL IN THE AWS CONSOLE>, then go to the AWS Parameter Store and manually write the values into the secrets!"

The trainee DevOps engineer tried to convince everyone that the idea was terrible to implement: after all, every Terraform run would want to overwrite the values of the secrets that had been updated by hand. But the trainee remained sad, because no one supported his decision.

For half a year, the trainee DevOps engineer suffered whenever these "manual" secrets came up. Until one day the client wanted to deploy a new environment on his own. He came to terms with the fact that writing down secrets by hand takes a lot of time and, without hesitation, said: "Trainee DevOps, something is not right here. Somehow the code is not optimized."

But even since then, nothing has changed, and our trainee DevOps engineer is still sad, waiting for the boss to remember his suggestion.

The Haunted Cluster

A long time ago, on a distant project, there was a little DevOps engineer named Billy. The project had many EKS clusters running version 1.21 and one cluster running version 1.19. A specially designed Helm chart controlled the rollout of all the applications.

Once, to increase stability, the team decided to add a "PodDisruptionBudget" resource to the Helm chart. Billy was naive and inexperienced and decided to roll out the chart to all EKS clusters. Everything went well until he got to the cluster running 1.19.

Instead of going to the documentation, reading about this resource, and turning it off via a variable, Billy wrote a message: "I can't deploy the Helm chart to a cluster - it complains about PodDisruptionBudget." A senior DevOps engineer answered his colleague: "What version is the cluster? Perhaps it is an old one?"
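
A likely culprit here (assuming the chart used the newer API group) is the PodDisruptionBudget apiVersion: policy/v1 is only served from Kubernetes 1.21 onward, while a 1.19 cluster still expects policy/v1beta1. A minimal sketch of a template that would pass on 1.21 and fail on 1.19, gated by a hypothetical createPdb value:

```yaml
# templates/pdb.yaml - a hedged sketch; the "createPdb" flag and app name are
# hypothetical, not taken from the original chart
{{- if .Values.createPdb }}
apiVersion: policy/v1          # served only on Kubernetes >= 1.21
# apiVersion: policy/v1beta1   # what a 1.19 cluster would still expect
kind: PodDisruptionBudget
metadata:
  name: my-app
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
{{- end }}
```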

Billy opened AWS EKS. When he saw that the production cluster was on 1.19, he simply clicked "Upgrade Cluster" and then wrote: "Is it okay that I started updating the cluster?" Nobody touched that cluster after that. It's still working to this day, but the stories say it is haunted. Booooo!
