Jade ThirdEye | Mon, Jun 21, 2021 | 12 min read

Insights from a DevOps Engineer

We sit down with DevOps Engineer Dan Rosenbrock and ask him about his experience in the field, what's really happening, and how organisations are improving their DevOps maturity. For the full interview, listen to Episode 7 of our podcast Beta & Beyond.

How did you come to be a DevOps Engineer?

Dan: I think I've had a bit of a unique start to my IT career. I started as an automated tester and was in that role for a couple of years, which then led into a development role. That's where I found my love for DevOps. Having those previous roles helped me accelerate my DevOps career, because I was able to understand what the testers wanted, what the developers wanted, and what the managers wanted, and how to aid the workflows they already had in place.

I've worked for some large enterprise customers building software in both financial crime and insurance. As you can imagine, both of those carry very high regulatory pressure. That's where DevSecOps really comes into it.

After working with a number of engineering teams, what's your overall take on the state of DevOps?

Dan: From what I've seen first-hand, improving DevOps practices is seen as a very costly expense and treated more as technical debt. It's very easy for developers to hide away all the manual processes they're having to do in order to deliver these "elegant" DevOps solutions. I've seen bad practices take out entire teams for up to two weeks. And the thing that shocked me most about that particular scenario is that it was accepted, almost expected to happen.

In that particular scenario, the complexity was in the DevOps structure and how they had implemented it. They had their DevOps scripts spread across multiple platforms: a GitHub connection, a TeamCity connection with a Git repository sitting in there, an Octopus Deploy connection, and their on-premises host servers as well. So they had this really big DevOps pipeline that incorporated a lot of different pieces.

And while it got the job done, it was a very inefficient way of doing it. What that led to was build times of up to four hours, and it was very typical to see builds go up to six hours if they had front-end tests running as well. People are great at automating their processes, but DevOps is a very iterative process, so it's understandable in some ways.

We've heard about the promise of using DevOps to make multiple releases a day. Is this what you're currently seeing in the market? Or are we still on a journey towards that?

Dan: This is an interesting question, because I've seen it happen both ways. More often than not, an organisation that's focusing on multiple releases per day has already had its foot in the DevOps game for at least a year or two. That's something I see very commonly. What I've also seen, but not as often, is organisations focusing on fast releases from the start. The reason I think you don't see this so often is that, one, you've got no metrics to measure against, and two, usually by the time these places are implementing their DevOps practices, they haven't fully thought out the entire infrastructure of the solution they're trying to build.

So organisations that have had their foot in the game, I definitely do see moving towards multiple releases per day. And I've witnessed organisations make that transition as well.

Thinking about DevOps today compared to where it was this time last year, what advances have there been from a process and/or technology perspective?

Dan: I think the biggest thing I've seen is a consolidation of tools to enable a better DevOps experience. When I say that, I'm talking about tools like Terraform or Pulumi that help teams handle their infrastructure management. And I'm also seeing a lot of organisations move towards cloud offerings, or hybrid cloud offerings as well.
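
To make the infrastructure-as-code idea concrete, here is a minimal sketch using Pulumi's Python SDK. The resource name and tags are illustrative assumptions, not details from Dan's projects:

```python
import pulumi
import pulumi_aws as aws

# Declare a storage bucket as code; Pulumi tracks its state and
# reconciles the cloud resource on every `pulumi up`.
bucket = aws.s3.Bucket(
    "build-artifacts",  # logical name (hypothetical)
    tags={"team": "devops", "managed-by": "pulumi"},
)

# Export the generated bucket name so other stacks or scripts can use it.
pulumi.export("bucket_name", bucket.id)
```

The consolidation Dan describes comes from the same declarative workflow covering provisioning, change preview, and teardown, instead of each living in its own hand-rolled script.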

You talked about Terraform as a tool for DevOps; we've also seen the likes of Azure DevOps and AWS Proton enter the market. How ready are these types of products for enterprises to just turn on and get working? Is it simple plug and play?

Dan: If you stay within their ecosystem, it's very plug and play. But you'll have issues if you want to go cross-platform, because these aren't agnostic. I've just finished working with a customer who wanted us to build them a DevOps portal. The big difference here is that this portal is agnostic to the DevOps platform it uses. What I mean by that is that we create an interface over the top of Azure DevOps, Bitbucket Pipelines, or GitHub Actions. We build an interface over that, allow developers to come in and set up all their DevOps needs, their resources, whatever it may be, and then we let them select the provider to do it through.

The reason we've done this, and you've just touched on it, is the plug-and-play architecture. What that means is that if a new DevOps provider comes out and they've developed an SDK, you can just grab that SDK, incorporate it into the solution we've built for the customer, and now their users have access to this new DevOps platform. We want to give them the illusion that they're using the same platform, but also the freedom to use the tools that let them do things best.
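
The plug-in pattern Dan describes might look something like this in code. This is a hedged sketch only, with all class and method names invented for illustration rather than taken from the actual portal:

```python
from abc import ABC, abstractmethod


class PipelineProvider(ABC):
    """Common interface the portal codes against; each DevOps
    platform SDK is wrapped in one of these adapters."""

    @abstractmethod
    def create_pipeline(self, repo_url: str, config: dict) -> str:
        """Create a pipeline and return its identifier."""


class AzureDevOpsProvider(PipelineProvider):
    def create_pipeline(self, repo_url: str, config: dict) -> str:
        # Would call the Azure DevOps SDK here.
        return f"azdo-pipeline-for-{repo_url}"


class GitHubActionsProvider(PipelineProvider):
    def create_pipeline(self, repo_url: str, config: dict) -> str:
        # Would call the GitHub API to set up an Actions workflow here.
        return f"gha-workflow-for-{repo_url}"


# The portal only ever sees the interface; supporting a new provider
# is a registry entry plus an adapter, not a rewrite.
PROVIDERS: dict[str, PipelineProvider] = {
    "azure-devops": AzureDevOpsProvider(),
    "github-actions": GitHubActionsProvider(),
}


def provision(provider_name: str, repo_url: str) -> str:
    return PROVIDERS[provider_name].create_pipeline(repo_url, config={})
```

This is what keeps the portal "agnostic": developers interact with one interface, and the provider they selected is resolved behind it.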

What safeguards can teams put in place to ensure quality is maintained with their DevOps methodologies?

Dan: Yeah, so at a basic level there are very easy safeguards we can put in place. These are things like pull requests, making sure we have the right number of approvers, making sure we have the right approvals and the right people, that kind of stuff. But we can take it a lot further than that. On another project I've been working on, we use pipelines to enforce our safeguards and make sure quality is maintained. It sounds a bit weird to start with, but what it allowed us to do is build a set of what we call master pipelines. These master pipelines do things like running dependency checks, security scans, all those common security safeguards you want to have around your code. So we have a pipeline for that.
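
Before getting into the master-pipeline mechanics below, it's worth noting that even the basic pull-request safeguard can be codified. Here is a hedged sketch using GitHub's branch protection REST API; the owner, repository, and required check name are placeholder assumptions:

```python
import os

import requests

# Hypothetical repository; in practice these would come from config.
OWNER, REPO, BRANCH = "example-org", "example-repo", "main"

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # Require pull requests with at least two approving reviews.
        "required_pull_request_reviews": {"required_approving_review_count": 2},
        # Require status checks (e.g. the build) to pass before merging.
        "required_status_checks": {"strict": True, "contexts": ["build"]},
        "enforce_admins": True,
        "restrictions": None,
    },
)
resp.raise_for_status()
```

Codifying the rule means "the right number of approvers" is applied uniformly rather than clicked into each repository by hand.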

When a new project comes along, its pipeline files make a reference to our master pipeline files. What that means is that any project running under our organisation is forced to run these security pipelines, or master pipelines, before it can run any of its application-specific stuff. And yeah, this is how we keep very tight control. We can do things like control what steps an application is using within its YAML pipelines, and we can control what branches go where. So there's a whole lot of control we can put in there.
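
Conceptually, the reference mechanism is template composition: mandatory steps always run ahead of whatever a project declares. The sketch below models that in plain Python rather than pipeline YAML, and every step name is invented for illustration:

```python
# Steps every project inherits, pinned to a version so breaking
# changes can be rolled out deliberately (hypothetical names).
MASTER_PIPELINE_V1 = ["dependency-check", "security-scan", "licence-audit"]


def build_pipeline(project_steps: list[str],
                   master: list[str] = MASTER_PIPELINE_V1) -> list[str]:
    """Compose the pipeline a project actually runs: the mandatory
    master steps execute before anything project-specific."""
    return master + project_steps


# A project only declares its own stages; the safeguards come for free.
pipeline = build_pipeline(["compile", "unit-tests", "deploy-staging"])
print(pipeline)
# ['dependency-check', 'security-scan', 'licence-audit',
#  'compile', 'unit-tests', 'deploy-staging']
```

Because projects reference the master definition instead of copying it, tightening a security check in one place tightens it everywhere.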

And the good thing about it is that, as far as the users in the organisation are concerned, all of this security goodness comes for free, because as soon as they go to create a new project, their pipelines are created for them by us, and we make sure they have to use our master pipelines. We even go as far as caching our pipelines, so we can ensure that no one makes any changes. And we version our pipelines as well, so if we do make any breaking changes, we can mitigate those with the necessary teams.

To hear the full interview with Dan Rosenbrock, including his recommendations on documentation and the structure of DevOps roles, listen to Episode 7 of our podcast Beta & Beyond.
