MOD: Masters of Data

Bringing the human to the data

David Hafley: Deep Dive - Security & DevOps

Director of Engineering, Contrast Security

June 5, 2018

23:42 mins

Having security wherever your application may live or go is critical.

In this Deep Dive - How does security work in a DevOps World? How do you incorporate security into a DevOps process? How do you do DevSecOps?

Welcome to the Masters of Data podcast where we talk to the people on the front lines of the data revolution about how data affects our businesses and our lives.

I’m happy to have David Hafley with me from Contrast Security.

The DevOps Waiting Game Ends

This is the way it used to be with big data: you’d collect a lot of data in some big store somewhere, process it, and then maybe days later get answers to your questions.

There seems to be a transition now toward wanting to have the data and act on it very, very quickly: for example, making decisions based on security vulnerabilities, or on things that expose certain vulnerabilities. You have to be able to act on that very quickly, and data seems to be a big part of that.

There is a massive amount of data out there. There is a lot of really interesting work around how to find insight in massive data sets. The faster you can get feedback to the customer, the user, and the developer, the better off they are.


There is no waiting period, no guessing game, no dependency on an unknown third party. It is setting very clear expectations around what you should do with the data that you have.

If you introduce some new code or a new library in your application, then you are going to want to know whether that library has known CVEs in it as soon as possible. You don’t want to get to the end of your development cycle and, two weeks later, come back to refactor all of this because you learned the library is out of date. Getting that feedback fast is critical in order to have a very smooth delivery pipeline in a DevOps practice.

Getting feedback fast is critical in order to have a smooth delivery pipeline in a DevOps practice.
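As an illustration of that fast feedback, a CI step might flag dependencies with known CVEs before the code ever reaches review. Here is a minimal Python sketch; the advisory entries and library names are hypothetical stand-ins for a real vulnerability feed such as NVD or OSV:

```python
# Toy CVE gate: flag pinned dependencies that appear in a known-vulnerable list.
# The advisory data below is hypothetical; a real pipeline would query an
# up-to-date feed (e.g. NVD or OSV) instead of a hard-coded dict.

KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "CVE-2018-0001",  # hypothetical entry
    ("oldparser", "0.9.1"): "CVE-2017-0042",   # hypothetical entry
}

def audit(requirements):
    """Return (name, version, cve) for each dependency with a known CVE."""
    findings = []
    for line in requirements:
        name, _, version = line.partition("==")
        cve = KNOWN_VULNERABLE.get((name.strip(), version.strip()))
        if cve:
            findings.append((name.strip(), version.strip(), cve))
    return findings

deps = ["examplelib==1.2.0", "requests==2.18.4"]
print(audit(deps))  # → [('examplelib', '1.2.0', 'CVE-2018-0001')]
```

Run as a pipeline step, a non-empty result would fail the build immediately, rather than surfacing two weeks later as a refactoring project.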

What Slows Down DevOps

As you’ve been working with customers, you’ve seen people try to adopt this, and that shows where people struggle. All of this sounds really great on paper, and, obviously, if it were easy everybody would be doing it. What holds companies and engineering teams back from being able to implement something like this?

From what I’ve seen it’s really all over the place. A lot of it is the maintenance of legacy applications. We talked earlier about how nowadays folks are developing small sets of micro-services that interact with one another and they have a defined set of APIs.

A microservice like that may have only a handful of APIs. It’s new development; everything is modern. Everything is designed from the start to be testable and modular, with components “baked in.”

The difficulty is when customers have very large monolithic deployments that have been around for decades. It’s really hard to unravel that. How do you get into a DevOps-friendly process that gets feedback to you quickly when it takes 15 minutes to start up your application for a test, or it requires a custom enterprise license to set up a test server?


There are old legacy decisions that were made 20-25 years ago that are difficult to shoe-horn into this process. We see some folks really embracing this and rewriting it, empowering Dev teams to replace components of large monoliths. We also work with teams to automate and replace their testing infrastructure.

If some of your team is in AWS and some of your team is in a co-location facility, how do you get the service endpoints that you need to test your application effectively and quickly, and get that feedback with confidence?

Keeping Up with the Times

It reminds me of Conway’s Law. It’s the organizational communication structure reflected in the architecture and vice versa. It definitely sounds like that might be part of it. If you have a company or a group that is still running a monolithic application, there is a good chance that they may not be culturally and organizationally ready for something like this either, right?

When we see companies like that embrace it and really work hard, the feedback cycles are still slower. The folks most successful with this feedback model are those with newer services that have already been integrating rapidly. But if you are trying to do this as one big initiative, it is really challenging.

You’re right about Conway’s Law. These organizational structures that existed 15 to 20 years ago are completely different and don’t really fit in well with the way that folks want to develop and deploy software today.


I think sometimes those cultural issues are the hardest ones to deal with because you can go out to some cloud platform or buy some new software platform, but if you don’t change your culture to be able to adopt this, it won’t work.

I think this is absolutely fascinating. It’s very in line with what some of the other companies I’ve talked to have said about this type of thing. When these engineering teams are now being held responsible for the code, basically cradle to grave, they are struggling with disciplines that used to be handled by siloed teams.

I actually saw “the speed of DevOps” on your website. I like that. Teams are trying to run at 100 miles an hour and they don’t have the toolsets to help them do that. A lot of the investment is going to “How can I get the data, and the toolsets to analyze that data, to help me make decisions faster?” Does that sound right to you?

Yes, I think so.

DevOps Trends in the Future

What kind of trends do you see happening in the next few years based on self-protecting software and this kind of built-in embedded security? Are there any interesting trends you’re seeing that you want to talk about?

Having security with your application, wherever your application may live or go, is critical to DevOps. It lets you make decisions, and your team itself is empowered to deploy confidently and quickly, knowing that security is with you and with your process.

You’re not relying on another team. There is not another dependency in your organization that you are trusting to run whatever application firewall is in place. You are able to verify that yourself, so it is very empowering to have that transparency into what’s going on with your application.

Having security with your application and wherever your application may live is critical to DevOps.

One of the other advantages of being within the application is that if there is an attack, we’re able to see exactly where the attack could have manifested itself, and then how to fix it. Going forward, we’re going to see more small components shipped with application servers.

We used to see a lot of practice around hardening and locking down operating systems, but as everything shifts to the application, you see things like AWS ECS Fargate, where now you can just run containers on a managed cloud. That’s just one example. With Docker, or with managed container services like ECS Fargate or Google’s, everything becomes more about the application, and the application increasingly becomes the attack vector. So, if you have your protection and your assessment with the application, it’s pretty intuitive and makes a lot of sense.

DevOps and Security

It’s interesting you bring up Fargate and the whole containerization part of this. Do you think that makes this self-protecting, embedded security easier or harder with containers?

I think it makes it a lot easier because it really narrows the scope of where an attacker would go to exploit your application. It does put a degree of pressure on cloud providers like Amazon and Google, but they have hundreds of thousands of engineers there to help with that problem.

As a consumer and user of their service, I’m able to focus on my application and the libraries in that application. I don’t have to worry about additional layers and vectors of attack that may exist in the operating system, as we see everything collapse towards the application delivery.


I’m definitely seeing that containerization is compatible with the whole microservices idea: embedding very small components that are a lot easier to build with higher quality, and also easier to protect.

I think it really highlights that security is another form of quality. A huge component of DevOps is that your tests are automated; they are baked into your pipeline. It goes without saying that you should have security tests as well. So having DevSecOps is really more of a highlight than a focus.
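One concrete way to treat security as another form of quality is to put security tests right next to functional tests in the automated suite. The sketch below assumes a hypothetical input validator (`safe_user_id` is illustrative, not from the conversation) and shows the pattern of asserting that an injection-style payload is rejected on every commit:

```python
import re

# Hypothetical example: a security check written as an ordinary automated test.
# The validator only accepts short numeric ids, so injection-style strings fail.

def safe_user_id(raw):
    """Return the id as an int, or raise ValueError on anything suspicious."""
    if not re.fullmatch(r"\d{1,10}", raw):
        raise ValueError("rejected untrusted input: %r" % raw)
    return int(raw)

# Security tests sit beside functional tests in the pipeline:
def test_accepts_plain_id():
    assert safe_user_id("42") == 42

def test_rejects_injection_payload():
    try:
        safe_user_id("1 OR 1=1; --")
    except ValueError:
        pass  # expected: the payload must be refused
    else:
        raise AssertionError("injection payload was accepted")

test_accepts_plain_id()
test_rejects_injection_payload()
```

Because these run with the rest of the suite, a regression that loosens the validation fails the build the same way any broken functional test would.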

Yes, absolutely. David, I really enjoyed having you on the podcast. This has been a great discussion, and maybe we’ll have you back again sometime to talk more about Contrast. I really appreciate your time.

The guy behind the mic

Ben Newton

Ben is a veteran of the IT Operations market, with a two decade career across large and small companies like Loudcloud, BladeLogic, Northrop Grumman, EDS, and BMC. Ben got to do DevOps before DevOps was cool, working with government agencies and major commercial brands to be more agile and move faster. More recently, Ben spent 5 years in product management at Sumo Logic, and is now running product marketing for Operations Analytics at Sumo Logic. His latest project, Masters of Data, has let him combine his love of podcasts and music with his love of good conversations.
