
Episode 216: Signed, Sealed and Delivered: The Future of Supply Chain Security

In this episode of the podcast (#216), sponsored by DigiCert, we talk with Brian Trzupek, DigiCert’s Senior Vice President of Product, about the growing urgency of securing software supply chains, and how digital code signing can help prevent compromises like the recent hack of the firm SolarWinds.


We spend a lot of time talking about software supply chain security these days. But what does that mean? At the 10,000-foot level it means “don’t be the next SolarWinds” – don’t let a nation-state actor infiltrate your build process and insert a backdoor that gets distributed to thousands of customers – including technology firms and three-letter government agencies.

OK. Sure. But speaking practically, what are we talking about when we talk about securing the software supply chain? Well, for one thing: we’re talking about securing the software code itself. We’re talking about taking steps to ensure that what is written by our developers is actually what goes into a build and then gets distributed to users.

Digital code signing – using digital certificates to sign submitted code – is one way to do that. And use of code signing is on the rise. But is that alone enough? In this episode of the podcast, we’re joined by Brian Trzupek, the SVP of Product at DigiCert, to talk about the growing role of digital code signing in preventing supply chain compromises and providing an audit trail for developed code.

Brian is the author of a recent Executive Insight on Security Ledger in which he notes that code signing certificates are a highly effective way to ensure that software is not compromised – but only as effective as the strategy and best practices that support them. When poorly implemented, Brian notes, code signing loses its effectiveness in mitigating risk for software publishers and users.
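For readers who want to picture the mechanics, here is a minimal sketch of what signing and verifying a release artifact looks like, using Python’s cryptography package and a locally generated Ed25519 key. Real code signing relies on a certificate issued by a trusted CA and managed signing infrastructure; the key, artifact bytes and messages below are purely illustrative assumptions.

```python
# Minimal sketch of sign-then-verify. A locally generated Ed25519 key stands
# in for a CA-issued code signing certificate; everything here is illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # publisher's signing key
public_key = private_key.public_key()        # distributed to verifiers

artifact = b"bytes of the built release package"   # stand-in for a real build output
signature = private_key.sign(artifact)              # produced at build/release time

# On the customer / deployment side: verify before installing or running.
try:
    public_key.verify(signature, artifact)
    print("Signature valid: artifact matches what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: artifact was altered after signing.")
```

The point of the exercise: any change to the artifact after signing causes verification to fail, which is what gives downstream users confidence that what they install is what the publisher built.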

In this conversation, we talk about the changes to tooling, process and staff that DevOps organizations need to embrace to shore up the security of their software supply chain.

“It boils down to: do you have something in place to ensure code quality, fix vulnerabilities and make sure that code isn’t incurring tech debt?” Brian says. Ensuring those things involves process, new products and tools, as well as the right mix of staff and talent to assess new code for security issues.

One idea that is gaining currency within DevOps organizations is “quorum-based deployment,” in which multiple staff members review and sign off on important code changes before they are deployed. Check out our full conversation using the player (above) or download the MP3 using the button below.


As always, you can check out our full conversation in our latest Security Ledger podcast at Blubrry. You can also listen to it on iTunes and check us out on SoundCloud, Stitcher, Radio Public and more. Also: if you enjoy this podcast, consider signing up to receive it in your email. Just point your web browser to securityledger.com/subscribe to get notified whenever a new podcast is posted.


Episode 216 Transcript

[START OF RECORDING]

PAUL: This week’s Security Ledger podcast is sponsored by DigiCert. DigiCert is the world’s premier high assurance digital certificate provider, simplifying SSL, TLS and PKI and providing identity authentication and encryption solutions for the Web and the Internet of Things. Check them out at DigiCert.com.

PAUL: Hello and welcome to The Security Ledger podcast, with Paul Roberts, Editor in Chief at The Security Ledger. In this week’s episode of the podcast, #216:

BRIAN: Signing is going to be an event that says, yeah, we’re attesting to all of those processes and quality to say this thing can actually be deployed, and it should go out on the network. But then on the back end of it – people don’t usually think of this – if somebody breaches that deployment and goes and changes a config file or something like that, you’re breaking that signature. And now you can be monitoring for the signature of that thing that is deployed to say, only allow it to run if it is in the state that we signed it at.

PAUL: We spend a lot of time talking about software supply chain security these days. But what does it mean? At the 10,000-foot level, I guess it means don’t be the next SolarWinds. That is, don’t let a nation-state actor infiltrate your software build process and insert a backdoor into your product that then gets distributed to thousands of your customers. But practically, what are we talking about when we talk about securing the software supply chain? Increasingly, we’re talking about securing the software code itself, making sure that what is written by our developers is actually what goes into a software build and then gets distributed to customers. Digital code signing – the use of digital certificates to sign submitted code – is one way to do that. Use of code signing is on the rise. But is digital code signing enough? In this episode of the podcast, we’re joined by Brian Trzupek, the Senior Vice President of Product at DigiCert, to talk about the growing use of digital code signing within development organizations. Brian and I talk about the changes to tooling, process and staff that DevOps organizations need to embrace to make code signing work and shore up the security of their software supply chain.

BRIAN: I’m Brian Trzupek, Senior Vice President of Product at DigiCert.

PAUL: Brian, welcome to The Security Ledger podcast.

BRIAN: Thanks. First-time caller, longtime listener, right?

PAUL: We always love hearing that. There’s been so much that’s happened since the beginning of the year from a cybersecurity standpoint, and a lot of it raises really important questions. Obviously, there was the SolarWinds hack at the end of 2020, and then subsequently there was the mass hack of Microsoft Exchange. And then we had, of course, the Colonial Pipeline ransomware attack, so the cybersecurity stories have been coming hot and heavy. And in the midst of all that, President Biden issued a new executive order on cybersecurity for the federal government and federal government contractors. It’s a really interesting document. And so I wanted to go over it with you, because I think this notion of secure identity is really central to it. One of the concepts that the federal government has now really embraced is this notion of zero trust architecture – some people call it zero trust networking – as kind of the approved architecture now for US federal agencies and the federal contractors who work with those agencies. And as you guys know, that’s a really big – that’s a lot of companies.

BRIAN: A good shift…

PAUL: And a lot of seats.

BRIAN: Yeah. I think the concept of zero trust is fascinating coming from a PKI background, where we inherently have zero trust of anybody. Zero trust is kind of coming together and saying, hey, user identity, machine identity are very important things for you to access this network. And no longer are you going to be accessing this network merely because you have a proper VPN connection. There needs to be some enforcement that that device, that user or whatever it is, is acceptable onto this network. It’s really kind of at the ethos of the stuff we’ve been building, and we’re seeing other vendors really embrace zero trust, from the networking hardware down to everything in that infrastructure, to enable that to actually work. It’s so good from a security posture, because you think about what we’ve been doing, and it’s almost crazy to think that we did allow people – way back, I’m thinking five years ago – to come into a VPN with a username and password, and then their device was trusted and they’re on the network and they’re privileged in their access to corporate resources and go do what they want. And how many hacks did we see that came around from something like that? So I do think it’s just a good model for network access, period.

PAUL: Or even just secure HTTP. I remember when there was this big thing about how websites should really be using HTTPS – we should stop just sending data in the clear over the web. And it took some folks showing really how exposed your data was for there to be that sort of, oh yeah, well, we should just have HTTPS by default. But that was a conversation that we had, and it wasn’t that long ago. It was maybe ten years ago.

BRIAN: Yeah. No, for sure. I mean, there’s probably a whole other podcast about my misguided youth and WiFi networks that we could do.

PAUL: Yeah, let’s talk about your misguided youth. That sounds like an interesting topic. I think one of the things going on right now, obviously, across industries is this phenomenon of digital transformation, which means a lot of different things. But I think the base definition is the embrace of cloud-based computing, cloud-based applications and data, rather than physical assets that you used to own. These are really kind of deperimeterized IT environments where you’ve got home users, remote users, all kind of collaborating, and definitely a lot more engagement and interaction with third parties, outsourced third-party platforms, third-party and open source code or proprietary code. From the perspective of a company like DigiCert, which is working with companies both to secure legacy IT investments and also some of these new services, what does that really mean? How is digital transformation kind of impacting the work that you and DigiCert do with your customers?

BRIAN: Yeah. I mean, I think that change in security perimeter is something I often talk about, because it is such a dramatic shift as customers or companies are shifting and starting to use cloud resources. And that could mean, hey, we’re just executing some application in that cloud. That could mean we’re just storing data in that cloud. That could mean that we’re deploying custom applications that are doing data storage and doing it across five different Amazon regions or whatever. There’s varying degrees of complexity as people start using things in the cloud, as one aspect of the kind of data center perimeter, and then the network access perimeter. Probably one of the best examples I’ll give you is in the 5G space. As we talk with the telecommunications companies, they’re living this right now in such an extreme way. They have these older, last-gen, 4G and below networks where the security model, the access model to those resources, was based on physical security: this is our building, our network, our switching center, whatever. And you need to be an employee to gain access to that building and that network and those servers, and the fiber is all ours. And so when you have that kind of security perimeter, you can make decisions about how you secure devices and authenticate things and provide identity that are not going to work when, all of a sudden, in the 5G spec, those same telcos can deploy assets and resources on multi-tenant clouds. They can deploy across AWS regions or other cloud regions or whoever. And they can load balance capacity and send it out while trying to adhere to things like lawful intercept and some of the legal guidelines and the stuff that they need to do to operate their business. That change in that perimeter for them is kind of a dramatic version of what I think a lot of companies see in a smaller way. Right? This switch from something I control wholly and I can – air quotes – trust, I can define what I think is trusted around this, to: well, now I’m using all these other kinds of resources, and people have access to it from a variety of different ways. And the way I need to identify people, and the way trust is going to work to access that, to control that, to deploy that, to manage that, is going to be very different.

PAUL: I should note as well that you wrote an opinion piece on this, Staying Secure Through the 5G Transition, for Security Ledger, I think back in October – a verbal hyperlink back to your story.

BRIAN: It’s the space I follow. Great example, because that’s exactly what you’re talking about: these companies, in a smaller way – well, it was a huge operation – but taking that perimeter and changing it, that is changing so much for these people, and there’s so many things they need to address. But I think one of the other really compelling things for companies that get into the cloud is some of the newer things you can do there. There’s machine learning, AI, built into these cloud environments, and they’re building these into the systems that they’re doing, whether it’s around insurance claims estimation or whatever their business is. They’re making use of resources they couldn’t possibly get otherwise. So it’s not just “I want to get rid of my data center.” There are obviously potential economic benefits to doing those things. But there’s also this lure of, man, there’s greater technology and capacity – things that I could never deploy or deal with, and I don’t have the expertise, but I can gain access to that to help accelerate my business. And I think that’s really attractive to people, too.

PAUL: So kind of help me connect the dots here, Brian. On the one hand, as you just said very eloquently, digital transformation is affecting every organization. It is a huge transition from traditional IT environments and physical assets to what we’re seeing now: increasingly cloud-based, virtual. And on the other hand, there is this imperative now for zero trust – for organizations to really get a grip on identities in particular, who their users are, and on monitoring and managing the IT assets and applications and data, and to really put some strict controls around those and also do some really robust monitoring and detection around that. Zero trust is really the notion that your environment is already compromised, or is certain to be, and therefore you have to operate as if a compromise is always happening in the background and still maintain security. So connect the dots for me between digital transformation and zero trust. What is the enabling technology to be able to both embrace digital transformation and also achieve this concept of zero trust, where you’re not just one huge breach away from being out of business?

BRIAN: Yeah. I mean, I think at the core of it is kind of limiting scope – that’s maybe how I would best define it. Right? You now have a network, and access to that network is going to be defined by pretty much explicit, identity-based access controls for whatever that is – that machine, that person, that device – to come onto that network. Ideally, you’re onboarding that into your environment in a way that you haven’t just blindly given out those identities. Right? So you’re not just saying, oh, hey, I think I know who you are, come onto my network and things should be fine. I think the other piece of it, kind of to your enabling technology question, is you’ve identified some device because you’ve also done something to attempt to control and, like you said, monitor and maintain that device, or that user’s device or whatever machine it may be. That can mean that now I’m ensuring that those endpoints are being managed somehow – UEM-type stuff or AV and malware and the different assemblies of the security layers that we put on these machines – so that it’s now proving it has some kind of minimally corporate-acceptable policy around data protection to then earn that identity to get onto the environment. I think that’s kind of really the whole view. It’s not just, hey, Paul, here’s a cert, come access our network, because I think you’re great. No, it’s: let’s make sure that those machines are actually somehow meeting some minimal requirement level. And to your point, you have the monitoring around them, too. So as we know, things can get breached and there’s unexpected stuff that can happen. It’s probably just as important to understand when that happens. Right? So you have the monitoring on the networks, you have the monitoring on the devices, you have the things that will tell you, hey, there’s likely a problem over here – that’s super important, too. Right? You don’t want to be finding out once you’ve had six terabytes of data exfiltrated from your network. That’s probably too late.
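A toy illustration of the “earn that identity” gate Brian describes, assuming a hypothetical posture report from an endpoint-management agent; the field names and thresholds are invented for the example, and a real zero trust stack (UEM, certificate issuance, network policy engines) is far richer than this.

```python
# Toy zero trust gate: only grant access if the device meets a minimal
# posture policy. All field names and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class DevicePosture:
    managed_by_uem: bool          # enrolled in endpoint management
    disk_encrypted: bool
    av_signatures_age_days: int   # how stale the AV/malware signatures are
    os_patch_level_ok: bool

def meets_policy(p: DevicePosture) -> bool:
    return (p.managed_by_uem
            and p.disk_encrypted
            and p.av_signatures_age_days <= 7
            and p.os_patch_level_ok)

def request_access(user: str, posture: DevicePosture) -> str:
    if not meets_policy(posture):
        return f"deny: {user}'s device does not meet minimum posture"
    # In a real deployment this would issue a short-lived cert or token.
    return f"allow: issue short-lived credential to {user}"

print(request_access("paul", DevicePosture(True, True, 3, True)))   # allowed
print(request_access("paul", DevicePosture(True, False, 30, True))) # denied
```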

PAUL: Or the FBI knocking on your door.

BRIAN: Right.

PAUL: Do you think kind of the shift left and the focus on agile development is kind of in tension with this concept of zero trust? Does building security in mean kind of slowing that process down a little bit? Not that you’re going back to waterfall development methodologies, but maybe just a little bit less rapid and a little bit more careful development that includes more attention to security and ownership and authorization.

BRIAN: Many, many, many years ago, I worked with a professional services firm where we were a custom software development shop, because it was too difficult for a lot of places to kind of say, hey, we’ve got this business problem and we need something custom because that doesn’t exist – so go build it. And we were an agile shop very early, on the edge of agile at that point, and doing these things. And what was interesting was, agile in my mind is a component for helping you stay more closely tied to the business requirements for software, to adjust quickly, rather than a fail-safe for ensuring that you’re delivering the exact software that somebody needs. And those are slightly different. But I think the reality of agile is that you are releasing frequently, you are getting stakeholder buy-in frequently, and you’re ensuring that you’re working towards the value that they need. And the idea is you don’t get too far off the track building some crazy stuff that nobody ever needs, and then check in at the end and they’re like, what in the world is this? That is not at all what we wanted; you just wasted nine months. Right? So that component of accelerating software, I think, is great in agile. But then there are the systems and tools that support that. And we’ve seen some great stuff in DevOps, even just on the developer side, from the IDEs, the code intelligence stuff that kind of auto-completes your code almost at this point, which makes it more reliable code as well. You’re getting higher quality code because you’re not necessarily doing such dumb things, because your options are more limited and intelligent. And so there’s things like that that are helping. And then in that development process there’s also code scanning and code quality reviews. And a lot of this is automated, where you can go through source and find high-risk things or passwords that are embedded or poor coding practices, memory leaks, all this kind of stuff – all of that kind of exists on that developer endpoint. And then you get into the DevOps kind of end of it. Well, now, hey, I’ve got something I’ve built. I need to test it. I need to deploy it at some point. It needs to be blessed and go to production, and customers need to use it. And all the automation of the tooling around that is fantastic. We’ve got great workflows where we can pretty much have developers deploy, if you allow them to, directly into production. And that’s awesome. From the security perspective, I think there’s maybe a delta there. Right? Because if you go full DevOps, you sure better have some automated tooling in there, down to the developer endpoint and in the DevOps process, to monitor that from a security perspective. And if you can’t afford those tools, you probably should slow down a little bit, because you still need to ensure that you’re doing some secure things. And I think that is maybe the crux in there. Right? We’ve built things that can help us move fast, and some of those are commercial things that carry large price tags to have high degrees of accuracy. Some things are open source that do very well but still have some gaps you need to address, and kind of everything in between. And so I think as you kind of look at how you bring that dev process forward, security is a layer. It’s a layered system, and we need things that are ensuring code quality and ensuring security of developed code, and then getting deeper if they’re using crypto: proper key management, proper algorithm usage. Right.
All these things that are very esoteric, that you could never expect a developer of some wearable, really, to own – why is that his world? And then I’m deploying it, and as I’m deploying it, I want my environment to be the same environment. We went through this at DigiCert: you don’t want to deploy something and test something in an environment that’s different than where it’s going to run. There are so many bugs, so many security vulnerabilities and things that could be detected early, and that you circumvent by creating a deployment process – locally or into some staging system – that’s different, that then can sneak their way into production because you’re not using the same environments. And I think in a true DevOps environment, if you’re utilizing the same environment everywhere, you’re inherently going to have more security, because you’re trying to, quote-unquote, do things right from the onset, as opposed to: it’s just staging, just open that up, we don’t need that here – whatever that thing is, you know what I mean?
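One way to picture the gates Brian describes before code can merge or deploy is a small pipeline step that refuses to proceed unless every required check reports success. The check names and result values below are hypothetical placeholders, not a real CI system’s API.

```python
# Hypothetical pre-merge gate: block the merge unless every required check
# (tests, static analysis, secret scan, human review) has passed.
REQUIRED_CHECKS = ["unit-tests", "static-analysis", "secret-scan", "code-review"]

def can_merge(check_results: dict[str, str]) -> bool:
    """check_results maps check name -> 'passed' / 'failed' / 'missing'."""
    failures = [name for name in REQUIRED_CHECKS
                if check_results.get(name, "missing") != "passed"]
    if failures:
        print("Merge blocked; unresolved checks:", ", ".join(failures))
        return False
    print("All required checks passed; merge allowed.")
    return True

can_merge({"unit-tests": "passed", "static-analysis": "passed",
           "secret-scan": "failed", "code-review": "passed"})
```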

PAUL: This week’s Security Ledger podcast is sponsored by DigiCert. DigiCert is the world’s premier high assurance digital certificate provider, simplifying SSL, TLS and PKI and providing identity authentication and encryption solutions for the Web and the Internet of Things. Check them out at DigiCert.com.

PAUL: The sort of cautionary tale around this, obviously, is the recent SolarWinds incident, where you had a compromise in the build process – malicious code, a backdoor, injected into a software update that was then signed and sent out to customers, thousands of them. What in your mind is the lesson of SolarWinds, then, for companies that either are software publishers, like SolarWinds with their Orion product, or obviously the downstream customers of those, about what they should be doing to hedge against that type of risk?

BRIAN: I mean, I think it kind of boils down to some of the things that we just talked about here, right? Looking at the SolarWinds thing and kind of the infrastructure, what happened there – if you get down to that developer laptop, let’s say, and their environment and what they’re doing, regardless of how that person is connecting to that source repository: do you have something in place that is going to ensure code quality, catching bugs and preventing weird behaviors and stuff? Do you have something in place that can fix vulnerabilities that can compromise the app and ensure security best practices? And then can you make sure that the code also is not incurring tech debt as it moves forward, so developer velocity can be maintained because there aren’t these crazy branches of code that maybe never get used but can introduce compromise? Right? All of these things kind of package up to threat vectors for that kind of attack, and those things can be addressed by process – back to ensuring people do proper code reviews, do app security reviews, things like this. It can be product, where you can get open source or commercial things that will automate and provide those things – we still need a person to look at them, generally. Right. And then, what is the process for code to actually make its way into a repository? Right? Is it integrated in your process that those code reviews take place before a commit is accepted, before a merge happens? Do you have to pass certain automated tests and app security tests and code quality tests and stuff before it can merge? Those are big questions, but things people need to get in place. Then you can at least say this code has gone through some modicum of quality review and security review, so we can say it’s acceptable to come in, acceptable to come into review, maybe to get merged. And now there are things where you can do signed check-ins as well. So I can sign a check-in in GitHub. I can use that identity – back to zero trust, back to the identity – I can use the identity of that developer and sign that check-in into GitHub. So now I have an audit trail. If something malicious does go wrong, I know exactly where that came from. And then let’s look a little further down the line. Now I have this software, and I get to the point where I do want to deploy this, and this is going into production. Code these days, especially in a multi-tenant cloud environment, a private cloud environment, or however somebody’s running it, should absolutely be code signed – whether it’s a full Docker container that is signed or independent binaries, that’s up to how they deploy and what they’re doing. But the idea is to ensure the integrity of that thing that’s running, because signing is going to be an event that says, yeah, we’re attesting to all of those processes and quality to say this thing can actually be deployed, and it should go out on the network. We’ve checked whatever’s on our list and it’s out there. But then on the back end of it – people don’t usually think of this – if that thing gets changed, right, if somebody breaches that deployment and goes and changes a config file or something like this, you’re breaking that signature. And now you can be monitoring for the signature of that thing that is deployed to say, only allow it to run if it is in the state that we signed it at, and prevent things from that threat vector.
And then lastly, I think, as code kind of hits that finish line of getting to deployment – you want to go agile, like we were talking about earlier. You want to let that developer or that team deploy things into production in certain environments, going through whatever your approval process is. But what we’ve seen is, I think, a good shift, with a lot of companies who are security-minded saying they want to go to quorum-based deployment. And so they have checks in the system to say: we followed whatever those things are that I just talked about, we went through our checklist of all the different things, manual or automated or whatever, and here’s the score and the human version of what we think about that, with our process and our audit trail for deployment. But now, Paul, your manager and somebody else in the organization need to approve your ability to sign that code and push that out to production. And you have a quorum reviewing those things that can push that out, and those can be automated. It sounds heavy-handed, right? But it literally can be mostly automated and distributed to people in a rapid way for them to review and understand what’s happening. But it gives another one of those checkpoints for what is the quality of what’s being deployed, and who knows about it, and can I monitor that through its life to ensure integrity?
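A minimal sketch of the “only allow it to run if it is in the state that we signed it at” idea: record a digest of the artifact when it is signed, and re-check it before the artifact runs. Production systems verify a certificate-backed signature rather than a bare hash, and the byte strings here are stand-ins.

```python
# Sketch: detect post-deployment tampering by re-hashing the deployed
# artifact and comparing against the digest recorded at signing time.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Recorded when the artifact was signed and shipped (stand-in bytes).
signed_artifact = b"binary-or-config-as-shipped"
expected_digest = digest(signed_artifact)

def verify_before_run(deployed_bytes: bytes) -> bool:
    if digest(deployed_bytes) != expected_digest:
        print("Integrity check failed: deployment differs from signed state.")
        return False
    print("Integrity check passed: running artifact as signed.")
    return True

verify_before_run(signed_artifact)                    # unchanged -> allowed
verify_before_run(b"binary-or-config-tampered-with")  # modified -> blocked
```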

PAUL: And DigiCert has a product that does this, Secure Software Manager. Could you just talk about – let’s just take the SolarWinds Orion as an example. If there were these processes at each stage – and we don’t actually know what the internal processes of SolarWinds were, but just hypothetically – how do you prevent that sort of last-minute injection of a backdoor into a software update? How does this process make it much harder to pull off that type of an attack?

BRIAN: Two pieces here. The things I just talked about – there’s a multilayered approach that I think would give them a really, really good chance of fighting against the attack vector you just described, right? It’s all of those things in place as a process, for the holistic kind of security view, that’s going to allow somebody to have a fighting chance against that. I think specifically with DigiCert’s Secure Software Manager product, what it has done is reduce the barriers of difficulty for interacting with the signing processes and the various algorithm implementations, and specifically access to the signing keys and key management functions, for who can attest to that software from your company. We’ve built integrations into the IDEs, into the DevOps tools. So from the developer workflow, or whoever’s deploying it – whether it’s a developer or some SRE team or whatever – they don’t have to change what they’re doing per se. We integrate directly into the tool sets that they use. But the difference is, instead of them deploying unsigned code in some staging environment because it was too difficult to do all the signing processes and key management – and gosh, I don’t want to do that, I’m a developer, I want to go quick – it’s now done transparently for them. And so now, hey, they’ve got an environment that is properly signed and is properly managed, whether it’s running locally, whether it’s running in staging or deployed out into the cloud. It has the exact same security configurations, which, from that aspect of the system we just talked about, ensures that there aren’t those changes, there aren’t those risks in the environment that introduce vectors. And then lastly, for the signing operation, as you say, hey, let’s go put this thing into production – we do implement that quorum-based approach and allow customers to optionally use it and say, yeah, through the whole dev lifecycle, have at it, go do your things, make sure they’re signed. We’re going to manage your keys and have appropriate audit trails, and we’ll know who’s doing what and where it’s happening. But, man, when that thing’s going to production, we need two of three, even, to say, yeah, we went through our checklist, our process – go ahead and do that. And we help people cross that line and just make it super integrated and super easy for them to do that.
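The quorum requirement Brian describes (“two of three”) reduces to an M-of-N check over recorded approvals. The sketch below is not Secure Software Manager’s API – just an illustration of the policy, with hypothetical approver names.

```python
# Hypothetical quorum check: a production signing/deployment step proceeds
# only when at least M of the N designated approvers have signed off.
APPROVERS = {"paul", "manager", "security-lead"}   # hypothetical reviewer set
QUORUM = 2                                          # "two of three"

def release_approved(approvals: set[str]) -> bool:
    valid = approvals & APPROVERS        # ignore sign-offs from anyone else
    if len(valid) >= QUORUM:
        print(f"Quorum met ({len(valid)}/{QUORUM}): release signing allowed.")
        return True
    print(f"Quorum not met ({len(valid)}/{QUORUM}): hold the release.")
    return False

release_approved({"paul"})                   # not enough sign-offs
release_approved({"paul", "security-lead"})  # quorum reached
```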

PAUL: Final question. One of the other kind of recommendations or mandates, I guess, in the Executive Order was for federal agencies and software companies – contractors that work with the federal government and federal agencies – to create or issue a software bill of materials for their products: basically a list of the components that are in a software application or product, version numbers, and so on. I kind of think of this like the Takata airbags, right? When that whole thing happened, automakers could basically look and say, well, we know exactly what cars have these versions of the airbags in them, and we can replace those. But in software, it’s been a little bit more of the sausage factory model, and this is kind of an effort to move away from that, right? This is something DigiCert has done on its own – and obviously, given the sensitive nature of the work that you do with companies, you get asked about stuff like this. But just talk about this SBOM, software bill of materials, concept and how it works at DigiCert and what you see happening with that going forward.

BRIAN: Yeah. And we have gone through that, you’re right. And I think the reason that we went through it is interesting. We are an organization that has to adhere to several different compliance and regulatory programs in order to operate and provide these identities and trust systems, whatever it is – whether it’s an IoT infrastructure or public trust for web browsers or whatever. There are things we have to do to make that happen. And in that are associated audits of our environments and our processes and all this stuff, to make sure we’re actually doing what we say we do and not introducing risk to companies’ trust. And in that process is, curiously, a whole piece where you describe what you’ve been deploying. And this has been happening for gosh, probably at least twelve, maybe more years – twelve years is where my memory fails me. But I remember, going back about that far, the auditors would say, well, we want to know everything that is related to crypto in your environment, and where did that come from? And you saw open source software start to introduce cryptographic algorithms and implementations of varying things, and you start to then see other products rely on those implementations – probably three, four core implementations that are relied upon by thousands and thousands of things. We had to go through a very manual process initially to generate: hey, this is where the crypto came from, right? If we didn’t write that piece of crypto but we’re relying upon something, this is where it came from, and this is kind of the audit trail and history of that. And I think it’s fascinating, because it’s one aspect of security. You unpack all of this into the Executive Order and the bill of materials kind of mandate in there, and to me that really aligns with things that we see in the IoT space. And those guys have really had challenges, as we’re all aware, in IoT, with breaches and different kinds of errors that have happened that have allowed systems to be compromised or other systems to be piggybacked and compromised. A lot of that comes down to, well, what’s on that device, right? And it could be what’s the software, what’s the open source software you’re relying upon, but also what chips – and a lot of chips today that get embedded in devices come with software already baked on them. And what’s the software on that chip? And is it open source? And where did it come from? And so I think it’s an extreme example, but widely prevalent in the IoT industries: how do you track and manage all that stuff? And now, as you kind of sync this up into this Executive Order, I think it’s a wonderful thing, because the more that we can encourage, especially in the IoT space, the ability to say, I know what’s on there. And when I talk to customers, I think the important part is not only knowing what’s on there, Paul, but when a breach occurs in something – oh, hey, I heard there’s something in Wolf SSL that allows a side channel attack or something – the ability that I can go to some system or something in my infrastructure and say, am I using Wolf SSL version T0234 or whatever? And I can see, oh, that’s deployed out on 50,000 assets that are on customers’ networks because it’s an IoT device. And what’s my remediation strategy, right?
We have such a kind of naive approach, I would say, in the IoT space that that sounds very simple, what I just said, but it’s surprisingly difficult for customers to be able to do something like that and answer those questions, let alone remediate it. I think that first step is answering them.
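The “am I using that component anywhere?” query Brian describes is straightforward once bills of materials exist for deployed assets. The sketch below uses toy dictionaries in place of a real SBOM format such as CycloneDX or SPDX, and the asset names, component names and version strings are invented for the example.

```python
# Toy SBOM query: which deployed assets contain the affected component?
# The SBOM records and version strings below are hypothetical.
sboms = {
    "gateway-fw-1.2": [{"name": "wolfssl", "version": "4.6.0"},
                       {"name": "zlib", "version": "1.2.11"}],
    "sensor-fw-3.0":  [{"name": "mbedtls", "version": "2.25.0"}],
    "camera-fw-2.1":  [{"name": "wolfssl", "version": "4.6.0"}],
}

def assets_using(component: str, version: str) -> list[str]:
    """Return every asset whose SBOM lists the given component at the given version."""
    return [asset for asset, parts in sboms.items()
            if any(p["name"] == component and p["version"] == version
                   for p in parts)]

affected = assets_using("wolfssl", "4.6.0")
print("Assets needing remediation:", affected)
```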

PAUL: Yeah, we saw that happen with SolarWinds, where there were many companies that were like, oh, we’re not impacted by this, and then turned around and were like, oh, actually, we are. And with Heartbleed, you saw it. Going back many years, with SQL Slammer, there were implementations of SQL Server in Microsoft products – maybe running on desktops – where people didn’t realize there was SQL Server code in there. Final point or question: just as you pointed out around open source, so much of modern application development relies on open source. And yet so many of these projects, even critical libraries and components, are single-developer projects or small community developers’ side projects – somebody kind of working on it in their basement – highly vulnerable to compromise of various sorts, either malicious contributions or just inadvertent errors. And that’s kind of a boil-the-ocean problem, as I look at it; it’s such a huge space. How do you impose order and control? If you’re the federal government, you’re writing checks to people, you can lay down the law. But for open source, it’s much harder, right?

BRIAN: It really is. Yeah. I think in that regard, it’s interesting, some of the moves that we’ve seen with GitHub, where they’re doing code scanning and showing people vulnerabilities in checked-in code automatically. I’d love to see that sort of stuff deepen, right? Because to your point, the single developer, he doesn’t have the time or necessarily the expertise to do all those things. But if we can build some of that in – I still encourage smart contributors to create something that’s widely usable. Let’s do that. But let’s give him a little safety net, you know?

PAUL: Brian Trzupek, thank you so much for coming on and speaking to us on the Security Ledger podcast. It was great, great having you on.

BRIAN: Thank you for having me. I really appreciate your time.

PAUL: We will do this again.

BRIAN: Absolutely, love to.

PAUL: Brian Trzupek is the Senior Vice President of Product at DigiCert. He was here to talk to us about digital code signing and securing the software supply chain.

PAUL: This week’s Security Ledger podcast was sponsored by DigiCert. DigiCert is the world’s premier high assurance digital certificate provider, simplifying SSL, TLS and PKI and providing identity authentication and encryption solutions for the Web and the Internet of Things. Check them out at DigiCert.com.

[END OF RECORDING]