The Securities and Exchange Commission’s decision to bring fraud charges against SolarWinds and its CISO Timothy Brown is the latest indicator that we have entered a new era in DevSecOps – one in which software developers and IT manufacturers will be held accountable for security lapses in their products.
The SEC’s Oct. 30, 2023 announcement coincidentally arrived on the same day that the Biden administration issued an executive order imposing security and privacy expectations on artificial intelligence systems, which are finding their way into a range of enterprise and productivity software packages. And back in March 2023, the White House announced its National Cybersecurity Strategy, which calls for legislation that shifts cybersecurity liability further up the supply chain.
Not every security incident results from gross negligence. But with the U.S. showing it’s not averse to meting out justice, tech companies would be wise to assess the current state of their application security and DevSecOps processes to identify weak spots that could leave them vulnerable to regulatory action.
In December 2020, SolarWinds revealed it had fallen victim to a two-year supply-chain compromise that sabotaged its Orion network monitoring and management tool. According to the SEC, SolarWinds and specifically Brown “defrauded investors by overstating SolarWinds’ cybersecurity practices and understating or failing to disclose known risks,” despite the company’s alleged awareness of security deficiencies. The commission is seeking significant financial penalties and injunctive relief, as well as an order barring Brown from officer- or director-level positions. (A SolarWinds spokesperson reportedly called the charges “unfounded” and “an overreach.”)
If Biden’s focus on supply-chain and AI security results in more stringent cyber legislation, there could be further crackdowns on AppSec negligence. Biden’s National Cybersecurity Strategy says it all: “Too many vendors ignore best practices for secure development, ship products with insecure default configurations or known vulnerabilities, and integrate third-party software of unvetted or unknown provenance. Software makers are able to leverage their market position to fully disclaim liability by contract, further reducing their incentive to follow secure-by-design principles or perform pre-release testing.”
“We must begin to shift liability onto those entities that fail to take reasonable precautions to secure their software,” the document continues.
Months later, the White House explained in a public statement that its artificial intelligence executive order tasks the National Institute of Standards and Technology to develop “rigorous standards for extensive red-team testing to ensure safety before public release” of AI systems.
The good news is that there is no shortage of cyber solutions, services and best practices that can help developers perform responsible due diligence. These options include static, dynamic and interactive application security testing (SAST, DAST and IAST); software composition analysis (SCA); application security testing orchestration (ASTO); pentesting and bug bounty services; API security solutions; and software bills of materials (SBOMs).
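To make one of those options concrete, consider how an SBOM feeds into software composition analysis: once a product ships with a machine-readable inventory of its components, checking that inventory against vulnerability advisories becomes a simple matching exercise. The sketch below is a minimal, hypothetical illustration – the SBOM fragment and the advisory list are invented for the example, and real SCA tools consume live feeds such as OSV or the NVD rather than a hardcoded dictionary.

```python
import json

# Minimal CycloneDX-style SBOM fragment (hypothetical example data).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "requests", "version": "2.31.0"}
  ]
}
"""

# Hypothetical advisory feed: package name -> set of known-vulnerable versions.
# A production tool would pull this from a source like OSV or the NVD.
KNOWN_VULNERABLE = {
    "log4j-core": {"2.14.1", "2.15.0"},
}

def flag_vulnerable(sbom_text: str) -> list[str]:
    """Return 'name@version' strings for SBOM components on the advisory list."""
    sbom = json.loads(sbom_text)
    flagged = []
    for comp in sbom.get("components", []):
        bad_versions = KNOWN_VULNERABLE.get(comp["name"], set())
        if comp["version"] in bad_versions:
            flagged.append(f'{comp["name"]}@{comp["version"]}')
    return flagged

print(flag_vulnerable(SBOM_JSON))  # flags log4j-core@2.14.1
```

The point of the sketch is that the hard part – producing an accurate SBOM in the first place – is exactly the due-diligence step regulators are now scrutinizing; the downstream matching is mechanical.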
Meanwhile, innovations in automation and AI – when used responsibly – can help keep AppSec practices efficient, so that the software development lifecycle isn’t bogged down. Given recent developments, organizations should adopt some of these practices if they don’t want to be made an example of – by attackers or by government regulators.