Editor’s note: The following is a sponsored blog post from Adobe:
Application testing is a critical component of the software development lifecycle. That’s a given. Typically, a solid testing battery for any application covers not only functionality and usability, but also security and reliability. However, most testing methodologies in our industry have a fatal flaw: they often fail to identify the actionable items, and the right prioritization of those items, that actually move the needle in product quality and resilience.
While the “OWASP Top 10” and “CWE/SANS Top 25” are important, they are now truly just the bare minimum of a modern security testing strategy. Becoming more “adversary-aware” and helping development teams “shift left” with security controls in their applications should be the goal of any leading security organization today.
Adobe set out to solve these challenges not just by making our testing efforts more extensive or frequent, but also by making them smarter. Tighter alignment of testing with the software development lifecycle and better modeling of real-world adversary threats have allowed Adobe to become more DevSecOps-minded in our approach to application security testing.
The Current Standard
Annual external penetration testing is an expected standard practice in the software industry. A company gives an external pen tester an account and internal network access, an approach known as “gray-box testing.” The tester conducts network and application testing and shares the findings with the company in a report. Simple enough, right?
Well, not really. One of the biggest application security problems companies face is the sheer number of issues flagged, and the number of false positives generated, by external penetration testing. In most organizations, each of these issues is given a severity rating ranging from low to critical and ticketed for remediation. Product teams see their inboxes fill up with tickets, and while they have a vague idea that critical and high-severity issues must be fixed before low-severity ones, they don’t know which ones truly matter to product quality and resilience. Nor do they know which issues adversaries are actively exploiting today, or could exploit in the future.
This illustrates the fatal testing flaw noted earlier: severity and criticality ratings often don’t align with what will actually move the needle in protecting against adversaries. While accepted in the industry to date, this approach generally doesn’t give application developers adequate information to prioritize and act. Product teams rush to address issues found through this inadequate testing in order to get report sign-off, without a clear view of how that effort will improve product security. Plus, this may leave customers with a less-than-helpful assessment of actual application security and risk.
Not Only More Frequent Testing…
In early 2022, Adobe revamped our entire product security testing strategy in two ways. First, in addition to annual external pen tests for each of our customer-facing products (typically a two-to-three-week process), we have ramped up continuous testing using an adversary-driven approach across all our systems, both customer-facing and back-end internal. While pen tests are important from a compliance perspective, we supplement them with a wider variety of testing that can catch issues arising between the annual external tests, and catch them earlier in our software development lifecycle. Then, we roll up all the testing data for a given product or service, both internal and external, into a single high-level report.
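To make that roll-up idea concrete, here is a minimal sketch of merging findings from several testing sources into one summary. The field names and sample records are hypothetical and purely illustrative, not Adobe’s actual reporting schema:

```python
from collections import Counter

# Hypothetical finding records from different testing sources; the field
# names below are illustrative only, not an actual reporting schema.
internal_findings = [
    {"source": "continuous", "severity": "high"},
    {"source": "continuous", "severity": "low"},
]
external_findings = [
    {"source": "annual_pen_test", "severity": "critical"},
]

def roll_up(*sources):
    """Merge findings from every source into one high-level summary."""
    merged = [finding for findings in sources for finding in findings]
    return {
        "total_findings": len(merged),
        "by_severity": dict(Counter(f["severity"] for f in merged)),
        "by_source": dict(Counter(f["source"] for f in merged)),
    }

print(roll_up(internal_findings, external_findings))
# {'total_findings': 3, 'by_severity': {'high': 1, 'low': 1, 'critical': 1},
#  'by_source': {'continuous': 2, 'annual_pen_test': 1}}
```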
All this means we’re getting more data for analysis, which allows us to better predict issues that might come down the pike. The volume of data we derive from continuous testing, combined with external testing, gives us more information than ever before, helping us stay ahead of adversaries in the long term. But it’s more than just volume that matters.
But Also Smarter Testing
As I mentioned, we previously received reports from our pen-test vendors and created tickets for every item. Often, this created “noise”: findings that weren’t impactful to overall software quality and security and that had the potential to be red herrings. Now, we challenge our pen-test vendors to tell us which findings are exploitable and which are simply best-practice or informational items that are not truly exploitable today. This shift dovetails with the overall change in our testing approach.
By prioritizing potential exploitability by adversaries over the traditional scattershot approach, Adobe can focus our product teams on the issues that truly affect product quality and resilience. They now know that anything we ticket has been verified as high risk and must be remediated. And by ticketing only the exploitable findings and putting an aggressive SLA on those tickets, we focus our product teams on what’s most important not only for us as a company, but also for our customers. More importantly, by staying focused on the most impactful issues, our product teams learn how to write better code.
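As a rough illustration of exploitability-first triage, here is a minimal sketch in Python. The field names and SLA windows are assumptions made for the example, not Adobe’s actual ticketing schema or remediation policy:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Finding:
    title: str
    severity: str        # "low" | "medium" | "high" | "critical"
    exploitable: bool    # has the pen-test vendor verified exploitability?
    informational: bool  # best-practice / informational-only finding

# Hypothetical remediation SLAs, in days, for verified-exploitable findings.
SLA_DAYS = {"critical": 7, "high": 14, "medium": 30, "low": 90}

def triage(findings, reported: date):
    """Ticket only findings verified as exploitable, each with an aggressive SLA."""
    tickets = []
    for f in findings:
        if f.informational or not f.exploitable:
            continue  # still tracked, but never ticketed -> less noise
        tickets.append({
            "title": f.title,
            "severity": f.severity,
            "due": reported + timedelta(days=SLA_DAYS[f.severity]),
        })
    return tickets

findings = [
    Finding("SQL injection in search endpoint", "critical", True, False),
    Finding("Missing security header", "low", False, True),
]
print(triage(findings, date(2022, 6, 1)))
# Only the SQL injection is ticketed, with a due date one week out.
```

The design point is in the `continue` branch: non-exploitable and informational findings are still recorded, but they never land in a product team’s ticket queue, which is what keeps the noise down.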
The Net-Net
Combined, these two strategic changes are part of our overall effort to reduce noise for product teams, enabling them to more successfully adopt a “shift left” approach to product security. Moreover, we can implement a more DevSecOps-minded approach to security testing. Customers benefit as these changes enable us to be more transparent overall about our testing efforts, which, in turn, helps us deliver the safer digital experiences users demand and gain and retain their trust.