Popularized by companies such as Google, PayPal and Facebook, bug bounties have become a common approach for securing web applications and infrastructure. Security is a baseline expectation of today’s consumers, and this is especially true in the banking sector. No technology is perfect, but encouraging an extensive, open review and working with skilled researchers is a crucial step in delivering that security.
The need for bug bounties
Bug bounties have become mainstream. And with good reason. They harness the intelligence of a wide, varied set of ethical hackers and security researchers. The concept of a bug bounty is simple: incentivize a community of cybersecurity experts to search for vulnerabilities and exploits on exposed perimeters. These hackers are rewarded for the responsible disclosure of any vulnerabilities. Platforms such as Yogosha and HackerOne streamline the process by running multiple bounties and managing communities of security researchers.
It’s nearly impossible to make an app or network impregnable from the start. Whether the code underlies a bank or an aviation system, vulnerabilities are inherent in how software is created. Developers are human, and they are bound to make mistakes. An oft-cited estimate puts the industry average at “15 – 50 errors per 1,000 lines of delivered code”; for a platform running into the millions of lines, that implies tens of thousands of latent defects.
Banks and financial institutions hold some of the largest collections of sensitive, private and valuable information, not to mention money. In fact, cybercriminals target financial services firms 300 times more frequently than other industries, per Forbes. So for financial software, the need to find and fix weaknesses is all the more urgent.
As financial organizations develop and manage ever more connected and distributed products, combating external threats remains a major challenge. According to Accenture, financial services firms experience, on average, 125 breaches per year. Attack surfaces have become more complex, and the pressure to deliver on short time-to-market schedules keeps increasing. Bug bounties act as an extra layer of protection: they create a feedback loop between the people who build things and the people who strive to break them, and more resilient products are born from that process.
A paradigm shift
Rather than trying to hide flaws, bug bounties work to increase the number of eyeballs on them. To the banking industry, accustomed to the demands of proprietary software, the approach can seem radical. It runs counter to what banks have traditionally done: keep their code private, whether it was developed in-house or written by vendors like Fiserv or FIS, and deny hackers access in order to keep their systems secure.
This is changing, though. Over the past few years, crowdsourced cybersecurity has seen increased adoption in financial services. N26 is offering cash rewards to those who manage to identify security issues. TBI Bank is asking white hats to go after all systems — including applications, web services, APIs and mobile. BNP Paribas, Starling Bank, PayTM and Goldman Sachs are also running bounty programs. More banks than ever want to stress-test their online defenses.
Despite growing adoption, a bounty program should not be considered a stand-alone solution, but it is a good complement to penetration testing. Bug bounties tend to be ongoing and results-oriented, whereas penetration tests often focus on compliance and tend to be one-time or infrequent affairs. Bug bounties are particularly valuable in a Software as a Service (SaaS) context, where frequent delivery increases the need to correct flaws quickly.
Benefits and execution issues
A common misconception about bug bounties is that they must be public. These programs can be run discreetly, with their architects retaining control. Nor do they have to be continuous: they can be time-bound and scoped to match any desired organizational goal.
In practice, bug bounties can require engineers and product owners to engage in consultation meetings, code reviews, threat modeling sessions and security tests, and they provide the ability to interface with a global pool of technical talent. They help reduce the need to contain bad publicity, and organizations pay only for valid results rather than for time or effort. They bring diverse skill sets to the table and can be more cost-effective than hiring a team of full-time security researchers. The logic mirrors open source: large, widely reviewed open-source projects tend to be more secure than in-house, private software that gets no outside scrutiny.
However, there are factors that may negate these positives if the necessary steps are not implemented and executed properly. It can be difficult to distinguish legitimate bounty activity from a malicious attack underway. And sometimes, scope creep or another dispute may lead to unintended disclosures. It’s also essential to have a system in place to triage and fix the bugs you do find: a remediation process within the software development life cycle is a must.
Securing the future
While there is a solid case for bug bounty programs, it is easy to get carried away by industry claims of outsized success rates. Bug bounties are effective when used strategically and professionally, but they’re no replacement for a sound secure development life cycle.
The bottom line: firms should use bug bounty programs as an extra layer of protection to identify issues quickly and to maintain their software post-deployment.
In conjunction with Yogosha, Sopra Banking ran a bug bounty program on our Digital Banking Engagement Platform (DBEP) in 2019. Security researchers were invited to find potential vulnerabilities in the platform over a three-month period. More than a dozen vulnerabilities were discovered, and although none were critical, some had not been identified by previously performed penetration tests. We aim to extend this security testing approach to more projects in 2020.