Rethinking government use of commercial exploit tools after WhatsApp spying
January 31, 2020

Earlier this year, Facebook released an emergency patch after it discovered a software vulnerability in the voice over IP (VoIP) code used in WhatsApp that allowed attackers to remotely install malware on a user’s device simply by placing a call to their phone; the user would not even need to answer. The seriousness of this vulnerability became even more apparent a week ago, when Facebook filed a lawsuit against NSO Group, an Israeli cybersecurity company, alleging that the company used its malware to infect 1,400 mobile phones belonging to journalists, diplomats, human rights activists and senior government officials in an attempt to access their encrypted WhatsApp messages (presumably on behalf of one or more unknown clients). WhatsApp worked with Citizen Lab, an academic research center at the University of Toronto’s Munk School, to identify the affected users and notify them of this privacy breach.
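To make the class of flaw concrete: the underlying bug, tracked as CVE-2019-3568, was reported as a buffer overflow in WhatsApp’s VoIP stack that could be triggered by a specially crafted series of SRTCP packets sent to a target phone number. The C sketch below is an illustrative toy, not WhatsApp’s actual code; the struct, function names, and buffer size are invented for the example. It shows the general pattern of trusting an attacker-controlled length when copying packet data into a fixed-size buffer, and the corresponding fix.

```c
/*
 * Illustrative sketch only; NOT WhatsApp's code. CVE-2019-3568 was
 * reported as a buffer overflow in the WhatsApp VoIP stack triggered
 * by crafted SRTCP packets. This toy shows the general bug class:
 * copying packet data into a fixed-size buffer without validating
 * the attacker-controlled length.
 */
#include <stdint.h>
#include <string.h>

#define RTCP_BUF_SIZE 512  /* hypothetical per-session buffer size */

typedef struct {
    uint8_t payload[RTCP_BUF_SIZE];
    size_t  len;
} rtcp_session;

/* Vulnerable pattern: pkt_len comes straight off the network. */
void handle_rtcp_packet(rtcp_session *s, const uint8_t *pkt, size_t pkt_len)
{
    memcpy(s->payload, pkt, pkt_len);  /* overflows if pkt_len > RTCP_BUF_SIZE */
    s->len = pkt_len;
}

/* Fixed pattern: validate the length before copying. */
int handle_rtcp_packet_safe(rtcp_session *s, const uint8_t *pkt, size_t pkt_len)
{
    if (pkt == NULL || pkt_len > RTCP_BUF_SIZE)
        return -1;                     /* reject malformed or oversized packets */
    memcpy(s->payload, pkt, pkt_len);
    s->len = pkt_len;
    return 0;
}
```

Because call-setup packets are processed before the user answers, an overflow at this stage can hand an attacker code execution without any user interaction, which is what made the vulnerability so severe.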
Most people are probably unfamiliar with NSO Group, but it is one of many specialized companies selling exploit software to law enforcement and intelligence agencies around the world. The company describes its flagship product, Pegasus, as an online digital tool that can access the personal data of users from Google, Apple, Facebook, Amazon and Microsoft without the knowledge of users or the affected tech companies. It can be used to track the location of individuals’ devices and even turn on their microphones and cameras to remotely eavesdrop on people. The software works by exploiting documented and undocumented vulnerabilities in these systems.
Since these revelations came out, there has been a lot of finger-pointing over who should take the blame. NSO Group denies the allegations. It says that it only sells this tool to those “fighting crime and terror,” the implication being that if one or more of its customers (i.e., government agencies) misused the tool, then those governments alone bear responsibility for any abuses. Most governments are blaming Facebook, which owns WhatsApp, arguing that the company should have relayed information about the attack sooner. And Facebook, which is seeking an injunction against NSO Group, clearly blames the company for developing and selling access to its online tool to hack WhatsApp’s systems.
Each has a point. Clearly, government agencies should not be using a tool like this if it violates laws and international norms, which many suspect to be the case given the list of victims and the lack of strict oversight of intelligence operations in some countries. Similarly, WhatsApp may have been able to reveal more details about the attack sooner, but that does not mean it was wrong to wait; after all, it takes time to pinpoint and verify the source of an attack. And finally, it is clear that these specific instances of surveillance would not have happened if NSO Group had either not sold its tool or better controlled who had access to it.
How should policymakers respond?
While Facebook should have its day in court and the opportunity to hold NSO Group accountable for any illegal activity, the bigger question is how policymakers should respond to prevent this type of situation from happening again.
There are a few possible ways they could respond.
Policymakers could pressure countries to control the export of these types of products to ensure they do not get into the wrong hands. But it is not clear that such controls would be entirely effective. After all, NSO Group has an export license from the Israeli Ministry of Defense (although Amnesty International has sued to get it revoked). And not all of the individuals and companies making and selling exploit tools operate within the bounds of the law.
Policymakers could also try to stop government agencies from abusing these tools by passing laws that place significant limits on government use of them, especially against their own innocent citizens, and by obtaining multilateral commitments to spread this practice globally. While such laws may limit law enforcement’s use of these tools for inappropriate purposes (they could still allow legitimate uses, such as accessing the device of a criminal suspect with an authorized court order), the intelligence community operates covertly, and it will be difficult to maintain oversight of its use of these tools.
Better options?
A better option is for policymakers to rethink the role of government in improving the security of commercial products. Many governments have faced sharp criticism for failing to disclose known vulnerabilities in commercial systems, instead using knowledge of those vulnerabilities to enhance their offensive cyber capabilities. Indeed, the US government revamped its Vulnerabilities Equities Process (VEP) in 2017 to create a more reasonable, consistent and transparent process for deciding when to notify companies of discovered vulnerabilities.
However, there is still a lot of grey area in this policy. For example, government agencies may use commercial tools such as Pegasus to take advantage of zero-day exploits while avoiding direct knowledge of the underlying vulnerabilities, since such knowledge might obligate them to disclose details to the vendor, thereby circumventing the policy. The result is that consumer and business products retain exploitable vulnerabilities that can be misused by government agencies around the world.
Policymakers should address these shortcomings. For example, Congress should consider opportunities to expand bug bounty programs (programs that reward people for identifying security vulnerabilities) where none exist or where existing ones are insufficient. The US government already has some bug bounty programs, such as one that invites authorized “ethical hackers” to break into public-facing US Department of Defense websites and applications. Policymakers should also take a closer look at the market for exploits, oversight of the intelligence community’s use of exploit tools, and the role of government agencies in reporting known exploit capabilities as part of the VEP.
The recent WhatsApp lawsuit shows the significant risk to individuals, as well as to public trust, that comes from allowing commercial systems to remain exploitable. The US government and its allies should recognize that they need to play a larger role in promoting cybersecurity in commercial systems and realign incentives so that it is more profitable to fix these vulnerabilities than to exploit them.