‘Stand Your Cyberground’ Law: A Novel Proposal for Digital Security
From: The Atlantic
by Patrick Lin
Though problematic, authorizing industry victims to counterattack may prove a good stop-gap measure to remove the political risk of government intervention while still creating deterrence.
With the Cyber Intelligence Sharing and Protection Act (CISPA), we’re in a political tug-of-war over who should lead the security of our digital borders: should it be a civilian organization such as the Department of Homeland Security (DHS), or a military organization such as the Department of Defense (DoD)? I want to suggest a third option in which government need not be involved–a solution that would avoid very difficult issues related to international humanitarian law (IHL) and therefore reduce the risk of an accidental cyberwar or worse. This option models itself on the (admittedly controversial) “Stand Your Ground” laws rooted in our basic right to self-defense, and it authorizes counter-cyberattacks by private companies, which have been the main victims of harmful cyberactivities by foreign actors to date.
Why We Need More Options
First, as a nation of laws, we may not be ready yet for government to lead cyberdefense against foreign adversaries. To do so would trigger serious and unresolved issues with IHL, also known as the laws of war, which include the Geneva and Hague Conventions as well as binding rules established by the International Committee of the Red Cross. For instance, IHL requires that we take care in distinguishing combatants (such as military personnel) from noncombatants (such as most civilians) when we use force. Yet containing any cyberattack to lawful military targets is perhaps impossible today; even the Stuxnet worm aimed at Iranian nuclear facilities infected more than 100,000 private, civilian computers worldwide, including in the US. Any cyberattack would likely pass through civilian infrastructure; the Internet, for example, is not owned by the military, even when it is the delivery channel for an attack. If civilian programmers were to be involved–let’s say the government enlists the help of Google or Microsoft employees in designing a cyberweapon–then those computer scientists and engineers may become legitimate targets for retaliation in either a cyber or kinetic (i.e., bullets or bombs) war.
Other IHL issues that we have yet to settle, but would need to before a state actor could lawfully and justly engage in armed conflict, include the principle of proportionality: a counterattack must apply the minimum force necessary to achieve military objectives, yet how effective any given cyberattack would be is largely unknown. We might launch several cyberattacks to ensure that at least one of them gets through; but if all of them work, the resulting damage could be disproportionate, or overkill. This and other issues I won’t discuss here–such as the problem of attribution, or knowing who attacked us and therefore deserves to be our target–add up to a real risk that the US might act improperly and illegally under IHL, and this could trigger a cyberwar, a kinetic war, or both.
In thinking about cyberpolicy, it’s natural to look for familiar analogies to guide us. Some have argued that we should follow the policy model for nuclear arms, or outer space, or Antarctica, and so on; yet none seems quite right. As imperfect as analogies inevitably are, let’s take another look at one model for a possible solution: the “Wild West” of American history. Both the Wild West then and cyberspace now are marked by general lawlessness: bad guys often operate with impunity against private individuals and companies, as well as against what little government exists in those realms, such as the lone sheriff. The distinctively American solution to the Wild West was found in the Second Amendment to the US Constitution: the right to bear arms. The more that private citizens and organizations carried firearms and could defend themselves, the more outlaws were deterred, and society, as well as the rule of law, could then stabilize and flourish. We also find this thinking in current “Stand Your Ground” laws that authorize the use of force by individual citizens. If such laws make sense, could this model work for cyberspace?
Why It Could Work
Not to endorse this solution (or “Stand Your Ground” laws) but merely to offer it for consideration as a new option: what if we authorized commercial companies to fight cyberfire with cyberfire? As in the Wild West, civilians are the main victims of pernicious cyberactivities. Some estimate that industrial cyberespionage costs US companies billions of dollars a year in lost intellectual property and other harms. As in the Wild West, they now look to government for protection, but government is struggling badly in this role, for the above-mentioned reasons and others. If we consider the US as one member of the world community, there is no clear authority governing international relationships, and this makes our situation look like a “state of nature” in which no obvious legal norms exist, at least with respect to cyberspace.
This option isn’t completely outlandish, because precedents or similar models exist in the physical, nondigital world today. On the open sea, commercial ships are permitted to shoot and kill would-be pirates. Security guards for banks are allowed to shoot fleeing robbers. Again, “Stand Your Ground” laws–which give some authority and immunity to citizens who are being threatened or attacked–operate on the same basic principle of self-defense, especially where few other options exist.
A key virtue of “Stand Your Cyberground” is that it avoids the unsolved and paralyzing question of what a state’s response can be, legally and ethically, against foreign-based attacks. The wrong move could become a war crime or provide an adversary with just cause to respond with force. Without a clear US cyberpolicy, there’s not much deterrent against would-be attackers. Even when we know who has attacked us, there’s little political will to do anything about it, much less retaliate; so big talk about counter-cyberattacks by the state seems to be a nonstarter.
Possible Objections
Perhaps the reason we haven’t heard open calls for this option is that commercial companies are fearful of legal consequences or liability in counterstriking, and rightly so. Part of the proposed solution, however, is to give them some immunity under a presumptive right to self-defense. While companies still could be mistaken in identifying and retaliating against a suspected attacker, the same risk exists with current “Stand Your Ground” laws. Also, attributing an attack is a difficult problem even for the military and the broader government, so we can’t expect a national response to always get it right either.
This is to say that a “Stand Your Cyberground” option isn’t necessarily worse with respect to attributing attacks, and we can create safeguards to make it better. Any case of misidentification and unreasonable action–a corporate George Zimmerman-like case–can be adjudicated against a standard of reasonable proof. An international convention may be needed to set that standard of proof, but this seems easier to reach than broad international agreement on a standard or “red line” that would authorize a military response. Our competitors, such as China, explicitly disavow IHL–the laws of war–as the proper frame for disputes and activities in cyberspace. And an economic dispute may be easier to negotiate than a political or military one, which may force the United Nations, the International Criminal Court, and other such organizations to become involved where state-sponsored attacks occur.
Domestically, perhaps we’d require companies to present evidence and secure a judicial warrant in advance of their counter-cyberstrike. To better attribute an attack, we could enlist the support of government while shielding the state from a direct role that might trigger IHL. For instance, agencies such as the Federal Bureau of Investigation (FBI) or even the National Security Agency (NSA) could help gather evidence for attributing an attack. Further, in counterstriking, a commercial company could make its own activities difficult to trace back as well; what’s good for the goose is good for the gander, as they say. A company has no legal obligation to raise its flag or identify itself as if it were waging war, because the conflict would be an economic one, not a military war in which combatants must identify their affiliation.
But there are still good reasons to worry about wrongful attribution, namely given the criminal tactic of using “botnets,” or otherwise-innocent computers hijacked to do misdeeds. If a US company that suffered a cyberattack were to retaliate against the last computer from which the attack was launched, and that computer was an unwitting or “innocent” machine, that seems to present at least an ethical problem of targeting the wrong person or organization. But again, we see this risk with other models. A pirate could actually be a coerced fisherman whose family is being held hostage to force him into marauding. A bank robber could actually be an innocent businessperson who was similarly forced to commit the crime. It’s regrettable that a counterattack could be launched on such an innocent pirate or robber, but our action would seem reasonable under the circumstances. Notice also that the identities of even true pirates and robbers–or even enemy snipers in wartime–aren’t usually determined before the counterattack; so insisting on attribution before the use of force appears to be an impossible standard.
Moreover, there’s a reasonable argument that a botnet is not fully innocent and therefore not immune to harm. Most if not all botnets are made possible by negligence: failing to apply security patches to software, to install anti-malware measures, or to use legally purchased rather than pirated, vulnerable copies of software. The method of retaliation open to companies also need not be as dramatic as an “attack” as usually understood, such as destroying physical equipment as the Stuxnet worm did; it could be a softer, reversible response, such as installing an anti-malware program on a hijacked computer against its owner’s will, or encrypting data to make a computer temporarily inoperable. And if the computer systems of a third-party nation–itself a victim–wrongfully suffer a counterattack by a US company, that would perhaps give the third-party nation an incentive to shore up its own defenses and to help trace, deter, and punish the real attacker.
It could also be objected that only nation-states can lawfully declare war and retaliate against an attack. That is, the “Stand Your Cyberground” option could pose a war-powers problem for Constitutional law, even if not for IHL. But again, anti-piracy laws on the open seas provide a model for a commercial enterprise to protect itself with force, no matter who the aggressor is. One major difference is that a private citizen or company launching a kinetic weapon from the US onto foreign soil would be conducting a cross-border attack that violates the target nation’s sovereignty; in cyberspace, however, it’s unclear where the national borders are, if they exist at all. In any case, we haven’t established that this tit-for-tat cyberexchange is war in the first place; otherwise, it would clearly be the Department of Defense’s area of responsibility, and that is not at all clear today, as the CISPA debate shows.
If we can resolve the legal issues with this “Stand Your Cyberground” option, there still would be other concerns. For one thing, it puts the burden and cost of cyberdefense on industry. But to the extent that industry is the primary victim, the option of doing nothing could be more costly. Further, insofar as the US is increasing investment in cyberdefense–whether through DHS or DoD–it could shift some of those funds to the private sector, investing in companies’ own cyberdefense while, more importantly, not directly engaging foreign adversaries when the legal and ethical implications of state engagement remain unclear. Another concern is that a company that counterstrikes could be perceived badly by the public, stockholders, and customers. On the other hand, a company that can defend its intellectual property may be seen as a more responsible investment: this year, Microsoft disrupted international botnets, and its reputation is none the worse for wear.
Conclusion
To be clear, I am not proposing that we adopt this solution, only that we develop it for full consideration. Surely there are other objections to resolve and details to work out, including the technical feasibility of the proposal, though many US companies have considerable resources and expertise, more so than some nations. But if “Stand Your Ground” laws make sense in domestic criminal policy, then they could be a reasonable model for cyberpolicy. Despite its roots in the US military, the Internet has a physical infrastructure that is ultimately owned mostly by commercial organizations worldwide–again, the primary victims of cybercrime–so it is arguably appropriate that they should defend themselves in the absence of state protection.
Besides plausibly having legal precedents, this option may sidestep our current dilemma over a state-sponsored cyber response, whose unclear legality risks escalation into an international incident and armed conflict. It turns our problem from one of military ethics into one of business ethics, a less dangerous and less contentious frame. It also promotes justice by filling the response gap we currently have: striking back at perpetrators of industrial cybercrime–some sponsored by foreign states–has deterrence value and could be better than doing nothing or, worse, a panicked foreign policy that might drag us into war.
Of course, we still need a military policy for the cyber domain, and we need to eventually sort out IHL and other issues in the event of an actual act of war, such as a cyberattack on our military assets by a foreign state. So, if workable, “Stand Your Cyberground” would be merely a stop-gap measure in the digital Wild West until we develop that national cyberpolicy or international treaties. This is especially critical where other options seem to require trading away treasured civil liberties–such as privacy and freedom of expression–in the desperate name of security.
Acknowledgements: This article has benefitted from the US Naval Academy’s VADM Stockdale Center for Ethical Leadership’s McCain Conference on cyberwarfare ethics and policy, held on April 26-27, 2012, in Annapolis, Maryland. Specifically, the author thanks US Air Force Lt. Col. Matteo Martemucci, Col. Edward Barrett, and Col. James Cook for their discussions, and California Polytechnic State University in San Luis Obispo for its support. Opinions expressed in this article are the author’s alone and not necessarily those of the organizations and persons acknowledged or of other organizations affiliated with the author.