Backdoor: A Technical Definition

Original Date: April 2016

A clear technical definition of the term backdoor has never reached wide consensus in the computing community. In this paper, I present a three-prong test to determine whether a mechanism is a backdoor: “intent”, “consent”, and “access”; all three tests must be satisfied for a mechanism to meet the definition of a backdoor. This three-prong test may be applied to software, firmware, and even hardware mechanisms that establish a security boundary, either explicitly or implicitly, in any computing environment. These tests, as I will explain, take more complex issues such as disclosure and authorization into account.

The technical definition I present is rigid enough to identify the taxonomy that backdoors share, yet flexible enough to allow for valid arguments and discussion.

1.0    Introduction

Since the early 1980s[1], backdoors and vulnerabilities in computer systems have intrigued many in the computing world and the government, and have both influenced and been influenced by popular culture. Shortly after the movie WarGames was released, Ronald Reagan discussed its plot, which revolved around a backdoor in a defense computer system, with members of Congress and the Joint Chiefs of Staff[2], leading to research into the government’s own risk assessments. Before the Internet was largely in place globally, computers at Lawrence Berkeley National Laboratory were compromised through a vulnerability in the popular Emacs editor[3], a story that became a New York Times bestseller. Since the 1980s, and with the now global scale of the Internet, remote access trojans (RATs), rootkits, and numerous other types of backdoors have been discovered in some of the world’s largest data breaches[4], and have made their way into popular culture, such as the CSI: Cyber television series, movies, and books. All of these events have involved what has been referred to as a backdoor, but without any clear definition to test the validity of that label.

While backdoors have become a significant concern in today’s computing infrastructure, the lack of a clear definition of a backdoor has led the media (and some members of the computing community) to misuse or abuse the word. System vulnerabilities that are clearly not backdoors are often reported as such in news articles[5][6], spreading confusion among the general public. This has the capacity not only to cause a disconnect with non-technical readers, but also to engender distrust, misplaced attribution, and even panic.

By misappropriating the term backdoor, the media and the entertainment industry can incite panic that all computer systems are as vulnerable and open to attack as the fictional NORAD defense center in WarGames, and that physical safety is under imminent threat from such widespread vulnerability. Modern-day paranoia has led to many conspiracy articles about power grids, dam computers, and other SCADA systems, painting a bleak picture of numerous doomsday scenarios[10]. While such systems are susceptible to real-world attacks, with the help of the media and a little fiction, the public’s fears can escalate beyond a healthy concern for security into paranoid delusions that lead to stockpiling weapons and food, and even building underground bunkers. While backdoors are not necessarily the root cause of all of this paranoia, the attribution and conspiracy undertones of the word help fuel it.

1.0.1 Need for a Definition

In addition to public panic over cyber threat “fan fiction”, the lack of a clear technical definition of a backdoor stands to affect technical analysis, and possibly attribution, of newly discovered implants in computing devices, discoveries that have become increasingly frequent. By defining a backdoor, the technical community gains a framework by which it can identify, analyze, and attribute new weaknesses as they are discovered.

On a legal front, privacy legislation is anticipated in Congress, and pending legal cases already exist within the court system in which a technical definition of backdoor would be beneficial. Without a clear definition, these proceedings pose the risk of misinformation in criminal prosecution, search warrants, warrants of assistance, secret court proceedings, and proposed legislation – all by preventing a duly appointed legal body from adequately understanding the concept.

In February 2016, the Federal Bureau of Investigation sought an order under the All Writs Act to force Apple Inc. to assist it in bypassing the security mechanisms of Apple’s own firmware in a terrorist investigation. Throughout the proceedings and the Congressional hearings that followed, the term backdoor was used heavily by both sides to describe the FBI’s order, as well as different scenarios involving future orders or proposed legislation. It is crucial, then, that there be an accepted definition of the term, as the very definition of backdoor stands to influence justice and legislation on a national, and possibly worldwide, stage. Any future attempts at a legal definition of a backdoor must clearly begin with a technical definition, one of which is presented here.

1.0.2 Prior Attempts at Definitions

Some attempts have been made to define backdoor; however, all fall short of being both specific enough to capture the intentional and subversive nature of backdoors, and general enough to avoid covering mechanisms that the computing community does not consider backdoors.

The Oxford Dictionary defines “back door” as “a feature or defect of a computer system that allows surreptitious unauthorized access to data”[13]. This definition makes several incorrect technical assumptions. First, by labeling the backdoor as either a feature or a defect, it assumes that the mechanism is either designed to improve functionality (a feature), or that any unintentional vulnerability in a system could be considered a backdoor. Either falls short of the definition I propose, which rests somewhere in between: the mechanism is not a feature, but an intentionally placed component that is not disclosed to the user, and not the result of a programming error; otherwise any computer vulnerability could be considered a backdoor. The definition also fails to sufficiently cover the purpose of a backdoor, implying that it is only about access to data. Many backdoors provide no access to data whatsoever, but rather surrender control of a system to an actor, and many are placed in security boundary mechanisms that do not protect data at all. The definition fails to acknowledge such backdoors.

The Linux Information Project (LINFO) defines “backdoor” as “any hidden method for obtaining remote access to a computer or other system”[14]. This definition fails to identify a backdoor as a specific mechanism within the software, instead defining it as a method, suggesting that any technique used to obtain remote access should be considered a backdoor. This is too general, and could classify anything from hacking methods to social engineering as a backdoor. It would also classify worms, viruses, exploits, and other means of gaining unauthorized access as backdoors, even though many in the computing community would not share that opinion. The second part of the definition makes remote access a mandatory requirement; however, this paper will demonstrate that backdoors can exist for purposes including local (non-remote) access, or even access by a different user on the same machine. In this paper, I argue that it is the actor, and not the origin of access, that matters. Lastly, this definition is so broad that it would suggest that a software update mechanism (such as the kinds distributed with Linux distributions themselves), or other similar mechanisms, could constitute a backdoor.

Aside from generalized definitions, it is surprising that no academic papers could be found that specifically attempt to define the term. There are countless papers in which backdoors are documented and their taxonomy analyzed, yet no clear definition of backdoor was found that could be applied to a general component of a computer system. It would seem that for decades, the computing community at large has taken a “know it when I see it” approach to the term, without ever accepting a clear definition or test.

1.0.3 General Taxonomy

While backdoors have become increasingly complex and vary in design over time, all backdoors share the same basic taxonomy. They affect security mechanisms (more specifically, boundary mechanisms) in the following ways:

  • They operate without consent of the computer system owner
  • They perform tasks that subvert disclosed purposes
  • They are under the control of undisclosed actors

1.1    Purpose

The purpose of the three-prong test in this paper is to provide a basis for technical argument: to be able to effectively argue that a component within a security boundary mechanism constitutes a backdoor, or does not constitute a backdoor, and support that argument with consistent facts.

Essentially, this framework is designed to enable one to point at something, in a technical context, and argue, “this is a backdoor, and here are my facts to support that argument”, while also enabling someone else to argue, “no it isn’t, and here are my supporting arguments”, with the expectation that after thorough analysis, consensus may be achieved.

1.2    Definitions

Throughout this paper, the term mechanism is used to describe a boundary enforcement mechanism (security mechanism) that is being evaluated. A mechanism (or security mechanism) can be any piece of software, firmware, or hardware that establishes, either explicitly or implicitly, a security boundary. For example, an authentication mechanism explicitly establishes a security boundary by controlling user access. A software update mechanism implicitly establishes a security boundary by means of code control; that is, controlling the code introduced into a computer based on the user contract, which I will explain in depth. An encrypted channel establishes an implied security boundary by controlling who, or what, can communicate over a privileged channel of communication. All of these are referred to as security mechanisms throughout this paper.

When evaluating a mechanism, components of that mechanism may be explored; for example, an authentication mechanism with a component that allows “golden key” access. In this context, the mechanism is said to be “backdoored” if the component itself satisfies the requirements of a backdoor. Here, the malicious component, a mechanism in and of itself, is the overall subject of the evaluation within the context of the computer; however, it must be explored in the context of the larger security mechanism.

Throughout this paper, the term owner or computer’s owner is referenced. Because ownership is complex, this term is intended to mean one who has entitlements and authorization to control access on a computer. This is sufficient to address complex ownership models such as employer owned equipment.

2.0    Three-Prong Test

This section identifies three specific requirements a security mechanism must satisfy in order to meet the definition of a backdoor, and poses three crucial questions, one for each requirement.

2.1    Intent

The intent requirement determines whether or not the actions performed by the security mechanism, as intended by its manufacturer, were adequately disclosed to the owner of the computer. Typically, a backdoor exhibits malicious behavior by subverting a security boundary that is expected by the user; the requirement allows this to satisfy the manufacturer’s intent, but is also broad enough to accommodate mechanisms that violate less straightforward security boundaries, such as the software controls in a software update service.

“Does the mechanism behave in a way that subverts purposes disclosed to the computer owner?”

The concept of intent is closely tied to user trust and perception. In other words, does the mechanism perform only the tasks that the user expects it to perform (or support tasks that the user expects the larger components to perform), or does it, by design, subvert those purposes? If the mechanism can exhibit undisclosed behavior that is contrary to the intents disclosed to the user by the manufacturer, then the intent requirement is satisfied.

Consider the code controls of a typical software update mechanism. The intent, as disclosed by the manufacturer, is to prevent unauthorized updates. Disclosure by the manufacturer about what types of updates are authorized establishes a user contract.

A user contract, as I refer to it herein, is an abstract construct whereby the manufacturer and the user have developed a mutual understanding and expectation of the intention and proper function of a mechanism. This construct allows the user to manage consent, which will be discussed in the next section. For example, the manufacturer will state that the purpose of software updates is to fix bugs and to introduce new code into the system that the user would not find objectionable, according to the manufacturer’s privacy policies and end-user agreement.

As an example of a user contract, consider an authentication mechanism inside a router that provides maintenance access with a “golden key” password. Here, the manufacturer’s intent for the authentication module was disclosed to the user as a function that allows only authorized users into the system (understood to mean having a password that matches a known password created by an administrator). By allowing a maintenance password to bypass the administrator’s user list, the mechanism has subverted its original purpose of providing a security boundary, and violated the user contract.
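The golden-key example above can be sketched in code. This is a hypothetical illustration only; the function name, user list, and password values are all invented for the sketch:

```python
# Hypothetical sketch of a "golden key" subcomponent inside a router's
# authentication mechanism. All names and values are invented.

ADMIN_PASSWORDS = {"admin": "s3cret"}   # disclosed: administrator-managed user list
MAINTENANCE_PASSWORD = "hunter2"        # undisclosed: built-in maintenance password

def authenticate(user: str, password: str) -> bool:
    """Disclosed intent: grant access only with a password created by the administrator."""
    if password == MAINTENANCE_PASSWORD:
        # Undisclosed bypass of the administrator's user list: this branch
        # subverts the mechanism's disclosed purpose, violating the user contract.
        return True
    return ADMIN_PASSWORDS.get(user) == password
```

Note that the mechanism behaves exactly as disclosed for every input except the maintenance password; it is the existence of the undisclosed branch, not any visible misbehavior, that subverts the stated intent.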

Such discussions about intent can quickly become complicated. A manufacturer’s intent can change over time, and with that the user contract must be re-evaluated. For example, consider a software update that updates itself and adds a licensing component that the user may find objectionable. This new purpose must be expressed to the user prior to an update; otherwise the mechanism can be seen as having broken its user contract.

When new functionality is added to an existing user contract, additional disclosure is required in order to modify the contract; this is frequently seen in practice. For example, release notes are displayed or published by software manufacturers prior to a software update. If intent has changed, the disclosed changes can continue to revise the user contract by again implying consent through disclosure, so long as the user can continue to make an informed decision about the mechanism running on their computer. Software that initially obtained the user’s consent, but then revised its intent without disclosure to the user, has violated the user contract and therefore invalidated consent.

2.1.1 Subverted versus exploited

Lastly, consider the terms intent and subvert as incorporated into the definition. The term subvert implies a certain level of intentionality by design. This framework does not attempt to address the matter of negligence, but rather leaves that to existing legal remedies. It does, however, leave open the argument that a manufacturer can, for the purpose of exploitation, leave a mechanism intentionally vulnerable; this is difficult to demonstrate in practice.

Whether or not the manufacturer’s intent for the mechanism is clear can determine whether its stated purposes were subverted. There is enough freedom here to argue intent through negligence, but also enough depth to ensure that a mechanism is not a backdoor simply because it is vulnerable. Some such arguments may ultimately be semantic, rather than technical.

2.2    Consent

The second requirement to determine whether a mechanism meets the definition of a backdoor is a test of consent. This determines whether or not the owner of the computer has authorized the mechanism in question, based on the user contract established by the manufacturer’s disclosure of intent.

“Is the mechanism, or are subcomponents of the mechanism, active on a computer without the consent of the computer’s owner?”

This requirement provides enough room to be satisfied in cases where consent is compelled (and therefore not truly consent), as well as cases where consent cannot be revoked (such as a service that cannot be turned off, which is likewise not consent).

Consider the controls of our automatic software update service from the prior section. Software update services typically behave such that their capabilities rest in the hands of owner consent, while consent to the controls themselves is more or less implied. By activating software updates, the user grants consent to the underlying security mechanisms on the premise that they will behave according to their stated intent; in other words, that they will only permit authorized code to be introduced into the system.

By enabling software updates, the owner implicitly grants consent to the underlying security mechanisms to place controls on the kind of software that is installed, but only to the extent of their disclosed intent. As long as these security controls are performing their disclosed tasks (in accordance with the user contract), such a mechanism would not satisfy this requirement to meet the definition of a backdoor, because it has the user’s consent to control the introduction of code accordingly.

In contrast, backdoors are mechanisms that are active without consent (that is, “unauthorized”), or that cannot be disabled by means made available to the owner. For example, consider a subcomponent of the software update mechanism that permits unauthorized software to be introduced into the system. The owner did not authorize this subcomponent (since it was not part of the user contract), and therefore did not grant consent for it to be active on the system; this satisfies the consent requirement. On the other hand, if the security mechanism was compromised (“hacked”), then it does not meet the consent requirement, because it was still running with the user’s consent. In this case, it is a compromised mechanism, but not a backdoor. This concept of compromised mechanisms will be explored in more detail throughout the paper. The intention of the manufacturer, which ultimately affects the user contract, plays a key role in distinguishing between the two.
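The undisclosed update subcomponent can be sketched as follows. This is a hypothetical illustration; the key names, values, and functions are invented, and a real update mechanism would use asymmetric signatures rather than an HMAC:

```python
# Hypothetical sketch: a software update control whose disclosed task is to
# reject updates not signed with the manufacturer's release key. The second
# branch, which also accepts a "debug" key, was never disclosed and so was
# never part of the user contract; it runs without the owner's consent.
import hashlib
import hmac

DISCLOSED_KEY = b"manufacturer-release-key"   # invented value
UNDISCLOSED_KEY = b"vendor-debug-key"         # invented value

def signature(key: bytes, update: bytes) -> str:
    return hmac.new(key, update, hashlib.sha256).hexdigest()

def accept_update(update: bytes, sig: str) -> bool:
    if hmac.compare_digest(sig, signature(DISCLOSED_KEY, update)):
        return True  # consented: matches the disclosed intent of the mechanism
    # Undisclosed subcomponent: accepting code outside the user contract
    # satisfies the consent prong of the three-prong test.
    return hmac.compare_digest(sig, signature(UNDISCLOSED_KEY, update))
```

The first branch alone would be a legitimate code control; it is the second, undisclosed acceptance path that the owner never consented to.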

Consider the following examples that would satisfy the consent requirement to meet the definition of a backdoor:

  • A software daemon that is installed when the computer owner runs a new application for the first time, and is not capable of being disabled through the user interface. Here, the owner is not given the opportunity to grant or revoke consent from its underlying mechanisms. (Note: legitimate software may also satisfy the consent requirement, but will not satisfy the intent requirement, or the access requirement, discussed next).
  • An authentication mechanism for router firmware that includes an undocumented subcomponent granting “golden key” access; that is, grants access if the given password matches a built-in maintenance password. Without knowledge of or the ability to disable this mechanism on the router, the user can be said to have not given consent.
  • An undocumented diagnostics service allowing the manufacturer to bypass user-level encryption to make repairs easier. Here, the mechanism is undocumented, and therefore cannot have the user’s consent.

As demonstrated by these examples, consent is inherently tied to the manufacturer’s intent, and ultimately the concept of a user contract established between the manufacturer and the user.

2.3    Access

The access requirement determines two factors:

  • Whether or not the mechanism can be controlled (or accessed) at all
  • Whether or not the mechanism is subject to control (or accessible) by an undisclosed actor.

“Is the mechanism under the control of an undisclosed actor?”

The access requirement establishes both whether the mechanism is under control (that is, can be controlled or accessed at all by anyone other than users explicitly authorized by the computer owner), and whether or not it can be controlled or accessed by any undisclosed actors, such as unknown third parties. Here, the term undisclosed actors means anyone other than the computer owner, any users he or she has authorized to access the mechanism, and any disclosed external actors, such as the software manufacturer (delivering updates).

This is probably the most crucial of the three tests, because it distinguishes a backdoor from other types of malicious code, such as malware, trojans, viruses, and adware. Any of these can be a backdoor if it includes a command-and-control (C2) component, but not every instance of these is in fact a backdoor. A destructive piece of malware that is not controllable by its creator, for example, is not a backdoor because it does not satisfy this requirement. A botnet payload that is controlled by a bot-master, however, does satisfy it.
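The distinction drawn above can be sketched in code. This is a hypothetical illustration; the class, field names, and addresses are invented:

```python
# Hypothetical sketch of the access prong: the same payload does or does not
# satisfy the access requirement depending on whether an undisclosed actor
# retains a channel of control over it.
from typing import Optional

class Payload:
    def __init__(self, c2_address: Optional[str]):
        # None models purely destructive code with no control channel.
        self.c2_address = c2_address

    def controllable_by_undisclosed_actor(self) -> bool:
        # A command-and-control channel places the mechanism under the
        # control of an undisclosed actor, satisfying the access requirement.
        return self.c2_address is not None

wiper = Payload(c2_address=None)            # destructive malware: fails the access test
bot = Payload(c2_address="198.51.100.7")    # botnet payload: satisfies the access test
```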

A piece of software that queues up information for future access is considered accessible by an actor. For example, a piece of malware that caches personal data to be sent in batch would satisfy this requirement toward meeting the definition of a backdoor.

This requirement also covers mechanisms involving access by a third party by means of proxy, for example a piece of ransomware in which the actor controls the software through decryption keys entered into the software by the computer owner. Such a mechanism satisfies this requirement towards meeting the definition of a backdoor.

This test is where the rubber meets the road when discussing government backdoors, such as the concept of pushing malicious software through an update mechanism without the computer owner’s knowledge. A software update mechanism, when behaving as intended, may not appear to satisfy the requirements of a backdoor; however, if the mechanism can be controlled by a third party (either directly, or indirectly via court order) to subvert its stated intent, then it has violated the user contract, invalidated consent, and satisfies the definition of a backdoor.
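Taken together, the three prongs form a simple conjunction: a mechanism meets the definition only when all three questions are answered “yes”. A minimal sketch (hypothetical; the field names are invented for illustration):

```python
# Hypothetical sketch: the three-prong test as a conjunction. A mechanism is a
# backdoor only if ALL three requirements are satisfied.
from dataclasses import dataclass

@dataclass
class Mechanism:
    subverts_disclosed_purpose: bool       # intent prong (2.1)
    active_without_owner_consent: bool     # consent prong (2.2)
    controlled_by_undisclosed_actor: bool  # access prong (2.3)

def is_backdoor(m: Mechanism) -> bool:
    return (m.subverts_disclosed_purpose
            and m.active_without_owner_consent
            and m.controlled_by_undisclosed_actor)

# A compromised-but-legitimate update service: hacked by a third party, yet
# still running with the owner's consent, so it fails the consent prong and
# is a compromised mechanism rather than a backdoor.
hacked_updater = Mechanism(subverts_disclosed_purpose=True,
                           active_without_owner_consent=False,
                           controlled_by_undisclosed_actor=True)
```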

2.4    Liability

Does a legitimate software update service that is attacked and used to push malware to the computer system constitute a backdoor? On a technical level, no, because the intentions of the software have not changed (unless malicious intent by the manufacturer can be demonstrated). There is an argument to be made that the service has effectively been backdoored – i.e. “a hacker turned the service into a backdoor”, or “a hacker backdoored the service” – but this is a semantic argument, not a technical one. I make no attempt in this paper to define the proper use of backdoor as a verb.

3.0    Three-Prong Test Thought Examples

This section will examine a number of thought exercises, applying the three-prong test to various scenarios and expanding on the different ways it may be applied. These examples are intended for guidance only, and not to demand a specific technical conclusion about the examples used. The reader has the freedom to make counter-arguments and to test their interpretation of the three-prong test to arrive at their own conclusion.

3.1 Three-Prong Test Applied to Legislation

Consider recent “backdoor” legislation as it pertains to a legislated backdoor into end-user computer systems. Legislation allowing the government to compel a manufacturer to install malicious software through a software update mechanism would not, by itself, constitute a backdoor unless this information were withheld from customers. If a manufacturer were to fully disclose that specific government agencies had control over a software update mechanism, and that the mechanism could install software intended to introduce code the user would find objectionable, then the mechanism no longer satisfies the intent or access prongs of the test.

In other words, for the government to legislate a mechanism that would not meet the definition of a backdoor, it must disclose to the owner that the government can install functionality through auto-update (the access prong), or disclose functionality that can introduce code deemed objectionable by the owner (the intent prong). If the user chooses to still update their software, then this is not a backdoor, because it has been disclosed, and either its intent or its origins have been fully stated. It is, in fact, much worse than a backdoor at this point; it is a surveillance tool, and should be treated as such in law.

Some may wish to use a term such as “government backdoor”, implying a disclosed form of a backdoor; however, this is a semantic argument and not a technical one, and it does little justice to the civil rights issues raised by compelling such a surveillance tool.

Now consider a more realistic scenario. If the government were to misrepresent or hide the intent and origins of its capability to subvert the auto-update software controls, ordering this functionality in secret, then this has not been disclosed to the user, and pushing malware or spyware (under the direction of an actor undisclosed to the user) would meet the definition of a backdoor, invalidating the consent given by means of the user contract. This will be explored in the next section of this paper.

The construction of the three-prong test provides enough flexibility for technical arguments to be made of what constitutes consent and disclosure on a national stage. Because this framework has led us to such arguments (which are out of scope), the framework itself has done its job in providing a construct in which these arguments can be explored.

3.2    Three-Prong Test Applied to Secret Court Orders

In today’s legal landscape, secret court orders are a possibility. In such scenarios, we are no longer discussing disclosed actors or intent, but rather secret orders, such as those issued through a FISA court under Section 702, or secret orders under the All Writs Act. In these cases, our hypothetical software update service could unwittingly become a backdoor if the government chose to quietly control it without any disclosure to the user.

In the same way, ordering the manufacturer to keep such capabilities secret would turn the manufacturer into an arm of government for the express purpose of creating a backdoor, and the manufacturer could be considered partially liable for the consequences of doing so. Those who control the mechanism dictate its intent, and so if the government is partially in control of the mechanism, then its intentions must become part of the overall test. In such a case, the functionality of the software would likely subvert the intent disclosed to the user. Consent would similarly be invalidated, resulting in a software update mechanism that qualifies as a backdoor by definition.

3.3    Three-Prong Test Applied to Apple File Relay

Shortly after the release of research in 2014[7], Apple’s file relay service became the subject of debate in the information security community as to whether or not it constituted a backdoor.

File relay was an Apple service that ran without the user’s knowledge on millions of iOS devices. The service bypassed backup password encryption on iOS 7 and earlier, and was accessible either by using an existing pair record from a desktop machine, or by creating one on-the-fly by pairing an unlocked device. It was not a backdoor into the device’s operating system; rather, I argued that it was an encryption backdoor: it provided a way to bypass a critical security boundary against data theft, and subverted the backup password’s stated purpose of better securing paired devices against data theft.

Of course, this doesn’t mean it was nefarious, or intended for objectionable purposes (although that didn’t stop law enforcement from taking full advantage of it); Apple later claimed its purpose was for collecting diagnostics information. Applying the three-prong test to this service, my arguments are as follows.

3.3.1 Intent

“Does the mechanism behave in a way that subverts purposes as disclosed to the computer owner?”

The actual use of the file relay service was unclear at the time; Apple later stated publicly that it was intended as a diagnostics tool, but none of its functionality was, at the time, disclosed to the user. It was also not known that allowing a device to pair with a desktop machine enabled a capability to bypass backup encryption.

My counter-argument was that a diagnostic service constantly running on the device was arguably not within the scope of the user contract. All other known diagnostic services in iOS were on-demand (with user consent), whereas file relay was “always on”. Its purposes included defeating a security mechanism that was explicitly provided to the user. Once the research made the existence of this service known to the average user, Apple promptly disabled it in all future firmware updates, demonstrating that users did not consider it part of Apple’s stated intent. Later versions of iOS ship with the service disabled by default, and iOS is more secure for it today, much to Apple’s credit for quickly addressing it.

3.3.2 Consent

“Is the mechanism, or are subcomponents of the mechanism, active on a computer without the consent of the computer’s owner?”

The existence of the file relay service was not disclosed to any user, and it was active on millions of iOS devices without the user’s consent. The user had no way to disable this mechanism, even after research made its existence known, so it could not operate with user consent.

3.3.3 Access

“Is the mechanism under the control of an undisclosed actor?”

The mechanism was subject to access by anyone with a pair record copied from one of the user’s machines, or by generating one on-the-fly from the user interface; once paired, it could be accessed across a WiFi network. It did not require authentication by the manufacturer (such as a special certificate), or any other means of access control.

Because the service remained undisclosed to the user, I argued that ignorance of this capability altered the user’s perception of pairing relationships and screen lock policies, since the user believed that backup encryption protected a paired device from having its content dumped. This may have caused some users to be more lax about which devices they chose to pair with (such as community devices, work devices, or devices belonging to family and friends), or about screen access.

The service was available to any USB or WiFi connection capable of using or generating a pair record. I argued that by defeating the security mechanism explicitly given to the user (backup encryption), file relay extended what would have been more restrictive access practices through pairing, as well as physical access, where a pair record could be created on-the-fly.

3.3.4 Discussion

So was file relay a backdoor? It failed the consent test and, in my opinion, the access test. The real question is whether the mechanism satisfied the intent test, and this is where many such technical conversations will end up. It was likely not Apple’s intention to create a mechanism that subverts its own security; this could easily have been a programming oversight, with the bypass of backup encryption an unintended side effect. The intent of the security boundary is what matters most here, and that boundary was access control to user content via backup encryption. The question, then, is whether defeating backup encryption was intentional (even if only intended for internal use), or a design error.

To law enforcement, file relay was exploited as if it were a backdoor, and this capability was integrated, without Apple’s participation, into a number of commercial forensic tools to bypass user encryption for prosecuting crimes: a capability the user was not expecting from Apple. Under this definition, however, it is the manufacturer’s intent that matters in satisfying this requirement, not the third party’s misuse.

Depending on how one interprets the intent test, a broader intention to subvert encryption could make file relay a backdoor for engineering purposes, while a narrower reading, as a poorly designed diagnostics tool, could conclude that it was simply bad engineering. One might argue that law enforcement forensic tools backdoored the file relay service, which would make it a backdoor belonging to those product manufacturers, and not Apple; that is more of a semantic argument than a technical one.

3.4    Three-Prong Test Applied to Clipper chip

The Clipper chip was a cipher chipset developed and endorsed by the United States National Security Agency[8]. The chip was designed to provide key escrow, enabling law enforcement, as a third party, to decrypt any messages sent through the chip. It is considered by most to be the quintessential example of a backdoor.

The three-prong test I have proposed here analyzes implementation in the context of a mechanism inside a computer. There are, therefore, different contexts in which to analyze the Clipper chip. The standalone chip could be considered, as it incorporates a subcomponent that provides a “backdoor” key escrow through its Law Enforcement Access Field (LEAF); however, since the test deals specifically with a mechanism inside a computer, the chip itself (the mechanism) arguably does not constitute a computer on its own. To apply the three-prong test correctly, we will analyze the chip from the perspective of an installation inside a host device, the AT&T TSD-3600.
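As background for the analysis below, the LEAF structure documented by Blaze[8] can be sketched schematically. The field layout follows his description (a 32-bit unit ID, the 80-bit session key encrypted under the chip’s unit key, and a 16-bit checksum, together encrypted under a global family key); the class and method names here are purely illustrative, and the actual Skipjack encryption steps are omitted:

```python
# Schematic of the Clipper LEAF (Law Enforcement Access Field) layout as
# described by Blaze[8]. Field widths reflect the documented 128-bit
# structure; names are illustrative and the cipher itself is omitted.

from dataclasses import dataclass

@dataclass
class LEAF:
    unit_id: int               # 32-bit chip serial number
    wrapped_session_key: int   # 80-bit session key encrypted under the unit key
    checksum: int              # 16-bit checksum binding the LEAF to the session

    def pack(self) -> int:
        """Concatenate the fields into the 128-bit value that would then be
        encrypted under the global family key and sent with each call."""
        return (self.unit_id << 96) | (self.wrapped_session_key << 16) | self.checksum

leaf = LEAF(unit_id=0x00000001, wrapped_session_key=0xAA, checksum=0xBEEF)
assert leaf.pack().bit_length() <= 128
```

Because this field accompanies every communication, any party holding the family key and the escrowed unit key (in practice, government agencies) could recover the session key and decrypt the traffic.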

3.4.1 Background

The Clipper chip was designed as a drop-in replacement for the DES [NBS77] cryptographic chipset[8]. The only product it ever made it into was the AT&T TSD-3600 Telephone Security Device. While the capabilities of the chip had become known to the public, the TSD-3600 itself was never sold to the public; it was, however, used internally inside various government agencies.

The user manual made only vague disclaimers about cryptanalytic attacks and nowhere stated that one intent of the device included surveillance capabilities[9]. It also made no mention of the government, or any agencies, as actors that had control of or access to the device[9].

In the analysis to follow, consider the three-prong test as applying from the perspective of a government employee who has been given a TSD-3600 to use, or a hypothetical end consumer (such as a CEO) who has purchased the device, had it been available for public consumption.

It is important to note that any peripheral disclosure, outside of that made by the manufacturer, is irrelevant for our purposes. Simply because the Clipper chip was known, by way of the media, to allow for government surveillance does not satisfy the demands of disclosure in general; whether it could is a technical position to be argued outside the scope of the backdoor test. For our purposes here, we will treat external factors such as the media as irrelevant to disclosure.

3.4.2 Intent

“Does the mechanism behave in a way that subverts purposes disclosed to the computer owner?”

Disclosed purposes, according to the TSD-3600 manual[9], did not include a surveillance mechanism for law enforcement that bypassed the user privacy boundary. The manual made no attempt to notify the user of this capability, and there is nothing on record to demonstrate that AT&T made this capability known to the end user in any other way. The intent requirement is satisfied.

3.4.3 Consent

“Is the mechanism, or are subcomponents of the mechanism, active on a computer without the consent of the computer’s owner?”

In the case of the TSD-3600, the user did not give consent for the Clipper’s LEAF mechanism to be active, and had no ability to disable it. Therefore, the consent requirement is satisfied.

3.4.4 Access

“Is the mechanism under the control of an undisclosed actor?”

The mechanism was under the control of (or accessible by) an undisclosed actor: namely, any government agency capable of sending the correct signal to the device. The access requirement is satisfied.

3.4.5 Analysis

The TSD-3600 incorporated a chip designed to give surveillance capabilities to the government. While the chip itself, out of context, is merely a surveillance-backdoor “kit” of sorts, its undisclosed use in a product demonstrates its implementation as a backdoor. The “mechanism” in question here is the LEAF mechanism that subverts the user privacy contract.

One might argue that court proceedings that garnered attention on the national stage may constitute disclosure. In this scenario, a government employee who actively used the TSD-3600, fully aware of its surveillance capabilities, was not necessarily using what they considered a backdoored product; to them it may have better fit the definition of an internal surveillance device.

Had the implementation been different, the Clipper could conceivably have been used in ways that would not constitute a backdoor, but rather a disclosed surveillance tool, for much the same reasons that modern Mobile Device Management is not considered a backdoor. For example, if AT&T had published in the manual that the surveillance capabilities of the Clipper existed in the device, or had made known that government actors had control of or access to the cipher routines inside the TSD-3600, then the user would have been informed of the intent of the unit and could have made an informed decision to use a different solution.

On the other hand, had legislation passed that mandated Clipper’s use in all secure telephony products, the user arguably would have had no way to revoke consent to the mechanism. Disclosure would then determine whether the Clipper was a secret backdoor or, had its purpose been plainly disclosed to all end users, a mass surveillance tool operating without user consent. I will reiterate: for a government to compel the use of a mass surveillance tool on its citizens is a civil rights atrocity that goes far beyond the malicious intent of a backdoor, and should be looked upon with more serious consequences.

3.5    Three-Prong Test Applied to Computer Worms

In this section, we examine two of the most destructive worms in history: My Doom and Code Red II. The My Doom worm affected one million machines, with reported damages of $38 billion; Code Red II also affected one million machines, with reported damages of $2.75 billion[11]. One of these two worms’ mechanisms will satisfy the definition of a backdoor, while the other will not.

3.5.1 Intent

“Does the mechanism behave in a way that subverts purposes disclosed to the computer owner?”

The My Doom payload was included in an email attachment that subverted the user’s expectation of an attachment containing a copy of an undelivered message; it had not established a user contract of any kind, and delivered its payload by means of deception. It clearly satisfies this requirement for the definition of a backdoor.

The Code Red worm was not disclosed to the user, and therefore could not have formed a user contract of any kind. It clearly satisfies this requirement for the definition of a backdoor.

3.5.2 Consent

“Is the mechanism, or are subcomponents of the mechanism, active on a computer without the consent of the computer’s owner?”

The My Doom worm was transmitted via email, masquerading as a mail delivery error message. When the user clicked on the included attachment, the worm executed and resent itself to all email addresses found in the user’s address book and other files. Here, different arguments about consent can be made. One may argue that the user granted consent by executing the file. On the other hand, one may also argue that the user never intended to execute a file, but to open a mail attachment. The latter argument suggests consent was contingent upon the user’s understanding of the intent of the attachment, which turned out to be misrepresented, and therefore consent was not given.

Most would defer to the latter argument as the more valid; however, the requirement is flexible enough that more complex cases involving consent may be effectively argued on both sides. For the purposes of this paper, we shall conclude that My Doom satisfies this requirement for the definition of a backdoor.

The Code Red worm affected machines running the Microsoft IIS web server. The worm spread by exploiting a buffer-overflow vulnerability, using a long buffer of characters followed by exploit code. The Code Red worm did not obtain user consent and had no interaction with the user; it invited itself into the computer system and executed without any access being granted to it. It clearly satisfies this requirement for the definition of a backdoor.
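To make the overflow concrete, the attack string can be sketched schematically: an HTTP GET request to the vulnerable IIS Indexing Service handler, padded with a long run of filler characters that overruns a fixed-size buffer, followed by the payload. The filler length and placeholder payload below are illustrative only, not the actual exploit bytes:

```python
# Schematic of a Code Red-style attack request (illustrative values only).
# A long run of filler characters overruns the vulnerable buffer so that
# the bytes following it are treated as code rather than data.

FILLER = b"N" * 224                 # filler length is illustrative
PAYLOAD = b"<exploit code bytes>"   # placeholder; a real worm sends machine code

request = b"GET /default.ida?" + FILLER + PAYLOAD + b" HTTP/1.0\r\n\r\n"

# The request is far longer than any legitimate query to this handler.
assert len(request) > 200
```

No user ever sees this request; it arrives over the network and executes entirely within the server process, which is why the consent prong is so clearly satisfied.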

3.5.3 Access

“Is the mechanism under the control of an undisclosed actor?”

The purpose of the My Doom worm is believed to have been to launch a distributed denial-of-service attack against the SCO Group by flooding its host with traffic. This functionality was baked into the worm, and the variants of the worm analyzed for the purposes of this paper did not subject the computer to any kind of remote access by a hacker or other actor. Because the worm was completely self-contained and could not be controlled by any outside actor, the My Doom worm does not satisfy the access requirement, and therefore does not satisfy the definition of a backdoor.

The first strains of Code Red were self-contained; however, the Code Red II variant installed a mechanism allowing a remote attacker to control the infected computer[12]. This remote access mechanism satisfies the access requirement and, with all three tests satisfied, Code Red II includes mechanisms that fall under the definition of a backdoor.

This requirement is also satisfied by download mechanisms in other worms. For example, the ILOVEYOU worm downloaded an executable from the Internet, which ran on infected computers. Because the attacker could change this executable, one could effectively argue that the attacker had access to infected computers by means of the remote code.

4.0    Technical Definition of a Backdoor

If you take these three prongs and combine them into a single statement, the result is a reasonable technical definition of a backdoor:

A backdoor is a component of a security boundary mechanism, in which the component is active on a computer system without consent of the computer’s owner, performs functions that subvert purposes disclosed to the computer’s owner, and is under the control of an undisclosed actor. 

With this definition, a mechanism can be said to be backdoored if it contains a component that is a backdoor.
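The definition above reduces to a conjunction of the three prongs. As a minimal sketch, assuming each prong is modeled as a yes/no finding (the function and parameter names here are illustrative, not part of the definition):

```python
# Minimal sketch of the three-prong test as a predicate. Each prong is a
# yes/no finding reached by the analysis in sections 3.x; all three must
# hold for a mechanism to meet the definition of a backdoor.

def is_backdoor(subverts_disclosed_intent: bool,
                active_without_consent: bool,
                controlled_by_undisclosed_actor: bool) -> bool:
    """Return True only when intent, consent, and access are all satisfied."""
    return (subverts_disclosed_intent
            and active_without_consent
            and controlled_by_undisclosed_actor)

# My Doom: intent and consent satisfied, but self-contained (no access prong).
assert is_backdoor(True, True, False) is False
# Code Red II: all three prongs satisfied.
assert is_backdoor(True, True, True) is True
```

The structure also makes the failure mode of loose usage visible: dropping any one prong would reclassify large amounts of ordinary software or self-contained malware as backdoors.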

It is the opinion of the author that this definition is sufficiently specific not to be misused. It is not so broad that it could describe every component of a computer system as a backdoor: non-purposeful software components fall under the intent disclosed to the user, as well as informed or implied consent, and fall under the normal operation of the software. Such services are also not under the control of any party, but are autonomous components running on the system (such as a task scheduling service).

The wording is also specific enough to apply to purposeful code that intentionally performs tasks in violation of its user contract while under the control of an actor. This definition provides a solid construct, but also enough room to allow for interpretation and counterarguments.

5.0    Conclusions

Defining a mechanism that has been presented in many forms is no easy task, and there may never be an entirely perfect, black-and-white definition. This paper has described the taxonomy of backdoors so as to address their commonalities in a way that provides an adequate technical structure for analyzing virtually any security boundary mechanism of a computer. A good definition of a backdoor must be able to distinguish a backdoor from other classes of malware and from legitimate security mechanisms, and we have done so here. Only time and exposure to a number of technical challenges will determine the efficacy of this three-prong test; however, applying it to numerous examples has thus far demonstrated it to be robust enough for consideration within the community.

6.0    Acknowledgments

Special thanks to peer reviewers Dino Dai Zovi and Wesley McGrew, whose insight helped add definition to these concepts.


[1] Wargames (June 3, 1983), Motion Picture. MGM Pictures.

[2] Brown, Scott (July 21, 2008). “WarGames: A Look Back at the Film That Turned Geeks and Phreaks Into Stars”. Wired.

[3] Stoll, Cliff (September 13, 2005). “The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage”.

[4] Zetter, Kim (December 23, 2015). “This Year’s 11 Biggest Hacks”. Wired.

[5] Haselton, Todd (February 7, 2014). “Target Hackers Used HVAC Credentials for Backdoor Access”. Techno Buffalo.

[6] Ryge, Leif (February 27, 2016). “Most software already has a “golden key” backdoor: the system update”. ArsTechnica.

[7] Zdziarski, Jonathan (January 26, 2014). “Identifying Backdoors, Attack Points, and Surveillance Mechanisms in iOS Devices”. International Journal of Digital Forensics and Incident Response.

[8] Blaze, Matt (August 20, 1994). “Protocol Failure in the Escrowed Encryption Standard”

[9] AT&T (September 30, 1992). “AT&T TSD User’s Manual Telephone Security Device Model 3600”. Archived at

[10] National Geographic. “Doomsday Preppers”. Television series.

[11] Pudwell, Sam (October 24, 2015). “The most destructive computer viruses of all time”

[12] CERT (August 6, 2001). “Code Red II: Another Worm Exploiting Buffer Overflow in IIS Indexing Service DLL”.

[13] Oxford Dictionary. Definition 1.1.

[14] Linux Information Project (LINFO). Definition. Archived from