In a world of growing dependence on technology, consumers of information and communications technology (ICT) goods face an increasingly important question of provenance: How, if at all, can users be confident that the systems on which they rely will function as they are supposed to? How can they be sure that products and systems have not been altered in the supply chain?

The issue is complex. These questions vary across many dimensions, but broadly speaking they can be broken down into three categories.

First, to some degree, they implicate questions of technical capacity and security: How are we to know that the manufacturers of a hardware or software system have designed and built that system in a way that is secure against error, mistake, natural disruption or deliberate external misconduct? In other words, has the manufacturer performed competently?

Second is the question of corporate intent: How are users to be assured that manufacturers have not constructed and marketed a system that affords the manufacturer privileged access and control? In other words, is the software or hardware intended to benefit the end user, or does the manufacturer see a value to be gained for itself from the design?

Third, trust is also a question of politics and law: What protections exist against state-level intervention in the manufacture or operation of an ICT system? Are the flaws in the system such that some third party, for either well-meaning or malicious reasons, can benefit from the gaps in construction?

These questions can only be answered with a combination of technology, process, law, and policy.

Needless to say, the issue resonates today. As the global supply chain for ICT products expands, new producers enter the field, bringing with them novel and different risks to the security of the products they create. The ongoing discussion regarding the use of Chinese products in Western systems is but one example of a much broader and deeper problem: How do we assess the degree of trustworthiness or lack thereof in ICT products?

Trust is always a question of degree. It is the nature of ICT systems that risks of compromise can never be fully eliminated. But they can, with effort, be mitigated. At the same time, investments aimed at mitigating risk often serve perception rather than actually reducing the risk; as a result, risk assessment is often a matter of perception rather than evidence.

Trust and risk are also context driven. The risk to one user may be an opportunity for another. Product differentiation, market fragmentation and the context of deployment are tied to these questions of trustability.

The simple reality is that a baseline for trustworthiness has yet to be defined and is likely to be differentiated by technology, context of use and capabilities. Increased trustworthiness can come with increased costs. Commonly adopted solutions may add to the comfort of the system owners and customers but may not alter the objective trustworthiness of the system in question. In addition to the need for objective metrics, which have not yet been developed, technology users and customers have differing perceptions of trade-off calculations based on their risk preferences and their business models.

That, in turn, leads to a fundamental problem: We do not know how to assess and evaluate trustworthiness or trustability based on evidence. We lack a concrete description of acceptable systems behavior and agreed-upon metrics for assurance. Our political system has yet to reach consensus on a cross-domain definition of trustworthiness. How, then, are ICT manufacturers to provide assurances of their trustworthiness to skeptical consumers? If a producer of ICT goods wishes to differentiate its product by providing convincing assurances, how can it do so? And where may consumers or customers turn if they wish to evaluate the trustworthiness of the product they are selecting? What are the key characteristics of a framework that permits us to answer these questions?

No generally accepted answer to these questions exists. Because of the large number of variables contributing to trust, a fully measurable answer may not be possible. Instead, the technology, legal and policy communities are trying to segment the answer by limiting the inquiry to individual constituent domains.

But it should not be impossible to develop a set of principles, some of them evidence based, that can guide an overall assessment of trustworthiness in hardware and software. Even without the prospect of a precisely assessable level of trustworthiness, we hypothesize that a framework can support relatively comprehensive assessments with a relatively high degree of confidence.

The value of such a coherent framework based on agreed-upon trustworthiness principles should be evident. Using these principles, together with acceptable evidence, as a guide, ICT manufacturers and consumers could engage in a structured analysis of comparative risks and make more reasoned risk-benefit and resource allocation decisions.

To that end, Lawfare has convened a working group with the goal of articulating and justifying such a set of trustworthiness principles. This group, however, does not work from a blank slate. To paraphrase Newton (and before him, John of Salisbury), our work stands on the shoulders of others who have gone before us. Indeed, the seminal paper by Lee M. Molho, “Hardware Aspects of Secure Computing,” is now 50 years old. Even then, students of the problem recognized that hardware problems have security implications. Today we are still building on that lesson.

In the course of the working group’s examination of the problem of trustworthiness, we assembled this annotated bibliography, which we thought would be useful to make public. In this partial bibliography, we attempt to compile a baseline of existing works on the evaluation of trustworthiness. We have sought both to summarize the existing field and to characterize it, as a jumping-off point for other efforts. We emphasize at the outset that this bibliography is intended to be neither comprehensive nor overly technical. We do not purport to have fully defined the field; nor have we tried to plumb the depths of technical intricacy. Our goal, rather, is to provide a systematic overview of the field that is both technically literate and of use to decision-makers in the public and private sectors.

This is a living document, and we expect additions and modifications as the working group moves further along.

Our preliminary results reveal an unsurprising finding: that consideration of the question of trustworthiness is stove-piped into subcategories. One goal of the working group may well be an effort to recharacterize the field in a way that allows for cross-connections between existing categories. But, for now, we take the field as we find it, including sources related to four categories:

Political and Legal Criteria

CSIS Working Group on Trust and Security in 5G Networks, Criteria for Security and Trust in Telecommunications Networks and Services (Washington, D.C.: Center for Strategic & International Studies, May 2020).

This report of the CSIS Working Group on Trust and Security in 5G Networks was requested by the State Department. The criteria are designed to complement the Prague Proposals and the European Union’s 5G Toolbox, and they rely primarily on publicly available information. The criteria are broken into categories as follows: 10 political and governance criteria (for example, suppliers are more trustworthy if headquartered in democracies with an independent judiciary and the rule of law); seven business practices assessment criteria (for example, suppliers are more trustworthy if they are transparently owned and publicly traded); 10 cybersecurity risk mitigation criteria (for example, the supplier has passed independent, credible third-party tests, the technology uses open and consensus-based standards, the supplier has a record of patching systems in a reasonable time); and four government actions to increase confidence in the choice of a supplier (for example, selection of diverse suppliers, government and private-sector ability to regularly conduct vulnerability tests and risk assessments).

Levite, Ariel, ICT Supply Chain Integrity: Principles for Governmental and Corporate Policies (Washington, D.C.: Carnegie Endowment for International Peace, October 2019). [also Corporate Governance]

This paper proposes several measures for governments and corporations to undertake in order to increase trust in the integrity of information, communications and operational technology supply chains. For example, it calls on governments to refrain from systemic interventions in supply chains and establish interagency processes to consider the equities of potential interventions. The proposed corporate obligations include not supporting systemic interventions in their supply chain; protecting products and services throughout their life cycles; and accommodating reasonable, lawful requests for information. The paper goes on to outline how these obligations could be transformed into a binding normative framework through formal agreements or other arrangements that incentivize compliance. Finally, the paper explores approaches to verifying government and corporate compliance with the proposed obligations. Some of the paper’s proposals are not pragmatic (for example, the processes suggested for resolving issues), and technology controls are not well covered, but the paper provides a broad overview of the international aspects of the problem.

Moran, Theodore H., CFIUS and National Security: Challenges for the United States, Opportunities for the European Union (Washington, D.C.: Peterson Institute for International Economics, 2017).

This paper analyzes the approach of the Committee on Foreign Investment in the United States (CFIUS) to evaluating the risk of foreign investments. The CFIUS takes a narrow approach, identifying threats within sectors rather than issuing blanket bans, and it focuses on security rather than economic effects (positive or negative). The paper points out that Trump’s strategy of increasing protectionism opens the door to reciprocal retaliation. It discusses historical politicization of the CFIUS process and its impact on investment reviews. It also discusses early references to “national security” without definition and to foreign “control” with only a vague definition. The paper also suggests a “three threat” framework for approaching CFIUS reviews: the possible leakage of sensitive technology to a foreign company or government in ways that could harm U.S. interests; the ability of foreign acquirers to delay, deny or place conditions on outputs from newly acquired producers; and the potential that an acquisition could allow a foreign company or its government to penetrate systems for monitoring, surveillance or planting of malware. It then discusses several examples that fit into this framework. A critical recommendation is for the CFIUS to preclude foreign acquisitions from certain countries across entire sectors rather than evaluate national security threats within sectors. Lessons from the CFIUS strategy can be applied to the evaluation of imported technology, in terms of both security and political considerations.

Corporate Governance Criteria

Boyson, Sandor, Thomas Corsi, Hart Rossman, and Matthew Dorin, Assessing SCRM Capabilities and Perspectives of the IT Vendor Community: Toward a Cyber-Supply Chain Code of Practice (College Park: University of Maryland, Robert H. Smith School of Business, 2011).

The project surveyed the cyber-supply-chain risk management (SCRM) capabilities of 131 firms, using a questionnaire designed by the authors based on contributions from a variety of public- and private-sector agencies. The study found that companies of all sizes under-manage cyber-SCRM, an especially dangerous trend as companies are increasingly complicating their own supply-chain risk profiles by working across one or more product/service boundaries (software, hardware, telecom/data networking, etc.). The study also found that companies of all sizes can be given incentives to improve cyber-SCRM management.

Charney, Scott, and Eric T. Werner, Cyber Supply Chain Risk Management: Toward a Global Vision of Transparency and Trust (Redmond, WA: Microsoft, 2011). [also Political and Legal]

This paper argues in favor of a risk management approach to ensuring trustworthy hardware and software that addresses national security imperatives without threatening the vitality of the global ICT sector. In particular, the authors argue, efforts to enhance trust in ICT supply chains must embrace four principles: First, they must be risk based and utilize collaboratively developed standards. Second, they should promote transparency by both vendors and governments. Third, they must be flexible, allowing for suppliers to implement different types of controls and mitigations based on the technology they provide. Finally, participants (especially governments) must acknowledge that closing markets based on supply-chain concerns will lead to reciprocal behaviors, threatening the global ICT sector.

Fan, Yiyi, and Mark Stevenson, “A Review of Supply Chain Risk Management: Definition, Theory, and Research Agenda,” International Journal of Physical Distribution & Logistics Management, vol. 48, no. 3, April 2018.

This literature review systematically examines the range of scholarship in supply-chain risk management (SCRM) to develop a new comprehensive definition of SCRM, present current research on the four stages of SCRM (risk identification, assessment, treatment and monitoring), and, in particular, understand the use of theory in SCRM. The review identifies 10 gaps in the SCRM literature. Notable findings include the following: there are few attempts to study the SCRM process holistically; a holistic framework is necessary for categorizing risks; risk monitoring is understudied; the field must develop SCRM strategies that provide guidance for practitioners; theories must be used more appropriately to deepen understanding of SCRM; the literature pays little attention to SCRM in developing-country contexts; and more research is needed from the supplier perspective.

Technical Criteria for Hardware

Defense Science Board, Defense Science Board Task Force on Cyber Supply Chain (Washington, D.C.: U.S. Department of Defense, April 2017).

This report was issued by the task force charged with assessing organizations, missions, and authorities related to microelectronics and components used in Defense Department weapons systems. The report notes the rising complexity of microelectronics and that the Defense Department “has become a far less influential buyer in a vast, globalized supplier base.” It breaks down the supply chain into the Defense Department acquisition supply chain, the Defense Department sustainment supply chain and the global commercial supply chain. It also notes flaws in tracking threats to those supply chains, including inadequate Defense Department tracking of inventory obsolescence and vulnerabilities, foreign ownership and competition that reduces the department’s influence, and the lack of formal mitigation processes. Key research recommendations are to develop formal language to describe the scope of means for defense “given some assumed attack classes and capabilities for attackers” and to devise algorithms to automatically assess those possible defenses. Recommendations for the department divide into three assurance types: axiomatic (purchase from a wide set of suppliers), synthetic (use tamper-proof packaging and unforgeable marking), and analytic (record provenance and assign trust based on that, and collect sample measurements of the system to test how a single instance runs). Additional research, especially in the area of cryptography-based integrity, is recommended.
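The analytic assurance type, in which provenance is recorded and trust is assigned on that basis, lends itself to a simple illustration. The Python sketch below is a hypothetical example of our own, not drawn from the report; the record fields, the ProvenanceChain class and the component names are invented. Each custody step is hash-linked to its predecessor, so later tampering with the recorded history becomes detectable.

```python
import hashlib
import json
from dataclasses import dataclass
from typing import List

@dataclass
class ProvenanceRecord:
    """One custody step for a component (hypothetical schema)."""
    component_id: str
    actor: str       # who handled the component at this step
    action: str      # e.g., "fabricated", "packaged", "shipped"
    prev_hash: str   # hash of the previous record, "" for the first

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class ProvenanceChain:
    """Append-only, hash-linked history for a single component."""
    def __init__(self) -> None:
        self.records: List[ProvenanceRecord] = []

    def append(self, component_id: str, actor: str, action: str) -> None:
        prev = self.records[-1].digest() if self.records else ""
        self.records.append(ProvenanceRecord(component_id, actor, action, prev))

    def verify(self) -> bool:
        """True if every record correctly references its predecessor."""
        prev = ""
        for rec in self.records:
            if rec.prev_hash != prev:
                return False
            prev = rec.digest()
        return True

if __name__ == "__main__":
    chain = ProvenanceChain()
    chain.append("fpga-001", "FabCo", "fabricated")
    chain.append("fpga-001", "AssembleCo", "packaged")
    chain.append("fpga-001", "LogisticsCo", "shipped")
    print(chain.verify())          # True
    chain.records[1].actor = "X"   # tamper with the recorded history
    print(chain.verify())          # False: downstream links no longer match
```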

Nissen, Chris, John Gronager, Robert Metzger, and Harvey Rishikof, Deliver Uncompromised: A Strategy for Supply Chain Security and Resilience in Response to the Changing Character of War (McLean, Virginia: MITRE Corporation, August 2018). [also Technical Criteria for Software]

The Defense Department and the intelligence community are “generally aware” of supply-chain threats but do not adequately share knowledge or coordinate approaches. A compliance-focused approach prioritizes meeting minimums. Instead, eight lines of effort at the enterprise level (for the department and its contractors) are needed for the department to “deliver uncompromised”: elevate, educate, coordinate, reform, monitor, protect, incentivize and assure. This approach is coupled with 15 courses of action, including elevating security as a primary metric in Defense acquisition and sustainment; forming a whole-of-government National Supply Chain Intelligence Center; identifying and empowering the chain of command for supply-chain security and integrity accountable to the deputy secretary of defense; centralizing the Supply Chain Risk Management-Threat Assessment Center (SCRM-TAC) with an industrial security/counterintelligence mission owner under the Defense Security Service (DSS) and extending DSS authority; establishing independently implemented, automated, continuous monitoring of Defense Innovation Board software; extending National Defense Authorization Act Section 841 authorities for “Never Contract with the Enemy”; and instituting industry-standard information technology practices in all software development (including possibly a software bill of materials).

Sacks, Samm, and Manyi Kathy Li, How Chinese Cybersecurity Standards Impact Doing Business in China (Washington, D.C.: Center for Strategic & International Studies, August 2018). [also Technical Criteria for Software]

This brief outlines the system of security laws and regulations adopted by China to control the importation of foreign technology, including the Multi-Level Protection Scheme and a new Cybersecurity Law governing critical information infrastructure. Rather than establishing one set of clear legal requirements, the Chinese system is made up of a number of layers with gray areas regarding jurisdiction and compliance requirements, seemingly designed to allow officials to apply the law as they see fit. Most provisions are characterized as “recommended” but are in fact required when used in state procurement requirements. The Chinese government deliberately uses vague language in standards to avoid highlighting problematic issues externally (for example, in the World Trade Organization), while retaining for itself maximum flexibility for internal application of the law. The brief includes an appendix of more than 300 translated Chinese cybersecurity legal provisions.

Sethumadhavan, Simha, Adam Waksman, Matthew Suozzo, Yipeng Huang, and Julianna Eum, “Trustworthy Hardware From Untrusted Components,” Communications of the ACM, vol. 58, no. 9, September 2015.

This paper outlines a three-system approach to hardware security aimed at making any attack as expensive as possible. In the first system, the design is checked for backdoors. In the second, inputs to hardware circuits are scrambled to prevent any hidden trigger from activating a backdoor. In the third, on-chip monitoring detects whether a backdoor has turned on, allowing it to be disabled or the system to be shut down gracefully. The paper makes a number of novel points in both practical and theoretical domains.
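The second layer, in which data seen by untrusted units is scrambled so that a hard-coded trigger value never appears on their inputs, can be illustrated in software. The toy Python sketch below is our own simplified illustration of that general idea, not code from the paper; the trigger value, the pass-through unit and the masking scheme are invented for the example.

```python
import secrets

TRIGGER = 0xDEADBEEFCAFEF00D  # value a hidden backdoor waits for (hypothetical)

def untrusted_passthrough(word: int) -> int:
    """Model of an untrusted unit that simply forwards data,
    but misbehaves if it ever sees its trigger value."""
    if word == TRIGGER:
        raise RuntimeError("backdoor activated")  # stand-in for malicious behavior
    return word

def guarded_passthrough(word: int, key: int) -> int:
    """Mask data before it enters the untrusted unit and unmask it after,
    so a fixed trigger pattern (almost surely) never appears on its inputs."""
    masked = word ^ key
    out = untrusted_passthrough(masked)
    return out ^ key

if __name__ == "__main__":
    key = secrets.randbits(64)  # fresh per-session mask
    data = [0x1234, TRIGGER, 0x5678]

    # Unprotected path: the trigger value reaches the unit and fires the backdoor.
    try:
        [untrusted_passthrough(w) for w in data]
    except RuntimeError as e:
        print("unprotected:", e)

    # Protected path: masking hides the trigger, and the data still comes out intact.
    out = [guarded_passthrough(w, key) for w in data]
    print("protected, data preserved:", out == data)  # True
```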

Technical Criteria for Software

Cabinet Office, Supplier Assurance Framework: Good Practice Guide (London: United Kingdom Cabinet Office, May 2018).

This report offers a straightforward, proportionate and transparent government approach to supplier information assurance when operating at OFFICIAL and OFFICIAL-SENSITIVE levels. (Contracts at SECRET and above are covered by List X criteria.) Common Criteria for Assessing Risk (CCfAR) is intended to be a continually updated risk assessment of suppliers that contains 20 criteria, nine of which are defined as critical and 11 as significant, and is used to broadly group suppliers into high, medium and low risk based on a scoring sheet. Criteria span the sensitivity of stored data, methods for storing data, the number of transfers of data in the contractor’s supply chain, and the possibility of certifying the contractor’s information security controls. The report provides frameworks for integrating CCfAR into assessments of existing contracts and procurements of new ones. It also notes that the Security Policy Framework requires U.K. government agencies to reassess contracts every year for compliance with existing rules and to report risks to the Cabinet Office.
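As a rough illustration of how a scoring sheet of this kind can group suppliers into risk bands, the Python sketch below is a hypothetical example of our own; the criteria names, weights and thresholds are invented and are not taken from the CCfAR guide.

```python
from typing import Dict

# Hypothetical criteria and weights -- illustrative only, not the CCfAR scoring sheet.
CRITICAL = {
    "stores_sensitive_data": 3,
    "offshore_data_transfers": 3,
    "no_security_certification": 3,
}
SIGNIFICANT = {
    "long_subcontractor_chain": 1,
    "weak_patching_record": 1,
}

def score_supplier(answers: Dict[str, bool]) -> int:
    """Sum weights for every criterion the supplier fails (answer is True)."""
    total = 0
    for name, weight in {**CRITICAL, **SIGNIFICANT}.items():
        if answers.get(name, False):
            total += weight
    return total

def risk_band(score: int) -> str:
    """Map a score to a coarse risk band (thresholds are invented)."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

if __name__ == "__main__":
    supplier = {"stores_sensitive_data": True, "long_subcontractor_chain": True}
    s = score_supplier(supplier)
    print(s, risk_band(s))  # 4 medium
```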

HCSEC Oversight Board, Huawei Cyber Security Evaluation Centre (HCSEC) Oversight Board Annual Report (Banbury, United Kingdom: HCSEC Oversight Board, March 2019). [also Technical Criteria for Hardware]

The Huawei Cyber Security Evaluation Centre (HCSEC) Oversight Board’s annual report provides details relating to the board’s two-part mandate: to report on the HCSEC’s assessment of Huawei’s U.K. products as relevant to U.K. national security and to evaluate the competence and independence of the HCSEC in relation to that mission. The board provides only limited assurance that the long-term security risks posed by Huawei equipment currently installed in the U.K. can be managed. Moreover, the board is not confident in Huawei’s ability to meaningfully improve its software engineering and cybersecurity processes, which are the sources of vulnerabilities in Huawei equipment. The board finds the HCSEC both competent and independent based on independent audit.

The HCSEC received four products from Huawei for binary equivalence testing; validation was still ongoing at the time of the report, but the exercise had already exposed wider flaws in Huawei’s build process. The HCSEC also looked at configuration management, operating system use, and lifecycle management and performed a software engineering analysis comparing successive major versions of Huawei products to look for major improvements (an analysis that instead found major defects in the new versions).
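Binary equivalence testing asks whether the binary a vendor ships can be reproduced from the source code the vendor claims to have used. The Python sketch below is a simplified, hypothetical illustration of that check, not HCSEC’s actual process; the file paths and the build command are placeholders, and in practice the build must be fully reproducible (pinned toolchain, no embedded timestamps) for a hash comparison to be meaningful.

```python
import hashlib
import subprocess
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def rebuild(source_dir: Path, output: Path) -> None:
    """Rebuild the artifact from the vendor-supplied source.
    'make' and the target are placeholders for the vendor's real build."""
    subprocess.run(["make", "-C", str(source_dir), str(output)], check=True)

def binary_equivalent(shipped: Path, source_dir: Path, rebuilt: Path) -> bool:
    """True if the shipped binary matches a rebuild from the claimed source."""
    rebuild(source_dir, rebuilt)
    return sha256_of(shipped) == sha256_of(rebuilt)

if __name__ == "__main__":
    ok = binary_equivalent(
        shipped=Path("vendor/firmware.bin"),   # placeholder path
        source_dir=Path("vendor/source"),      # placeholder path
        rebuilt=Path("build/firmware.bin"),    # placeholder path
    )
    print("binary equivalence:", "pass" if ok else "fail")
```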

This model of oversight is highly controversial, viewed as ineffective by some, and in the process of reevaluation by the U.K. government.

Herr, Trey, June Lee, William Loomis, and Stewart Scott, Breaking Trust: Shades of Crisis across an Insecure Software Supply Chain (Washington, D.C.: Atlantic Council, July 2020).

This report examines nation-state exploitation of technology supply chains, which has far-reaching impacts on the public, government and industry. It also addresses what steps industry, the Defense Department and other government entities can take to mitigate persistent vulnerabilities in software supply chains. The report analyzes more than 110 cases of software supply-chain attacks and disclosures, showcasing critical vulnerabilities and trends in state and non-state attacks, and offering actionable recommendations for securing trust in the software supply chain.

Jackson, Daniel, Martyn Thomas, and Lynette I. Millett, eds., Software for Dependable Systems: Sufficient Evidence? (Washington, D.C.: National Academies Press, 2007).

This report addresses the question of whether direct observation of a system can provide better assurance of its trustworthiness than the credentials of its production method. It recognizes that the complexity of software systems, as well as the discontinuous way they behave, renders them extremely difficult to analyze. In the end, the report outlines a proposed approach based on what it calls “explicit dependability claims,” intended to make software dependable in a cost-effective manner.

National Academy of Sciences, Summary of Workshop on Software Certification and Dependability (Washington, D.C.: National Academies Press, 2004).

This report summarizes a workshop on software certification and dependability convened by the National Academy of Sciences. While wide ranging, the workshop identified at least two particularly salient conclusions. First, while following particular processes cannot alone guarantee certifiably dependable software, comprehensive engineering processes are nevertheless important to achieving this goal. And second, the process of certification may add value in a collateral fashion because attention must be paid to issues that might not receive it otherwise; given that software and its uses and contexts change over time, any value that certification has decays over time as well.

Neumann, Peter G., Fundamental Trustworthiness Principles (Menlo Park, CA: SRI International, March 2017). [also Technical Criteria for Hardware]

This paper enumerates principles for designing hardware and software architecture and for assuring that the systems can satisfy mission needs. It then assesses whether and how the current Capability Hardware Enhanced RISC Instructions (CHERI) architecture successfully embraces those principles to create functional and trustworthy software. The paper describes two sets of principles: the Saltzer-Schroeder-Kaashoek Security Principles and the author’s own principles for system development. In general, the two sets share a focus on developing sound architecture by simplifying where possible and adhering to the principle of least privilege. They also reject the notion that secrecy of design enhances security, advocating instead for open design methods. The author, however, cautions against blindly applying these principles, noting that individual principles can be in tension with one another in certain circumstances. The paper notes that the CHERI architecture uses most of these principles constructively.
