Introduction to AI FactSheets


AI FactSheets 360


The goal of the FactSheet project is to foster trust in AI by increasing transparency (an increased understanding of how an AI model or service was created and deployed) and enabling governance (the ability to control how AI is created and deployed). Increased transparency provides information for AI consumers to better understand how the AI model (a program component that is generated by learning patterns in training data to make predictions on new data, such as a loan application) or service* (an executable program, deployed behind an API, that responds to requests from other programs or services) was created. This allows a consumer of the model to determine whether it is appropriate for their situation. AI Governance enables an enterprise to specify and enforce policies describing how an AI model or service should be constructed and deployed. This can prevent undesirable situations, such as a model being trained with unapproved datasets, models exhibiting bias, or models showing unexpected performance variations.

* We use the term "service" to mean a service or application.
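To make the governance idea concrete, here is a minimal sketch, assuming facts have already been collected for a candidate model, of how an enterprise policy could be expressed as an automated check. The fact names, thresholds, and approved-dataset list are hypothetical illustrations, not part of any FactSheets specification.

```python
# Hypothetical governance check over collected facts. The fact names
# ("training_dataset", "disparate_impact_ratio") and thresholds are
# illustrative assumptions only.

APPROVED_DATASETS = {"loans_2023_v2", "loans_2024_v1"}
MAX_DISPARATE_IMPACT_DEVIATION = 0.2  # tolerated deviation from the ideal ratio of 1.0

def check_policy(facts: dict) -> list[str]:
    """Return a list of policy violations for a candidate model deployment."""
    violations = []
    if facts.get("training_dataset") not in APPROVED_DATASETS:
        violations.append("Model was trained on an unapproved dataset.")
    disparate_impact = facts.get("disparate_impact_ratio")
    if disparate_impact is None:
        violations.append("No bias measurement was recorded.")
    elif abs(1.0 - disparate_impact) > MAX_DISPARATE_IMPACT_DEVIATION:
        violations.append(f"Bias metric out of range: {disparate_impact:.2f}")
    return violations

# Example: a validator runs the check before approving deployment.
facts = {"training_dataset": "loans_2023_v2", "disparate_impact_ratio": 0.95}
print(check_policy(facts) or "Policy checks passed")
```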

Increasing AI transparency and improving AI Governance are two important use cases for the FactSheet project. We now focus on key ideas that are common to both use cases.


What is a FactSheet?


A FactSheet is a collection of relevant information (facts) about the creation and deployment of an AI model or service. Facts can range from information about the purpose and criticality of the model, to measured characteristics of the dataset, model, or service, to actions taken during its creation and deployment. Such models are created by various roles in a machine learning lifecycle (the collection of steps used to develop and deploy a machine learning model), such as a business owner, a data scientist (a person who collects and transforms datasets and applies machine learning algorithms to construct a model), a model validator (a person who tests, or validates, an AI model developed by a data scientist), and a model operator (a person who deploys and monitors an AI model so that it can be called by others). Each of the many roles in the lifecycle contributes facts about how the model was created and deployed. For example, the business owner can provide the intended use for the model. The data scientist can describe various data gathering and data manipulation activities. A model validator can describe key testing measurements, and a model operator can provide key performance metrics. Together, these facts provide a rich story about the construction of the model, similar to how a school transcript or resume can provide more insight about a student or a job applicant.
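As an illustration of how facts from different roles might accumulate, the following sketch models a FactSheet as a simple list of fact records, each tagged with the role and lifecycle stage that produced it. The field names and API are hypothetical, chosen only to mirror the roles described above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Fact:
    name: str        # e.g. "intended_use", "test_accuracy" (illustrative names)
    value: object
    role: str        # who reported it: "business_owner", "data_scientist", ...
    stage: str       # lifecycle stage: "definition", "training", "validation", ...
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class FactSheet:
    model_name: str
    facts: list[Fact] = field(default_factory=list)

    def add(self, name, value, role, stage):
        self.facts.append(Fact(name, value, role, stage))

# Facts accumulate as each role performs its work across the lifecycle.
fs = FactSheet("loan_risk_model")
fs.add("intended_use", "Score consumer loan applications", "business_owner", "definition")
fs.add("training_dataset", "loans_2023_v2", "data_scientist", "training")
fs.add("test_accuracy", 0.91, "model_validator", "validation")
fs.add("p95_latency_ms", 42, "model_operator", "deployment")
```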

A FactSheet is modeled after a supplier’s declaration of conformity (SDoC), which is used in many different industries to show that a product conforms to a standard or technical regulation. However, FactSheets can be rendered in many different formats, not just printed documents.

Recent work has outlined the need for increased transparency in AI. Although details differ, all are driving towards a common set of attributes that capture essential "facts" about a model. FactSheets take a more general approach to AI transparency than previous work in several ways (read more here):

  • FactSheets are tailored to the particular AI model or service being documented, and thus can vary in content,
  • FactSheets are tailored to the needs of their target audience or consumer, and thus can vary in content and format, even for the same model or service,
  • FactSheets capture model or service facts from the entire AI lifecycle,
  • FactSheets are compiled with inputs from multiple roles in this lifecycle as they perform their actions to increase the accuracy of these facts.

FactSheets can document the final AI service in addition to an individual model for three reasons:

  1. AI services constitute the building blocks for many AI applications (programs that contain one or more AI models). Developers will query the API of an AI service (the interface that software communicates with to use the functionality of the service) and consume its output. An AI service can be an amalgam of many models trained on many datasets (collections of information that a machine learning algorithm uses to "learn" patterns for its predictions). Thus, the models and datasets are (direct and indirect) components of an AI service, but they are not the interface to the developer; a sketch of this composition follows the list. 
  2. There is often an expertise gap between the producer and consumer of an AI service. The production team focuses heavily on the training and creation of one or more AI models and hence will consist largely of data scientists. The consumers of the service tend to be developers. When such an expertise gap exists, it becomes more crucial to communicate the attributes of the artifact in a standardized way, as with food nutrition labels or energy ratings for appliances.
  3. Systems composed of trusted models may not necessarily be trusted, so it is prudent to also consider transparency and accountability of services in addition to datasets and models. In doing so, we take a functional perspective on the overall service and can test for performance, safety, and security aspects, such as accuracy (the level of accuracy of a machine learning model on new data, i.e., data that was not part of the training dataset), explainability (the ability to explain the reasons for a model's predictions), and adversarial robustness (the ability of a model to function correctly when confronted with an external agent trying to fool it).
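The sketch below, assuming a hypothetical loan-decision service built from two made-up models, illustrates reason 1: the consumer calls only the service API and never sees the individual models, which is why FactSheets also document the service as a whole.

```python
# Hypothetical AI service composing two underlying models behind one API.
# Class and method names are illustrative assumptions, not a real service.

class CreditRiskModel:
    def predict(self, application: dict) -> float:
        # Placeholder for a trained model's risk prediction.
        return 0.12 if application.get("income", 0) > 50_000 else 0.47

class FraudModel:
    def predict(self, application: dict) -> float:
        return 0.05  # placeholder fraud probability

class LoanDecisionService:
    """The API consumers actually call; an amalgam of several models."""
    def __init__(self):
        self._risk = CreditRiskModel()
        self._fraud = FraudModel()

    def score_application(self, application: dict) -> dict:
        risk = self._risk.predict(application)
        fraud = self._fraud.predict(application)
        return {"approve": risk < 0.3 and fraud < 0.1, "risk": risk, "fraud": fraud}

service = LoanDecisionService()
print(service.score_application({"income": 62_000}))
```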


Templates: One size does not fit all


We believe that a single standard FactSheet for all use cases is not feasible because the context, industry domain, and target consumer will determine which facts are needed and how they should be rendered. We advocate the use of a FactSheet template to capture this information: which facts are of interest and how they should be rendered. For example, one would expect that higher-stakes applications will require more comprehensive templates.
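To illustrate the template idea, the sketch below defines two hypothetical templates as lists of requested facts plus a rendering hint, and checks which requested facts are still missing. The schema and fact names are assumptions for illustration, not a prescribed format.

```python
# Hypothetical FactSheet templates: each names the facts it requests and how
# the resulting FactSheet should be rendered. The schema is illustrative only.

LOW_RISK_TEMPLATE = {
    "render_as": "tabular",
    "facts": ["intended_use", "training_dataset", "test_accuracy"],
}

HIGH_STAKES_TEMPLATE = {
    "render_as": "full_report",
    "facts": [
        "intended_use", "training_dataset", "data_provenance",
        "test_accuracy", "fairness_metrics", "explainability_method",
        "adversarial_robustness", "privacy_protections",
    ],
}

def missing_facts(template: dict, collected: dict) -> list[str]:
    """Which facts requested by the template have not been collected yet?"""
    return [f for f in template["facts"] if f not in collected]

collected = {"intended_use": "Loan scoring", "test_accuracy": 0.91}
print(missing_facts(HIGH_STAKES_TEMPLATE, collected))
```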

FactSheets vary based on communication purpose and audience. The examples section presents several example FactSheets, showing how the same content can be displayed in different document formats: a full report style for a comprehensive public catalog, a short tabular view for quick internal use, and slides for stakeholder presentations during model development.

[Example renderings: Full Report Format, Tabular Format, Slide Format]


The need for transparency


AI models and services are being used in a growing number of high-stakes areas such as financial risk assessment, medical diagnosis and treatment planning, hiring and promotion decisions, social services eligibility determination, predictive policing, and probation and sentencing recommendations. For many models there will be risk, compliance, and/or regulatory needs for information covering the nature and intended uses of the model, its overall accuracy, its ability to explain particular decisions, its fairness with respect to protected classes, and at least high-level information about the provenance of training data and assurances that suitable privacy protections have been maintained. In reviewing models within a catalog for suitability in a particular application context, there may be an additional need to easily compare multiple candidates.


Work in Progress


We see this work as an ongoing conversation as there needs to be a multi-stakeholder dialog among all relevant parties: AI developers, AI consumers, regulators, standards bodies, civil society, and professional organizations.


Going deeper


A key aspect of creating a FactSheet is determining the exact information it should contain. The Methodology section describes our methodology for determining what template (collection of facts) is appropriate for a particular model/service, use case, and target audience. The Governance section provides further details on the collection of facts during the AI lifecycle and gives several examples of how these facts can be leveraged by different roles in the lifecycle to increase overall effectiveness. The Examples section provides concrete example FactSheets for several publicly available models, including information on bias, explainability, and adversarial robustness, along with insights into some of the challenges in completing these examples. The Resources section provides a collection of information for further learning. It includes links to our papers, videos, and events, as well as links to related work, a Slack community where interested parties can collaborate, a glossary, and a FAQ that addresses common questions.