A recent speech by Governor Lael Brainard of the US Federal Reserve emphasized the importance of machine learning governance. Read more in this blog post.

Why you need machine learning governance: Governor Brainard’s recent speech about responsible AI

Governor Lael Brainard of the US Federal Reserve recently gave a speech highlighting the importance of equity and responsible governance when using machine learning to make consumer-facing decisions. While the speech was given as part of a symposium on the responsible use of ML in financial services, she mentioned that building explainability and responsibility into models is paramount across all domains in which machine learning is used.

The speech by Gov. Brainard underscores the importance of implementing governance for all of your machine learning efforts, especially—but not limited to—those in highly regulated industries like banking, insurance, and healthcare.

Learn the 7 steps to implement an effective governance strategy. Get the whitepaper.

One of the key takeaways of the speech is that ML represents a new frontier for automated decision-making, and regulators are quickly reconsidering and adapting their policies in light of this paradigm shift. She noted that over 45 million Americans either do not have a credit score or do not have enough recent personal data for an updated score to be generated. According to the Consumer Financial Protection Bureau, the majority of Americans who fit this profile are Black or Hispanic. Because of this imbalance in the existing system, she continued, it is crucial that models be built to remedy, or at the very least not worsen, this existing bias. This is a point that regulators are paying attention to and do not take lightly. An effective model must account for the biases present in the data it is trained on.

However, many models fail to do exactly that. Gov. Brainard mentioned that complex modern models such as deep neural networks often operate as “black boxes”: they take data in and produce predictions, but there is often no insight into how they arrive at those predictions unless models specifically designed for explainability are used. When decisions that affect people’s lives are being made on the basis of a model’s output, Gov. Brainard noted, having insight into the decision-making process is crucial and sometimes even legally mandated. For example, “consumer protection laws require lenders who decline to offer a consumer credit—or offer credit on materially worse terms than offered to others—to provide the consumer with an explanation of the reasons for the decision.”
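To make the idea of explainability a little more concrete, the short Python sketch below uses permutation importance to surface which inputs a model leans on most heavily when making predictions. The model, feature names, and data here are synthetic placeholders rather than anything from a real credit system; it is meant only to illustrate the kind of insight explainability tooling can provide, not to represent any particular lender's workflow.

```python
# Minimal sketch: surfacing which features drive a classifier's decisions.
# The feature names and data below are synthetic placeholders, not a real
# credit model; this only illustrates the technique.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "debt_to_income", "utilization", "account_age_months"]
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Permutation importance: how much does held-out accuracy drop when each
# feature is shuffled? Large drops flag the features the model leans on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Ranking features this way does not fully open the black box, but it gives reviewers a starting point for explaining why a model made the decision it did.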

It is not just the opacity of ML models that can lead to trouble for businesses. Models often exhibit bias as a result of the data they are trained on, especially when that data reflects preexisting, but not immediately obvious, human biases. For this reason, it is important to continually review the data being used to train a model and adjust it to counter bias. In practice, this often means training on different iterations of a dataset as part of the MLOps process, which makes keeping track of data and model versions crucial. Only by focusing on minimizing bias in models can truly equitable outcomes be achieved for the consumer.
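One lightweight check a governance process might include is comparing outcome rates across groups before a model is promoted. The sketch below assumes the decisions and group labels live in a pandas DataFrame with illustrative column names; a real review would use the model's actual outputs and the relevant protected attributes.

```python
# Minimal sketch: comparing approval rates across groups as a basic bias check.
# The column names ("group", "approved") and the data are illustrative
# assumptions, not output from a real lending model.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   0,   1,   0,   0,   1],
})

# Approval rate per group; a large gap is a signal to investigate the
# training data and model before the next release.
rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Approval-rate gap between groups: {gap:.2f}")
```

A large gap is not proof of bias on its own, but it is exactly the kind of signal that should trigger a closer look at the training data and another iteration of the dataset.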

Gov. Brainard also mentioned that objective issues of model quality can introduce errors into predictions that affect the end user. Inaccurate models resulting from model drift, poor generalization, or other discrepancies can lead to poor decisions that adversely affect consumers. Once again, effective and responsible ML governance is key to catching these errors before they reach consumers.
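Drift is one of the easier of these problems to monitor for. As a rough illustration, the sketch below compares the distribution of a single input feature at training time with recent production data using a two-sample Kolmogorov-Smirnov test; the feature, data, and alert threshold are illustrative assumptions rather than a prescribed standard.

```python
# Minimal sketch: flagging drift in a single input feature by comparing its
# distribution at training time with recent production data. The feature,
# data, and threshold here are illustrative assumptions, not a standard.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_income = rng.normal(loc=55_000, scale=12_000, size=5_000)
recent_income = rng.normal(loc=61_000, scale=12_000, size=1_000)  # shifted upward

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the live
# distribution no longer matches what the model was trained on.
result = ks_2samp(training_income, recent_income)
if result.pvalue < 0.01:
    print(f"Possible drift (KS={result.statistic:.3f}, p={result.pvalue:.4f}); review the model.")
else:
    print("No significant drift detected for this feature.")
```

In practice, a governance process would run checks like this across many features on a schedule and route alerts to the team that owns the model.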

The governor concluded her speech by acknowledging that there is room for improvement on both sides. For their part, regulators and federal agencies such as the Treasury and the Federal Reserve Board will work to provide additional supervisory clarity on best practices and criteria for the responsible use of ML models. At the same time, businesses that make important decisions affecting consumers are expected to implement robust, resilient, and effective policies that safeguard against model failure, opacity, bias, and inaccuracy.

This speech comes at a timely moment for the industry. As our 2021 enterprise trends in machine learning report revealed, 83% of organizations have increased their AI/ML budgets year on year, and the average number of data scientists employed has grown by 76%. It is imperative that, as more companies invest in ML, they also implement effective governance processes. This will help companies produce positive outcomes from ML while minimizing risks to the business.

To learn how to implement a comprehensive ML governance strategy for your enterprise, download our new whitepaper on the 7 steps to effective governance.

Learn the 7 steps to implement an effective governance strategy. Get the whitepaper.

More from the AI/ML governance blog series

Diego Oppenheimer