Why AI governance is essential to creating reliable and explainable AI

Content provided by IBM and TNW

The dangers of robots evolving beyond our control are well documented in science fiction movies and TV shows: Her, Black Mirror, Surrogates, I, Robot… should we continue?

While it may seem like a distant fantasy, FICO’s 2021 State of Responsible AI report found that 65% of companies can’t actually explain how specific AI model decisions or predictions are made.

While AI is undeniably helping to propel our businesses and society at lightning speed, we have also seen the negative impacts that a lack of oversight can cause.

Numerous studies have shown that AI-based decision-making can lead to biased outcomes, from racial profiling in predictive policing algorithms to gender-biased hiring decisions.

As governments and businesses adopt AI tools at a rapid pace, the ethics of AI will affect many aspects of society. Yet according to the FICO report, 78% of companies said they were “ill-equipped to ensure the ethical implications of using new AI systems,” and only 38% had data bias detection and mitigation steps in place.

As is usual with disruptive technologies, AI development has quickly outpaced regulation. But in the race to adopt AI, many companies are starting to realize that regulators are catching up. A number of lawsuits have already been filed against companies for developing, or simply using, biased AI algorithms.

Companies are feeling the heat of AI regulation

This year, the EU unveiled the AI Liability Directive, a bill that will make it easier to sue companies for AI-related damages, as part of a broader push to prevent companies from developing and deploying harmful AI. The bill adds an extra layer to the proposed AI Act, which will require additional checks for “high-risk” uses of AI, such as policing, recruitment, or healthcare. Unveiled earlier this month, the bill is expected to become law within the next few years.

While some worry that the AI Liability Directive will stifle innovation, the goal is to hold AI companies accountable and require them to explain how their AI systems are built and trained. Tech companies that fail to comply will face Europe-wide class action lawsuits.

While the United States has been slower to adopt protective policies, the White House also released a blueprint for an AI Bill of Rights earlier this month, which outlines how consumers should be protected from harmful AI:

  1. AI systems must be safe and effective
  2. Algorithms must not discriminate
  3. Data privacy must be protected
  4. Consumers should know when AI is being used
  5. Consumers should be able to opt out and speak to a human instead

But there is a catch. “It is important to realize that the AI Bill of Rights is not binding legislation,” writes Sigal Samuel, senior reporter at Vox. “It’s a set of recommendations that government agencies and tech companies can voluntarily follow — or not. That’s because it was created by the Office of Science and Technology Policy, a White House body that advises the president but cannot advance actual laws.”

With or without strict AI regulations, a number of US-based companies and institutions have already faced lawsuits for unethical AI practices.

And it’s not just legal fees that businesses need to worry about. Public confidence in AI is declining. A Pew Research Center study asked 602 technology innovators, developers, business leaders, and policymakers: “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” 68% don’t think so.

Whether or not a company loses a legal battle over allegations of biased AI, the reputational damage from such incidents can be just as costly.

While this paints a bleak picture of the future of AI, all is not lost. IBM’s Global AI Adoption Index found that 85% of IT professionals agree that consumers are more likely to choose a company that is transparent about how its AI models are built, managed, and used.

Companies that take steps to adopt ethical AI practices could reap the rewards. So why are so many hesitating to take the leap?

The problem may be that while many companies want to adopt ethical AI practices, many don’t know where to start. We spoke to Priya Krishnan, who leads the data and AI product management team at IBM, to find out how creating a strong AI governance model can help.

AI governance

According to IBM, “AI governance is the process of setting policy and establishing accountability to guide the creation and deployment of AI systems in an organization.”

“Before governance, people went straight from experiments to production in AI,” says Krishnan. “But then they realized, ‘Well, wait a minute, that’s not the decision I expected the system to make. Why is this happening?’ They couldn’t explain why the AI made certain decisions.”

AI governance is really about making sure companies are aware of what their algorithms are doing – and have the documentation to back it up. This means tracking and recording how an algorithm is trained, the parameters used in training, and all metrics used during testing phases.
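To make that concrete, here is a minimal Python sketch of the kind of audit record such tracking might produce. All the names here (the class, fields, and example values) are illustrative assumptions, not part of any specific governance product.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical audit record for a single training run: which data and
# parameters were used, and how the model scored during testing.
@dataclass
class ModelRunRecord:
    model_name: str
    dataset_version: str
    hyperparameters: dict
    test_metrics: dict
    trained_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def save(self, path: str) -> None:
        # Persist the record as JSON so it can be produced in an audit.
        with open(path, "w", encoding="utf-8") as f:
            json.dump(asdict(self), f, indent=2)

# Example: log a run after training and evaluation complete.
record = ModelRunRecord(
    model_name="loan-approval-classifier",
    dataset_version="applications-2022-q3",
    hyperparameters={"learning_rate": 0.01, "max_depth": 6},
    test_metrics={"accuracy": 0.91, "demographic_parity_gap": 0.04},
)
record.save("model_run_record.json")
```

Saving a record like this for every training run gives auditors and risk managers a paper trail without changing how the model itself is built.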

This set-up makes it easy for companies to understand what’s going on beneath the surface of their AI systems and allows them to easily extract documentation in the event of an audit. Krishnan pointed out that this transparency also helps break down knowledge silos within a company.

“If a data scientist leaves the company and their past work isn’t captured in the process, it’s very difficult to manage. Those who examine the system won’t know what happened. So this documentation process simply provides a common baseline understanding of what is going on, and makes it easier to explain to other departments in the organization (like risk managers).”

While regulations are still being drafted, adopting AI governance is now an important step towards what Krishnan calls “future-proofing”:

“[Regulations are] coming fast and strong. Right now, people produce manual documents for audit purposes after the fact,” she says. Instead, starting to document now can help companies prepare for any upcoming regulations.

The innovation vs governance debate

Companies face increasing pressure to innovate quickly and be first to market. So won’t taking time for AI governance slow down that process and stifle innovation?

Krishnan argues that AI governance doesn’t stop innovation any more than brakes stop someone from being able to drive: “There’s traction control in a car, there are brakes in a car. All of these elements are designed to let you go faster, safely. That’s how I would think of AI governance. It’s really about getting the most out of your AI, while ensuring that there are safeguards in place to help you innovate.”

And that aligns with the main reason to embrace AI governance: it simply makes business sense. No one wants faulty products and services. Setting clear and transparent documentation standards, checkpoints, and internal review processes to mitigate bias can ultimately help companies build better products and get them to market faster.

Still don’t know where to start?

This month, the tech giant launched IBM AI Governance, a one-stop solution for companies struggling to understand what’s going on beneath the surface of their AI systems. The tool uses automated software that works with the enterprise’s data science platform to provide a consistent and transparent process for managing algorithmic models, tracking development time, metadata, post-deployment monitoring, and custom workflows. This takes the pressure off data science teams, allowing them to focus on other tasks. The tool also gives business owners an ongoing overview of their models and supports proper documentation in case of an audit.

It’s a particularly good option for companies that use AI across the organization and don’t know what to focus on first.

“Before you buy a car, you want to test drive it. At IBM, we’ve invested in a team of engineers who help our clients test AI governance to get them started. In just a few weeks, the IBM Client Engineering team can help teams innovate with the latest technologies and AI governance approaches using their business models and data. It’s an investment for our clients to co-create quickly using IBM technology so they can get started quickly,” says Krishnan.
