CIOs turn to NIST to tackle generative AI’s many risks


Discover Financial Services is taking a calculated approach to generative AI. 

From experiments and pilots to use cases across the business, the financial institution evaluates how best to use generative AI by assigning specific guardrails based on risk. The process enables adoption with a clear-eyed view of value, making it easier to prioritize projects, whether the technology is customer-facing or intended for back-office tasks. 

The approach also grants Discover more protection from the outsized risks generative AI brings.

“All of that is meeting our standards, expectations and our policies around that, but it’s still ‘human in the loop,’” Discover CIO Jason Strle told CIO Dive. “That’s a really big part of how we mitigate that risk, [and] that will last for a certain period of time.”

Discover CIO Jason Strle. Permission granted by Discover

Discover’s risk reduction strategy closely follows the guidance laid out by the National Institute of Standards and Technology, which released the final version of its generative AI risk management framework in July. 

“The NIST AI risk management framework is very, very consistent with financial risk management, non-financial risk management or the operational risk management that banks need to do,” Strle said. “The pattern is very familiar.” 

As enterprises approach generative AI with caution, NIST’s risk mitigation guidance is a jumping-off point for businesses trying to determine the best place to start as the technology rapidly evolves. Even as leaders are eager to reap the potential rewards of wide-reaching, large-scale generative AI integration, they are prioritizing efforts to avoid missteps and shape holistic adoption plans. 

The popularity of the NIST framework is not coincidental. The government agency has worked for years to fortify standards for cybersecurity, which are recognized broadly, and is now setting the stage to become the standards body for generative AI, too. 

An abundance of options

For Discover, Strle distilled NIST’s voluntary framework into three steps: 

  • Identify where capabilities create risk.
  • Prove the organization understands how to quantify and mitigate the risk.
  • Monitor on a daily basis.  
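The three steps above can be sketched as a simple risk register: a use case clears review only when every identified risk has a documented mitigation and a recurring check. This is a hypothetical illustration, not Discover's actual tooling; the class, field names and example mitigations are all assumptions.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class GenAIUseCase:
    name: str
    risks: list[str]                                           # step 1: identify risks
    mitigations: dict[str, str] = field(default_factory=dict)  # step 2: quantify and mitigate
    daily_checks: list[str] = field(default_factory=list)      # step 3: monitor daily

    def approved(self) -> bool:
        # Approve only when every identified risk is mapped to a
        # mitigation and at least one recurring check exists.
        return all(r in self.mitigations for r in self.risks) and bool(self.daily_checks)

case = GenAIUseCase(
    name="call-summary assistant",
    risks=["hallucination", "data privacy"],
)
assert not case.approved()  # risks identified but not yet mitigated

case.mitigations = {
    "hallucination": "human-in-the-loop review of outputs",
    "data privacy": "PII redaction before prompting",
}
case.daily_checks = ["sample outputs daily for accuracy review"]
assert case.approved()
```

The point of the sketch is the ordering: identification gates mitigation, and mitigation gates approval, with monitoring as an ongoing obligation rather than a one-time sign-off.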

The final version of NIST’s text, which was the result of President Joe Biden’s executive order last October, offers just over 200 risk-mitigating actions for organizations deploying and developing generative AI. It’s a slimmed-down version of the 400 steps in the initial iteration published in April.  

The NIST AI guidance focuses on a set of a dozen broad risks, including information integrity, security, data privacy, harmful bias, hallucinations and environmental impacts. The framework provides organizations with ways to contextualize and mitigate risks. 

To curb incorrect generated outputs, for example, NIST offers roughly 19 actions enterprises can take, such as establishing minimum thresholds for performance and review as part of deployment approval policies. 
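A minimum-threshold deployment gate of the kind NIST suggests could look like the following sketch. The metric names and threshold values are illustrative assumptions, not figures from the framework.

```python
# Illustrative pre-deployment gate: block a release when any evaluation
# metric falls below its policy floor. Metrics and floors are assumptions.
THRESHOLDS = {
    "factual_accuracy": 0.95,  # share of sampled outputs judged correct
    "groundedness": 0.90,      # share of claims traceable to source material
}

def deployment_approved(eval_scores: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (approved, list of metrics that fell short of their floor)."""
    failures = [metric for metric, floor in THRESHOLDS.items()
                if eval_scores.get(metric, 0.0) < floor]
    return (not failures, failures)

ok, failed = deployment_approved({"factual_accuracy": 0.97, "groundedness": 0.88})
# groundedness misses its 0.90 floor, so the release is blocked
assert not ok and failed == ["groundedness"]
```

Wiring such a check into the deployment approval workflow makes the policy enforceable rather than advisory: a missing metric counts as a failure, so a model cannot ship without being evaluated.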

NIST is not alone in its effort to provide generative AI adoption guidance.

As vendors rushed to embed generative AI into solutions, industry groups and advocacy agencies worked to clear the confusion around model evaluations, risk management and responsible processes. 

Those efforts have resulted in an abundance of guidelines, policy recommendations and guardrail options, but no single source of truth. 

The International Organization for Standardization released an AI-focused management system standard in December. MIT launched an AI risk database cataloging more than 700 threats in August, and several professional services firms have created governance frameworks.

Whether the growing list of options muddies the waters for CIOs or actually helps depends on whom you ask. 

“I don’t think it’s a straightforward answer,” Strle said. Having more ways to mitigate threats is not always inherently productive, so it’s up to enterprise leaders to determine what the business needs to protect against. 

Standing on the sidelines is only an option for so long. 
