Newswise — COLUMBUS, Ohio – Corporate efforts to use artificial intelligence in a more socially responsible way have a surprising benefit – they can often improve product quality, according to a national survey of company officials.
The officials surveyed ranked product quality as the area of their businesses that received the most value from implementing responsible AI management (RAIM) practices – even above more obvious choices, such as reducing regulatory and legal risk.
That answer was surprising, said one of the leaders of the survey, Dennis Hirsch, faculty director of The Ohio State University’s Program on Data and Governance.
“We did not expect that the primary response for how AI governance would create value would be by improving product quality,” Hirsch said. “That’s very interesting and encouraging.”
That result was one of several important findings from the survey, which were revealed in the report Responsible AI Management: Evolving Practice, Growing Value.
The report was produced by the Program on Data and Governance, which is part of Ohio State’s Moritz College of Law and Translational Data Analytics Institute.
The corporate rush to use AI has raised alarm about potential harm and misuse, such as privacy violations, discrimination and misinformation.
The survey, sent out in early 2023, probed RAIM practices at businesses that develop and use AI. The survey was emailed to individuals identified as data governance officials at U.S. companies. Completed surveys came back from 75 people, most of whom worked at large companies with more than 1,000 employees and $10 million or more in annual revenue.
Many business sectors were represented among survey respondents, such as information technology, financial, health care and consumer goods.
Hirsch said the relatively low response rate to the survey, and the fact that most responses came from large companies, suggest that few companies today have meaningful RAIM programs in place.
“We think the largest companies have the most resources and are most engaged in AI governance,” Hirsch said.
So what exactly are the responsible AI management practices that companies are using?
The study found the most commonly reported RAIM activities included evaluating regulatory risk, identifying risk to stakeholders, building a RAIM management structure and adopting standards such as AI ethics principles and RAIM policies.
Results showed that 68% of respondents said that RAIM was either important or extremely important to their company. However, even among the large companies that responded, implementation of RAIM programs significantly lagged enthusiasm. Most respondents said that their RAIM programs were still at an early stage.
This survey was done just before the explosion of generative AI and the broad use of tools like ChatGPT, so the situation may be changing, according to Hirsch. More companies now probably understand that they need to govern the use of AI, but it is still not as widespread as it probably should be, he said.
More companies might invest in RAIM if they knew about the experiences of these larger companies and the value those companies believe RAIM brings to their businesses, he said.
Nearly 40% of those surveyed reported that their company gets “a lot” or “a great deal” of value from their responsible AI management programs. Another 38% said it produced “a moderate amount” of value. None said they got no value.
What may be most striking, though, is that survey participants identified product quality as the area where RAIM brought the most value. The survey didn’t ask how RAIM improved product quality, so this finding will need more study, Hirsch said.
“Our preliminary take is that it improves product quality by promoting AI innovation and better meeting customer expectations,” he said.
A 2018 study by the Program on Data and Governance, which involved interviews with officials involved in corporate AI governance, may explain how this works.
People think of data governance as inhibiting innovation, because it restricts what people can do. But it may be the opposite, according to those interviewed in the 2018 study.
“If employees have standards and policies and guidelines about how they can use AI, they can innovate with a lot more confidence. It can actually unleash innovation, rather than dampen it,” Hirsch said.
The new report says, “These results suggest an important, new way of thinking about AI management – as a source of value and competitiveness, and not just a way of mitigating risks and costs.”
While that’s good news for companies that are instituting responsible AI management practices, Hirsch emphasized that this survey included mostly very large firms.
“I think if we looked more broadly at businesses around the country that use AI, you wouldn’t see such an optimistic picture of the view of the importance of AI governance,” he said.
AI management is still in its infancy in most businesses.
According to Hirsch, more companies need to be undertaking algorithmic impact assessments to determine whether their use of AI could harm customers or others. They also need to build a management structure and substantive policies that can help their employees determine how they can use AI.
“We need to do a lot more with companies to help them understand how to responsibly use AI,” Hirsch said.
Journal Link: Responsible AI Management: Evolving Practice, Growing Value