Artificial intelligence (AI) systems are becoming ubiquitous, from financial trading and transportation to cybersecurity and medical diagnostics.

However, high-profile accidents and problems with AI have become increasingly visible, making it critical to ensure the safety of these systems and the public’s trust in them.

A multidisciplinary, international team of experts from government, industry and academia across six countries has proposed independent auditing as a mechanism to enable reliable, safe and trustworthy AI systems.

Their proposal is outlined in “Governing AI safety through independent audits,” recently published in the journal Nature Machine Intelligence. The lead author is Gregory Falco, an assistant research professor at Johns Hopkins University. Ben Shneiderman, a Distinguished University Professor Emeritus with an appointment in the University of Maryland Institute for Advanced Computer Studies, is the paper’s second author.

“Since enforceable principles must capture a range of cases and risk considerations, our research team represents interdisciplinary fields of study and practice, including computer science, systems engineering, law, business, public policy and ethics,” says Shneiderman, whose added expertise lies in human-computer interaction.

The team proposes a three-pronged approach: conduct risk assessments to proactively identify and catalogue potential threats to public safety, design audit trails that capture the context of failures with high-fidelity data to support retrospective forensic analyses, and enforce adherence to safety requirements across diverse operating environments and legal jurisdictions.

Their independent audit model is based on financial auditing and accounting models, like the U.S. Securities and Exchange Commission’s. The researchers envision internal assessments and audits of AI systems as an annual process embedded in corporations, with courts and government agencies having the ability to institutionalize audits and expand requirements into law, just as they do in the financial sector.

Audit trails will provide high-fidelity data to identify system weaknesses and improve them, similar to how a flight data recorder enables aviation analysts to understand system failures and the actions that were taken to address them.
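The paper does not prescribe a data format, but the idea can be sketched concretely: an audit trail might log each consequential decision along with the inputs, model version and operating context needed for later forensic review. The Python sketch below is purely illustrative; the AuditTrail class, its field names and the hash-chaining scheme are assumptions for this example, not the authors’ design.

import json
import hashlib
from datetime import datetime, timezone

class AuditTrail:
    """Minimal append-only audit trail for an AI system (illustrative only).

    Each record captures the decision context needed for retrospective
    forensic analysis, and records are hash-chained so that tampering
    with an earlier entry invalidates every later one.
    """

    def __init__(self, path):
        self.path = path
        self.prev_hash = "0" * 64  # genesis value for the hash chain

    def log_decision(self, model_version, inputs, output, operator):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,          # raw inputs the system acted on
            "output": output,          # decision the system produced
            "operator": operator,      # who or what was in control
            "prev_hash": self.prev_hash,
        }
        # Chain records together: each entry commits to its predecessor.
        serialized = json.dumps(record, sort_keys=True)
        self.prev_hash = hashlib.sha256(serialized.encode()).hexdigest()
        record["hash"] = self.prev_hash
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

# Example: record one braking decision from a hypothetical driving system.
trail = AuditTrail("audit_log.jsonl")
trail.log_decision(
    model_version="planner-2.3.1",
    inputs={"speed_mph": 42, "obstacle_distance_m": 18.5},
    output="emergency_brake",
    operator="autonomous",
)

Hash-chaining each record to its predecessor is one common way to make such a log tamper-evident, which matters if insurers, courts and regulators are to rely on it as evidence.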

The researchers suggest enlisting insurance companies to help enforce requirements, since insurers can pressure large corporations, for example by offering lower premiums for self-driving cars with documented safety records. They also recommend that courts enforce safety requirements for AI systems by issuing decisions that clarify the responsibilities of stakeholders.

“Although our framework will require testing and refinement, we anticipate that it will have the benefit of encouraging the ethical use of AI systems that are aligned with users’ values, and of promoting accountability that clarifies who is responsible for failures,” says Shneiderman.

Shneiderman's upcoming book on Human-Centered AI will expand on these themes, including 15 practical recommendations for implementation in commercial and research systems.

ISR would like to thank UMIACS and Maria Herd, who wrote this story.




September 2, 2021

