The US National Institute of Standards and Technology (NIST) has a major initiative underway to develop an AI Risk Management Framework. You can read more about this, including information about an upcoming workshop and a call for information here:
https://www.nist.gov/news-events/news/2021/07/nist-requests-information-help-develop-ai-risk-management-framework
The questions posed here deserve some careful consideration.
One particular sub-part of this, which I unfortunately missed when it was announced, is a call for comments, closing shortly, on methodologies for evaluating user trust in AI-based systems. You can find information about this here:
https://www.nist.gov/news-events/news/2021/05/nist-proposes-method-evaluating-user-trust-artificial-intelligence-systems
This seems particularly relevant to the CNI community because of its rich connections to very familiar issues: assessing the quality of, and the level of trust that should be placed in, various information sources, and identifying disinformation and misinformation.
I don't know this for a fact, but I suspect that the NIST researchers working on this would welcome and consider comments that are a few days late. My apologies for not sharing this in a more timely fashion.
Clifford Lynch
Director, CNI