Artificial Intelligence (AI) is one of the most powerful tools developed in recent decades. And, as the saying goes, with great power comes great responsibility.
This power raises concerns about bias and the spread of misinformation through AI-generated content. For businesses, these are just some of the risks they need to address when using AI products. Businesses that fail to address them could damage their reputation and lose consumer trust.
So, they need tools to recognize these risks early. That’s where we come in. We’ve created three prototypes to help organizations use AI responsibly. Through our partnership with Red Marble AI, we are currently testing these prototypes with businesses.
AI discovery
From marketing emails to supply chain predictions, AI is now part of virtually every contemporary technology. However, not all AI is created equal. Before you can assess the quality of your AI products, you need to know where they are. That's why our first prototype focuses on AI discovery.
Enterprise applications are comprehensive software platforms designed to operate within large organizations, managing and integrating critical business processes and data across departments. We've developed a technology that finds the AI embedded in such applications and explains what those underlying functions do. This is a crucial step towards transparency and understanding the role of AI in our digital landscape.
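To make the idea of AI discovery concrete, here is a minimal sketch of one simple form it can take: scanning an application's dependency manifest for well-known AI/ML libraries. The library list, file format, and function names below are illustrative assumptions, not how our prototype actually works under the hood.

```python
# Hypothetical illustration of AI discovery via dependency scanning.
# Library names and manifest format are assumptions for this sketch.
from pathlib import Path

KNOWN_AI_LIBRARIES = {
    "tensorflow", "torch", "scikit-learn", "xgboost",
    "transformers", "openai", "langchain",
}

def discover_ai_dependencies(requirements_file: str) -> set[str]:
    """Return the AI/ML packages listed in a pip requirements file."""
    found = set()
    for line in Path(requirements_file).read_text().splitlines():
        # Strip comments and version pins, e.g. "torch==2.1.0  # inference"
        package = line.split("#")[0].split("==")[0].split(">=")[0].strip().lower()
        if package in KNOWN_AI_LIBRARIES:
            found.add(package)
    return found

if __name__ == "__main__":
    print(discover_ai_dependencies("requirements.txt"))
```

A real discovery tool would also need to look beyond declared dependencies, for example at API calls to hosted AI services, but even this simple scan shows why knowing where AI lives is a prerequisite for assessing it.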
Measuring AI risk
Managing risk is an essential part of any business, and AI introduces new considerations. Most businesses don't have responsible AI processes in place to address the risks of poorly implemented AI systems. To bridge this gap, we created a Responsible AI Question Bank and a Responsible AI Metrics Catalogue. Together they provide a comprehensive repository of questions and metrics to guide businesses through thorough, concrete risk assessments of their AI systems.
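As a rough sketch of how a question bank can support this kind of assessment, the example below pairs each question with the ethics principle it probes and rolls answers up into a per-principle score. The questions, weights, and structure are invented for illustration; they are not the contents of the actual Question Bank or Metrics Catalogue.

```python
# Minimal, hypothetical sketch of a risk-assessment question bank.
from dataclasses import dataclass

@dataclass
class AssessmentItem:
    principle: str   # e.g. "Fairness", "Accountability"
    question: str
    weight: float    # relative importance within the principle

QUESTION_BANK = [
    AssessmentItem("Fairness", "Has the training data been checked for demographic skew?", 2.0),
    AssessmentItem("Fairness", "Are model outcomes monitored across user groups?", 1.0),
    AssessmentItem("Accountability", "Is there a named owner for this AI system?", 1.0),
]

def score_by_principle(answers: dict[str, bool]) -> dict[str, float]:
    """Fraction of weighted 'yes' answers per principle (1.0 = all controls in place)."""
    totals: dict[str, float] = {}
    achieved: dict[str, float] = {}
    for item in QUESTION_BANK:
        totals[item.principle] = totals.get(item.principle, 0.0) + item.weight
        if answers.get(item.question, False):
            achieved[item.principle] = achieved.get(item.principle, 0.0) + item.weight
    return {p: achieved.get(p, 0.0) / totals[p] for p in totals}
```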
AI Trustmark
Now you have identified and measured the potential risks of your business's AI. But how do you demonstrate to your directors and customers that it's fair, equitable, and low risk? Our third prototype, the AI Trustmark, addresses this challenge. It draws on the Responsible AI Metrics Catalogue: a set of measures aligned with Australia's eight AI ethics principles that quantify and score AI risk. This gives businesses the evidence and reports to show their AI is responsible.
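To illustrate the reporting side, here is a small sketch of how per-principle scores (such as those from the assessment sketch above) might be rolled up into a plain-text summary against the eight principles. The thresholds, wording, and report format are assumptions for illustration, not the actual AI Trustmark methodology.

```python
# Illustrative roll-up of per-principle scores into a simple report.
# Thresholds and format are hypothetical.
AUSTRALIAN_AI_ETHICS_PRINCIPLES = [
    "Human, societal and environmental wellbeing",
    "Human-centred values",
    "Fairness",
    "Privacy protection and security",
    "Reliability and safety",
    "Transparency and explainability",
    "Contestability",
    "Accountability",
]

def trustmark_report(principle_scores: dict[str, float]) -> str:
    """Summarise per-principle scores (0.0-1.0) into a plain-text report."""
    lines = []
    for principle in AUSTRALIAN_AI_ETHICS_PRINCIPLES:
        score = principle_scores.get(principle, 0.0)
        status = "low risk" if score >= 0.8 else "needs attention"
        lines.append(f"{principle}: {score:.0%} ({status})")
    overall = sum(principle_scores.get(p, 0.0) for p in AUSTRALIAN_AI_ETHICS_PRINCIPLES)
    lines.append(f"Overall: {overall / len(AUSTRALIAN_AI_ETHICS_PRINCIPLES):.0%}")
    return "\n".join(lines)
```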
So, what’s next? We’re working with Red Marble AI to identify businesses that would benefit from testing these prototypes. This collaboration ensures our cutting-edge research translates into practical tools to help Australian businesses use AI safely and effectively.