Last week's guest contributor on #AIFightsBack was technologist Clare Dillon. She presented "A Journey to Trustworthy AI" through an economic, social, political, and cultural lens. This thought-provoking session clearly demonstrated why we all need to build trust in both building and buying AI solutions, and it was backed up nicely with plenty of use cases and several major AI bloopers from out in the wild.
With AI predicted to add a staggering $13 trillion to current global economic output by 2030, concerns around unethical AI are very real. Key failings involve the use of facial recognition technology by the military, built-in AI deception in kids' toys, automated decision-making systems used to track employee productivity, and the shocking impact of AI on climate. Clare talked about the ART of AI
and stripped these points back to the fundamentals of the Ethics Continuum: legal versus ethical. What is legal and compliant with reference to AI today is changing rapidly, which presents a complex and risky situation for business.
She also offered practical tips on how businesses can consider the ethical impact of AI, from initial planning through to implementation:
- State where you are on the Ethics Continuum
- Connect AI implementation to a valid business case
- Include all relevant stakeholders
- Determine the need for Open or Explainable AI (XAI)
- Hire a diverse team
- Build a risk mitigation plan
- Track datasets
- Keep testing
- Monitor usage scenarios
- Be transparent
Why should we care about Trustworthy AI?
It's not a question of why. The bottom line is that if people don’t trust AI solutions, they simply won’t use them, and we will not be able to advance AI for the greater good. The race, gender and age bias examples shared during the talk explicitly demonstrate why AI Ethics matters and why we all have a part to play.
Clare included several important reference points advancing the area of AI Ethics, including the EU Commission's Ethics Guidelines on AI, the AI Now Institute in New York, the Moral Machine at the MIT Media Lab, and the Ethics Canvas developed by the SFI-funded ADAPT Centre.
- AI Now Institute, New York. Read more
- The Moral Machine, MIT Media Lab. Read more
- The Ethics Canvas, adapted from Alex Osterwalder's Business Model Canvas. Read more
- Ethics Canvas, ADAPT Centre, Trinity College Dublin & Dublin City University. Read more
- EU Commission, The Ethics Guidelines for Trustworthy Artificial Intelligence (AI). Read more
#AIFightsBack AI Readiness and Back to Work Readiness
Our next three sessions help businesses think through some of the larger concerns about using AI in the company. From an overall AI Readiness perspective, Ashwini Mathur (Novartis) talks about building trusted AI products and embedding this capability throughout the organisation. Then Matt Macdonald-Wallace (Mockingbird Consulting) and Dr. Iain Keaney (Skellig.ai) look at the use of IoT and privacy-respecting data science to help businesses operate in the post-COVID-19-lockdown world.