A Realist’s Guide to AI Risks
Guest: Alex Engler
ChatGPT has amplified both the excitement and the fear about AI’s potential consequences for humanity, and in doing so has brought to the fore the question of whether and how to regulate AI. However, we currently lack a coherent, global roadmap for addressing issues such as fairness, transparency, standards, and innovation. It is reasonable to expect that like-minded governments, such as those of the US, UK, and EU, should find ways to cooperate on AI innovation and regulation, with an approach that promotes shared values such as respect for human rights, inclusion, non-discrimination, and protection of privacy and personal data. However, this is easier said than done. So is there a realistic guide to managing AI’s risks and promises?
Please join us for a conversation with Alex Engler, fellow at the Brookings Institution, reviewing current regulatory perspectives and how to collaborate meaningfully on standardization, oversight and/or regulation, and innovation.
Hosted by: Alexa Raad and Leslie Daigle.
Further reading:
- Calls to regulate AI are growing louder. But how exactly do you regulate technology like this?
- ChatGPT banned in Italy over privacy concerns
- 70% of Workers using ChatGPT at Work Are Not Telling Their Bosses; Overall Usage Among Professionals Jumps to 43%
- AI Bill of Rights makes uneven progress on algorithmic protections
- The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment
- NIST AI Risk Management Framework
- AI Standards Hub (UK)
- Submission to the EC White Paper on Artificial Intelligence (AI): The importance and opportunities of transatlantic cooperation on AI
The views and opinions expressed in this program are our own and may not reflect the views or positions of our employers.