AI — Not your average governance challenge
Guest: Carl Gahnberg
The field of AI ethics is relatively new, largely because we are only beginning to grasp the implications of advances in AI, and therefore the decisions that machines may one day need to make, or that we must make now in coding and training them. AI ethics is also defined in large part by what we perceive as risks. While we can identify some risks today, history suggests that some of these concerns will turn out to be plainly wrong, some will be broadly correct but only moderately relevant, and some will be broadly correct and deeply relevant, for example the fear that robots will take over many jobs and render some occupations obsolete.
How should we think about ethics in light of the decisions we make in programming AI, and the decisions AI will make in the course of performing its functions?
Our guest, Carl Gahnberg, is, among other things, a PhD candidate at the Graduate Institute of International and Development Studies (IHEID) in Geneva. Carl studies the emergence of AI governance systems and the politics surrounding them. His research focuses on the fundamentals: what exactly constitutes a governance rule for AI, and why and how do such rules emerge?
Hosted by: Alexa Raad & Leslie Daigle