Academics and industry unite for responsible AI

Philosophy Professor Shannon Vallor is co-directing a national research programme inviting UK-based researchers to apply for collaborative projects across the AI ecosystem.

Image: Shannon Vallor

The projects are part of a new fellowship programme designed to drive responsible innovation in artificial intelligence with input from arts and humanities research.

Successful researchers will be funded for either 12 or 18 months to work on current AI challenges with one of the programme's participating public, private and third-sector stakeholders, including the BBC, Diverse AI, Inter-Ikea, Microsoft Research and the Scottish Law Society, or to match their own research with a nominated non-academic stakeholder of their choosing.

The challenges set include how to create joyful and empowering experiences with AI (Microsoft Research); how company culture and values affect AI implementation (Inter-Ikea); and how Members of the Scottish Parliament (MSPs) can be supported to respond to responsible AI challenges in the national context (Scottish Parliament).

Funded by the Arts and Humanities Research Council, the BRAID (Bridging Responsible AI Divides) Fellowships programme is led by the University of Edinburgh (with support from Edinburgh Innovations) in partnership with the BBC and the Ada Lovelace Institute. BRAID is dedicated to integrating arts and humanities research more fully into the Responsible AI ecosystem, as well as bridging the divides between academic, industry, policy and regulatory work on responsible AI.

We are reaching a critical point in society where businesses and the public sector are alive to the opportunity offered by artificial intelligence – to automate processes, increase efficiencies, unlock the molecular code of protein structures, create virtually any kind of image, even write new computer code for other machines.

But history is littered with examples of the perils of prioritising technology over human values and wisdom, from climate change as a consequence of industrialisation, to the experts who designed seat belts without first testing them on women.

The BRAID fellowships aim to bring together researchers with industry and the public sector to help bridge a potential divide between ‘humane’ knowledge and human technical capability, to ensure that the benefits of AI are realised for the good of us all.

Professor Shannon Vallor

Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the School of Philosophy, Psychology & Language Sciences and Edinburgh Futures Institute (EFI)


Image credit – Joseph Turner, BBC R&D
