From smart home hubs not understanding commands in regional accents to automated vehicles failing to respond to female voices or biased facial recognition software, it is increasingly evident that automated systems, algorithms and machine learning are in danger of replicating and amplifying prejudice and discriminatory practices. Bias is embedded, unconsciously or otherwise, in the code. What are the potential implications in the criminal justice system, in education or the financial system? How can we prevent AI systems from replicating and exacerbating the inequalities of society, and ensure they are more broadly representative of humanity in terms of gender, race and nationality? Can we set out to design with diversity from the start, rather than retrofit it after the fact? How can we work to ensure gender balance throughout the AI ecosystem, including academia, industry and ethics boards?
Open to Leader Pass, Executive Pass, Forum Pass, and Media Pass holders