Whether it fills you with terror or excitement, it seems that, at this point, the rise of AI is inevitable. In the decades to come, AI could become part of our daily lives. You could find yourself using it at work, or you might even end up using it as part of your social life.
This week, prime minister Rishi Sunak, along with global tech leaders and politicians, visited Bletchley Park to discuss the rise of AI — but groups like Refuge are already raising concerns about what areas of AI are being neglected at the summit.
What is the AI Safety Summit?
The AI Safety Summit takes place on 1 and 2 November 2023 at Bletchley Park. The summit aims to bring together leaders in tech and government to discuss the rise of AI and how the rapid development of this technology has the potential to both threaten and improve our safety.
“The summit will focus on understanding the risks such as potential threats to national security right through to the dangers a loss of control of the technology could bring,” reads a government statement. “Discussions around issues likely to impact society, such as election disruption and erosion of social trust are also set to take place.”
Secretary of State for Technology Michelle Donelan MP added: “The risks posed by frontier AI are serious and substantive and it is critical that we work together, both across sectors and countries to recognise these risks.
“This summit provides an opportunity for us to ensure we have the right people with the right expertise gathered around the table to discuss how we can mitigate these risks moving forward. Only then will we be able to truly reap the benefits of this transformative technology in a responsible manner.”
Along with UK prime minister Rishi Sunak and Michelle Donelan, other attendees include global politicians such as US vice president Kamala Harris, European Commission president Ursula von der Leyen, and Italian prime minister Giorgia Meloni.
Tech big shots like Elon Musk as well as representatives from Google DeepMind, OpenAI and Meta are also in attendance.
Why is it so important to include VAWG organisations in conversations about AI?
Although it’s vital to examine how we can ensure new AI products don’t pose a threat to national security as a whole, many feel there’s a glaring omission from the proceedings at Bletchley. As Refuge (a UK charity dedicated to supporting women and children who are victims of domestic abuse) noted in a statement about the summit, concerns about women and girls’ safety in the era of AI aren’t being addressed.
“Leaders and technological experts from around the world will gather in Bletchley this week to discuss the risks of AI, but Refuge are deeply concerned that women and girls’ safety is being forgotten from this vital discussion,” said Refuge’s research lead, Dr Michaela Bruckmayer, in a press release.
The summit doesn’t have any Violence Against Women and Girls (VAWG) organisations in attendance. According to Refuge, they deserve a spot at the table.
After all, the rise of AI has brought with it a number of safety concerns for women and girls, including deepfakes.
Deepfakes are AI-generated video clips that superimpose a person’s likeness onto existing footage, creating realistic video of someone doing something they’ve never actually done. These have become particularly dangerous when used in pornographic videos.
As Refuge notes, “the most common deepfakes shared on the internet are non-consensual sexual depictions of women.”
Deepfakes pose a massive threat to the safety of women and girls online, and it’s high time the government and big tech leaders spent some time exploring how this area of AI can be properly regulated. A great first step would be inviting a VAWG organisation to the AI Safety Summit.