Parent company Alphabet has ended Google’s seven-year ban on the use of artificial intelligence (AI) for developing weapons and surveillance systems.
In a blog post on Tuesday about “responsible AI”, written by James Manyika (SVP, Research, Labs, Technology & Society) and Demis Hassabis (CEO and co-founder, Google DeepMind), the firm dropped the promise not to use AI for weaponry, but said it would “work together to create AI that protects people, promotes global growth, and supports national security.”
Specifically, Alphabet’s ethical guidelines around AI no longer include a commitment not to pursue technologies that “cause or are likely to cause overall harm”.
It is worth remembering that Google had published its AI principles in June 2018, following staff protests against the company’s involvement in developing artificial intelligence for military drones for the Pentagon.
That came a week after Google told its staff that it would not renew a contract with the US Department of Defence when it expired in 2019.
Those developments, seven years ago, came after almost 4,000 Google staffers had signed an internal petition asking Google to end its participation in Project Maven, which they felt would “irreparably damage Google’s brand and its ability to compete for talent.”
Project Maven was a Pentagon project that used AI to process data and identify targets for military use.
At least a dozen staffers at Google resigned over the matter, as they felt the involvement clashed with Google’s “don’t be evil” ethos – a motto first touted when Google was floated back in 2004.
But this “don’t be evil” motto was downgraded to a “mantra” in 2009, and was not included in Alphabet’s code of ethics when the parent company was created in 2015.
In the blog post, both Manyika and Hassabis pointed out that Google was “among the first organisations to publish AI principles in 2018 and have published an annual transparency report since 2019, and we consistently review our policies, practices and frameworks, and update them when the need arises.”
They then provided an update to the firm’s “AI Principles”.
“Since we first published our AI Principles in 2018, the technology has evolved rapidly,” they wrote. “Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications. It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.”
But they then admitted that “there’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Against that backdrop, the firm said it is updating its AI Principles to focus on three core tenets. Google’s full AI Principles can be found at AI.google.