Google’s DeepMind division has received a stern warning from its employees about its business practices. The firm is facing major internal turmoil after more than 200 of its employees signed an open letter opposing the company’s relationships with military organisations. According to Time Magazine, the letter, dated May 16, highlighted concerns about the use of DeepMind’s AI technology for combat purposes, arguing that such contracts violate Google’s own AI principles.
The letter calls on DeepMind’s leadership to investigate reports of military use of Google Cloud services, to remove military access to DeepMind technology, and to establish a new governance framework preventing future technologies from being used for military purposes. According to Time, as of August 2024 Google had yet to respond meaningfully to these concerns.
“We have received no meaningful response from leadership, and we are growing increasingly frustrated,” Time quoted a Google DeepMind employee as saying.
The open letter specifically addressed Project Nimbus, a Google defence contract with the Israeli military. It also raised concerns about the use of AI in Gaza for target selection and mass surveillance. The signatories made clear that upholding ethical standards for AI was their main priority.
“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter read, as per Time Magazine.
“We believe (DeepMind’s) leadership shares our concerns,” it added.
A Google representative defended the company’s practices in a statement to Time, saying, “We adhere to our AI Principles, which set forth our commitment to developing technology responsibly when we develop AI technologies and make them available to customers.” This internal conflict follows similar demonstrations against Project Nimbus, which led to the dismissal of dozens of Google employees earlier this year.
Google acquired DeepMind in 2014 with a promise that the lab’s technology would only be used for non-military, non-surveillance purposes. DeepMind’s own AI principles prohibit the creation of technologies that could lead to “overall harm” or contribute to the development of dangerous technology. However, Google’s Cloud business holds contracts to sell its services, including AI developed inside DeepMind, to several governments and militaries, including those of Israel and the United States.