Google has removed a pledge that barred the company from using AI for dangerous applications, including surveillance and weapons.
The change appears in the Android maker’s AI Principles. A previous version stated that Google would not pursue weapons or other technologies whose purpose is to cause injury or harm to people, nor technologies that violate users’ right to privacy through surveillance.
Google said a global competition for AI leadership is now unfolding within an increasingly complex landscape, and that it wants to lead AI development guided by core values such as freedom, equality, and respect for human rights.
The update reflects the firm’s growing ambition to offer its AI technology to a broader set of customers, including governments. The change may also be tied to the intensifying race between the US and China for AI dominance.
The previous version of the principles said Google would take a wide array of social and economic factors into account. The amended principles now say the company will proceed where the expected benefits substantially outweigh the risks and downsides.
In a blog post published on Tuesday, Google said the updated principles aim to be more consistent with widely accepted principles of international law and human rights, and that it will continue to evaluate specific work by assessing whether the benefits outweigh the risks.
The updated AI principles were first reported by the Washington Post on Tuesday, just before the company’s Q4 earnings report. Those results missed Wall Street’s revenue expectations, and shares dropped about 9% in trading.
Google established its AI Principles in 2018 after declining to renew Project Maven, a government contract to interpret and analyze drone video using AI. Before the deal ended, thousands of employees signed petitions against the contract and others resigned over Google’s involvement. The company also dropped out of bidding on a $10 billion government contract because it was not sure the work would align with its AI principles at the time.
Since the broader rollout of its AI, leadership under Pichai has aggressively pursued contracts with the federal government. This has strained relations with the company’s outspoken workforce. In the past year, Google fired more than 50 workers following protests against Project Nimbus.
Executives maintained that the contract did not violate any of the AI principles. The agreement gave Israel a range of AI tools, including image categorization, object tracking, and provisions for state-owned weapons. According to the NYT, Google officials raised concerns before the deal was signed, fearing it would violate human rights.
The company has also cracked down on internal discussion of controversial subjects such as the war in Gaza, updating guidelines for its internal forum, Memegen.
Image: DIW-Aigen