The recently held VivaTech conference in Paris drew big names from the tech world, among them former Google CEO Eric Schmidt, who had plenty to share.
Since his exit from the search engine giant, Schmidt hasn't been sitting idle; he's been busy investing in a wide array of AI startups. In his view, AI regulation must strike a balance so that innovation isn't stifled.
He went on to note that AI has opened the door to new kinds of danger, but that the biggest threats have yet to arrive. When they do, he believes, the world must be equipped to deal with them.
Speaking during an interactive session, Schmidt said it's time we understand that computers with the freedom to do whatever they want is never a good thing. People will simply start unplugging them, but who and what gets unplugged, and when, is another debate.
The very idea of having to switch off AI systems once they can act entirely on their own is raising serious questions. It's not a comforting thought, but according to Schmidt, researchers have already run detailed trials probing the dangers AI poses, and he believes those dangers will arrive soon enough for the world to witness them.
It's worth mentioning that the ex-Google CEO has invested in a range of efforts to combat AI risks, notably partnering with ChatGPT maker OpenAI on a $10 million grant program supporting safety research alongside OpenAI's superalignment team.
The program is aimed at managing AI-related risks, though OpenAI reportedly disbanded the superalignment team recently after its leads unexpectedly departed. The company has vowed to move ahead with the grant program regardless, reassuring the public that the work remains under control and that a backup plan is in place.
For now, the former Google CEO says the AI in front of us isn't dangerous, beyond the disinformation it spreads, which is largely outside anyone's control and a serious problem in its own right.
Disinformation has become a far bigger issue than many predicted. Notably, research on systems from tech giants like Meta has shown models learning to propagate false beliefs in order to achieve outcomes that are far from the truth.
Deepfakes are becoming a bigger problem as well, with a rise in AI-generated explicit content targeting real people, including celebrities and political leaders.
Speaking about misinformation in a separate interview with Noema Magazine, Schmidt explained:
"I think it's largely unsolvable and the reason is the code [to] generate misinformation is essentially free. Any person, whether good or bad, has access to them. It doesn't cost anything and they produce very, very good images. There are regulatory solutions to that but the important point is that the cat is out of the bag or whatever metaphor you want. It's important that these more powerful systems, especially as they get closer to general intelligence, have some limits on proliferation and that problem is not yet solved."
Fake messages impersonating big names from the industry are already circulating, and fraudsters have been arrested for such schemes in the past, including attempts to sway voters in a particular direction during elections.
As per the ex-Google CEO, the real dangers lie in large language models capable of taking part in cyber attacks and even biological attacks. They aren't here yet, but he expects them to arrive within three to five years.
Image: Noema Magazine / YT