‘Godfather of AI’ Leaves Google, Cites Several Ethical Issues Related to AI and Its Unregulated Use

Over the past couple of months, we have heard that Google has decided to fire on all cylinders as far as AI development is concerned, and considering that Google I/O 2023 is just a few days away, we are likely to hear more about what the search engine giant has planned. After all, the company wants to take the fight to Bing AI, ChatGPT, and other models. However, Google's approach isn't proving good for the company.

Google's relentless pursuit of AI perfection, without weighing what's right and wrong, pushed Geoffrey Hinton to resign from the company

Google Vice President Geoffrey Hinton told the New York Times that he tendered his resignation earlier this year, in April. For those who don't know, Hinton is considered by many to be the "godfather of AI," and his departure could signal trouble brewing within the company.

Now, high-profile exits are nothing new in the tech world, but Hinton's departure is not something to be taken lightly. He left Google over the dangers of AI that is neither controlled nor regulated. On several occasions, Hinton expressed worry that Google was ramping up its AI work to keep competitors from taking the lead, and by doing so, was opening itself up to a host of ethical issues.

For instance, Hinton has talked about how generative AI could end up flooding the public with incorrect information that is very difficult to tell apart from genuine information. In addition, he has warned that AI could replace jobs, which has become a big concern for a lot of people, including voice actors, writers, artists, and more.

However, it doesn't end there. Hinton has also warned that uncontrolled and unregulated AI could become a huge concern in the form of fully autonomous weapons, and that AI may pick up unwanted behavior from the training data it is fed. Of course, these concerns are not specific to Google, but Hinton is well aware of the dangers of AI and the effect it can have on the future. It does, after all, sound like something straight out of Terminator.

This is not the first time AI has raised ethical concerns. Remember the engineer Google fired after he claimed the LaMDA model had started developing feelings? That was just one of many such episodes over the past couple of months. While one might call Hinton or the aforementioned engineer paranoid, it is certainly something to worry about, because giving AI absolute power could blur the lines between what's true and what's false, and that is indeed a scary world to live in.

Written by Furqan Shahid
