Why does this "AI Godfather" always bombard ChatGPT?

Text|Lin Weixin

Editor|Su Jianxun

Source|Smart Emergence

Image source: Generated by Unbounded AI tool

On June 22, local time, Yann LeCun, the "Godfather of AI" and Meta's chief AI scientist, appeared at the Munk Debate in Toronto and took another shot at ChatGPT: "Within five years, no one will use ChatGPT." In his view, ChatGPT does not really understand the real world because it is trained purely on text, "and most of human knowledge has nothing to do with text or language."

This bold prediction immediately attracted a lot of attention.

The Munk Debate that LeCun attended was held on the motion that "artificial intelligence poses an existential threat." During the debate, LeCun insisted that "the prediction of doom brought by artificial intelligence is nothing but a new obscurantism."

LeCun argues that what people fear is that once a future AI is switched on, it will take over the world within minutes, and "this is ridiculous." Such fear of AI is rooted in human psychology and, for now, has no scientific evidence behind it.

Yann LeCun at the debate

Regarding the "A threat theory", the discussion in Silicon Valley has been heated up, and no consensus has been reached. This debate is a continuation of this topic, and it did not arouse too much noise on the Internet. Interestingly, the short video of Yang Likun criticizing ChatGPT incidentally during the debate became a hot topic on domestic social media.

This is not the first time LeCun has complained about ChatGPT. At the Viva Tech conference in Paris in mid-June, LeCun said that AI systems such as ChatGPT are not even as smart as a dog. In April, he said in a livestream that ChatGPT is not actual research and development in the AI field: "it is a product."

His doubts about ChatGPT stem first from the view that there is no particular innovation at the technical level; other companies have similar technologies. OpenAI is not ahead of the pack, but "(the technology) is well put together."

Perhaps because this caused too much controversy, LeCun later explained on Twitter that he was not criticizing OpenAI as a company, but only hoping to "correct" the perception, held by the public and the media, that ChatGPT is an incredible innovation surpassing other companies' products.

Over the past few months, LeCun has consistently criticized those who exaggerate, or even boast about, the basic technical capabilities behind ChatGPT.

In his view, AI systems built on large language models are still very limited: they are trained purely on text and cannot learn the real-world experience that humans have. He once said that today's AI can pass the US bar exam but cannot load a dishwasher, while a 10-year-old child can "learn it in 10 minutes."

The ChatGPT bubble may be about to burst

Since ChatGPT was released, more than 100 million people around the world have used it, and many of them pay $20 a month for a ChatGPT Plus subscription. Recently, however, many users have complained online that GPT-4 has become worse than GPT-3.5.

"Two weeks ago, GPT4 was the greatest writing assistant in the world, not long ago, it became very mediocre," wrote one user, "I think they cut the processing power or made it less intelligent."

Even OpenAI researchers stepping in to deny the rumors, insisting that the large model is "static" and that its performance has not been degraded, has not been entirely convincing.

A recent piece of data has attracted widespread attention. According to SimilarWeb, the number of visits to ChatGPT soared in the first three months of this year and slowed in April: month-on-month growth dropped from 55.8% in March to 12.6% in April, and fell further to just 2.8% in May. Forecasts suggest it may turn negative in June.

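As an aside, here is a minimal sketch of how month-on-month growth figures like those cited above are computed. The visit counts below are hypothetical index values chosen only so that the output matches the article's percentages; they are not SimilarWeb's actual traffic data.

    # Minimal sketch: month-on-month growth from (hypothetical) visit counts.
    # The absolute numbers are illustrative, not SimilarWeb data; they are
    # picked so the output reproduces the percentages cited in the article.
    visits = {"Feb": 1000, "Mar": 1558, "Apr": 1754, "May": 1803}

    months = list(visits)
    for prev, cur in zip(months, months[1:]):
        growth = (visits[cur] - visits[prev]) / visits[prev] * 100
        print(f"{cur}: {growth:+.1f}% month-on-month")
    # Output: Mar: +55.8%, Apr: +12.6%, May: +2.8%
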
Surveys from multiple countries show that ChatGPT usage is lower than expected, with at least 40% of respondents saying they "haven't heard of it" or "haven't used it." ChatGPT is also struggling to retain ordinary users once the novelty wears off.

Accuracy is a big reason users are turned off. ChatGPT delivers nonsense with a straight face, which not only creates unnecessary trouble for users but also limits its application scenarios. To this day, OpenAI has not come up with an effective solution.

LeCun has said more than once that the generative AI behind ChatGPT has hit a dead end. His assertion may not be objective, but judging from the reaction of the capital markets, enthusiasm for large models is indeed cooling.

Bank of America analyst Michael Hartnett wrote that artificial intelligence is currently in a "baby bubble." The current AI boom reminds him of the dot-com bubble at the turn of the century. He warned that the AI bubble may be about to burst, pointing out that companies and shareholders have been sharply reducing their holdings despite the stellar performance of U.S. stocks.

LeCun's outspoken criticism of ChatGPT resonates with the many people watching the AI wave with a skeptical eye. One Reddit user thinks LeCun may be right: "Sam Altman has made it clear that GPT is facing diminishing returns and ultimately needs something new."

After all the criticism, does where you sit simply determine where you stand?

Even though ChatGPT has exposed many problems, LeCun's repeated public attacks on it have, over time, provoked resentment in some quarters.

One Reddit user posted that while LeCun is correcting public misunderstandings, he is frankly not generous enough toward the overnight success of competing products and startups. At the end of the day, AI research "is useless until it reaches the end of the tunnel, and 99% of research papers are just minor updates to state-of-the-art technology," and "inventing the internal combustion engine is as important as putting it in a car."

After the Munk Debate, someone asked LeCun on Twitter: "You said in the debate that you predicted ChatGPT before it happened. Did you really know ChatGPT was coming?"

LeCun replied that Meta had launched two conversational bots before ChatGPT, one called BlenderBot 3 and the other called Galactica. When Galactica went live, however, it was heavily criticized for false answers (what are now called "hallucinations"), and Meta ended up taking it down.

What he resents is that ChatGPT spouts the same kind of nonsense Galactica did, yet the public chooses to be tolerant of it.

In the field of AI, LeCun and Meta got up early but arrived late to the market. As early as 2013, he joined the company (then Facebook) to lead artificial intelligence research. Over the years, his team has applied AI to the company's social products; in particular, after Facebook was caught up in the fake-news storm of 2016, AI was used to help the platform contain misinformation.

LeCun insists that Meta and Google did not release ChatGPT-like products first "not because they can't, but because they wouldn't."

To put it bluntly, big companies are more afraid of public opinion. Given that generative AI can bypass Meta's safety mechanisms with relative ease and could be used to spread false and harmful information, Meta worried that releasing generative AI too early would push the company back into a public-opinion abyss.

For a time, these concerns clouded Meta's internal AI strategy. In 2022, at least a third of its AI researchers, doubting the company's future in the field, chose to leave Meta. After OpenAI released ChatGPT in November, more employees departed.

OpenAI's success clearly spurred LeCun and Meta on. In February this year, Meta launched a large language model called LLaMA and open-sourced the code to selected researchers, allowing developers to use LLaMA to train their own chatbots. Last month, Meta released I-JEPA, billed as its first "human-like" AI model, which it claims is more accurate and efficient than generative AI. As usual, the related papers and code are open source.

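For readers curious what "using LLaMA to train their own chatbots" looks like in practice, here is a rough inference sketch using the Hugging Face transformers library. It assumes you have been granted access to an open-weights LLaMA-family checkpoint; the model identifier below is illustrative and is not named in the article or in Meta's announcement.

    # Rough sketch, assuming access to an open-weights LLaMA-family checkpoint.
    # The model id below is illustrative, not something specified in the article.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # illustrative checkpoint id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "In one sentence, why might text-only training limit a model?"
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
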
OpenAI, by contrast, chose to keep its models closed. LeCun was combative again: "Openness makes research progress faster, and you get a more dynamic ecosystem where everyone can contribute."

Zuckerberg publicly praised the company's progress in artificial intelligence not long ago. But do not forget that in February this year Zuckerberg declared 2023 the "Year of Efficiency": Meta laid off more than 11,000 employees and shut down a number of projects one after another.

It is understandable for LeCun to criticize competing products and stand up for his own, but business competition is ultimately judged by results, and people are waiting to see what answer the "Godfather of AI" will ultimately deliver.
