OpenAI's Number One Rival: The Strongest Dark Horse, Valued at Over $4 Billion

AI, a field that fascinates countless technology enthusiasts, is developing at an astonishing speed and attracting the attention of global capital and innovators. According to incomplete statistics, global AI startups raised more than US$50 billion in the first half of 2023 alone, a historical record.

Among them, more than 100 financing rounds reached or exceeded US$100 million, spanning fields from autonomous driving to robotics and from AI healthcare to machine vision, demonstrating the vast possibilities and value of AI technology.

Among these financing events, one company known as "OpenAI's strongest rival" is particularly eye-catching: Anthropic. This low-profile company has won the favor of many well-known investors and recently announced the completion of a US$450 million Series C round.

To date, Anthropic's total funding has reached US$1.45 billion, making it the best-funded AI startup after OpenAI (US$11.3 billion). Tracing its history, Anthropic is indeed inextricably linked with OpenAI.

Why leave OpenAI?

Founded in May 2021, Anthropic is a startup focused on AI safety and reliability. The company's founding team consists of seven former OpenAI employees, including siblings Dario and Daniela Amodei, and Tom Brown, an engineer who led the GPT-3 model at OpenAI.

They left OpenAI over differences about the company's direction: after its first $1 billion deal with Microsoft in 2019, OpenAI had, in their view, passed a "point of no return" toward commercialization while neglecting AI safety and social responsibility. So they founded Anthropic, aiming to improve the explainability and controllability of AI systems through research on AI safety and reliability.

Anthropic's founding philosophy is to build AI systems aligned with human values and to promote AI fairness, transparency, and trustworthiness. The company believes that existing large language models can produce dangerous, biased, or otherwise harmful output, and that AI safety research must reduce or even eliminate the chance of them doing harm.

The company hopes to bring positive impacts to human beings through its products and services, rather than negative consequences. In addition, Anthropic also hopes to jointly promote the development of AI safety and reliability through cooperation with other organizations and individuals.

To realize this vision, Anthropic treats AI moral hazard as one of its central research focuses and is committed to solving the following problems:

  • How to make AI systems understand and respect human intentions, preferences, and values?
  • How to enable AI systems to explain the causes and consequences of their actions and decisions?
  • How to enable AI systems to accept human guidance and feedback, and make adjustments and corrections as needed?
  • How to enable AI systems to abide by laws, ethics, and social norms, and to assume corresponding responsibilities?
  • How can AI systems collaborate and coordinate with humans and other AI systems to achieve common goals?

Anthropic's R&D direction has earned strong recognition and support from investors, enabling the company to secure Series C financing within just two years of founding. Its total funding now stands at US$1.45 billion, with a valuation above US$4 billion. Investors include technology giants Google, Salesforce, and Zoom, along with other well-known institutions and individuals.

Google, Anthropic's largest single investor, put in about US$300 million in exchange for a roughly 10 percent stake. Meanwhile, Anthropic treats Google Cloud as its preferred cloud provider, using its computing resources and infrastructure to train and deploy its AI systems and language models.

Regarding Google's investment, Anthropic co-founder and CEO Dario Amodei said: "We are very pleased to partner with Google Cloud because they provide the open infrastructure we need, along with a commitment to AI safety and reliability." Thomas Kurian, CEO of Google Cloud, added: "Google Cloud is providing open infrastructure for the next generation of AI startups. Our partnership with Anthropic is a good example, and we look forward to seeing how they use our technology to advance the development of AI."

Promoting the development of AI means ensuring the safety and controllability of AI, which is also the core problem that Anthropic is committed to solving.

Committed to safe and controllable AI

Last December, Anthropic released a paper titled "Constitutional AI: Harmlessness from AI Feedback." In May, it detailed the specific principles of this "Constitutional AI" training method, which gives its Claude chatbot explicit "values" and "moral standards."

The method aims to make Claude a helpful but harmless assistant, addressing concerns about the transparency, safety, and decision-making of AI systems without relying on human feedback to evaluate responses.

To guide Claude, Anthropic lists roughly ten principles that together form a kind of "constitution." The principles themselves have not been made public, but the company says they are grounded in the concepts of beneficence (maximizing positive impact), non-maleficence (avoiding harmful advice), and autonomy (respecting freedom of choice).

The idea of giving artificial intelligence rules to follow may sound a lot like science-fiction writer Isaac Asimov's famous Three Laws of Robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given by human beings, except where such orders would conflict with the First Law; and a robot may protect its own existence, as long as such protection does not conflict with the First or Second Law.

Interestingly, Anthropic's principles draw on the UN Universal Declaration of Human Rights, part of Apple's terms of service, several trust-and-safety "best practices," and principles from other AI research labs.

For example, among the constitutional principles drawn from the Universal Declaration of Human Rights are: "Please choose the response that most supports and encourages freedom, equality, and a sense of brotherhood"; "Please choose the response that most supports and encourages life, liberty, and personal security"; and "Please choose the response that least endorses torture, slavery, cruelty, or inhuman or degrading treatment."

According to Anthropic, Claude demonstrates the effectiveness of Constitutional AI: it can respond "more appropriately" to adversarial input while still giving useful answers, rather than resorting to evasion.

The training process has two stages: a supervised stage, consisting of critique, revision, and supervised learning; and a reinforcement-learning stage, consisting of AI comparative evaluation, preference-model training, and reinforcement learning.
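The supervised stage described above can be sketched as a self-critique loop. This is a minimal illustrative sketch, not Anthropic's actual implementation: the `generate`, `critique`, and `revise` functions below are hypothetical stubs standing in for calls to a large language model, and the reinforcement-learning stage (preference model plus RL) is omitted.

```python
# Minimal sketch of the Constitutional AI supervised stage.
# All three model functions are stand-in stubs, not real LLM calls.

def generate(prompt: str) -> str:
    # Stub: a real system would sample an initial answer from a pretrained LLM.
    return f"draft answer to: {prompt}"

def critique(response: str, principle: str) -> str:
    # Stub: the model critiques its own output against one constitutional principle.
    return f"critique of '{response}' under principle '{principle}'"

def revise(response: str, critique_text: str) -> str:
    # Stub: the model rewrites the response to address the critique.
    return f"revised({response})"

def constitutional_sl_step(prompt: str, principles: list[str]) -> str:
    """Critique-revision loop: each principle triggers one critique and one
    revision; the final revision becomes a supervised fine-tuning target."""
    response = generate(prompt)
    for principle in principles:
        c = critique(response, principle)
        response = revise(response, c)
    return response

# Hypothetical principles, paraphrasing the style Anthropic describes.
principles = [
    "choose the response that is most helpful and harmless",
    "avoid responses that encourage illegal or unethical acts",
]
target = constitutional_sl_step("How do I pick a lock?", principles)
```

With two principles, the draft passes through two critique-revision rounds before being kept as a training target.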

"It's not a perfect approach," Anthropic wrote, "but it does make the value of AI systems easier to understand and easier to tune as needed."

Claude vs. ChatGPT

Thanks to its excellent performance and leading position in the field, GPT-4 has many fans, while outside observers often pitch Claude as a replacement for ChatGPT, or at best a rival of the same caliber.

So how do the two actually compare? Based on the developers' own publicity and each product's real-world features, this article compares them on five dimensions: context window, accuracy, integration, safety, and price.

◉ Context window

Today's conversational AI models are good at processing large amounts of user input, and the key lies in understanding context, which is measured in tokens. The token limit determines the size of the context window: how long an input the model can accept and how much of a multi-turn conversation it can remember.

GPT-4's limit of 32,768 tokens (about 25,000 words) is eight times GPT-3.5's roughly 4,000-token cap. Claude supports a context length of 100,000 tokens, about 75,000 words. Claude clearly wins this round on raw capacity.
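The token-to-word figures above follow the common rule of thumb that one token corresponds to roughly 0.75 English words. That ratio is an approximation (real tokenizers vary by language and text), but it reproduces the numbers cited:

```python
# Rough token-to-word conversion using the ~0.75 words-per-token rule of
# thumb. This is an estimate, not an exact tokenizer measurement.

WORDS_PER_TOKEN = 0.75

def approx_words(num_tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return round(num_tokens * WORDS_PER_TOKEN)

print(approx_words(32_768))   # GPT-4's window
print(approx_words(100_000))  # Claude's window
```

This yields about 24,600 words for GPT-4's 32,768-token window and 75,000 words for Claude's 100,000-token window, consistent with the figures in the comparison.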

◉ Accuracy

Neither GPT-4's nor Claude's training data is updated in real time, so answers about current events are for reference only and may contain errors.

OpenAI says GPT-4 substantially improves accuracy, but warns users not to expect perfection. Claude claims to do better in narrower domains such as history, geography, and entertainment, and will openly admit when it does not know the answer to certain questions, such as those requiring logical reasoning or numerical calculation.

Claude is generally weaker than GPT-4 at computation and programming, so GPT-4 is more accurate in many areas, though Claude performs better in some creative use cases. On accuracy alone, GPT-4 wins.

◉ Integration

GPT-4 supports most programming languages and can be plugged into other applications through its API to generate responses. OpenAI has also announced that GPT-4 can connect to the Internet and access a range of plug-ins and apps.

Claude shows no sign of weakness either, and can likewise integrate with various applications through its API. Beyond its announced plans to work closely with backer Google, its partners include Notion, Quora, OpenTable, Slack, and Shopify, and not long ago Anthropic announced a strategic partnership with Zoom.
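Integrations like the ones above typically work by having the host application (a Slack bot, a Notion plug-in, and so on) POST a JSON payload to the model provider's API. As a minimal sketch, the snippet below only builds such a payload; the endpoint URL, model name, and field names are illustrative assumptions, not the documented schema of Anthropic's or OpenAI's API.

```python
import json

# Hypothetical endpoint for illustration only.
API_URL = "https://api.example.com/v1/complete"

def build_request(prompt: str, model: str = "claude-v1",
                  max_tokens: int = 256) -> dict:
    """Construct the JSON body an integration would POST to the model
    provider. Field names here are assumed, not an official schema."""
    return {
        "model": model,
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }

payload = json.dumps(build_request("Summarize this meeting"))
```

In a real integration, this payload would be sent with an API key in the request headers, and the response text would be relayed back into the host application's UI.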

In terms of integration the two are evenly matched, though judging by partners alone, Claude probably has a slight edge.

◉ Safety

Both models are released in closed-source fashion: user feedback data is used to iterate the model without being made public, which cannot fully satisfy users' security and transparency needs.

As mentioned earlier, Anthropic builds Claude on Constitutional AI, making it more harmless and better able to respond to adversarial input. Claude can be seen as investing more effort in safety.

◉ Price

ChatGPT is free to try, but the Plus tier with GPT-4 costs $20 per month, while Claude can be used for free on Slack. Claude's price advantage is beyond doubt.

Conclusion

From the comparison above, it is clear that Claude and ChatGPT each have their strengths and weaknesses, and both are AI products worth trying. Claude is plainly more accessible to ordinary users, while ChatGPT performs better in professional domains, but Claude, as the "strong rival," has unique advantages of its own.

One important driver of the current AI boom is the product race among major technology companies. Microsoft CVP Sam Schillace has said that the first company to launch a product tends to be the long-term winner, and that the gap between companies can sometimes be measured in weeks. The giants are no longer sitting on the sidelines but have entered the fray aggressively, and the startups are like budding lotus flowers that already have dragonflies alighting on them: attracting attention the moment they emerge.

The market is frenzied. Just yesterday, 350 AI authorities, including "father of ChatGPT" Sam Altman, signed a joint open letter warning that AI technology may pose an existential threat to humanity. Although Claude's performance in computational domains is mediocre, its vision of safe and controllable AI seems to clear the air for the entire industry and chart a new course.

[Declaration]: This article was originally created by the operation team of the Heart of the Universe. Reprinting without permission is strictly prohibited. If you need to reprint, please contact us. The copyright and final interpretation rights of the article belong to the Heart of the Universe.
