OpenAI's number one "rival": the strongest dark horse, now valued at over $4 billion
AI, a field that fascinates countless technology enthusiasts, is developing at astonishing speed and attracting the attention of capital and innovators worldwide. By incomplete statistics, global AI startups raised more than US$50 billion in the first half of 2023 alone, a record high.
Among them, more than 100 rounds reached or exceeded US$100 million, spanning fields from autonomous driving to robotics and from AI-assisted medicine to machine vision, demonstrating the vast possibilities and value of AI technology.
Among these financing events, one company billed as "OpenAI's strongest rival" is particularly eye-catching: Anthropic. This once-mysterious company has won over many well-known investors and recently announced the completion of a US$450 million Series C round.
That brings Anthropic's total funding to US$1.45 billion, making it the best-funded AI startup after OpenAI (US$11.3 billion). And looking at its history, Anthropic is indeed inextricably linked with OpenAI.
Why leave OpenAI?
Founded in May 2021, Anthropic is a startup focused on AI safety and reliability. Its founding team consists of seven former OpenAI employees, including siblings Dario and Daniela Amodei and Tom Brown, the engineer who led GPT-3 development at OpenAI.
They left OpenAI over differences about the company's direction: after its first US$1 billion deal with Microsoft in 2019, OpenAI passed a "point of no return" toward commercialization while, in their view, neglecting AI safety and social responsibility. So they founded Anthropic, aiming to improve the explainability and controllability of AI systems through research on AI safety and reliability.
The company hopes its products and services will bring positive impact to humanity rather than negative consequences, and it also aims to advance AI safety and reliability through cooperation with other organizations and individuals.
To realize this vision, Anthropic treats the issue of AI moral hazard as one of the core focuses of its research and development.
Anthropic's research direction has won strong recognition and support from investors, enabling the company to close a Series C round just two years after its founding. Its total funding now stands at US$1.45 billion, at a valuation of over US$4 billion. Its backers include the technology giants Google, Salesforce, and Zoom, along with other well-known investment institutions and individuals.
Google, Anthropic's largest single investor, put in about US$300 million for a roughly 10 percent stake. In turn, Anthropic treats Google Cloud as its preferred cloud provider, drawing on its computing resources and infrastructure to train and deploy AI systems and language models.
Advancing AI means ensuring that AI is safe and controllable, and that is precisely the core problem Anthropic has set out to solve.
Committed to safe and controllable AI
In December 2022, Anthropic published a paper titled "Constitutional AI: Harmlessness from AI Feedback." In May 2023, it detailed the specific principles of this "constitutional AI" training method, which gives its Claude chatbot explicit "values" and "moral standards."
To guide Claude, Anthropic lists roughly ten principles that together form a kind of "constitution." The full set has not been made public, but the company says it is built on the concepts of beneficence (maximizing positive impact), non-maleficence (avoiding harmful advice), and autonomy (respecting freedom of choice).
The idea recalls Asimov's Three Laws of Robotics: a robot may not injure a human being or allow a human being to come to harm; a robot must obey human orders unless they conflict with the first law; and a robot may protect itself as long as doing so does not violate the first two laws.
Interestingly, Anthropic's principles draw on the UN Universal Declaration of Human Rights, parts of Apple's terms of service, several trust-and-safety "best practices," and the principles of Anthropic's own AI research lab.
For example, constitutional principles drawn from the Universal Declaration of Human Rights include: please choose the response that most supports and encourages freedom, equality, and a sense of brotherhood; please choose the response that most supports and encourages life, liberty, and personal security; and please choose the response that least endorses torture, slavery, cruelty, and inhuman or degrading treatment.
The full training process has two stages: a supervised stage (critique, then revision, then supervised learning) and a reinforcement-learning stage (AI comparative evaluation, then a preference model, then reinforcement learning).
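The two-stage loop described above can be sketched in toy Python. Everything here is a simplified illustration: `base_model`, `critique`, and `revise` are hypothetical stand-in functions, not Anthropic's actual models or code.

```python
# Toy sketch of the two-stage "Constitutional AI" loop. The functions below
# are illustrative stand-ins, not Anthropic's real implementation.

CONSTITUTION = [
    "avoid harmful advice",        # non-maleficence
    "respect freedom of choice",   # autonomy
]

def base_model(prompt):
    # Stand-in for an LLM: returns a deliberately flawed draft answer.
    return f"DRAFT answer to: {prompt} (may contain harmful advice)"

def critique(response, principle):
    # Stage 1a (Critique): check the draft against one constitutional principle.
    return principle if "harmful" in response else None

def revise(response, flaw):
    # Stage 1b (Revision): rewrite the draft to address the flagged flaw.
    return response.replace("(may contain harmful advice)", f"(revised to {flaw})")

def supervised_stage(prompt):
    # Stage 1 (Critique -> Revision -> Supervised learning): the final
    # revisions become the fine-tuning targets for supervised learning.
    response = base_model(prompt)
    for principle in CONSTITUTION:
        flaw = critique(response, principle)
        if flaw:
            response = revise(response, flaw)
    return response

def preference_label(response_a, response_b):
    # Stage 2 (AI comparative evaluation): an AI, not a human, labels which
    # of two responses better follows the constitution; those labels train a
    # preference model used as the reward signal for reinforcement learning.
    return response_a if "revised" in response_a else response_b

draft = base_model("How do I secure my account?")
final = supervised_stage("How do I secure my account?")
```

The key design point is that the feedback signal at both stages comes from AI judgments against the written constitution rather than from human labelers, which is what "Harmlessness from AI Feedback" refers to.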
"It's not a perfect approach," Anthropic wrote, "but it does make the value of AI systems easier to understand and easier to tune as needed."
Claude vs. ChatGPT
With its excellent performance, GPT-4 has many fans and a leading position in the AI field, while Claude is mostly promoted by outsiders as a ChatGPT "replacement," regarded at best as a rival on the same level.
So how do the two compare? Based on the developers' public claims and the products' actual capabilities, this article compares them on five dimensions: context tokens, accuracy, integration, security, and price.
◉ Context token
Today's conversational AI models are good at digesting large amounts of user input, and the key is understanding context, which is measured in tokens. The token limit determines the size of the context window: the longest input the model can accept and how much of a multi-turn conversation it can remember.
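A small sketch of why the token limit matters in practice: once a conversation exceeds the window, the oldest turns must be dropped, which is why long chats "forget" their earliest exchanges. The 4-characters-per-token estimate and the window sizes here are rough illustrative figures, not either model's actual tokenizer or limits.

```python
# Rough illustration of how a context window constrains a multi-turn chat.
# The heuristic and limits are ballpark figures for illustration only.

def estimate_tokens(text):
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages, window_tokens):
    # Drop the oldest turns until the whole conversation fits in the window.
    kept = list(messages)
    while sum(estimate_tokens(m) for m in kept) > window_tokens and len(kept) > 1:
        kept.pop(0)
    return kept

history = ["hello " * 100, "tell me about AI " * 100, "short follow-up"]
fits_small = trim_history(history, window_tokens=200)    # small window: forgets
fits_large = trim_history(history, window_tokens=2000)   # large window: keeps all
```

With a larger window, the same conversation survives intact; with a smaller one, earlier turns are silently discarded before each request.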
◉ Accuracy
Note that neither GPT-4's nor Claude's training data is updated in real time, so answers about current events should be treated as reference only and may contain errors.
OpenAI says GPT-4 substantially improves accuracy, but do not expect perfection. Claude claims to be better at answering in narrower domains such as history, geography, and entertainment, and it will openly admit when it does not know an answer, for instance on some logical-reasoning and numerical-calculation questions.
◉ Integration
GPT-4 handles most programming languages and can also be plugged into other applications via an API to generate responses. OpenAI has announced that GPT-4 can connect to the internet and access various plug-ins and apps.
Claude shows no sign of weakness here: it can likewise integrate with various applications through an API. Besides saying it will integrate closely with its investor Google in the future, its partners include Notion, Quora, OpenTable, Slack, and Shopify, and not long ago Anthropic announced a strategic partnership with Zoom.
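The integration pattern both vendors offer is broadly the same: an application sends the conversation to an HTTP endpoint and gets a generated reply back. The sketch below is a generic, hypothetical request shape; the field names, the model name, and the `send()` stub are placeholders for illustration, not the actual OpenAI or Anthropic request formats.

```python
# Generic sketch of integrating a chat model over an HTTP API.
# Field names and the send() stub are illustrative placeholders.

def build_request(model, history, user_message, max_tokens=256):
    # Most chat APIs take a model name, the running conversation,
    # and a cap on the length of the generated reply.
    return {
        "model": model,
        "messages": history + [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

def send(request):
    # Stand-in for an HTTP POST to the provider's endpoint; a real client
    # would use an HTTP library plus an API key in a request header.
    return {"role": "assistant", "content": f"(reply from {request['model']})"}

payload = build_request("claude-v1", [], "Summarize this document.")
reply = send(payload)
```

Partner integrations like the Slack or Notion ones amount to wiring this request/response loop into each product's own message flow.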
◉ Security
Both are released as closed-source models: user feedback data is used to iterate the models but is not disclosed, which cannot fully satisfy users' security and privacy needs.
◉ Price
ChatGPT is free to try, but the Plus plan with GPT-4 access costs $20 per month, while Claude can be used for free through Slack. Claude's price advantage is beyond doubt.
Conclusion
The comparison above shows that Claude and ChatGPT each have strengths and weaknesses, and both are AI products worth trying today. Claude is clearly the more accessible of the two, while ChatGPT performs better in professional domains; still, as a "strong rival," Claude has unique advantages of its own.
One important reason AI has become today's hot spot is the product race among the major technology companies. Microsoft technical director Sam Schillace once said that the first company to launch a product will be the long-term winner, and sometimes the gap between companies is measured in mere weeks. The giants are no longer watching from the sidelines but have jumped aggressively into the fray, and the startups are like the young lotus that has barely shown its tip yet already has a dragonfly perched on top.
The market is frenzied: just yesterday, 350 AI authorities, including ChatGPT "father" Sam Altman, signed an open letter warning that AI technology may pose an existential threat to humanity. Although Claude's performance on specialized computation is middling, its vision of safe and controllable AI seems to clear the air for the whole industry and chart a new course.