Anthropic releases Claude 2, a more capable, less gullible AI chatbot

Just five months after Anthropic debuted its ChatGPT rival, Claude, the company is back with an updated model that promises longer answers, more detailed reasoning, fewer hallucinations and generally better performance. It also now scores in the 90th percentile of graduate school applicants on the GRE reading and writing exams.

The updated version, Claude 2, is available now. It can process as many as 100,000 tokens — that’s around 75,000 words, or a few hundred pages of documents that users can have Claude digest and analyze — up significantly from the previous version’s 9,000-token limit. In AI, tokens are the bits and pieces that your input prompt gets broken down into so that the model can more readily process them — hence Claude’s ability to “digest” user data.
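The rough “100,000 tokens ≈ 75,000 words” conversion above can be sketched in a few lines. This is only an illustration: real models use subword tokenizers (such as byte-pair encoding), so actual counts vary with the text; the `estimate_tokens` helper and the ~0.75 words-per-token ratio are back-of-the-envelope assumptions, not Anthropic’s tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Approximate a token count using the common ~0.75 words-per-token rule.

    Real subword tokenizers split on fragments, not whitespace, so treat
    this as a rough planning estimate only.
    """
    words = len(text.split())
    return round(words / 0.75)

# A ~75,000-word document maps back to roughly the 100,000-token limit:
print(estimate_tokens("word " * 75_000))  # → 100000
```

In practice you would check a document against the model’s context limit this way before sending it, falling back to chunking or summarizing when the estimate exceeds the window.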

This increased capacity will also translate into longer, more nuanced responses. Claude 2 will even be able to generate short stories “up to a few thousand tokens,” the company says. Its coding capabilities have also improved, rising to a score of 71.2 percent on the Codex HumanEval benchmark, up from 56 percent.

The Claude system is already guided by Anthropic’s “constitutional AI” approach, a written set of principles that steers the model’s behavior. Extensive red-team testing since the first version’s release has tempered Claude 2 into a more emotionally stable, harder-to-fool AI: the company’s announcement claims it is “2x better at giving harmless responses compared to Claude 1.3.” If you’re already subscribed to the Claude 1.3 API, good news: you’ll be automatically rolled over to Claude 2 at no extra charge.
