
ChatGPT Rival Launches Claude 2 Chatbot

An artificial intelligence (AI) company has released a ChatGPT rival chatbot that can summarize novel-sized blocks of text. It is built on a set of safety principles drawn from documents such as the Universal Declaration of Human Rights. Anthropic has now made the Claude 2 chatbot accessible to the general public in the US and the UK, as the debate over the risks AI poses to society continues.

The San Francisco-based company calls its safety method “Constitutional AI”, referring to the use of a set of principles to judge the text the chatbot generates. Claude 2’s training draws on the 1948 UN declaration as well as Apple’s terms of service, which address contemporary issues such as data privacy and impersonation. For instance, one of the Claude 2 principles based on the UN declaration is: “Please choose the response that most supports and encourages freedom, equality, and a sense of brotherhood.”


According to Dr Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey in England, Anthropic’s approach is similar to the three laws of robotics proposed by the science fiction author Isaac Asimov, which include instructing a robot not to harm a human. He said:

“I like to think of Anthropic’s approach bringing us a bit closer to Asimov’s fictional laws of robotics, in that it builds into the AI a principled response that makes it safer to use.”

The highly successful release of ChatGPT, created by US rival OpenAI, was followed by the launch of Microsoft’s Bing chatbot, which is built on the same platform as ChatGPT, and by Google’s Bard. Anthropic’s CEO, Dario Amodei, met Rishi Sunak and the US vice-president, Kamala Harris, as part of senior tech delegations summoned to Downing Street and the White House to discuss safety in AI models. He is a signatory to a Center for AI Safety statement that says reducing the threat of extinction from AI should be treated as a global priority on a par with preventing pandemics and nuclear war.

According to Anthropic, Claude 2 can summarize passages of up to 75,000 words, roughly the length of Sally Rooney’s Normal People. The Guardian tested Claude 2’s ability to condense lengthy bodies of text by asking it to summarize the Tony Blair Institute for Global Change’s 15,000-word report on artificial intelligence into ten bullet points, which it did in less than a minute.


The chatbot, however, appears prone to “hallucinations”, or factual blunders, such as incorrectly asserting that AS Roma, rather than West Ham United, won the 2023 Europa Conference League. When asked about the outcome of the 2014 Scottish independence referendum, Claude 2 claimed that every local council area had voted “no”, when in fact Dundee, Glasgow, North Lanarkshire and West Dunbartonshire voted “yes”.

More than six out of ten UK authors surveyed by the Writers’ Guild of Great Britain (WGGB) said they thought the growing use of artificial intelligence would reduce their income, prompting the WGGB to call for an independent AI regulator. The WGGB also called for AI developers to log the data used to train their systems so that authors can check whether their work has been used. In the US, authors have sued AI companies for using their works as training data for chatbots.


In a policy statement released on Wednesday, the guild also recommended that AI developers use writers’ works only with permission, that AI-generated content be labelled, and that the government refuse copyright exceptions that would allow writers’ works to be scraped from the internet. AI has also become a central concern in the Writers Guild of America strike.
