Artificial intelligence company Anthropic has launched its next-generation AI assistant, Claude. Touted as a rival to ChatGPT, Claude can reportedly help with summarization, search, creative and collaborative writing, Q&A, coding and more.

Users who participated in a try-out Tuesday reported that Claude was easy to converse with and produced the desired results with less effort. Anthropic said on its website that the assistant was designed to take direction on personality and tone and respond accordingly.

Like ChatGPT owner OpenAI, Anthropic has big-tech backing: Google invested about $300 million in the startup in February.

What is Claude?

Claude is "much less likely to produce harmful outputs," the AI startup said. The chatbot can process a wide variety of conversational and text tasks while "maintaining a high degree of reliability and predictability" and is "more steerable."

It comes in two versions: Claude and Claude Instant. More updates to both systems are in the works and will be introduced in the coming weeks.

"As we develop these systems, we'll continually work to make them more helpful, honest, and harmless as we learn more from our safety research and our deployments," the company added.

Like ChatGPT, Claude cannot access the internet.

While the chatbot has yet to be fully rolled out, people can request access by filling out a form. It has already been tested by some of Anthropic's partners; Quora, for instance, offered Claude to users through its AI chat app Poe.

Poe users found that "Claude feels more conversational than ChatGPT" and "more interactive and creative in its storytelling."

"I personally love the way the answers are presented and how in-depth, yet simply presented they are," one user said.

Vivian Shen, CEO of online learning platform Juni Learning, said, "Across subjects, including math problems or understanding symbolism in critical reading, incorporating Claude provided better, richer answers for our students' learning."

Legal infrastructure company Robin AI used the chatbot to "suggest new, alternative language that's more friendly to our customers."

"We've found Claude is really good at understanding language - including in technical domains like legal language. It's also very confident at drafting, summarizing, translations and explaining complex concepts in simple terms. Since deploying Claude in our product, we're seeing higher user engagement, stronger user feedback and we're closing more deals," Robin AI CEO Richard Robinson told Techcrunch.

While the excitement runs high, Claude comes with its own set of limitations. Like ChatGPT and similar AI systems, Claude sometimes makes factually incorrect statements: it invented the name of a chemical that doesn't exist and gave dubious instructions for producing weapons-grade uranium. One user was also able to bypass its built-in filters for harmful content by encoding a request in Base64, obtaining instructions for making meth at home, TechCrunch reported.
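
The bypass reportedly worked because Base64 turns the text of a request into a string of seemingly meaningless characters that a simple keyword filter will not flag, while the model can still decode and act on it. Below is a minimal, purely illustrative Python sketch of that encoding step, using a harmless placeholder prompt rather than the request described in the report:

    import base64

    # Illustrative only: encode a placeholder prompt in Base64.
    # A naive keyword filter scanning the raw input would not see the
    # original words, even though the text is trivially recoverable.
    prompt = "example prompt text"  # placeholder, not the reported request
    encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
    print(encoded)  # ZXhhbXBsZSBwcm9tcHQgdGV4dA==

    # Decoding recovers the original text.
    print(base64.b64decode(encoded).decode("utf-8"))  # example prompt text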

Yann Dubois, a Ph.D. student at Stanford's AI Lab, compared ChatGPT and Claude and found the latter more helpful: it follows instructions more closely and writes better in English. ChatGPT, however, produced better code and handled difficult French grammar questions better, Dubois said.

Anthropic is planning to offer Claude to more people for beta testing after further developing the app.
