Claude is pitched as a relatively "harmless" AI chatbot developed by Anthropic, a start-up co-founded by former OpenAI employees.

Anthropic, an artificial intelligence company backed by Google parent Alphabet, is making its generative AI chatbot Claude available to businesses as an alternative to OpenAI's ChatGPT.

Claude is pitched as a relatively "harmless" AI system that is capable of a wide variety of conversational and text-processing tasks while maintaining "a high degree of reliability and predictability". Some of these tasks include summarisation, search, creative and collaborative writing, Q&A and coding.

"Early customers report that Claude is much less likely to produce harmful outputs, easier to converse with, and more steerable – so you can get your desired output with less effort," the creators wrote in a blogpost yesterday (14 March).

Accessible to businesses through a chat interface as well as an API in Anthropic's developer console, Claude can also take direction on personality, tone and behaviour, the company claims.

Claude is also available in a lighter version called Claude Instant, which is faster and less expensive than the main "high-performance" model.

Co-founded by former OpenAI employees in 2021 and based in San Francisco, Anthropic has been testing Claude in a closed alpha over the past few months with partners such as productivity tool Notion, Q&A website Quora and privacy-focused web browser DuckDuckGo.