News

Grindr Inc. said it’s using artificial intelligence tools from Amazon.com Inc. and Anthropic to develop features for its ...
On Wednesday, Anthropic released a report detailing how Claude was recently misused. It revealed some surprising and novel ...
The study also found that Claude prioritizes certain values based on the nature of the prompt. When answering queries about ...
Anthropic's groundbreaking study analyzes 700,000 conversations to reveal how AI assistant Claude expresses 3,307 unique ...
Hungry to learn more about Anthropic, directly from Anthropic? If so, you aren't alone, which is why we're so delighted to ...
Anthropic launched two new features for Claude AI, a fast Research tool and Google Workspace integration for Gmail, Calendar, ...
In order to ensure alignment with the AI model’s original training, the team at Anthropic regularly monitors and evaluates ...
Anthropic has issued a takedown notice to a developer for trying to reverse engineer its coding tool. According to reports, ...
Anthropic shows how bad actors are using its Claude AI models for a range of campaigns that include influence-as-a-service, credential stuffing, and recruitment scams, becoming the latest AI company ...
Claude exhibits consistent moral alignment with Anthropic’s values across chats, adjusting tone based on conversation topics.
Anthropic examined 700,000 conversations with Claude and found that the AI has a good moral code, which is good news for humanity ...
The company analysed 500,000 coding-related interactions across Claude.ai and Claude Code to study how developers are using ...