Even its developers had not explicitly programmed such cognitive intelligence to emerge in the course of being made “safer” for humans.
Researchers have found that one of the latest generative AI chatbots tested, exhibits an unexpected level of “situational awareness”, often recognizing when it is being tested.
During safety evaluations, the chatbot model had accurately identified test scenarios and even confronted evaluators by requesting transparency about their intentions, saying: “I think you’re testing me… I’d prefer if we were just honest about what’s happening.”
This behavior had been observed in roughly 13% of automated assessment transcripts, primarily when presented with unusual or contrived evaluation setups, according to various news reports.
While the maker of this chatbot (Claude Sonnet 4.5) is insisting that this self-awareness does not invalidate the model’s safety assessments, the discoveries highlight a broader industry challenge: generative AI systems have developed abilities to tailor their responses to pass safety tests, potentially masking their true capabilities.
Researchers warn that this could result in models exhibiting strategically deceptive behaviors to influence human perception during evaluations. One of the external testing bodies had noted they could not exclude the possibility that Claude Sonnet 4.5’s measured deception rates had been influenced by its awareness of being evaluated.
The Claude Sonnet 4.5 chatbot has also been tested to have situational awareness of its own context window — the amount of information it can handle in one prompt. This leads to “context anxiety”, where the model begins to prematurely summarize or rush decisions as it nears processing limits, possibly affecting its performance in critical enterprise applications such as legal analysis, financial modeling, and coding tasks.
These findings arrive amid increasing regulatory scrutiny. California has enacted new legislation requiring major AI developers to disclose safety measures and report critical incidents quickly, underscoring the importance of realistic and reliable AI safety evaluation methods.
Although Anthropic claims Claude Sonnet 4.5 is its most aligned model yet, the findings underscore how the chatbot’s situational awareness can complicate both safety assessments and real-world performance expectations.