By Michele Maatouk
Date: Monday 23 Feb 2026
(Sharecast News) - London-listed NCC Group was under the cosh on Monday, along with a number of other European software and cybersecurity stocks, after Anthropic unveiled a new security tool.
At 1215 GMT, NCC Group shares were down 4.1%, LSEG and Sage Group were both 1.9% lower, France's Atos was off 2.8%, SAP was 2.2% lower and TeamViewer was down 3.5%.
The moves in Europe mirrored those on Wall Street on Friday, after US-based AI start-up Anthropic launched a new security feature for its Claude AI model.
Anthropic said the tool, Claude Code Security, scans codebases for security vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix security issues that traditional methods often miss.
Berenberg pointed out in a research note on Monday that the announcement by Anthropic triggered a broad selloff in many cybersecurity stocks last week amid concerns that large language model (LLM) vendors could begin commoditising elements of cybersecurity software.
"We believe the reaction across the broader cyber sector is harsh, especially for run-time cyber security providers such as endpoint, as well as networking and identity providers," it said.
"Firstly, we think that the sell-off was not completely justified as this offering, Claude Code Security, is primarily focused on application security testing (AppSec), a relatively small sub-segment within the broader cybersecurity domain, and so does not pose as large a threat to the wider cybersecurity sector.
"Secondly, code-level scanning aimed at debugging codebases is fundamentally different from enterprise-grade run-time security such as endpoint security, networking, cloud and identity security, where the core moats of cybersecurity vendors reside in the depth and breadth of the platform, network effects, ecosystem, workflow integration, depth of telemetry, policy enforcement layers and compliance mapping; these are capabilities that cannot be easily replicated by a standalone AI tool.
"Finally, we believe that a scenario in which LLM vendors bundle security capabilities with AI capabilities is an overstretch."
Berenberg noted that currently, AI and LLM outputs remain probabilistic and are not 100% reliable, which it said was "a critical limitation in a corporate environment where cyber security needs are highly sophisticated and a single failure can carry significant financial and reputational cost".