With the tech industry singularly focused on AI models, Anthropic is having an exceptionally good year.
The company may soon pull ahead of its main competitor, as it looks to raise tens of billions of dollars in a funding round that would put its valuation at some $950 billion (OpenAI was valued at $854 billion in its March round), and business customers increasingly express a preference for Claude over ChatGPT. A new report showed Anthropic recently outpaced OpenAI among business customers, quadrupling its market share since May 2025.
Cat Wu, Anthropic’s head of product for Claude Code and Cowork, has been a key figure in that success. Since joining the company in August 2024, Wu has helped shepherd Claude through a critical phase, leveling it up from a purely informational chatbot to a coding tool and beyond. Wu, who oversees the development of new features, is often paired with Boris Cherny, a core member of Anthropic’s technical staff and the creator of Claude Code, leading the pair to be characterized as Anthropic’s “Batman and Robin.”
Wu sat down with me at last week’s second annual Code with Claude conference in San Francisco, where she discussed how she thinks about product strategy, and how she hopes the experience of using Claude will change in the future.
This interview has been edited for length and clarity.
When you’re looking at product strategy, how much of it is reactive to your peers or your competitors? Do you think about that at all?
The main thing that we plan for is staying on the exponential, so I think, across our team, we instill in everyone the expectation that AI will just continue to get better. For us, we just need to stay at this frontier. We don’t think about competitors. I think if you do think about competitors, you end up being, like, constantly two weeks, or, like, a month behind, however fast you can execute. And so it’s usually not the best way to stay at the frontier.
Anthropic released at least six models last year and has already released about as many this year. Do you expect this pace of development to continue?
Our hope is that it continues (laughing). I think the models are still improving at a very steady pace, and so we should be able to keep sharing those with our users. I think the deployments might look a bit different, like how we handled Glasswing, but as much as possible, we want this capability to benefit as many people as possible, and it has to be handled in a very safe way, which is why we handled Glasswing [in the way that we did].
[Glasswing is an initiative that Anthropic launched in April that invited a small consortium of partner organizations, including companies like Amazon, Apple, CrowdStrike, and Microsoft, to gain access to its new cybersecurity model, Mythos. Unlike many of Anthropic's other AI models, Mythos is not being given a wide public release. The company has claimed that it fears the model, which is designed to scan codebases for software vulnerabilities, is too powerful and could be weaponized by bad actors.]
You said in a previous interview that the future of work is essentially staff managing fleets of agents. It seems like that could eventually lead to a situation where the agents are better at the job, or know the job better, than the human.
I think it is extremely hard to manage agents if you can't do the job yourself. I think the managers still need to be experts in their domain. It's a new skill set that a lot of people are going to have to learn, but managing agents is actually very similar to being a manager of people, in the sense that you have to understand, like, why did the agent make this mistake? Did it misinterpret my instruction? Was my request under-specified? You have to have the ability to debug it.
It does seem like the long-term goal is to cut down on team size, though. Because if you have agents doing a job, then you don't need an intern, right?
Ideally, I think the idea is that everyone can get a lot more done. I think that, for everyone’s job, there's always this percentage of it that's really tedious. For me, it’s responding to emails. I think everyone has this part of their life...So my hope is that it [the AI agents] actually does that, and then everyone has, like, all these cool things that they will want to build [in their spare time].
What are you guys most excited about in the next six months?
I think the next big thing is proactivity. Last year we were in this world of synchronous development. Right now, people are shifting to routines, so, like, automating, for example, responses to customer support tickets. And I think the next step is that Claude understands what you work on, and just sets up some of these automations for you.
Lucas is a senior writer at TechCrunch, where he covers artificial intelligence, consumer tech, and startups. He previously covered AI and cybersecurity at Gizmodo. You can contact Lucas by emailing lucas.ropek@techcrunch.com.