Anthropic releases Claude Opus 4.7 with benchmark-leading coding and agentic performance


In short: Anthropic has released Claude Opus 4.7, its most capable generally available model, with benchmark-leading scores on SWE-bench Pro (64.3% vs GPT-5.4’s 57.7%), multi-agent coordination for hours-long workflows, 3x higher image resolution, and a 14% improvement in multi-step agentic reasoning with a third as many tool errors. Priced at $5/$25 per million tokens, it […]



This story continues at The Next Web
