By Allen Robin Hubert | Automations | April 24, 2026

Anthropic launched Claude Opus 4.7 on April 16, 2026, positioning it as its strongest generally available model for complex reasoning and agentic coding. The model is available through Claude products, the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Developers using the API can call it with the model ID claude-opus-4-7.
The headline detail for engineering teams is that the base API pricing remains unchanged from Opus 4.6. Anthropic’s pricing page lists Claude Opus 4.7 and Opus 4.6 at $5 per million input tokens and $25 per million output tokens. Cache write, cache hit, and batch pricing also match Opus 4.6.
That does not mean every production workload will cost the same. Anthropic says Opus 4.7 uses an updated tokenizer, and the same input can map to roughly 1.0 to 1.35 times as many tokens, depending on the content type. The model also thinks more at higher effort levels during later turns in agentic workflows, which can increase output tokens. Before upgrading, teams should run the same coding-agent tasks on Opus 4.6 and Opus 4.7, then compare total input tokens, output tokens, tool calls, latency, retries, and final task success.
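A side-by-side comparison like the one above can be as simple as aggregating per-run metrics from your agent harness's logs. The sketch below assumes hypothetical run records; in practice the token counts would come from the usage fields returned with each API response.

```python
# Minimal sketch: summarize agent runs per model so Opus 4.6 and 4.7
# results can be compared on the metrics the article lists.
from dataclasses import dataclass
from statistics import mean


@dataclass
class AgentRun:
    model: str
    input_tokens: int
    output_tokens: int
    tool_calls: int
    latency_s: float
    retries: int
    succeeded: bool


def summarize(runs: list[AgentRun], model: str) -> dict:
    """Aggregate the runs recorded for one model."""
    rows = [r for r in runs if r.model == model]
    return {
        "runs": len(rows),
        "avg_input_tokens": mean(r.input_tokens for r in rows),
        "avg_output_tokens": mean(r.output_tokens for r in rows),
        "avg_tool_calls": mean(r.tool_calls for r in rows),
        "avg_latency_s": round(mean(r.latency_s for r in rows), 2),
        "avg_retries": mean(r.retries for r in rows),
        "success_rate": sum(r.succeeded for r in rows) / len(rows),
    }
```

Running the same task set through both models and comparing the two summaries makes any tokenizer-driven input growth or extra thinking output visible before the switch, not after.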
Prompt behavior also needs testing. Anthropic says Opus 4.7 follows instructions more strictly, which can make older prompts behave differently. Prompts that worked because Opus 4.6 interpreted them loosely may need tuning. This matters for coding agents because system prompts often include repository rules, testing instructions, deployment limits, formatting rules, and permission boundaries. A stricter model can be useful, but existing harnesses should be reviewed before replacing the model in production.
There is also a change in thinking controls. Anthropic’s developer documentation says extended thinking budgets are removed in Opus 4.7. The previous format using thinking: {"type": "enabled", "budget_tokens": N} returns a 400 error. Developers now need to use adaptive thinking, with effort controls such as high or xhigh depending on the task. This is a direct migration item for teams that hardcoded thinking budgets into their agents.
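The migration can be illustrated with plain request payloads. This is a sketch based only on what the article describes: the old budget_tokens format versus an adaptive-thinking format with an effort level. The exact field names for adaptive thinking (here "type": "adaptive" and "effort") are assumptions to verify against Anthropic's current API reference before shipping.

```python
# Sketch of the thinking-parameter migration. Field names for the new
# adaptive format are illustrative assumptions, not confirmed API shape.

def legacy_request(prompt: str, budget: int) -> dict:
    """Old-style request with a hardcoded thinking budget.
    Per the article, Opus 4.7 rejects this with a 400 error."""
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 4096,
        "thinking": {"type": "enabled", "budget_tokens": budget},
        "messages": [{"role": "user", "content": prompt}],
    }


def migrated_request(prompt: str, effort: str = "high") -> dict:
    """New-style request using adaptive thinking with an effort level."""
    if effort not in {"high", "xhigh"}:
        raise ValueError("effort levels named in the article: high, xhigh")
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 4096,
        "thinking": {"type": "adaptive", "effort": effort},  # assumed shape
        "messages": [{"role": "user", "content": prompt}],
    }
```

Teams that built request payloads like legacy_request deep inside their agent code will want to grep for budget_tokens across the codebase, since every call site is a migration item.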
For coding agents, the main reason to test Opus 4.7 is its reported improvement in long-horizon work. AWS describes the model as stronger in ambiguity handling, problem solving, instruction following, systems engineering, complex code reasoning, and long-running tasks. Anthropic also says Opus 4.7 improves file system-based memory, which matters for multi-session coding agents that need to remember project notes, repo conventions, and previous decisions.
Security checks should be part of the upgrade. Anthropic says Opus 4.7 improves on Opus 4.6 in honesty and resistance to malicious prompt injection attacks, though its own alignment note still describes the model as largely well-aligned rather than perfect. For business users running agents over browsers, emails, files, CRMs, dashboards, or internal tools, this does not remove the need for allowlists, approval steps, tool permissions, logging, and restricted write actions.
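The guardrails above can be enforced outside the model, in the tool-dispatch layer. The sketch below shows a default-deny allowlist with a human-approval gate on write actions; the tool names are hypothetical placeholders, not part of any real integration.

```python
# Illustrative default-deny gate for agent tool calls. Tool names are
# hypothetical; the point is the structure: allowlist reads, gate writes,
# deny everything else regardless of what the model asks for.

READ_ONLY_TOOLS = {"search_files", "read_file", "list_tickets"}
WRITE_TOOLS_NEEDING_APPROVAL = {"send_email", "update_crm", "deploy"}


def authorize(tool: str, human_approved: bool = False) -> bool:
    """Return True only if this tool call may proceed."""
    if tool in READ_ONLY_TOOLS:
        return True
    if tool in WRITE_TOOLS_NEEDING_APPROVAL:
        return human_approved  # explicit approval step for write actions
    return False  # default-deny: unlisted tools never run
```

Because the check runs before the tool executes, it holds even if a prompt-injection attempt convinces the model to request a dangerous action, which is why improved model-level resistance complements rather than replaces it.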
Teams using GitHub Copilot should also check availability. GitHub says Claude Opus 4.7 is available for Copilot Pro+, Business, and Enterprise users, with selection in VS Code, Visual Studio, Copilot CLI, GitHub Copilot Cloud Agent, JetBrains, Xcode, Eclipse, GitHub.com, and mobile apps. Rollout is gradual, and Business or Enterprise admins must enable the Opus 4.7 policy in Copilot settings.
The safest upgrade path is staged. Start with non-production coding tasks, bug-fix branches, code review, test generation, documentation updates, and controlled refactors. Track whether Opus 4.7 finishes more tasks with fewer corrections. Then test heavier agent workflows such as multi-file refactoring, migration scripts, dependency upgrades, CI failure repair, and security review. If it improves completion quality enough to justify any increase in token usage, move selected workflows first rather than replacing every model route at once.
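The staged path described above often reduces to a routing table: validated workflow types go to the new model, everything else stays put. A minimal sketch, assuming hypothetical workflow names and a placeholder ID for the incumbent model:

```python
# Illustrative staged-rollout router. Workflow names and the fallback
# model ID are assumptions; only claude-opus-4-7 comes from the article.

VALIDATED_WORKFLOWS = {"code_review", "test_generation", "doc_updates"}


def pick_model(workflow: str) -> str:
    """Route only workflows that passed evaluation to Opus 4.7."""
    if workflow in VALIDATED_WORKFLOWS:
        return "claude-opus-4-7"
    return "claude-opus-4-6"  # placeholder ID for the incumbent model
```

Growing VALIDATED_WORKFLOWS one entry at a time, as each workflow clears its metrics, is the "selected workflows first" approach in code form.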
For business readers, the decision is simple: Opus 4.7 is worth testing where model quality affects delivery speed, engineering review time, or agent reliability. It should be measured carefully where cost predictability matters, especially in high-volume agentic workflows.