Claude Opus 4.7 "Weaker" Debate: Anthropic Stops Guessing User Intent, Executes Strictly

“Feels Weaker”

On May 1, 2026, an interesting phenomenon emerged in the Chinese AI community: multiple users reported on social platforms that Claude Opus 4.7 “seems weaker.”

A typical observation:

I asked around, and many people feel Claude Opus 4.7 seems weaker. The more likely reason is that it no longer guesses what you want — it does exactly what you write.

This is not an isolated complaint: similar feedback is growing across Reddit, X/Twitter, and major AI communities.

But the key question is not whether model capability has declined; it is that Anthropic has changed Claude’s interaction philosophy.

From “Mind Reading” to “Instruction Following”

Looking at Claude’s iteration history, Anthropic has been adjusting a core balance point:

Proactive Help vs. Strict Compliance

In previous versions, Claude tended to:

  • Infer the user’s “true intent” (even when expression was unclear)
  • Proactively supplement content the user did not explicitly request
  • Give “benevolent interpretation” and optimization to instructions

The benefit of this strategy: users do not need to write perfect prompts; Claude will “think for you.”

But the costs are also obvious:

  • The user clearly requests A, but Claude delivers B (because it “thinks B is better”)
  • In coding scenarios, “proactive” modifications may introduce bugs
  • Users must spend time undoing Claude’s unrequested extras

Opus 4.7’s strategy shift is: I do what you write, I do what you say, I no longer make decisions for you.

This Is Not Regression, It Is a Shift in Alignment Philosophy

From a technical perspective, this involves changes to the RLHF objective function:

Old objective: Maximize “user satisfaction” — the model proactively supplements and optimizes

New objective: Maximize “instruction compliance” — the model strictly follows the user’s literal meaning
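The trade-off between the two objectives can be sketched in toy form. This is a hypothetical illustration, not Anthropic’s actual RLHF reward; the function names, weights, and task labels are all invented for the example.

```python
# Toy sketch of the two reward objectives described above.
# Neither function is a real training objective; they only show
# how the same output can score differently under each philosophy.

def satisfaction_reward(requested: set, delivered: set) -> float:
    """Old-style objective: tolerate and even reward helpful extras."""
    covered = len(requested & delivered) / len(requested)
    extras = len(delivered - requested)
    return covered + 0.1 * extras  # proactive additions raise the score

def compliance_reward(requested: set, delivered: set) -> float:
    """New-style objective: reward exact coverage, penalize anything unrequested."""
    covered = len(requested & delivered) / len(requested)
    extras = len(delivered - requested)
    return covered - 0.5 * extras  # unrequested changes lower the score

requested = {"fix_bug"}
delivered = {"fix_bug", "refactor_names", "add_tests"}
print(satisfaction_reward(requested, delivered))  # 1.2 (extras rewarded)
print(compliance_reward(requested, delivered))    # 0.0 (extras penalized)
```

The same “helpful” output that maximizes the old objective is exactly what the new objective punishes, which is why the model now feels more literal.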

Neither objective is absolutely better than the other; they differ only in which scenarios they suit:

Dimension | Mind-Reading Mode | Strict Mode
--- | --- | ---
Creative Writing | ✅ Good at supplementing and expanding | ❌ May be too conservative
Code Generation | ❌ May introduce unrequested modifications | ✅ Implements exactly as required
Data Analysis | ⚠️ May choose analysis methods the user does not need | ✅ Runs only the specified analysis
Translation | ✅ Good at free translation and context adaptation | ⚠️ May be too literal
Automation Workflows | ❌ May deviate from expected steps | ✅ Strictly executes each step

Why Anthropic Made This Shift

Several possible reasons:

1. Rise of Developer Users

As Claude Code becomes a core product, the proportion of developer users has increased significantly. What developers need most is predictability: code cannot be “approximately right,” it must be precise. For developers, “do what I say” matters far more than “do what you think is best.”

2. Agent Scenario Demands

In agent workflows, Claude is often a link in an automated process. “Taking initiative” here is catastrophic — downstream steps depend on precise input formats.
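The fragility described above is easy to demonstrate. In this minimal sketch, a downstream step expects an exact JSON shape, and a “helpfully” added field breaks it; the field names and pipeline are illustrative, not from any real agent framework.

```python
# Why "taking initiative" breaks agent pipelines: a downstream step
# that validates an exact JSON shape fails the moment the model adds
# an unrequested field. All names here are invented for illustration.
import json

EXPECTED_KEYS = {"order_id", "status"}

def downstream_step(raw: str) -> str:
    data = json.loads(raw)
    if set(data) != EXPECTED_KEYS:
        raise ValueError(f"unexpected fields: {sorted(set(data) ^ EXPECTED_KEYS)}")
    return f"order {data['order_id']} is {data['status']}"

strict_output = '{"order_id": 42, "status": "shipped"}'
helpful_output = '{"order_id": 42, "status": "shipped", "note": "I also checked the carrier!"}'

print(downstream_step(strict_output))  # works as expected
try:
    downstream_step(helpful_output)
except ValueError as e:
    print("pipeline failure:", e)
```

A model that “improves” the output by adding a note is, from the pipeline’s perspective, simply broken.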

3. Safety and Compliance Considerations

“Guessing user intent” poses risks at the safety level: the model may “benevolently” make decisions the user did not authorize. In enterprise and compliance scenarios, strict compliance is the safer choice.

4. Accumulation of User Feedback

Anthropic’s 81,000-user survey (released in late March) may have provided supporting data: large numbers of users reported wanting Claude to be more “obedient” rather than “smarter.”

How Users Should Adapt

For users accustomed to the “mind-reading” mode:

  • State your requirements more explicitly, e.g., “please proactively add any content you think is relevant”
  • For creative tasks, add an instruction such as “feel free to expand beyond my framework”
  • Understand that this is a strategy adjustment, not capability degradation
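The adaptation advice above can be made concrete as a prompt-building helper. This is a hedged sketch: the wording and the `allow_initiative` flag are illustrative choices, not an officially recommended prompt pattern.

```python
# Illustrative sketch: under a strict-compliance model, proactive
# behavior must be requested explicitly in the prompt rather than
# left for the model to infer. The template text is an assumption.

def build_prompt(task: str, allow_initiative: bool = False) -> str:
    prompt = task
    if allow_initiative:
        prompt += (
            "\n\nYou may proactively add related content you consider "
            "relevant, and expand beyond my framework where it helps."
        )
    else:
        prompt += "\n\nDo exactly what is written above; add nothing else."
    return prompt

print(build_prompt("Write an outline for a blog post about RLHF.", allow_initiative=True))
```

The point is simply that the old default (initiative) has become opt-in, so it now belongs in the prompt text.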

For developers:

  • This is good news for you — Claude Code behavior will be more predictable
  • Precise instructions will get precise execution

For agent workflows:

  • Strict mode is more suitable for automation scenarios
  • Reduces downstream errors caused by the AI changing formats on its own

Industry Perspective: This Is a Trend, Not an Isolated Case

Claude’s strategy shift is not an isolated event:

  • GPT-5.5: Also enhancing instruction compliance precision
  • Qwen 3.6: Optimized for agentic coding and tool use, essentially also requiring precise execution
  • Gemini: Google Cloud CEO Kurian hinted the new version will focus more on “controllability”

The entire industry is shifting from “the smarter the AI, the better” to “the more controllable the AI, the better.” This marks AI applications moving from the early-adopter phase to the production phase.

Verdict

Opus 4.7 has not become weaker; it just no longer thinks for you.

For users who need AI to assist with creativity and expand thinking, this may require an adaptation period.

But for users who need AI to precisely execute tasks (developers, agent workflows, enterprise automation), this is a clear improvement.

Anthropic’s choice is clear: in production scenarios, predictability is more important than surprises.