When people compare ChatGPT and Claude, I often hear this take: Claude is trained to “follow instructions,” while ChatGPT is trained to “be versatile” and generally helpful. That kind of matches the vibe in practice… but I keep running into something else.
Whenever I use Claude and ask it to do anything like querying a database or SSH-ing into a machine, it basically refuses. And it’s not like you can talk it into it — no matter how much you explain that it’s safe or legitimate, it still won’t.
My guess is this is mostly about compliance and security. AI providers really don’t want models to blindly execute risky actions, especially if there’s any chance of prompt injection or a hidden malicious instruction. So they’d rather have the default behavior be “no,” even if it’s annoying for power users.
And maybe that’s also why they push people toward structured tool integrations (like MCP-style setups): instead of the model directly doing something dangerous, you build an explicit tool layer with permissions and guardrails, and you take on the risk yourself.
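To make that concrete, here's a minimal sketch of what such a tool layer might look like. This is hypothetical illustration, not the real MCP SDK or any provider's actual API: the names (`ALLOWED_ACTIONS`, `run_tool`) are made up, and the point is just that the dangerous action sits behind an explicit allowlist you control, with "refuse" as the default.

```python
# Hypothetical sketch of a guarded tool layer: the model can only request
# actions; this layer decides whether they actually run. Refuse by default.

ALLOWED_ACTIONS = {"query_readonly"}  # explicit allowlist, maintained by you


def run_tool(action: str, payload: str) -> str:
    """Execute an action only if it is explicitly allowlisted."""
    if action not in ALLOWED_ACTIONS:
        # Default behavior is "no" -- same posture the model itself takes.
        return f"refused: {action!r} is not allowlisted"
    # Here you would dispatch to the real implementation
    # (read-only DB query, sandboxed command, etc.).
    return f"executed {action!r}"


print(run_tool("query_readonly", "SELECT 1"))   # allowed
print(run_tool("ssh_exec", "rm -rf /tmp/x"))    # refused
```

The design choice is the same one the providers seem to be making: the permission decision lives in deterministic code you wrote, not in the model's judgment, so a prompt-injected instruction can't talk its way past it.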