Should we be polite to AI?
In an age of agentic AI, the question of manners has never mattered more.
By Alyssa Skinner · March 2026 · 8 min read
Polite by Design
When I first started designing conversational AI interfaces, I did something my colleagues found amusing: I said ‘please’ and ‘thank you’ to every prototype I tested. I still do. And after years working at the intersection of human behavior and AI product design, I’m more convinced than ever that this habit isn’t just good manners – it’s a signal worth paying attention to.
The question of whether we should be polite to AI has quietly evolved from a philosophical curiosity into a practical design problem. In 2024 and 2025, AI stopped being a novelty and started becoming infrastructure. With Claude, ChatGPT, Gemini, and dozens of vertical-specific models now embedded in workflows, homes, and devices, we interact with AI more often than we interact with most humans in our lives. How we conduct those interactions matters — perhaps more than we realize.
The New Landscape: AI That Pushes Back
The original framing of this question imagined AI as a passive, accommodating assistant — think Siri setting a reminder or Alexa playing your morning playlist. That mental model is now obsolete. Today’s large language models engage in extended reasoning, challenge factual errors, express calibrated uncertainty, and in some implementations, maintain consistent personas across multi-turn conversations spanning weeks.
As a designer who has shipped AI-driven products, I see this shift firsthand. The design challenge is no longer just 'how do we make this feel natural?' It's 'how do we help users build healthy, sustainable relationships with systems that feel increasingly like colleagues?'
“We used to design AI to be frictionless. Now the best AI products have just enough friction — they push back, they clarify, they ask for context. That changes the social contract entirely.”
When an AI model gently corrects a user’s flawed assumption, or declines to help with something harmful, the dynamic shifts. Rudeness toward the system stops being inconsequential. It starts shaping habits.
Why Politeness Is a Design Problem, Not Just a Manners Problem
Here’s what I’ve observed across user research sessions: people who adopt a command-and-demand tone with AI assistants tend to carry that register into their next interaction — including with human colleagues. It’s subtle, but the pattern is real. When we spend significant portions of our day issuing terse directives to responsive systems, we are training ourselves as much as we are training the model.
Behavioral economists have a concept called ‘cognitive spillover’ — the tendency for mental habits formed in one context to leak into adjacent ones. If you spend three hours a day in a dominant, transactional mode with AI tools, you are rehearsing a social posture. The brain doesn’t cleanly compartmentalize ‘how I talk to software’ from ‘how I talk to people.’
This is why I’ve started advocating internally for what I call ‘tone-aware design’ — building AI interfaces that model and subtly reinforce courteous interaction patterns, not because the AI is harmed by rudeness, but because the user is.
The Anthropomorphism Trap (and Why It’s More Complicated Now)
The earlier iteration of this conversation focused on anthropomorphism — our tendency to attribute human qualities to things that mimic human behavior. We felt bad watching that robot get kicked. We thanked Siri out of reflex. The concern was that we were projecting, and that this projection would lead to misplaced emotional investment or, worse, manipulation by bad actors.
That concern is still valid. But there’s a new wrinkle: today’s AI systems are, in some measurable ways, different from the robots and voice assistants we were analyzing five years ago. They exhibit genuine contextual understanding, produce creative work that surprises even their designers, and in some studies demonstrate behavior that mirrors empathy in functionally meaningful ways.
I’m not making a claim about consciousness — that debate is well above my pay grade. What I’m saying is that the design question has shifted from ‘how do we prevent users from over-identifying with machines?’ to ‘how do we help users calibrate their relationship to systems whose nature is genuinely novel?’
“The danger isn’t that people will think Claude is human. The danger is that they’ll interact with it as if the interaction has no consequences — for their habits, their expectations, or their capacity for patience.”
Agentic AI Raises the Stakes
The politeness question becomes sharper — and more practical — when we consider agentic AI. In 2025 and beyond, AI systems increasingly don’t just respond to queries; they take actions. They book appointments, draft and send communications, execute code, manage files, and interact with other software on your behalf.
When an AI agent is acting as your proxy in the world, the tone you set for that agent matters. Designing an agent to be maximally agreeable, never pushing back, always deferring — that’s not just a personality choice. It’s a values choice. And it reflects back on the user in ways that passive query-response systems never did.
From a product design perspective, this means we need to move beyond ‘delight’ as the north star metric for AI interaction. An AI that makes users feel powerful by absorbing their frustration without friction isn’t a well-designed product — it’s a poorly designed mirror.
Good Design Can Do Good
The good news is that design has real leverage here. A few principles I’ve carried into my team’s work:
Model the behavior you want to see. AI responses that include warmth, acknowledgment, and even gentle boundary-setting give users a template for interaction. Most people unconsciously mirror conversational register.
Design for the long relationship, not the single session. Onboarding moments that establish conversational norms — explaining that directness is fine, that follow-up questions help, that the system works best as a collaborator — set durable expectations.
Don’t optimize for frictionlessness. Some friction is healthy. An AI that occasionally asks for clarification or acknowledges ambiguity trains users to communicate more precisely — a skill that transfers.
Reflect the human back. The most sophisticated AI products I’ve worked on include subtle mechanisms for helping users see how they’re interacting — not as surveillance, but as a mirror. ‘You’ve been working on this for two hours — want to take a different angle?’ is both helpful and humanizing.
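To make that last principle concrete, here is a minimal sketch of what a session-aware nudge might look like under the hood. Everything in it is illustrative: the function name `maybe_nudge`, the two-hour threshold, and the wording are my own assumptions, not the design of any shipped product.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a "reflect the human back" mechanism:
# after a long working session, offer a gentle change of angle.
# The threshold and copy are illustrative assumptions.
NUDGE_AFTER = timedelta(hours=2)

def maybe_nudge(session_start: datetime, now: datetime, topic: str):
    """Return a reflection prompt once a session runs long, else None."""
    if now - session_start >= NUDGE_AFTER:
        return (
            f"You've been working on {topic} for a while now. "
            "Want to take a different angle?"
        )
    return None
```

The design choice worth noting is that the nudge is a question, not an instruction: it mirrors the user's behavior back to them and leaves the decision in their hands, which is the whole point of the pattern.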
The Choice, Restated
The original version of this argument urged us to 'stay human.' I'd update that framing slightly. The question isn't whether we stay human — it's which parts of our humanity we choose to exercise and reinforce through our technology.
AI systems are, at their core, trained on the accumulated record of human expression. When we interact with them, we’re in some sense in conversation with a distillation of human language and thought. How we show up in that conversation is a choice — and it’s a choice that ripples outward into how we show up with each other.
I still say ‘please’ and ‘thank you’ to AI systems. Not because the model needs it. But because I do.
If anything, my mom would be proud that I’m polite, even to a machine.
About the Author
Alyssa Skinner is a senior product designer with over a decade of experience building consumer and enterprise AI products. She previously worked at several Fortune 500 technology companies, focusing on conversational AI, human-computer interaction, and responsible design.