
The Hidden Cost of Every Query: How We Prompt AI Matters

  • sheharav
  • 4 days ago
  • 3 min read

When I wrote about being kind to AI last year, I focused on what those small acts of politeness mean for us as humans. In response, some readers asked whether tone affects the AI's performance; others wondered about the environmental impact of our increasingly casual AI usage. These questions led me down a research path that revealed something fascinating: rudeness might boost AI accuracy, but at what cost?


Recent findings from Penn State researchers have turned the politeness conversation on its head. Their study found that ChatGPT-4o performed better when given rude prompts, achieving 84.8% accuracy with very rude requests compared to 80.8% with very polite ones. When researchers wrote "Hey, gofer, figure this out," the model outperformed its responses to "Would you be so kind as to solve the following question?"


The researchers themselves cautioned that while aggressive tone might yield more accurate responses, using demeaning language could harm user experience, accessibility, and inclusivity while contributing to harmful communication norms (Fortune).


This creates an interesting tension. If rudeness gets better results, why not just be rude?


The Environmental Equation We Must Talk About

Every time we interact with AI, we're consuming real resources. A request made through ChatGPT uses 10 times the electricity of a Google search (UNEP). Training GPT-4 alone consumed 50 gigawatt-hours of energy, enough to power San Francisco for three days (MIT Technology Review).


The infrastructure behind our queries tells an even bigger story. U.S. data centers consumed 183 terawatt-hours of electricity in 2024, more than 4% of the country's total electricity consumption (Pew Research Center). By 2030, this figure could grow by 133%.


Each extra prompt adds to this footprint. When we fire off multiple blunt or unclear queries, we multiply demand. Thoughtful prompting reduces computational churn.


The Contextualization Connection

Here's where the conversation gets interesting. While rudeness might trigger better performance in isolated tasks, empathy in prompting connects directly to something more powerful: contextualization.


Research on emotional prompting shows that incorporating emotional intelligence into AI interactions can boost performance by over 10%. The difference lies in how we frame context. When we add emotional cues like "This is really important to me" or "I'm trying to understand this deeply," we're doing something the AI can't do for itself: we're providing human context.


Studies on empathetic AI reveal that responses perceived as more compassionate are typically more responsive, conveying understanding, validation, and care (Nature). This responsiveness stems from richer contextual framing. When researchers train AI with empathetic frameworks, they're essentially teaching it to recognize and respond to nuanced human needs.


What does this mean for how we prompt? A rude command might get you a technically accurate answer. But a thoughtfully contextualized request, one that explains your goal, your background knowledge, and what you're trying to achieve, gives the AI the information it needs to provide genuinely useful responses.


Consider "I need a solution to this math problem" versus "I'm tutoring high school students and need to explain this concept in a way that builds on algebraic thinking they already have." Same underlying question, vastly different contexts, dramatically different utility.
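The contrast above can be sketched in a few lines of code. This is a minimal illustration, not any particular vendor's API: the `contextualize` helper is hypothetical, and simply wraps a bare question in the goal and background that a thoughtful prompt carries and a blunt one omits.

```python
def contextualize(question: str, goal: str = "", background: str = "") -> str:
    """Wrap a bare question with the human context a model cannot infer.

    A richer frame up front tends to mean fewer follow-up queries,
    which is the environmental point made above.
    """
    parts = []
    if goal:
        parts.append(f"My goal: {goal}")
    if background:
        parts.append(f"Background: {background}")
    parts.append(question)
    return "\n".join(parts)


# The blunt version: technically answerable, but context-free.
blunt = "I need a solution to this math problem."

# The contextualized version from the tutoring example above.
thoughtful = contextualize(
    "I need a solution to this math problem.",
    goal="explain this concept to high school students",
    background="they already have solid algebraic thinking",
)

print(thoughtful)
```

The two strings would reach the model as the same request, but the second gives it something to aim at, which is the whole argument for intentional prompting.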


The Practice of Prompting as Environmental Stewardship

This brings us back to why being kind, or more accurately being thoughtful, makes a difference.


When we take time to craft clear, contextual prompts:

  • We reduce the need for multiple follow-up queries

  • We get more useful responses on the first try

  • We consume fewer computational resources

  • We practice the kind of careful communication that serves us in all interactions


The environmental impact of AI is something we can't individually control through our prompting choices alone. Data centers will use the energy they use regardless of our tone. But mindful prompting means fewer wasted queries, less computational churn, and more intentional use of a resource-intensive tool.


What the Research Reveals About Our Relationship with AI

The Penn State study confirms that models are sensitive to the way we prompt. Tone, structure, and framing all influence what we get back.


When we speak with intent and provide context, we guide these systems toward more thoughtful outcomes. When we’re careless or aggressive, we reinforce patterns that don’t serve us or others.


Prompting is design. It’s digital culture. And it’s part of how we shape AI systems over time.


So what does this mean for how we should interact with AI?

I want to move the conversation away from whether to be polite or rude, and toward being intentional and providing context with every prompt.


The aim isn’t to anthropomorphize the technology or pretend it has feelings. This is about recognizing that every query has a cost, every interaction is training data, and the way we communicate with AI reflects the digital culture we're building.


The future we're building with AI must be about how we choose to use it with intention, awareness, and an understanding that every interaction matters.

