Are you copying and pasting prompts from the internet only to get mediocre results? Do you feel like AI tools are powerful but you're not getting the consistent outputs you need?
In this article, you'll discover a strategic framework for getting reliable results from AI models. You'll learn why most prompting advice fails, how to prime AI with the right context, and a systematic approach to refining your prompts for better outcomes every time.
This article was co-created by Jordan Wilson and Michael Stelzner. For more about Jordan, scroll to the end of this article.
Why “Magic Prompts” Don't Deliver Consistent Results From AI
The internet is filled with supposed magic prompts that promise perfect AI outputs. Jordan Wilson sees this misconception constantly in his work helping companies adopt AI tools.
The fundamental problem is context. When someone shares a prompt that generated great results for them, they're only sharing the surface-level instructions. They're not sharing their conversation history, their accumulated context, or the back-and-forth refinement that led to that prompt actually working.
Jordan compares this misuse of AI to buying a Ferrari just to use it as an umbrella to shield yourself from the rain. You're technically using the tool, but you're completely missing what makes it powerful. If all you needed was protection from the rain, an umbrella would have done the job.
The Upside of Strategic Prompting
When prompting is done strategically, you get three critical benefits.
First is consistency. When you build context properly, you can reliably reproduce results. This matters for businesses that need to maintain brand voice, follow specific processes, or deliver predictable outputs.
Second is efficiency that actually scales. Most people experience a short-term boost with AI, then hit a wall when they try complex work. Strategic prompting eliminates this ceiling. Once you've properly primed a model with context, you can handle increasingly sophisticated tasks without having to start from scratch each time.
Third is better decision-making through collaboration. When you stop expecting perfect outputs on the first try and view prompting as iterative, you discover solutions you wouldn't have conceived alone.
What to Consider Before You Begin
The biggest mistake people make is treating ChatGPT, Claude, or Gemini as a smarter version of Google search. Jordan points out that this is the absolute worst way to use these tools because you're just trying to get a quick answer rather than engaging in the collaborative process these models are designed for.
He suggests thinking about AI more like working with a consultant from a Big Four consulting firm on a project. You'd have a conversation, train them on your specific context, give them opportunities to ask questions, and iterate along the way.
The other critical consideration is using the right model for the task. Jordan emphasizes that different models have different strengths. Gemini Pro is ridiculously good with images. Claude is superior at writing. Yet many people still use only ChatGPT because it was the first tool they learned, then blame the tool when it underperforms.
Jordan recommends spending time understanding which model best handles your specific workflows. You may even already have access to these tools through your workplace, since Microsoft Copilot or Gemini is often built into company systems.
The key mindset shift is to accept that great AI outputs require upfront investment. You're not saving time by rushing to an output. You're saving time by building reusable context that improves every subsequent interaction.
#1: Prime Your Model With Strategic Context
Priming is the foundation of Jordan's three-step framework called Prime Prompt Polish. This is where you teach the AI what it needs to know before you ever ask it to produce anything.
Start By Defining the Model's Role
You assign ChatGPT a role just like you would brief a consultant. Instead of just asking for advice, you tell the model what role it should play and what expertise it should draw from. That specificity completely changes how the model approaches subsequent questions.
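A role-assignment prompt along these lines might look like the following (the industry and expertise details are hypothetical placeholders, not from Jordan's examples):

Act as a senior content strategist with 10 years of experience marketing B2B software companies. Draw on your expertise in LinkedIn thought leadership and email nurture campaigns when responding to everything I share next.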
Share Detailed Context About Your Situation
Provide a comprehensive background about your industry, your target audience, your current challenges, and your goals. If you're creating content, explain your brand voice. If you're solving a business problem, describe your constraints.
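Put together, a context-sharing message might read something like this (the business details are invented for illustration):

I run a 12-person accounting firm that serves small e-commerce businesses. Our brand voice is plainspoken and practical, never salesy. Our biggest challenge is that prospects see us as interchangeable with cheaper bookkeeping software. My goal this quarter is content that positions us as advisors, not number-crunchers.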
The critical instruction Jordan always includes is telling the model not to produce an output yet. He tells the model he doesn't want the SWOT analysis yet, doesn't want the KPI dashboard, and doesn't want the financial breakdown yet. First, he asks ChatGPT to ask him every single question it has based on the context he's shared.
Critical Instruction — Read Carefully
Do NOT create the SWOT, dashboard, financial model, or strategy yet.
Your task right now is NOT to produce an output.
Instead, based solely on the context above and your expertise:
1: Identify every question you need answered in order to produce a high-quality, accurate, and useful result.
2: Ask those questions in a structured, grouped way (for example: Strategy, Operations, Financials, Market, Constraints, Metrics, Risks).
3: Do not make assumptions.
4: Do not suggest solutions.
5: Do not preview or outline the final deliverable.
When you’re finished, stop and wait for my responses.
I will answer your questions, and only after that will I explicitly ask you to produce the final output.
This prevents the AI from jumping ahead and instead creates a back-and-forth that builds a comprehensive understanding.
Provide Examples of What Good and Bad Look Like
Rather than describing in abstract terms what you want, show actual examples. If you want social media posts in your style, provide three posts you've written. If you're solving a customer service problem, share examples of complaints and ideal responses.
Jordan also recommends sharing examples of what you don't want. This helps the model understand boundaries. You might share a competitor's content and explain why their approach doesn't align with your brand.
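A prompt that pairs positive and negative examples might look like this (the bracketed placeholders stand in for your own material):

Here are three posts I've written that capture my voice: [paste posts]. Here is a competitor post whose tone I want to avoid: [paste post]. Before we go further, summarize the differences you notice between my style and theirs.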
Upload Reference Documents and Data
For complex projects, upload the actual documents rather than manually summarizing them. If you're working on content strategy, upload your style guide, past performance reports, and competitor analyses. If you're tackling a business challenge, upload market research or customer feedback surveys.
Jordan points out that this ensures accuracy and saves time. You're not relying on memory or verbal summaries. Choose documents directly relevant to the task at hand.
Describe Your Audience and Their Needs
Go beyond basic demographics. Share what your audience cares about, what problems they're trying to solve, what language they use, and what objections they raise. Describe their expertise level and what would make them engage versus scroll past.
Set Clear Expectations for Format and Scope
Explain what format the final deliverable should take without asking for it yet. This allows the model to frame its understanding with the end goal in mind. Also set expectations about what you don't need to prevent the model from wasting effort on approaches you'll reject.
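Stated before asking for anything, a format-and-scope expectation might sound like this (the deliverable described is hypothetical):

The final deliverable will be a one-page summary with three prioritized recommendations, each backed by one sentence of reasoning. I don't need implementation timelines, budget estimates, or alternative scenarios.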
#2: Prompt With Recall
After you've invested time in priming, Jordan's next step is the prompt phase. He recommends starting your actual prompt by asking the model to recall all the relevant information from your priming conversation.
The instruction might be:
Based on everything we've discussed about my business, my audience, and my goals, now please create a content calendar for the next month with the specifications I outlined.
Jordan explains that AI models sometimes lose track of context in long conversations. Explicitly asking the model to recall and confirm understanding before producing output helps ensure nothing gets overlooked.
This also works when you return to a conversation later. Starting your prompt with “Recall our previous discussion about my email marketing strategy” helps reactivate that context.
#3: Polish Through Iterative Feedback
The final phase is polishing, during which you refine the output with specific feedback. Jordan recommends using what he calls the good-bad-why method.
The Good-Bad-Why Feedback Method
When the model produces an output, analyze it in three parts: identify what's good and why, identify what's bad or missing and why, and most importantly, explain the underlying principles.
Jordan provides an example from his own work. He asked ChatGPT to act as a strategist and create a SWOT report for Everyday AI. One of the opportunities the model presented was that Jordan should do a certified course.
Jordan recognized this as good on the surface – he puts out free information and wants to educate people. But when he applied the good-bad-why method, he explained why this was actually bad given his specific context.
He pointed out that he had told ChatGPT about his relationships with people at Google, OpenAI, and Microsoft. He had also mentioned being a professor of AI at DePaul University. Given this context, Jordan asked: Shouldn't he create a course through a university collaboration using his contacts? And shouldn't he bring in previous podcast guests, since he only accepts two percent of guest pitches?
This level of feedback – input, output, good or bad, and why – teaches the model the underlying principles that define quality for your specific needs and circumstances.
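Applied to an output, good-bad-why feedback might read like this (the wording is invented to mirror Jordan's example):

Good: suggesting a course fits my goal of educating people, and it builds on the free content I already publish. Bad: a generic certified course ignores the context I gave you. Why: I told you about my university role and my industry contacts, so a stronger version of this opportunity would be a course built through a university partnership that features past podcast guests.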
Iterate Until It Meets Your Standards
Great outputs require multiple iterations. Jordan pushes back against the idea that AI should produce perfect results instantly.
He explains that even with excellent priming and clear prompts, the first output is often 70-80% of the way there. The polishing phase closes that gap.
Jordan recommends planning for at least two or three rounds of refinement on important outputs. The first iteration gets the general approach right. The second refines details and adjusts tone. The third fine-tunes specific elements.
This might sound time-consuming, but it's dramatically faster than creating content from scratch. More importantly, each iteration improves the model's understanding of your standards, so future outputs start closer to the mark.
Maintaining Your AI Systems Over Time
Jordan emphasizes that effective prompting isn't a one-time setup. If you're using AI tools regularly for business, you need ongoing maintenance.
He recommends reviewing your AI workflows at least weekly if you're part of a team. This means examining what prompts and processes are working well, what's breaking down, and what needs adjustment.
AI models change over time. Companies like OpenAI and Anthropic regularly update their models and modify underlying behavior. What worked perfectly last month might need tweaking after a model update.
Jordan suggests keeping offline or version-controlled copies of your most important prompts and instructions. For teams using custom GPTs or AI assistants, have someone review the chain of thought at least weekly to ensure the model is following the intended workflow.
He also points out that you should document when the model does something particularly good, even if it wasn't explicitly in your instructions. Add that to your instructions to ensure it continues.
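For instance, if the model spontaneously adds something you like, you can lock it in with an instruction along these lines (the behavior described here is a hypothetical example):

I noticed you opened the report with a one-sentence executive summary. I like that. From now on, begin every deliverable with a one-sentence executive summary.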
Jordan acknowledges that this maintenance might sound complicated, but frames it as necessary to sustain the productivity gains AI provides. The benefits of AI allow you to be almost superhuman in many tasks, but retaining that enhancement requires regular attention.
Jordan Wilson is an AI strategist who helps companies adopt AI tools effectively and is the founder of Everyday AI, a media company and consultancy. He hosts the daily Everyday AI podcast. Follow him on LinkedIn.
Other Notes From This Episode
Connect with Michael Stelzner @Stelzner on Facebook and @Mike_Stelzner on X.
Watch this interview and other exclusive content from Social Media Examiner on YouTube.
Listen to the Podcast Now
This article is sourced from the AI Explored podcast. Listen or subscribe below.
Where to subscribe: Apple Podcasts | Spotify | YouTube Music | YouTube | Amazon Music | RSS
If you enjoyed this episode of the AI Explored podcast, please head over to Apple Podcasts, leave a rating, write a review, and subscribe.