Have you ever felt like your AI assistant didn’t really understand you? That despite your efforts, the answers seemed off the mark or too vague to be useful? If so, this guide is for you…
As generative artificial intelligence becomes integrated into our professional tools and daily lives, a new skill is becoming not only useful but absolutely essential: prompt engineering. Crafting a clear, targeted, and well-structured request can make all the difference between a vague answer and a truly impactful result. Yet few people have mastered this skill. The quality of the responses depends almost entirely on the precision of the prompt. So, having access to artificial intelligence is no longer enough—you must learn to communicate with it effectively. That’s where these ten practical tips come in.
Be clear and precise
This is probably the most common mistake: thinking that the AI will “guess” what we mean. However, even the most sophisticated models like our Expert Chatbots, ChatGPT, or Gemini cannot read our minds. They only respond to what is explicitly written. If you ask “Give me a report,” expect something generic. On the other hand, if you specify “Write a one-page report on the effects of remote work on marketing team productivity in Europe,” you’ll get a much more focused answer.
Precision also means avoiding ambiguous or overly broad requests (like “tell me about artificial intelligence”). It’s better to frame the topic with a clear objective. The so-called “contextual” approach consists of providing relevant details to guide the AI toward a response that fits the actual need.
For example, if you want to rephrase a text, avoid an overly simplistic request like “rephrase the following text: …” You’ll often get a version that’s almost identical, with only a few synonyms changed. If you want a more casual tone, clearer statements, or a version adapted to a specific audience, say so. The more clearly your request expresses your goal, the more relevant the result will be.
And be careful: being precise doesn’t mean being wordy. Too much unnecessary information can dilute your main message. The right balance often comes after a few rounds of trial and error, progressively refining the wording until you get exactly what you’re looking for.
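To make this concrete, here is a minimal Python sketch (the function and field names are invented for illustration) showing how a vague one-liner can be rebuilt from explicit components:

```python
def build_prompt(task, topic, scope=None, length=None):
    """Compose a precise prompt from explicit components instead of a vague one-liner."""
    parts = [f"{task} on {topic}."]
    if scope:
        parts.append(f"Scope: {scope}.")
    if length:
        parts.append(f"Length: {length}.")
    return " ".join(parts)

# Vague version: "Give me a report" -> generic output.
# Precise version: every expectation is spelled out.
precise = build_prompt(
    "Write a report",
    "the effects of remote work on marketing team productivity",
    scope="Europe",
    length="one page",
)
print(precise)
```

Each optional field you fill in is one less thing the model has to guess.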
Give concrete examples
A good example is sometimes worth more than a long explanation. In prompt engineering, showing what you expect helps the AI better understand your intent—especially when using “few-shot” or “one-shot” techniques. For example, if you want ChatGPT (or any other AI chatbot) to write in a humorous tone, give it a representative sample instead of just saying “be funny.”
This is also a very practical way to establish a model or structure that the AI will then replicate. Giving two or three well-chosen examples often yields much more consistent results than long abstract instructions.
Of course, it depends on the task: for data classification or code generation, examples need to be functional. For creative writing, they should reflect the desired style. In all cases, it’s a winning strategy that drastically improves the quality of responses.
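A few-shot prompt is, at its core, just a carefully formatted string. Here is a small sketch of a builder that assembles input/output pairs before the new query (the helper name and examples are made up for illustration):

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt: instruction, then input/output example pairs, then the new input."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # Leave the final "Output:" empty so the model completes it.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("The meeting is at 3 pm.", "Heads up: we gather at 3, snacks strongly encouraged."),
    ("The report is due Friday.", "Friendly reminder: the report would love to exist by Friday!"),
]
prompt = few_shot_prompt(
    "Rewrite each sentence in a light, humorous tone.",
    examples,
    "The server is down again.",
)
print(prompt)
```

Two or three well-chosen pairs are usually enough to lock in the pattern.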
Clearly define the expected format
Do you want bullet points? A comparison table? A structured paragraph in three parts? Or a concise one-sentence response? Too often overlooked, format has a huge impact on the clarity and usefulness of the final result. Sometimes, simply adding “respond in table format” is enough to turn a dense wall of text into a clear and directly usable presentation.
The prompting white paper published by Google in February 2025 strongly emphasizes this point: explicitly specifying the expected length or structure (e.g., “summarize this in under 100 words”) not only leads to better-calibrated responses but also… answers that can be used immediately without editing.
And if you work with tools like the Gemini API or OpenAI, also consider technical formats (JSON, basic HTML tags…) that facilitate integration into your digital projects or automated reports.
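One practical trick, sketched below with an invented helper name, is to embed a concrete JSON example directly in the prompt so the model mirrors its structure:

```python
import json

def json_format_prompt(task, schema_example):
    """Embed a concrete JSON example so the model mirrors its structure in the response."""
    return (
        f"{task}\n"
        "Respond ONLY with valid JSON matching this structure:\n"
        f"{json.dumps(schema_example, indent=2)}"
    )

prompt = json_format_prompt(
    "List the three main risks of the project.",
    {"risks": [{"title": "...", "severity": "low|medium|high"}]},
)
print(prompt)
```

The model's reply can then be parsed with `json.loads` and fed straight into a report or automation, with no manual cleanup.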
Use role prompting
Asking the AI to “act as” an expert dramatically changes its stance and responses. This is called role prompting—a simple but extremely effective technique that involves assigning a persona or profession to the model: “you are a labor law attorney,” “answer as if you were a biology teacher.”
This method not only adjusts the tone and vocabulary used but can also trigger specific cognitive behaviors in some models (like ChatGPT or Anthropic’s Claude), such as more technical explanations for an engineer.
But be careful: it works well when it’s credible and consistent with the task at hand. There’s no point in asking the AI to be a medieval poet if you’re expecting a financial analysis… unless that’s precisely your creative goal!
Obviously, reassigning a specific role to the chatbot in every new session can quickly become tedious. To spare you that burden, the prompt engineers at WAIZZ have designed a suite of expert chatbots, specifically trained and optimized for particular domains. These are directly accessible via the WAIZZ web application, and many of them are completely free and unlimited with our “Nova” model.
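If you work through an API rather than a chat interface, chat-style APIs (such as OpenAI's or Anthropic's) accept a list of messages, and the persona naturally belongs in the system message. A minimal sketch, with an invented helper name:

```python
def role_messages(persona, user_request):
    """Build a chat-style message list where the system message sets the persona once."""
    return [
        {"role": "system",
         "content": f"You are {persona}. Stay in character and use the vocabulary of your field."},
        {"role": "user", "content": user_request},
    ]

messages = role_messages(
    "a labor law attorney",
    "Can my employer refuse my request for remote work?",
)
print(messages)
```

Set once, the system message shapes every reply in the session without repeating the role in each question.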
Add useful context
A common mistake is forgetting that the AI doesn’t have access to your immediate environment or the documents you refer to—unless you include them in your prompt. Contextual prompting aims to fill this gap by adding necessary elements: project goals, target audience, time constraints…
For example, instead of saying “write me an email,” say “write me a professional email to convince my supervisor to approve three additional days to finalize our internal audit.” Again, this is the core logic of prompt engineering: crafting prompts that guide.
Guides published by the OpenAI Academy regularly remind us that without enough explicit context provided by the human user… even the best model remains blind to the subtleties of the issue at hand.
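A simple way to never forget the context is to fill in a fixed template. This sketch (the template and its fields are invented for illustration) reuses the email example above:

```python
# A fixed template forces you to state the context the model cannot guess.
CONTEXT_TEMPLATE = (
    "Task: {task}\n"
    "Audience: {audience}\n"
    "Goal: {goal}\n"
    "Constraints: {constraints}"
)

email_prompt = CONTEXT_TEMPLATE.format(
    task="Write a professional email",
    audience="my supervisor",
    goal="convince them to approve three additional days to finalize our internal audit",
    constraints="polite tone, under 150 words",
)
print(email_prompt)
```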
Experiment with different writing styles
Slightly changing the tone or rephrasing the request can produce surprisingly different results, even when the underlying question stays exactly the same. That’s why it’s highly recommended to try several stylistic variations until you find the one that “triggers” the desired response from the AI.
For example, you can compare a neutral version like “Explain how the Internet works” with a more engaging one like “Imagine you’re explaining the Internet to a curious child.” The content will be similar… but not identical in tone or accessibility.
This is where the creative side of prompt engineering comes into play:
- play with narrative styles (informative vs. persuasive);
- vary lexical registers (everyday language vs. technical);
- test different rhetorical structures.
In short, step outside the automatic mode to fine-tune according to your real needs.
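The variations above can be generated systematically from a set of templates, as in this small sketch (template names and wording are invented for illustration):

```python
# Each template casts the same topic in a different style.
STYLE_TEMPLATES = {
    "neutral": "Explain {topic}.",
    "child-friendly": "Imagine you're explaining {topic} to a curious child.",
    "persuasive": "Convince a skeptical reader why {topic} matters, in three points.",
}

def style_variants(topic):
    """Produce one prompt per registered style for the same topic."""
    return {name: t.format(topic=topic) for name, t in STYLE_TEMPLATES.items()}

for name, p in style_variants("how the Internet works").items():
    print(f"[{name}] {p}")
```

Running the same topic through each variant makes the tonal differences easy to compare side by side.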
Structure your reasoning (Chain of Thought)
Simply asking for a direct answer is not always enough, especially when trying to understand how the AI arrives at its conclusions. The Chain of Thought method encourages the model to break down its reasoning step by step (examples: “explain your approach,” “justify each choice”).
This is particularly useful in analytical or logic-based tasks, where it’s not just about getting the right answer but also the right intellectual process behind it—something you can later verify or adapt yourself.
This approach not only promotes transparency but also… often yields more reliable results thanks to the progressive breakdown of the problem.
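In practice, Chain of Thought often comes down to appending a standard reasoning instruction to the question, as in this sketch (the helper name is invented):

```python
def chain_of_thought(question):
    """Ask the model to expose its reasoning before committing to an answer."""
    return (
        f"{question}\n"
        "Think through this step by step: explain your approach, justify each choice, "
        "then give the final answer on its own line, prefixed with 'Answer:'."
    )

cot_prompt = chain_of_thought(
    "A project has three phases of two weeks each. If it starts on March 1, when does it end?"
)
print(cot_prompt)
```

The fixed 'Answer:' prefix also makes the final result easy to extract automatically from the reasoning that precedes it.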
Test multiple prompts in parallel
There isn’t ONE right way to ask a question: there are often several possible variations depending on the level of detail you want or the type of information you expect in return. Hence the value of comparing two different prompts for the same need side by side.
This also applies to different formulations addressed to the same tool: test six slightly different versions and objectively compare their performance (clarity, relevance, originality…). It’s tedious at first but incredibly educational over time.
And once you start to recognize which types of formulations work best for which use case or model… that’s when you really start to level up as a skilled prompt engineer.
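The comparison loop itself is easy to automate. In this sketch, `ask` is a stand-in for your real model call (here replaced by a toy function so the example runs offline), and the scoring criteria are invented placeholders:

```python
def compare_prompts(prompts, ask, criteria):
    """Run each prompt variant through the same model call and score the responses side by side."""
    results = []
    for p in prompts:
        response = ask(p)
        scores = {name: fn(response) for name, fn in criteria.items()}
        results.append({"prompt": p, "response": response, "scores": scores})
    return results

# Toy stand-in for a real model call, so the sketch runs without an API key:
fake_ask = lambda p: f"Simulated answer to: {p}"

criteria = {
    "length": len,
    "mentions_table": lambda r: "table" in r.lower(),
}
out = compare_prompts(
    ["Summarize the audit findings.", "Summarize the audit findings in a table."],
    fake_ask,
    criteria,
)
print(out)
```

Swap `fake_ask` for your real API wrapper and richer criteria (clarity, relevance, originality…) to turn informal trial and error into a repeatable benchmark.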
Adopt an iterative approach
No one gets the perfect prompt on the first try—and that’s perfectly normal. The process relies on gradual adjustments where each attempt teaches you something about what works… or not. Changing a word here, adding an example, rephrasing the request elsewhere—that’s how you gradually refine your mastery. Each interaction should be seen as a learning opportunity, not a final answer set in stone by the machine.
And above all: keep track of effective prompts you’ve used before (saved in a Word file, for example). This allows you to either reuse them as-is or quickly adapt them to new situations you encounter.
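A Word file works, but a small structured library is easier to search and reuse. Here is a sketch (the filename and helper are invented for illustration) that stores proven prompts as JSON:

```python
import json
from pathlib import Path

LIBRARY = Path("prompt_library.json")  # hypothetical filename

def save_prompt(name, prompt, tags=()):
    """Append a proven prompt to a reusable JSON library."""
    data = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    data[name] = {"prompt": prompt, "tags": list(tags)}
    LIBRARY.write_text(json.dumps(data, indent=2))

save_prompt(
    "audit-email",
    "Write a professional email to convince my supervisor to approve "
    "three additional days to finalize our internal audit.",
    tags=["email", "work"],
)
```

Tags make it possible to pull up every email prompt, or every work prompt, in one lookup when a similar situation comes around.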
Stay curious and informed
The models powering chatbots evolve rapidly, their capabilities expand almost monthly—so your methods must evolve too. A prompt that worked yesterday may become obsolete tomorrow if a new parameter is introduced server-side (or a bug is fixed).
That’s why actively following official updates for the model you’re using has become almost mandatory to stay relevant in your daily use.
Finally… never hesitate to explore open community resources like GitHub Awesome Prompts, which are full of collectively tested prompt examples—great for learning or simply saving time!
Ultimately, mastering the art of prompt engineering is less about memorizing a magic formula and more about gradually developing your own critical eye toward today’s conversational AIs—their internal logic as well as their real-world limitations.