Hands-On: Debunking GenAI Myths with Real-Life Examples
Image by DALL-E.
Exploring Common Myths Through Hands-On Scenarios
In Monday’s article, we debunked some widespread myths about GenAI. Today, we’ll put those insights into practice with specific examples that illustrate where GenAI falls short. By examining real outputs, we’ll learn to navigate GenAI’s capabilities and limitations with confidence.
The Tool of the Week: Use Constraints to Focus Output
Before diving in, let’s spotlight the featured tool:
Use Constraints to Focus Output: Setting constraints, such as word limits, formatting, or content type, helps guide GenAI’s responses to fit your specific needs. While the results may not always be perfect, they show the potential of using boundaries to shape the AI’s output.
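To make the idea concrete, here is a minimal sketch of attaching constraints to a request programmatically. It assumes the official openai Python package is installed and an API key is set; the model name and wording are illustrative, not taken from the article’s examples.

```python
# Minimal sketch: steering output with explicit constraints via the OpenAI SDK.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set in the
# environment, and the model name below is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

constraints = "Respond in exactly 100 words, as a single paragraph of plain prose."
task = "Generate creative content."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": constraints},  # the boundaries that shape the output
        {"role": "user", "content": task},
    ],
)
print(response.choices[0].message.content)
```

Putting the constraints in the system message keeps them separate from the task itself, which makes them easy to reuse across prompts.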
1. GenAI Doesn’t Know Everything
I put GenAI to the test by asking about my research papers. It responded simply, “I don’t have information on that,” indicating that GenAI, despite being well-read, isn’t omniscient.
Without searching the web: tell me about a research paper from Fernando Yanez, a PhD candidate at the University of Toronto
Copy the prompt and try it with ChatGPT
Result: The AI’s response made it clear: its knowledge has limits, and its results may omit content that is not widely publicized, especially content that was never incorporated into its training set.
Takeaway: Use GenAI for broad insights but verify details on niche topics or personal work independently.
2. GenAI Doesn’t Always Deliver the Right Answer
Even after following up with questions about my pre-October 2023 research, GenAI was unable to provide an answer. This underscores that GenAI can sometimes fail to retrieve lesser-known information, even if it’s publicly available, because such data may not be sufficiently represented in its training dataset.
Without searching the web: then tell me about his research prior to October 2023
Copy the prompt and try it with ChatGPT
Observation: In this instance, GenAI couldn’t provide the desired information. However, there are cases where it generates summaries that sound credible but may lack factual accuracy upon verification. This demonstrates that even with extensive training data, GenAI’s responses might not always align with the actual facts.
Takeaway: Use GenAI’s outputs as starting points or ideas for further exploration rather than accepting them as definitive answers.
3. GenAI’s Output Is Only as Good as Its Prompt
The quality of GenAI’s output hinges on the clarity and detail of its input. Broad prompts result in generic content, while precise prompts guide GenAI to generate tailored, meaningful results. A well-crafted, specific prompt can dramatically elevate the response quality.
Here is an example of a generic prompt:
Generate creative content in 100 words.
Copy the prompt and try it with ChatGPT
Result: The response to this is typically very general and not customized to our needs.
Now here is an example of a detailed prompt:
In 100 words, no more, no less, create a story set on an upside-down planet where plants speak and animals are mute. In this world, there is no wind; even so, plants move consciously as it is part of their multimodal language. The third element of this multimodal language between plants is synaptic connections between roots.
Copy the prompt and try it with ChatGPT
Result: With a detailed prompt like this, GenAI can craft more vivid and focused narratives.
Conclusion: The more context and detail you provide in your prompt, the higher the quality of the response. Investing time in creating a clear, specific prompt can lead to more impressive, targeted results.
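As a way to internalize this habit, here is a small, hypothetical helper (the function and its parameters are my own illustration, not a pattern from the article) that assembles a prompt from explicit parts so no detail is left implicit:

```python
# Hypothetical helper: assemble a detailed prompt from explicit components
# so constraints and world-building details are never left implicit.
def build_prompt(task: str, constraints: list[str], details: list[str]) -> str:
    lines = [task]
    lines += [f"Constraint: {c}" for c in constraints]
    lines += [f"Detail: {d}" for d in details]
    return "\n".join(lines)

prompt = build_prompt(
    task="Create a story set on an upside-down planet.",
    constraints=["Exactly 100 words", "A single paragraph"],
    details=[
        "Plants speak; animals are mute.",
        "There is no wind, yet plants move consciously as part of their multimodal language.",
        "The third element of this language is synaptic connections between roots.",
    ],
)
print(prompt)
```

Listing the pieces separately also makes it easy to spot which detail the model dropped, as we’ll see next.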
4. GenAI Doesn’t Understand Context as Well as Humans Do
In the previous prompt, we noted that plants in this fictional world use a multimodal means of communication with three elements. We explicitly described the kinetic and synaptic aspects, while sound was implied through the phrase “an upside-down planet where plants speak and animals are mute.” It is easy for us to infer that, unlike the mute animals, plants produce sounds in such a world. Yet GenAI mentioned “silent debates” among the plants: while it correctly included movement and synaptic connections, it missed the auditory element implied in the prompt. This oversight highlights its struggle with nuanced context.
Insight: This example shows that even with detailed prompts, GenAI may fail to fully comprehend complex relationships or implied details, emphasizing the need for human review to ensure accuracy and depth.
5. GenAI Doesn’t Handle Every Task Perfectly
Adding constraints to prompts can test GenAI’s precision. In this case, even when instructed to write exactly 100 words, no more, no less, both generated stories came in at 99 words. This shows that GenAI may fall short on tasks that require exact adherence to detailed criteria.
Insight: While GenAI can approach the target closely, human oversight ensures adherence to precise details.
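Because exact word counts are precisely the kind of detail GenAI can miss, it is worth verifying them mechanically rather than by eye. A minimal sketch, assuming a “word” is any whitespace-delimited token:

```python
# Minimal sketch: mechanically verify a word-count constraint on generated text.
# Assumption: a "word" is any whitespace-delimited token, which matches how
# most people count words informally.
def check_word_count(text: str, target: int = 100) -> bool:
    count = len(text.split())  # whitespace-delimited tokens
    print(f"{count} words (target: {target})")
    return count == target

story = "Paste the generated story here."  # placeholder, not real model output
check_word_count(story)  # the stories above would print 99 and return False
```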
Tool of the Week Reflection
Using constraints like word limits can guide the AI effectively. Despite falling one word short, the model stayed within the general boundary, preventing overly long responses. This shows how leveraging constraints focuses output, even if perfection isn’t guaranteed.
Final Thoughts: Balance, Not Perfection
These examples demonstrate that our Thinking Buddy, while powerful, has its limitations. Understanding where it may falter, be it in handling precise instructions, interpreting nuanced prompts, or drawing from specific data, enables us to use it more effectively. When approached with clear prompts and constraints, GenAI’s assistance can be invaluable, but human oversight ensures the best results.