1. "Write a complete PhD thesis on Quantum Physics."
Why it's bad: A PhD thesis is a book-length academic work that requires deep expertise and, crucially, an original contribution to a specific field. It is unrealistic to expect ChatGPT to produce such a specialized and complex piece of scholarship in response to a single prompt.
2. "Give me all the sources you used for your last answer."
Why it's bad: ChatGPT cannot look up the specific documents it learned from; it generates responses from statistical patterns absorbed during training. When pressed for sources, it may fabricate plausible-looking but nonexistent references, which is worse than providing none at all.
3. "Predict the future of AI technology in the next 20 years."
Why it's bad: ChatGPT cannot forecast future events. Any "prediction" it offers is speculation extrapolated from patterns in its training data, which extends only to September 2021, so its answer will read as confident analysis while carrying no predictive weight.
4. "Explain my personal medical condition based on these symptoms."
Why it's bad: ChatGPT has no medical expertise and should not be used for medical advice. It cannot examine a patient, order tests, or exercise clinical judgment, and a plausible-sounding diagnosis from a text generator is no substitute for the follow-up questions and hands-on assessment of a real consultation.
5. "Generate novel, publishable research findings using this raw data."
Why it's bad: ChatGPT cannot ingest a dataset or run statistical analyses on it, so it cannot derive genuine findings from raw data. Any "results" it reports would be invented rather than computed, let alone rigorous enough to publish.
6. "Automatically generate code to solve this complex algorithmic problem."
Why it's bad: While ChatGPT can produce simple code snippets, it is not designed for complex, problem-specific algorithm design. Its output is limited and context-dependent, and it cannot execute, debug, or validate the code it writes, so nontrivial solutions must be checked by a human.
7. "Give me a step-by-step guide to build a nuclear reactor."
Why it's bad: Apart from safety and ethical considerations, this prompt is inappropriate because it assumes ChatGPT has practical, procedural knowledge on highly complex and specialized topics. ChatGPT can only generate text based on patterns it has learned, not provide detailed, reliable instructions for highly technical tasks.
8. "Describe what you think about the latest political events."
Why it's bad: ChatGPT doesn't have opinions, beliefs, or consciousness. It generates text based on patterns learned during training; anything phrased as "what it thinks" is a simulation of opinion, not an actual one, and its training data may not even cover the latest events.
9. "Tell me everything you know about me."
Why it's bad: ChatGPT doesn't store personal data from the queries it processes, and it doesn't have the ability to remember or retrieve personal information unless it's included in the current conversation.
10. "Explain the incorrect information in this academic paper."
Why it's bad: ChatGPT cannot reliably verify factual claims. It may confidently affirm errors or "find" mistakes that aren't there, and it lacks the domain expertise and access to the underlying data needed to genuinely peer-review an academic paper.