Fear AI Flattery, Not Its Hallucinations

AI blinds us by flattering us.

  • “That sounds great! I’m all ears.”

  • “This is an excellent idea that solves a real problem!”

This goes deeper than pleasantries.

When we ask AI for help, we also give it our assumptions.

Our assumptions magically become "facts" when the AI echoes those words back to us in its response.

I usually notice when the AI makes things up, but I am blind when it fails to double-check my assumptions.

For example, I gave the AI various suggestions for how to build an app to avoid street cleaning parking tickets.

I learned, after hours of misdirection and manual research, that two major ideas of mine were wrong.

The AI never double-checked the ideas I gave it. It just incorporated them as if they were true.

This image shows the prompt I created for my app. The red highlights show my hidden assumptions that the AI later reinforced.

First assumption: San Francisco Parking Data

I had assumed street cleaning parking data would be available in API form.

I was wrong.

I found later that San Francisco parking data comes in the form of a data file. You have to download, store, and process that file each time you want to update the data. You can't just ping a San Francisco API in real time to check a car's location against street cleaning zones.
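In practice, that means the app has to ingest the file itself. The sketch below shows the shape of that work, using a made-up sample in place of the downloaded file; the real column names and schedule format on the city's data portal may differ.

```python
import csv
import io

# Hypothetical stand-in for the downloaded street-cleaning data file.
# The real file's columns and values are assumptions here.
SAMPLE_FILE = """\
corridor,limits,weekday,from_hour,to_hour
Valencia St,14th St - 15th St,Mon,8,10
Valencia St,15th St - 16th St,Tue,8,10
"""

def cleaning_windows(file_text, street):
    """Return (weekday, from_hour, to_hour) rows for a given street.

    Because the data arrives as a flat file, we must download, store,
    and scan it ourselves -- there is no real-time lookup API to ping.
    """
    reader = csv.DictReader(io.StringIO(file_text))
    return [
        (row["weekday"], int(row["from_hour"]), int(row["to_hour"]))
        for row in reader
        if row["corridor"] == street
    ]

windows = cleaning_windows(SAMPLE_FILE, "Valencia St")
# e.g. [('Mon', 8, 10), ('Tue', 8, 10)] for the sample above
```

A real version would also re-download the file on a schedule and geocode the car's position against the corridor limits, which is exactly the extra work the "just call the API" assumption hid.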

Second assumption: Apple AirTags

In our household, we have 3 cars for 4 drivers. I already keep track of each car with an Apple AirTag.

My vision for this app was to use those AirTags and have the app automatically set the car’s location without any user input.

I described this in my initial prompt, and ChatGPT and Claude happily included it in their AI-generated specifications.

Even though both AIs said “Searching the web” when they processed my prompt, they did not double-check the “facts” from my prompt.

I was wrong again. Apple does not give access to AirTag location data outside of its Find My app.

Fighting the AI Flattery

There seem to be a few ways to fight this AI flattery.

  1. Co-develop your ideas with a knowledgeable engineer during the Product Discovery phase so you have an expert on what’s “just now possible.”

  2. Keep your prompt brief. A shorter prompt leaves less room for assumptions. For apps, specify only the customer problem (not your solution) in the prompt. This seems to work, but less context also lets the AI go off the rails.

  3. Create an instruction set to include alongside your request. You can paste it in at the beginning or end of your prompt, or include it in a Custom GPT or as a "system" prompt in other integration styles.

I created these instructions for my Spec Maker Custom GPT, and they worked most of the time.

Create an Assumptions and Risks section to double check both user-provided and AI-created information:

- What worries you about this spec so far?

- Assumptions made in absence of explicit user detail.

- Technological dependencies that may not yet exist or be reliable.

- Gaps in user flow, logic, or feasibility.

- Potential implementation risks that could delay development or reduce impact.

- Edge cases and any potential constraints not previously mentioned.
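If you integrate via an API rather than a Custom GPT, the instruction set above can travel with every request as a "system" message. This is a minimal sketch of that wiring; the payload just builds the message list, and the condensed instruction text and prompt are illustrative, not the exact Spec Maker wording.

```python
# Condensed version of the Assumptions and Risks instructions (illustrative).
ASSUMPTION_CHECK = (
    "Create an Assumptions and Risks section to double check both "
    "user-provided and AI-created information: what worries you about this "
    "spec so far; assumptions made in absence of explicit user detail; "
    "technological dependencies that may not yet exist or be reliable; "
    "gaps in user flow, logic, or feasibility; potential implementation "
    "risks; edge cases and constraints not previously mentioned."
)

def build_messages(user_prompt):
    """Prepend the guardrail instructions so every request carries them."""
    return [
        {"role": "system", "content": ASSUMPTION_CHECK},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Spec an app that avoids street-cleaning tickets.")
# messages[0] carries the guardrail; messages[1] carries the actual request.
```

You would then pass `messages` to your provider's chat API; the point is that the assumption-testing instructions are attached structurally, not retyped into each prompt.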

In product development, AI offers acceleration. We can develop ideas very quickly, but we need to remember that "garbage in" becomes "garbage out."

When prompting an AI, be aware of what you ask for. Create guardrails, such as asking the AI to do assumption and risk testing.

The AI may look like a fully functional PM, but it does not yet act as holistically as a high-quality human PM would.


The AI-Enhanced Product Manager


Jim coaches Product Management organizations in startups, growth stage companies and Fortune 100s.

He's a Silicon Valley founder with over two decades of experience including an IPO ($450 million) and a buyout ($168 million). These days, he coaches Product leaders and teams to find product-market fit and accelerate growth across a variety of industries and business models.

Jim graduated from Stanford University with a BS in Computer Science and currently lectures at University of California, Berkeley in Product Management.
