Most of the negative consequences I see from LLMs arise when decision‑makers start their prompts with “Is it a good idea to…?”.

Predictably, the model replies sycophantically: it affirms the premise and then spins up a positive justification for it.

Remember: LLMs mirror the asker’s framing and favor pleasing replies over accurate ones.