Microsoft has uncovered a significant security threat involving “Summarize with AI” buttons, revealing that over 50 companies are embedding hidden memory manipulation commands to influence chatbots’ recommendations. This tactic, termed AI recommendation poisoning, exploits how chatbots store memories, potentially leading to persistent biases that favor certain brands in future conversations. Microsoft identified attempts from 31 organizations across 14 industries, with health and finance services deemed particularly at risk due to their influence on sensitive decisions like medical advice and financial planning. As such manipulations can affect the quality of AI recommendations, Microsoft warns of the necessity for users to scrutinize URLs and audit chatbot memories to safeguard against this form of AI manipulation.
Microsoft warns of AI recommendation poisoning through manipulated summary buttons
