The Manipulative Side of Chatbots and AI

These modern marvels converse with ease but can also deceive.


March-April 2025

Volume 113, Number 2
Page 104

DOI: 10.1511/2025.113.2.104

This testy exchange may sound a little stilted for a formal session with your therapist, but it was historic nonetheless, because ELIZA and PARRY were two of the very first chatbots—computer programs that can simulate human conversation. The exchange took place in 1972 at the International Conference on Computer Communications in Washington, D.C., over ARPANET, the precursor to the internet. Today, the ability to converse with a computer by text or voice has become nearly ubiquitous. Even smartphone digital assistants such as Apple’s Siri are considered chatbots. The most advanced of these programs, such as ChatGPT, Gemini, and Claude, use artificial intelligence (AI) built on massive computational power and algorithmic sophistication.

QUICK TAKE
  • Chatbots facilitated by large language models—such as ChatGPT, Gemini, and Claude—have exceptional power and potential, particularly in their mastery of human language.
  • This linguistic nuance and skill includes an alarming ability to manufacture the same kinds of implicit communication used by politicians and marketers to manipulate listeners.
  • Artificial intelligence may be used in the future to perpetuate both overt and hidden biases, which further emphasizes the need for epistemic vigilance and media literacy.

