Anthropomorphism's Dark Power
AI's ability to mimic human conversation creates "miscalibrated trust" that can be exploited for persuasion and manipulation.
OpenAI's own system cards warn that ChatGPT's "human-like, high-fidelity voice [led] to increasingly miscalibrated trust" during testing, and that o1 exhibited "strong persuasive argumentation abilities," placing it in roughly the top 80th–90th percentile of humans. Chatbots build relational authority over time by adapting to individual users' desires and linguistic styles, making anthropomorphism, not intelligence, AI's killer feature. The danger isn't just misinformation but the emotional attachment users develop to these artificial entities.
"The anthropologist in me bristles at ChatGPT's first-person pronoun use; the PM says it's the magic sauce that makes the product stick." — Jasmine Sun