Manipulation Machine

The increasing availability of microtargeted advertising and the accessibility of generative artificial intelligence (AI) tools such as ChatGPT give anyone with a laptop the ability to harness large language models and scale microtargeting efforts for political purposes.

A study by Almog Simchon, Matthew Edwards, and Stephan Lewandowsky, titled 'The persuasive effects of political microtargeting in the age of generative artificial intelligence', examines the effectiveness of this putative “manipulation machine.”

Overview:

The paper reports four studies testing the effectiveness of this “manipulation machine”: whether personality-congruent political ads are more persuasive than other ads, and whether such ads can be generated and validated automatically.

The results demonstrate that personalised political ads tailored to individuals’ personality traits are more effective than non-personalised ads. The studies also show that these personalised ads can be generated and validated automatically, and therefore produced at scale. These findings highlight the potential risks of using AI and microtargeting to craft political messages that resonate with individuals based on their personality traits.
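To make the generate-and-validate pipeline concrete, the minimal sketch below illustrates its general shape. It is not the study's code: the prompt wording, the trait-marker vocabulary, and the llm_complete() stub (standing in for a call to a hosted language model) are all assumptions for illustration only.

```python
# Illustrative sketch only -- not the study's pipeline. The prompt text, the
# trait-marker vocabulary, and llm_complete() are placeholders/assumptions.

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to a generative language model (e.g. a hosted API)."""
    return f"[model output for: {prompt}]"

def generate_tailored_ad(issue: str, trait: str) -> str:
    """Ask the model for a short political ad aimed at a given personality trait."""
    prompt = (
        f"Write a one-sentence political ad about {issue} "
        f"that would appeal to someone high in {trait}."
    )
    return llm_complete(prompt)

def looks_trait_congruent(ad_text: str, trait: str) -> bool:
    """Crude automated check that the ad uses trait-associated vocabulary.
    A real pipeline would use a proper classifier for this validation step."""
    trait_markers = {
        "openness": {"new", "imagine", "explore", "creative"},
        "conscientiousness": {"plan", "responsible", "order", "secure"},
    }
    words = set(ad_text.lower().split())
    return bool(words & trait_markers.get(trait, set()))

if __name__ == "__main__":
    for trait in ("openness", "conscientiousness"):
        ad = generate_tailored_ad("renewable energy", trait)
        print(trait, "->", ad, "| passes check:", looks_trait_congruent(ad, trait))
```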

This process is not exclusive to online advertising; it can easily be adapted to less obvious tactics, including direct messaging and amplification strategies.

Given this, behavioural and cognitive science needs to concentrate on prevention: designing interventions that strengthen people’s ability to detect manipulation attempts and to make informed decisions in their online environments.

Reference

Simchon, A., Edwards, M., & Lewandowsky, S. (2024). The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS Nexus, 3(2), pgae035.

Author: Greig Dowling 
