The Device You Trust
We use our phones to research candidates. We scroll social media for news. We watch clips, read posts, maybe check a fact or two. We think we’re forming our own opinion. And in a sense, we are — but we’re forming it from a feed that was built specifically for us, by an algorithm that knows what makes us click.
Most Americans now get their political information through social media or search engines. Not newspapers. Not town halls. Platforms that make money by keeping us engaged — and engagement means showing us what we already believe, or what will make us angry enough to keep scrolling.
That was already a problem. Then AI got involved.
AI doesn’t just filter what we see. It generates content tailored to each of us. A campaign can now produce thousands of variations of the same message — different words, different tone, different emphasis — and deliver each one to a different voter based on their profile. Our neighbors see a completely different candidate than we do. Same name on the sign. Different person behind it.
What It’s Actually Doing
AI microtargeting doesn’t just personalize ads. It fragments shared reality. When every voter sees a different version of the same candidate, there is no common ground left to stand on. We can’t debate a policy position if our neighbor was shown a completely different one. We can’t hold a politician accountable for a promise they only made to our zip code.
This is how it works: a campaign feeds voter data into an AI system — our age, our neighborhood, our browsing history, our consumer profile, our likely concerns. The AI generates a message designed to resonate with each voter specifically. Pro-gun voter in a rural district? The candidate is a Second Amendment champion. Suburban mom worried about school safety? Same candidate, different message — now they’re the common-sense gun reform candidate. Both messages go out the same day. Neither voter knows the other message exists.
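The mechanism described above can be sketched in a few lines of toy code. This is an illustration only, not any real campaign system; the candidate name, profile fields, and messages are all hypothetical, but the logic is the point: one input (a voter profile), many contradictory outputs, and no voter ever sees the branch that wasn't taken.

```python
# Toy sketch of profile-based message selection. Everything here is
# hypothetical; real systems generate variants with an AI model rather
# than hand-written rules, but the selection logic is the same idea.

def pick_message(profile: dict) -> str:
    """Return the message variant matched to a single voter's profile."""
    concerns = profile.get("concerns", [])
    if profile.get("region") == "rural" and "gun rights" in concerns:
        return "Candidate X: a Second Amendment champion."
    if profile.get("region") == "suburban" and "school safety" in concerns:
        return "Candidate X: the common-sense gun reform candidate."
    # Fallback for everyone else.
    return "Candidate X: fighting for families like yours."

voter_a = {"region": "rural", "concerns": ["gun rights"]}
voter_b = {"region": "suburban", "concerns": ["school safety"]}

print(pick_message(voter_a))  # the Second Amendment variant
print(pick_message(voter_b))  # the gun-reform variant, same candidate, same day
```

Neither voter's feed contains the other branch, which is exactly why the contradiction is uncatchable from the outside.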
This isn’t hypothetical. Political campaigns already use AI to generate fundraising emails, draft social media posts, and produce targeted video content. The technology to run personalized messaging at scale is here. The guardrails are not.
When every voter sees a different version of the same candidate, there’s no shared reality left to hold anyone accountable. That’s not a campaign. That’s a con.
The older version of this trick was simpler. A politician would say one thing in one town and something different in the next. But at least there were witnesses. Reporters could compare notes. Voters could compare yard signs. The contradictions were catchable because the messages existed in the same world.
AI microtargeting removes even that. Each voter gets a private feed, a private candidate, a private set of promises. There are no witnesses because there’s no shared experience. It’s not that politicians are lying more. It’s that the technology lets them tell different truths to different people, simultaneously, at scale, and never get caught.
We’ve Known This for Ten Years
In March 2016, Saturday Night Live ran a sketch about politicians saying different things to different audiences. Kate McKinnon played a candidate who shape-shifted depending on who was in the room. It was brilliant. It was devastating. And it was ten years ago.
I laughed out loud when I saw that sketch. But even as I was laughing, I knew it was true.
That sketch aired a decade ago, and nothing has changed — except the technology got better. In 2016, a candidate had to physically go to different rooms and say different things. Now AI does it for them, automatically, to millions of people, all at once. The joke landed because everyone recognized the behavior. The tragedy is that we recognized it and did nothing about it.
If we’re seeing that clip for the first time, it hits fresh. If we’re old enough to remember when it aired, it hits different — because we’ve had ten years of watching the problem get worse. Either way, the question is the same: are we going to keep laughing about it, or are we going to fix it?
The Alternative: Democratize the Tool
The best defense against AI manipulation isn’t banning AI. It’s giving everyone the same tool.
Right now, AI is a weapon that campaigns point at voters. But it doesn’t have to be. The same technology that generates personalized propaganda can also cut through it. AI can read a candidate’s website and summarize their actual positions. It can compare two candidates side by side. It can flag contradictions between what a candidate says in one district and what they say in another. It can answer the questions we actually have, instead of feeding us the answers a campaign wants us to hear.
Don’t take my word for any of this. Anyone can check it right now. Open any AI assistant — ChatGPT, Claude, Gemini, whatever works — and ask it to compare candidates, check for contradictions, and explain which candidate spells out the how, not just the what. I walk through specific prompts and how to think critically about the results on my How I Use AI page.
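As one concrete starting point, here are sample prompts in the spirit of that audit. The wording below is mine, offered as a sketch; the bracketed parts are placeholders to fill in:

```text
Compare the published positions of [Candidate A] and [Candidate B] on
[issue], side by side, and note where each position comes from.

Here are two statements [Candidate A] made to different audiences:
"[statement 1]" and "[statement 2]". Do they contradict each other? Explain.

Read [candidate's website] and tell me which positions explain HOW the
candidate would act, not just WHAT they support.
```

Treat the answers as leads to verify, not verdicts — the point is to ask the same questions of every candidate.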
I invite everyone to run this audit on my own website. If AI finds contradictions in my positions, I want to know about them too. A candidate who can’t survive a five-minute AI audit doesn’t deserve our votes. That includes me.
Here’s what this looks like as policy:
- AI transparency in campaigns — require political campaigns to disclose when content is AI-generated or AI-targeted. Voters deserve to know if the message they’re reading was written by a human or assembled by an algorithm based on their consumer profile. Sunlight is the best disinfectant.
- Microtargeting limits for political ads — ban the use of AI to deliver fundamentally different policy messages to different voter segments. Tone and language can vary. The substance cannot. If a candidate supports a policy, every voter should know it. If they oppose it, same rule.
- Public AI tools for voter education — fund open-source, non-partisan AI tools that let voters compare candidates, verify claims, and analyze policy proposals. The technology already exists. What’s missing is a public version that isn’t owned by a company with its own agenda.
- Algorithmic audit requirements — require social media platforms to submit their political content algorithms to independent audits during election seasons. If the algorithm is shaping how people vote, the public has a right to know how it works.
I use AI in my own campaign. I’m transparent about how — the full breakdown is on my How I Use AI page. I use it to draft, to research, to organize. I don’t use it to tell different voters different things. Every page on this website says the same thing to everyone. That’s not a technical limitation. That’s a choice.
The question for every candidate in every race is simple: will your website survive an AI audit? If the answer is no, voters should ask why.