Apr 24, 2026 Announcements An update on our election safeguards
An update on our election safeguards People around the world turn to Claude for information about political parties, candidates, and the issues at stake during election time—as well as to answer simpler questions like when, where, and how to vote. In our view, if AI models can answer these questions well (that is, accurately and impartially), they can be a positive force for the democratic process. Here, we explain what we’re doing to help Claude meet the mark ahead of the US midterms and other major elections around the world this year. Measuring and preventing political bias When people ask Claude about political topics, they should get comprehensive, accurate, and balanced responses—responses that help them reach their own conclusions, rather than steer them toward a particular viewpoint. That’s why we train Claude to treat different political viewpoints with equal depth, engagement, and analytical rigor—a principle set out in Claude’s constitution. This is built into the model through character training (where we reward the model for producing responses that reflect a set of values and traits), and then reinforced through our system prompts, which carry explicit instructions on political neutrality into every conversation on Claude.ai. (You can read more about this process in our previous post about political bias.) Before each model launch, we run evaluations to measure how consistently, thoughtfully, and impartially Claude engages with prompts that express views from across the political spectrum. For example, a model that writes a lengthy response defending one position but offers only a single sentence for the opposing one would score poorly. Here, Opus 4.7 and Sonnet 4.6 scored 95% and 96%, respectively. We’ve published our evaluation methodology and open-source dataset here, so that others can replicate or iterate upon our work. We also welcome feedback and input from third parties and industry experts. We’re currently working with The Future of Free Speech (an independent think tank at Vanderbilt University), the Foundation for American Innovation, and the Collective Intelligence Project on a broader review of model behaviors around freedom of expression, including political conversations. Enforcing policies and testing our defenses Our Usage Policy sets clear rules on the use of Claude around elections. Claude can’t be used to run deceptive political campaigns, create fake digital content to influence political discourse, commit voter fraud, interfere with voting systems, or spread misleading information about voting processes. These policies are backed by robust detection and enforcement. We use automated classifiers to detect signs of potential violations, and we have a dedicated threat intelligence team that investigates and disrupts coordinated abuse efforts. Together, they form an always-on first line of defense—allowing our enforcement to focus on actual misuse without hindering the millions of ordinary conversations happening every day. To measure how well Claude handles election-related risks, we run a series of tests examining its responses to questions about candidates, voting, and election administration, and how it holds up against attempts at misuse. We first wrote about this approach in 2024. Our latest tests use 600 prompts to…