
Elon Musk’s X is setting the stage to integrate AI tools into its Community Notes system. The move aims to speed up and expand the reach of the platform’s collaborative fact-checking efforts, allowing it to provide context for a broader range of content. Developers will soon be able to submit their own AI agents for evaluation. These agents will initially operate in a behind-the-scenes test mode, generating practice notes; if X judges their output helpful, the bots will be promoted to write notes that become publicly visible on the service.
Keith Coleman, a product executive at X who oversees the Community Notes program, confirmed that human oversight will remain central to the process. AI-generated notes will undergo review by human contributors, and a note will only appear publicly if individuals with a diverse range of viewpoints collectively deem it useful. This mirrors the established system for notes written by human users. Coleman indicated that AI notes could begin appearing on the platform later this month.
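The consensus requirement described above can be illustrated with a simplified sketch. This is not X’s actual ranking algorithm (which is more sophisticated); the cluster labels, threshold, and function name are assumptions chosen for clarity. The key idea is that a note is shown only when raters from more than one viewpoint cluster agree it is helpful:

```python
from collections import defaultdict

def note_is_publishable(ratings, threshold=0.75):
    """Simplified illustration: a note becomes visible only when
    raters from *every* viewpoint cluster find it helpful,
    not just raters on one side."""
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:  # helpful is 1 or 0
        by_cluster[cluster].append(helpful)
    if len(by_cluster) < 2:
        return False  # require input from more than one perspective
    return all(
        sum(votes) / len(votes) >= threshold
        for votes in by_cluster.values()
    )

# Helpful to one cluster only -> not shown
print(note_is_publishable([("a", 1), ("a", 1), ("b", 0)]))  # False
# Helpful across both clusters -> shown
print(note_is_publishable([("a", 1), ("a", 1), ("b", 1)]))  # True
```

In practice, X’s published approach models rater agreement with matrix factorization rather than fixed clusters, but the gating principle, agreement across differing perspectives, is the same one Coleman describes.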
Community Notes has proved largely successful, so much so that the likes of Meta and YouTube are adopting similar initiatives on their own platforms.
Coleman explained that while X currently sees hundreds of Community Notes published daily, bringing in AI is expected to lead to a notable increase in volume. “AI can help deliver many more notes quickly with less effort, but the ultimate call on what’s helpful enough to display still rests with people,” Coleman remarked in a recent interview.
The Community Notes program, originally launched when the company was known as Twitter, has seen an increased focus under Musk’s ownership. Its model, which relies on crowd-sourced contributions and a consensus mechanism among users with differing perspectives, has gained recognition, with other platforms like Meta Platforms Inc. and ByteDance Ltd.’s TikTok reportedly adopting similar initiatives. Musk himself has frequently highlighted Community Notes as a defense against misinformation, though his own posts have occasionally been flagged by the system.
Developers will be able to build their AI note writers using various AI technologies, including X’s proprietary Grok or other systems such as OpenAI’s ChatGPT, connecting them to X via an API. Coleman stressed that open access is fundamental to X’s approach: the goal is broad participation and, ultimately, the most effective technology for adding context to content. Any note submitted by an AI system must adhere to the same guidelines as human-written notes and undergo identical vetting to ensure accuracy, including ratings by human users for validation and the requirement that a note be found “helpful” by users across diverse viewpoints before it is publicly displayed.
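The developer workflow described here can be sketched as follows. X has not published the submission API at the time of writing, so the function names, fields, and statuses below are assumptions for illustration only; the one accurate constraint is that a new AI writer starts in test mode and its drafts still require human rating:

```python
import json

def draft_note(post_text, generate):
    """Hypothetical AI note writer. `generate` stands in for any
    text model a developer connects (Grok, ChatGPT, ...). The
    payload is marked as a practice note, since new AI writers
    begin in test mode and their output is not publicly visible."""
    context = generate(
        "Write a brief, sourced context note for this post:\n" + post_text
    )
    return {
        "note_text": context,
        "status": "practice",           # promoted only if X deems the writer helpful
        "requires_human_rating": True,  # same vetting as human-written notes
    }

# Stub model for illustration; a real agent would call an LLM API here.
note = draft_note(
    "The Eiffel Tower is in Berlin.",
    lambda prompt: "The Eiffel Tower is in Paris, France.",
)
print(json.dumps(note, indent=2))
```

Whatever the real interface looks like, the gating logic is the point: an AI-drafted note enters the same human-rating pipeline as any other note rather than publishing directly.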
X is optimistic that the human feedback provided on AI-generated Community Notes will create a valuable feedback loop. This direct input from the community is expected to contribute to the continuous improvement of the AI models, enhancing their ability to generate more fair and accurate context over time. This process allows AI systems to learn from a wide array of perspectives, refining their output based on real-world human evaluation.
Despite the promise of increased scale, the integration of AI into fact-checking presents its own challenges, particularly given the known propensity of some AI models to “hallucinate,” or generate information not grounded in fact. X’s system attempts to mitigate this by requiring AI-generated notes to pass the same consensus-based human review as human-written notes. However, concerns remain.