I've spent my career grappling with bias. As an executive at Meta overseeing news and fact-checking, I saw how algorithms and AI systems shape what billions see and believe. At CNN, I even hosted "No Bias, No Bull" (easier said than done, as it turned out). 🧵
Trump's executive order on "woke AI" has reignited debate around bias and AI. The implication is clear: AI systems aren't just tools; they're new media institutions, and the people behind them can shape public opinion as much as any newsroom ever did.
But for me, the real concern isn't whether AI skews left or right; it's watching my teenagers use AI for everything from homework to news without ever questioning where the information comes from.
Focusing on political bias misses the deeper issue: transparency. We rarely see which sources shaped an answer, and when links do appear, most people ignore them.
An AI answer about the economy, healthcare, or politics sounds authoritative. Even when sources are provided, they're often just footnotes while the AI presents itself as the expert. Users trust the synthesis without checking sources.
And the stakes are rising. News-focused interactions with ChatGPT surged 212% between January 2024 and May 2025, while 69% of news searches now end without clicking to the original source. People consume information directly from AI summaries.
Traditional media trained us to trust brands and bylines. That trust eroded through years of outlets claiming neutrality while harboring clear bias. We're making the same mistake with AI.
We accept AI conclusions without understanding their origins or how sources shaped the final answer. The solution isn't eliminating bias (impossible), but making it visible.
Restoring trust requires acknowledging everyone has perspective. Pretending otherwise destroys credibility. AI offers a chance to rebuild trust through transparency, not by claiming neutrality, but by showing its work.
What if AI didn't just provide sources as afterthoughts, but made them central to every response? "A 2024 MIT study..." or "How a Wall Street economist, labor researcher, and Fed official each interpret the numbers..."
Some models have made progress on attribution, but we need audit trails that show where words came from and how they shaped the answer. When anyone can sound authoritative, radical transparency isn't just ethical—it's essential.
What would make you click on AI sources instead of just trusting the summary?
Full transparency: I'm developing a project focused on this challenge, building transparency and attribution into AI-generated content. Would love your thoughts.