1/ We proudly present the Sentient Protocol, unveiled at the @openagisummit this week.
Sentient is an open source AI monetization protocol that enables community-built AGI. The key innovation is Model Loyalty and a new format, the OML format, for representing models that enables them to be Open (download and use locally), Monetizable (track and monetize their usage remotely), and Loyal (locked for usage that do not conform to safe, ethical, values espoused by the model owner). More details follow.
2/ Current forms of predominant AI were built on public goods from years of open innovation, but they extracted the value maximally from these public goods without sharing anything with the contributors - and created closed source hegemonies and empires out of it. Additionally, it censored information and imposed cultural preferences, which stifles innovation.
Open models are torch-bearers of resistance. They provide an alternative for AI innovators to participate in the large AI economy. However, there’s no way to monetize them, nor is there a way to ensure they are used safely and ethically.
We need a new ecosystem where open-source public goods of AI drive 𝗼𝗽𝗲𝗻 AGI innovation. There is an urgent need for a new technology that allows builders to share models openly and yet get rewarded when those models are used. There is a need for a new protocol that aligns incentives of AI builders with AI innovation. @viswanathpramod
3/ The Sentient Protocol is a blockchain protocol for solving the alignment problem of community-built open AGI. It comprises contracts for incentives (ownership, usage, rewards) and nodes to enable decentralized control for access and alignment. The incentives and the necessary crypto-economic security is enforced via AVSs from the @eigenlayer ecosystem along with a trustless blockchain connected to @0xpolygon Agg layer. @hstyagi
4/ Underlying the Sentient protocol is a new cryptographic primitive called OML (open, monetizable, loyal). The goal is to allow the model to be transparently downloadable and yet retain the ability to track usage (monetization) and ensure safe and ethical usage (loyalty). The cryptographic primitive of program obfuscation will also solve OML, but that is a long standing open problem.
Sentient is devising 𝗔𝗜 𝗺𝗲𝘁𝗵𝗼𝗱𝘀 𝘁𝗵𝗲𝗺𝘀𝗲𝗹𝘃𝗲𝘀 to create OML libraries for AI models – the birth of a new area that we call 𝗔𝗜-𝗻𝗮𝘁𝗶𝘃𝗲 𝗰𝗿𝘆𝗽𝘁𝗼𝗴𝗿𝗮𝗽𝗵𝘆. In the first version, we convert backdoor attacks (security threat in AI) into model fingerprinting methods to authenticate model ownership. @sewoong79
5/ The Sentient protocol is modular and can be composed with other implementations of OML (say, using trusted hardware) in the distribution layer or different decentralized storage or compute networks.
6/ A detailed description of this thread is available here:
Most of today’s discourse reduces it to utility: how useful, how fast, how “smart.” However, utility alone is not enough. What truly makes a model great is its ability to encompass the randomness of human intelligence, matching the values of the communities that use it rather than the values of the corporations that made it.
At Sentient, we’re pursuing a different path. Through the GRID, the world’s largest network of intelligence, we’re building models that serve their communities first. Our work with Dobby and Loyalty Training builds on our earlier research into Fingerprinting, showing how open, community-driven methods can produce models that not only benchmark at state-of-the-art levels, but also reflect the values, voices, and needs of the people they belong to.
🧵 Let’s dive into the innovative model research we’ve done to contribute to the GRID
2/ Continuing to build towards Loyal AI from Fingerprinting to Loyalty Training
Loyal AI refers to models architected to maintain persistent alignment with community-defined values rather than corporate incentives. The objective is to embed robustness at the architectural and training levels so that models are resistant to adversarial manipulation (such as jailbreaks or prompt injection) and can reliably uphold their intended value structure over time.
Fingerprinting was the first step, and we continued to push the boundary on this ideal: 1. How do you fine-tune a model’s alignment on specific dimensions? 2. How do you ensure this alignment does not degrade performance?
3/ Dobby: The world’s first Loyal AI model
Dobby began as a research experiment: could we train models that are not only aligned, but loyal (models that hold persistent convictions even under coercion?)
The first prototypes were Dobby-Mini-Leashed-Llama-3.1-8B and Dobby-Mini-Unhinged-Llama-3.1-8B, both fine-tuned from Llama-3.1-8B-Instruct. These models were deliberately aligned to pro-crypto and pro-personal freedom values, refusing to adopt anti-crypto or anti-freedom narratives even when prompted otherwise.
What makes Dobby unique is that it is the first open model explicitly loyal to freedom and crypto. Where closed models like GPT-4o can be prompted into adopting almost any stance, Dobby’s loyalty training makes its alignment persistent, verifiable, and community-defined.
Announcing ROMA (Recursive Open Meta Agent): our new multi-agent framework that sets SOTA in reasoning + search.
Seal-0: 45.6%
FRAMES: 81.7%
SimpleQA: 93.9%
🧵 Read more about how recursive coordination lets agents tackle complex queries.
2/ ROMA works recursively to solve complex tasks
ROMA is an open-source framework for building high-performance meta-agents: systems that orchestrate smaller agents and tools to solve complex tasks.
- Parent nodes decompose a goal into subtasks.
- Children handle subtasks with specialized agents/tools.
- Results flow back up and are aggregated into the final answer.
This architecture makes complex reasoning tractable, transparent, and reproducible.
3/ ROMA’s first use case is deep research
“What are the top 5 NBA players by PPG averages in a season that have won both an NCAA college basketball championship and an NBA championship?”
ROMA decomposes complex questions into atomic sub-tasks—search, reason, and write—then executes them in sequence to produce accurate answers across multi-step workflows.
As GRID continues to grow as the world’s largest network of intelligence, we’re excited to onboard and showcase partners across the AI stack.
🧵Here’s a look at some of the partners fueling the model and verifiable AI experience
2/ Model Collaborations
Our research team has engineered a breakthrough alignment pipeline, and we've built strategic partnerships with the biggest projects in the space to tackle a variety of complex use cases:
@eigenlayer: Extends Ethereum security through restaking, allowing ETH and other assets to secure multiple protocols simultaneously. Together with Judge Dobby, it creates an adjudication layer for resolving complex, subjective disputes via community and governance-driven intelligence.
@KGeN_IO: Gives gamers ownership of their data while providing developers with access to authentic, decentralized player insights. With 300M+ attributes from 13M+ gamers, it powers a next-gen gaming LLM built with Sentient’s AI to transform player and developer experiences.
3/ Verifiable AI
GRID also integrates verifiable AI partners to address one of the most critical challenges in AI deployment: trust and transparency. Users need absolute certainty that models are executing as intended, outputs are authentic, and data remains secure throughout the entire pipeline. Our verifiable AI partnerships give users the peace of mind that comes with knowing their AI is actually doing what it's supposed to do, ensuring users can deploy AI with complete confidence:
@PhalaNetwork: Allows users of the GRID to run AI models inside Trusted Execution Environments (TEEs)—ensuring truly verifiable, zero-trust workloads.
@nillionnetwork: Provides cryptographic infrastructure for verifiable AI, ensuring models and their outputs can be proven, trusted, and privacy-preserving.
@lagrangedev: Verifiable AI inference, so users can be certain that an LLM’s output corresponds to the given input.
@LitProtocol: Decentralized key management network enabling programmable signing and encryption for agents
@Atoma_Network: An open-source AI network that offers security and privacy through confidential computing for AI workloads such as inference and fine-tuning.
The GRID is designed to turn frontier research into shared, open infrastructure.
Open Deep Search (ODS) is one of our proudest additions: a modular retrieval + reasoning framework that shows how open-source systems can outperform proprietary stacks on real benchmarks. Fully forkable and extensible, ODS is available for anyone to integrate, adapt, and build on inside and outside the GRID.
🧵 Check out how we created the best open-source search framework that outperforms ChatGPT and Perplexity, all contributing to GRID.
2/ Why we created an open-source search framework
Modern search-augmented AI systems operate as closed pipelines: the query is passed into a proprietary retriever, filtered through undisclosed ranking heuristics, and resolved by a large, inaccessible model. This architecture concentrates control and makes it difficult for the research community to study, replicate, or improve retrieval–reasoning interactions.
Open Deep Search (ODS) was developed to provide an open alternative. Its design goal is to expose and modularize each stage of the pipeline: query rewriting, document retrieval, snippet aggregation, reranking, and reasoning orchestration. By doing so, ODS allows open-source LLMs to achieve competitive performance on retrieval-intensive tasks while maintaining full transparency and extensibility.
3/ ODS is SOTA on standard search benchmarks
We evaluated ODS against both open baselines and proprietary search-augmented models on two benchmarks:
- FRAMES: multi-hop factual reasoning (Which film featuring a solar eclipse in its opening scene is adapted from the same source material as a David Lynch movie?)
- SimpleQA: single-hop factual QA (Who is the President of the United States?)
Results:
- FRAMES: ODS-v2 achieves 75.3%, outperforming GPT-4o Search Preview (~65%) by ~10 points and Perplexity Pro (~45%) by ~30 points. Naive open baselines plateau around 45–50%.
- SimpleQA: ODS reaches 88.3%, nearly matching GPT-4o Search Preview (90%) and surpassing Perplexity Pro (~85%).
Retrieval + reasoning orchestration is the key lever. By structuring how queries are expanded, evidence is aggregated, and reasoning steps are executed, ODS closes much of the performance gap that has been attributed to raw model size or proprietary data advantage.
Sentient’s mission is to ensure that AGI is open-source and not controlled by any single entity.
To enable open AGI, we announced the GRID: the world’s largest network of intelligence.
Over the past few weeks, we’ve highlighted partners across the GRID who are helping build AGI in the open. But the mission goes beyond models, data, agents, and tools, the GRID also drives research. Today, we’re excited to share some of the in-house work we’ve been doing to push open-source AI forward.
2/ Loyal AI: AI that is loyal to humanity and fully aligned with our interests
Loyal AI refers to models architected to maintain persistent alignment with community-defined values rather than corporate incentives. Through fine-tuning on domain- and community-specific data, combined with continual feedback loops, these systems adapt while preserving alignment constraints.
The objective is to embed robustness at the architectural and training levels so that models are resistant to adversarial manipulation (such as jailbreaks or prompt injection) and can reliably uphold their intended value structure over time.
3/ The first step towards Loyal AI: Fingerprinting
Fingerprinting in AI models is the process of embedding cryptographic signatures directly into a model’s parameters by training it to return specific, secret outputs for carefully chosen secret inputs. These key–response pairs act as a unique digital watermark that is undetectable under normal operation yet verifiable when ownership proof is required.
Because the fingerprint is integrated into the model’s learned representations, it cannot be stripped out or bypassed through fine-tuning, distillation, pruning, or model merging. This resilience makes fingerprinting both a robust mechanism for proving provenance and an interim enforcement layer for usage control, ensuring that creators can cryptographically authenticate their models while more advanced alignment and governance methods are still being developed.
With 100+ partners in the GRID, we’re scaling open-source intelligence across every dimension.
Sentient Chat connects users to the world’s largest network of intelligence, delivering high-quality answers across industries.
🧵Here’s a look at some of the data partners fueling the experience
2/ Data Labelling and Crowdsourced Data
Our data consortium partners label high-fidelity data across niche categories, leveraging human expertise to capture specialized data that can’t be replicated.
@crunchDAO: Crowdsourced financial data and competitions
@getmasafi: Real-time, validated social and web intelligence from X, Discord, Telegram, podcasts, and beyond
@PerleLabs: High-fidelity data pipelines for AI teams spanning code, advanced reasoning, multilingual content, satellite imagery, and other safety-critical domains
@dFusionAI: Open protocol to source, validate, and curate high-quality data that improves model accuracy by over 10x
@trypearai: Model benchmarking data and “human-like” evaluations
@JoinSapien: Decentralized data labeling across industries
@LabelLedger: Cryptographically proven image & video datasets focusing on autonomous systems, robotics, and maritime AI
@mizulabs: Ultra-low-cost data processing DePIN for hyperscale AI data, delivering data at a fraction of the price compared to centralized solutions.
3/ Data Storage
GRID also integrates secure data storage providers that allow model and agent builders to leverage decentralized storage for model training.
@0G_labs: Modular AI blockchain providing infinite scalability for data availability and storage
@Hyve_DA: Modular DA solution, achieving 1 GB/s bandwidth and sub-second response times, ideal for decentralized infrastructures with high-volume data requirements
@irys_xyz: Permanent data storage network enabling developers to store data forever on-chain with instant retrieval and programmable access controls
@zus_network: Datahub with bulletproof security, featuring ACID-integrity S3 storage on a zero-knowledge network