Introducing Bird SQL, a Twitter search interface that is powered by Perplexity’s structured search engine. It uses OpenAI Codex to translate natural language into SQL, giving everyone the ability to navigate large datasets like Twitter. perplexity.ai/sql
With Bird SQL, you can quickly find information on Twitter that would be impossible to find with conventional search engines or web browsing:
"most liked tweets about #worldcup" perplexity.ai/sql?uuid=9b03e…
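A query like the one above might compile to SQL along these lines. This is a sketch only: the `tweets` table and its `text` and `like_count` columns are assumptions for illustration, not Bird SQL's actual schema.

```python
import sqlite3

# Build a toy in-memory tweets table (schema is an assumption, not Bird SQL's).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tweets (id INTEGER PRIMARY KEY, text TEXT, like_count INTEGER)"
)
conn.executemany(
    "INSERT INTO tweets (text, like_count) VALUES (?, ?)",
    [
        ("Incredible final! #worldcup", 120),
        ("Match day thread #worldcup", 45),
        ("Unrelated tweet about lunch", 300),
    ],
)

# The kind of SQL "most liked tweets about #worldcup" could translate to:
query = """
    SELECT text, like_count
    FROM tweets
    WHERE text LIKE '%#worldcup%'
    ORDER BY like_count DESC
    LIMIT 10
"""
# Returns only #worldcup tweets, most liked first.
rows = conn.execute(query).fetchall()
```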
If you are interested in using our search engine on your structured or unstructured data, reach us at support@perplexity.ai, on our Discord server, or via Twitter. Join our Discord to learn more about search and large language models: discord.com/invite/kWJZsxP…
This is a demo, not a commercial product. There are limitations to our Twitter data, database performance, and the expressiveness of SQL. Bird SQL does not interpret the content of tweets. For unstructured search, try Perplexity Ask.
With Microsoft retiring the Bing Search APIs in August, legacy search engines have abandoned the developers who need real-time access to information.
We're stepping in to provide a search API designed for the new retrieval paradigms introduced by frontier AI systems.
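As a rough sketch of how a developer might call such a search API over HTTP, the snippet below assembles a request payload. The endpoint URL and field names here are illustrative assumptions, not the actual Perplexity Search API contract; consult the official documentation for the real interface.

```python
import json

def build_search_request(query: str, max_results: int = 10) -> dict:
    """Assemble an HTTP request description for a hypothetical web-search API.

    Endpoint URL and JSON field names ("query", "max_results") are
    assumptions for illustration only.
    """
    return {
        "url": "https://api.perplexity.ai/search",  # assumed endpoint
        "headers": {
            "Authorization": "Bearer <YOUR_API_KEY>",  # placeholder key
            "Content-Type": "application/json",
        },
        "body": json.dumps({"query": query, "max_results": max_results}),
    }

# Example: a real-time retrieval query, capped at 5 results.
request = build_search_request("latest AI retrieval benchmarks", max_results=5)
```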
We built Perplexity Search API around three criteria:
Voice Assistant uses web browsing and multi-app actions to book reservations, send emails and calendar invites, play media, and more—all from the Perplexity iOS app.
Update your app in the App Store and start asking today.
Perplexity Voice Assistant can search for and play podcasts, YouTube videos, and other media.
Need to look up and reschedule meetings in your calendar? Voice Assistant can help you find events and draft emails.
Perplexity's Sonar—built on Llama 3.3 70B—outperforms GPT-4o-mini and Claude 3.5 Haiku while matching or surpassing top models like GPT-4o and Claude 3.5 Sonnet in user satisfaction.
Decoding at 1200 tokens/second, Sonar is optimized for both answer quality and speed.
Sonar significantly outperforms GPT-4o-mini and Claude 3.5 Haiku in user satisfaction.
It also surpasses Claude 3.5 Sonnet and nearly matches GPT-4o, doing so at a fraction of the cost and over 10x faster.
Powered by Cerebras inference infrastructure, Sonar delivers answers at blazing-fast speeds, achieving decoding throughput nearly 10x faster than comparable models like Gemini 2.0 Flash.
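To make the throughput figures concrete, here is some back-of-the-envelope arithmetic for what 1200 tokens/second implies for answer latency. The answer length used below is an illustrative assumption.

```python
def decode_time_seconds(answer_tokens: int, tokens_per_second: float) -> float:
    """Time to stream out an answer at a given decoding throughput."""
    return answer_tokens / tokens_per_second

# An assumed 600-token answer: at 1200 tok/s it decodes in 0.5 s;
# at a 10x slower 120 tok/s, the same answer takes 5 s.
fast = decode_time_seconds(600, 1200.0)
slow = decode_time_seconds(600, 120.0)
```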