Agent 3 is here! 🤖 Our AI is now more autonomous, reliable, and faster. It can test your app in a real browser, find bugs, and automatically fix them for you.
1/ Tackle bigger projects with longer run times. Agent 3 can now work autonomously for up to 200 minutes, with automated testing so you can track its progress.
2/ Ship faster with new App Connectors & Integrations. Connect your apps to your favorite services by signing in just once, and reuse the connection across all your projects.
3/ Build intelligent bots and workflows with Agents & Automations (beta). Create custom Slackbots, Telegram bots, or run tasks on a schedule, all from your workspace.
That's it for this week, be sure to follow along for weekly updates: docs.replit.com/updates
On March 20th, 2025, my colleague and I discovered a critical vulnerability in Lovable's implementation of Row Level Security (RLS) policies.
Applications developed on the platform often lack secure RLS configurations, allowing unauthorized actors to read sensitive user data and inject malicious records.
Lovable applications, being primarily client-driven, rely on external services for backend operations like authentication and data storage. This architecture shifts the security burden to the implementor of the application.
However, misaligned RLS policies between the client-side logic and backend enforcement frequently result in vulnerabilities, where attackers can bypass frontend controls to directly access or modify data.
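To make the mismatch concrete, here is a minimal sketch (project URL and table name are hypothetical). The frontend only ever requests rows scoped to the signed-in user, but PostgREST-style APIs accept arbitrary filters, so that scoping is merely a suggestion unless RLS enforces it on the backend:

```python
# Hypothetical illustration of the client-vs-backend mismatch.
# BASE and the "users" table are placeholders, not a real project.
BASE = "https://example-project.supabase.co/rest/v1"

def frontend_query(user_id: str) -> str:
    # What the app's own JavaScript sends: rows filtered to one user.
    return f"{BASE}/users?select=*&user_id=eq.{user_id}"

def attacker_query() -> str:
    # What anyone can send instead: the same endpoint with the filter
    # removed -- equivalent to SELECT * when no RLS policy intervenes.
    return f"{BASE}/users?select=*"
```

Nothing stops a client from issuing the second URL; only a server-side RLS policy can make the two requests return different data.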
Lovable later introduced a "security scanner," but it merely checks for the existence of any RLS policy, not its correctness or alignment with application logic. This provides a false sense of security, failing to detect the misconfigurations that expose data.
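A sketch of why an existence-only check is insufficient (the policy names and table are hypothetical): a policy whose `USING` clause is simply `true` exists as far as a scanner is concerned, yet restricts nothing.

```python
# A policy that exists but protects nothing -- it would satisfy an
# existence-only scanner while leaving the table effectively public.
PERMISSIVE_POLICY = """
CREATE POLICY "open access" ON public.users
  FOR SELECT USING (true);
"""

# A correctly scoped policy: rows are visible only to their owner.
CORRECT_POLICY = """
CREATE POLICY "own rows only" ON public.users
  FOR SELECT USING (auth.uid() = user_id);
"""

def existence_only_check(policies: list[str]) -> bool:
    # The kind of check described above: "does any policy exist?"
    return len(policies) > 0
```

Both tables pass the existence check, but only the second policy actually aligns backend enforcement with the application's access logic.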
INITIAL DISCOVERY & SCOPE ASSESSMENT
The vulnerability was first identified on March 20th, 2025, while examining Linkable, a Lovable-built site for generating websites from LinkedIn profiles.
An inspection of network requests revealed that modifying a query granted access to all data in the project's "users" table. After we highlighted this in a reply to Lovable on Twitter, Lovable first denied the issue, then deleted its tweets and took the site down. Linkable was later reinstated behind a $2 fee.
The core issue was not an exposed public API key as we initially thought (Supabase provides public `anon` keys by design) but the absent RLS configuration. This allowed unrestricted data retrieval from the exposed table (sample in Appendix A1).
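To illustrate the distinction, here is a sketch of the request shape (the project name and key are placeholders). The anon key ships in every browser bundle by design and is not a secret; with no RLS policy on a table, a request authenticated only by that key returns every row:

```python
from urllib.request import Request

# Placeholder value: real anon keys are JWTs visible in page source.
ANON_KEY = "public-anon-key"

def build_dump_request(project: str, table: str) -> Request:
    # The same headers the site's own frontend sends; with RLS absent,
    # the backend has no basis to restrict what this query returns.
    url = f"https://{project}.supabase.co/rest/v1/{table}?select=*"
    return Request(url, headers={
        "apikey": ANON_KEY,
        "Authorization": f"Bearer {ANON_KEY}",
    })
```

The exposure, in other words, is not the key in the page source but the missing policy behind it.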
To determine if this was an isolated incident, we investigated other Lovable-created sites, starting with those on Lovable Launched—a showcase presumably featuring polished projects.
Access to the list of these sites was gained by manipulating an endpoint on the Launched site itself, which also lacked RLS.
We then developed a script to visit the homepage of each Launched site, capture its network traffic, and filter for requests made to external backend services.
For each identified request, the script attempted to modify the request to select all data from the associated endpoint—an operation equivalent to `SELECT *`, which RLS would typically prevent.
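The rewrite step can be sketched as follows (the example URL is hypothetical; the actual script also drove a browser to capture the requests in the first place). Each captured PostgREST-style URL has its row filters stripped and its projection widened to everything:

```python
from urllib.parse import urlparse, urlunparse

def to_select_all(captured_url: str) -> str:
    """Rewrite a captured request so it asks for every row.

    Dropping filters like `id=eq.7` and requesting `select=*` is the
    REST equivalent of SELECT *, which a correct RLS policy rejects
    or silently scopes to the requester's own rows.
    """
    parts = urlparse(captured_url)
    return urlunparse(parts._replace(query="select=*"))
```

If the probe returns more rows than the original filtered request, the endpoint's RLS is absent or misconfigured.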
AUTOMATED SCAN FINDINGS
The scan, completed on March 21st, identified 303 endpoints across 170 projects (approximately 10.3% of the 1,645 analyzed) with inadequate RLS settings. This indicates widespread RLS misapplication and points to a systemic issue in Lovable's platform that predisposes projects to insecure data storage.
The following anonymized public endpoints from the scan illustrate the types of sensitive data exposed:
This script only analyzed homepages and did not attempt to access login-protected areas or perform deeper site scraping. Authenticated sessions on these vulnerable sites could expose even more sensitive data.
Users interacting with Lovable-built sites should exercise extreme caution in the data they submit.
Whether you're using Replit Agent, Assistant, or other AI tools, clear communication is key.
Effective prompting isn't magic; it's about structure, clarity, and iteration.
Here are 10 principles to guide your AI interactions:
Checkpoint: Build iteratively. Break large goals into smaller, testable steps. Use features like Replit Agent's Checkpoints to save progress and experiment safely.
Debug: Don't just say "it's broken." Give the AI context: exact error messages, relevant code snippets, and the steps you took. Help it help you.