Public announcement: if your company is using @DuckDB, you should consider sponsoring @DuckDBLabs (these folks are great to work with). And if you do and your needs are aligned with #STOIC's, we should have a chat about priorities and design requirements.
If we pool resources together, we might be able to fund things that would be out of reach for #STOIC on its own, or that would take a lot longer to develop for lack of sufficient resources.
And for the record, 100% of the @DuckDBLabs work funded by #STOIC goes straight to the open-source codebase, and we have no intention of changing that anytime soon.
Here are the projects that we are sponsoring:
- Improved Node.js support (delivered)
- Map-reduced queries with binary map results (delivered)
- Partitioned exports (ongoing)
- SQL parsing/serializing (starting soon)
- Azure Blob Storage support (starting soon)
Now that we have a pretty good idea of what the UI will look like, let's focus on the runtime side of things, especially when data gets too large to fit on a single host (in practice, about 2 TB of data on a host with 4 TB of RAM). In other words, how will the distributed query planner work?
The assumption is that source data is partitioned as a set of Parquet files (with or without Iceberg). From there, if your query is map-reducible, you can use @duckdb's COMBINE to map it on a fleet of Lambdas, then reduce it on an EC2 VM. That's what #STOIC does today.
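To make that concrete, here is a minimal sketch of the two stages in plain SQL (bucket paths and column names are made up, and the real pipeline ships binary aggregate states rather than re-aggregating rows):

```sql
-- Map stage, run on each Lambda: scan one Parquet partition and emit
-- partial, combinable aggregates (COUNT and SUM, not AVG).
SELECT item,
       COUNT(*)    AS cnt,
       SUM(amount) AS sum_amount
FROM read_parquet('s3://bucket/sales/part-042.parquet')
GROUP BY item;

-- Reduce stage, run on the EC2 VM: union all partial results and
-- combine them; AVG is rebuilt from its combinable parts.
SELECT item,
       SUM(sum_amount)            AS total,
       SUM(sum_amount) / SUM(cnt) AS avg_amount
FROM read_parquet('s3://bucket/partials/*.parquet')
GROUP BY item;
```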
Things start getting more complicated if you have a SELECT within a FROM clause, a HAVING clause, or inner queries (correlated or uncorrelated). Let's review these scenarios one by one.
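For reference, the three shapes in question look like this (toy queries, table names made up):

```sql
-- 1. SELECT within a FROM clause (derived table)
SELECT * FROM (
  SELECT item, SUM(amount) AS total FROM sales GROUP BY item
) t
WHERE total > 100;

-- 2. HAVING: the predicate tests the global aggregate, so it can only
--    be evaluated after the reduce step, never on a single partition.
SELECT item, SUM(amount) AS total
FROM sales
GROUP BY item
HAVING SUM(amount) > 100;

-- 3. Correlated inner query: the subquery references the outer row, so
--    a naive per-partition evaluation would see only part of the data.
SELECT s.item, s.amount
FROM sales s
WHERE s.amount > (SELECT AVG(s2.amount) FROM sales s2 WHERE s2.item = s.item);
```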
Rules of engagement: this Twitter account is my personal account. I use it to liberally share many of the things that we do at #STOIC. While this leaks some of our IP, I tend to think that we gain more than we lose by doing so, because it helps us engage with the community.
I use my long threads to capture my stream of consciousness. Writing my ideas down helps me think them through, even if nobody reads them on the other end. This makes for a very painful account to follow, and most followers end up tuning out eventually, but I don't mind.
I do my best to answer any questions, but I can't share much code, because #STOIC's codebase is not open source. Our contributions to open source are done through sponsorships of critical projects like @duckdb.
Here is how #STOIC intends to use this feature. Obviously, there are many applications for a native SQL parser/serializer, but this particular one might give you some ideas for your own projects.
Here is a screenshot of our current UI. For us, SELECT is a transform that can be used in a data journey to generate a new table, either directly from an import, or downstream of other transforms.
Right now, the parameters of this transform (the clauses of the SELECT statement) must be coded by hand. That is fine if you're a SQL expert, but it's a non-starter if you're a casual Excel user. We want our product to be usable by both, and the latter needs something more.
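To give a flavor of what the parser/serializer unlocks, here is a sketch of the round-trip using DuckDB's json_serialize_sql() and json_deserialize_sql() functions from the json extension (the query itself is made up; our UI would sit on top of the JSON tree):

```sql
-- Parse SQL text into a JSON syntax tree that a UI can edit clause by clause
SELECT json_serialize_sql('SELECT item, SUM(amount) AS total FROM sales GROUP BY item');

-- Turn a (possibly edited) JSON tree back into canonical SQL text
SELECT json_deserialize_sql(json_serialize_sql('SELECT item FROM sales'));
```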
Yesterday, I described a version of our upcoming SQL query designer that focused on making it easier to compose SQL queries, while preserving SQL's hierarchical structure. Today, I want to explore an alternative path.
Instead of taking a SQL-oriented approach, I want to take a task-oriented approach. And I want to free myself from SQL's hierarchical structure, while still producing a well-formed SQL query in the end.
The idea is similar to what Excel's pivot table is doing: give the user a relatively simple graphical user interface to pivot a table across multiple dimensions, and produce a query from it (be it a SQL or a DAX query).
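Sketched as SQL, a pivot spec like "rows: region, columns: quarter, values: SUM(amount)" would be emitted as something along these lines (table and columns made up):

```sql
-- One output column per pivoted quarter, via filtered aggregates
SELECT region,
       SUM(amount) FILTER (WHERE quarter = 'Q1') AS q1,
       SUM(amount) FILTER (WHERE quarter = 'Q2') AS q2,
       SUM(amount) FILTER (WHERE quarter = 'Q3') AS q3,
       SUM(amount) FILTER (WHERE quarter = 'Q4') AS q4
FROM sales
GROUP BY region;
```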
One of #STOIC's most useful features is its signature Summary Charts, which are these bar charts displayed at the top of every column in a table. They work really well, unless your table has very few rows. Here is how we'll improve them for some interesting corner cases.
Relative Baseline in Bar Plots
When a table has fewer than 50 rows, we replace the Histogram displayed for a numerical column with a Bar Plot visualizing discrete values, but we keep 0 as the baseline. We should use MIN or MAX instead, as we do for cell summaries below.
Bar Plot with All Equal Values
When a Bar Plot is produced for a set of values that are all equal, you would want to know that at a glance. To make it obvious, bar lengths will be reduced by 50%, while keeping the plot horizontally centered.
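Both corner cases fall out of the same normalization. A sketch in SQL, using window functions over the column being summarized (names made up):

```sql
-- Scale each bar between the column's MIN and MAX instead of anchoring at 0.
SELECT value,
       (value - MIN(value) OVER ())
         / NULLIF(MAX(value) OVER () - MIN(value) OVER (), 0) AS bar_fraction
FROM t;
-- When all values are equal, NULLIF yields NULL: render that as a
-- half-length (50%) bar, horizontally centered, so the degenerate
-- case is recognizable at a glance.
```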
Things that I love about @berkeleygfx's design philosophy:
Emergent over prescribed aesthetics.
Expose state and inner workings.
Dense, not sparse.
Performance is design.
Beauty emerges automatically without deliberation.
Do not infantilize users.
"Emergent over prescribed aesthetics."
Our core UI design was totally emergent. We did not follow any trends. We just tried to render as much qualitative and quantitative information as possible, in a well-organized fashion. Aesthetics were a mere by-product.
"Expose state and inner workings."
That's exactly what we do when we expose the "Runtime Boundary" on the left panel: Lambda functions upstream, EC2 monostore downstream.