i wonder how many people who know the "usual" complexity stuff, e.g. big-O and common complexity classes like P/NP or PSPACE, have heard about enumeration complexity, e.g. delays. Because it's extremely useful when it comes to query languages, esp. on graphs. 1/
massively simplifying, the "usual" complexity stuff is going to tell you how much time or memory your query engine may use (in worst case) for query/data of some size. Great, but not the whole story. 2/
It's not going to tell you i) how long you may need to wait for the 1st result, ii) how long you may need to wait *between* results. That, however, makes a huge difference to clients in practice. 3/
Say a query returns 1k results in 10s: one system is silent for 9.9s and then spews out all results, another ships 100 results each second. In a bunch of cases the client would prefer the latter (for rather obvious practical reasons). 4/
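The contrast between those two systems can be sketched as two toy Python generators (names and numbers are mine, purely illustrative — real engines don't sleep, they compute):

```python
import time

def batch_engine(results, work_time):
    """Toy engine that does all the work up front, then emits
    everything in one burst: a long silence, then all results."""
    time.sleep(work_time)           # no output while "computing"
    for r in results:
        yield r                     # everything arrives at the end

def streaming_engine(results, work_time):
    """Toy engine with a bounded, predictable delay: it pays the
    cost per result, so results trickle out continuously."""
    per_result = work_time / len(results)
    for r in results:
        time.sleep(per_result)      # short, predictable gap
        yield r
```

Both deliver the same 1k results in the same total time; only the second gives the client anything to work with before the 10s are up.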
This is where enumeration complexity comes in. Answering a query is an enumeration of results. A QL with, say, the poly-delay property can be implemented s.t. every result gets to the client within some predictable delay after the previous one. 5/
An extreme example of where it matters: think of a path traversal query in a QL w/o poly-delay. You run it, it ships some results, then gets stuck. The client waits 1hr just for the query to finish without producing any more results. Annoying as hell, but it happens! Why? 6/
Because, in general, there's no way to test whether there *is* a next result without actually finding it. So the engine keeps searching for paths. With bounded delay you *know* that either the next result arrives soon(ish) or the enumeration stops. 7/
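A toy sketch of why the delay blows up, assuming a naive DFS over simple paths (graph representation and names are mine, not any particular engine):

```python
def simple_paths(graph, src, dst):
    """Enumerate simple paths src -> dst by DFS.
    Between two yielded paths this may explore a huge number of
    dead-end branches, so the delay between results is unbounded:
    the only way to learn there's no further path is to exhaust
    the whole remaining search space."""
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            yield path
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:        # keep paths simple
                stack.append((nxt, path + [nxt]))
```

After the last real path is yielded, the client sees nothing while the DFS grinds through the rest of the (possibly exponential) search tree — exactly the "stuck for 1hr" scenario above.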
#SPARQL doesn't have the poly-delay property (though it can be restricted to get it); GraphQL is even better: constant delay (we know this thanks to @olafhartig and friends). 8/