For a conference about data, you'd rightly expect that we use data when evaluating sessions and building the program for #KafkaSummit and #Current22. It starts with the program committee (confluent.io/en-gb/blog/int…) reviewing all the submissions using an Elo rating system: en.wikipedia.org/wiki/Elo_ratin…
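If you're curious how that scoring actually works, here's a minimal sketch of an Elo-style rating where each review is a head-to-head comparison of two abstracts. The starting rating of 1000 and K-factor of 32 are illustrative assumptions, not the committee's actual parameters:

```python
# Minimal Elo sketch: each review is a head-to-head comparison of two
# abstracts; the winner takes rating points from the loser.
# Starting rating (1000) and K-factor (32) are illustrative assumptions.

def expected(rating_a: float, rating_b: float) -> float:
    """Probability under the Elo model that A beats B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the new (rating_a, rating_b) after one comparison."""
    e_a = expected(rating_a, rating_b)
    s_a = 1.0 if a_won else 0.0
    return rating_a + k * (s_a - e_a), rating_b + k * ((1 - s_a) - (1 - e_a))

ratings = {"talk-A": 1000.0, "talk-B": 1000.0}
# A reviewer preferred talk-A over talk-B:
ratings["talk-A"], ratings["talk-B"] = update(
    ratings["talk-A"], ratings["talk-B"], a_won=True
)
print(ratings)  # talk-A gains exactly what talk-B loses
```

Run enough comparisons across enough reviewers and each abstract converges on a single rating, which is the score the next tweet picks up.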
The output from the session reviews is a single score for each talk, which then forms the basis for the first pass of building the program. Some talks are obviously great … whilst others are obviously not.
This is just the beginning of the process. If we built the program on abstract score alone, it probably wouldn't be very balanced. There are many more factors to take into account.
One bit of data I thought would be interesting: comparing the speaker ratings from the previous #KafkaSummit with the abstract ratings for the same sessions. How correlated is the abstract rating with the resulting talk delivery?
First up, a huge caveat. Speaker rating data is sketchy at best. For #KafkaSummit it's collected through an app (that not everyone will have installed), not everyone leaves a rating, and it's probably the people who feel most strongly who take the time to leave one…
…and that's before you take into account that a single number can't convey the full gamut of opinions a person may have (the same goes for abstract scores, BTW). Perhaps you couldn't hear the speaker and rated them down because of it (even though that's the AV team's fault).
Maybe the slides were crap but the delivery great, or the delivery great but the content poor. Or maybe you had a sore head from the party the night before, or it's nearly lunchtime and you're impatient for the session to finish.
All these reasons and more contribute to the speaker score being a pretty crude measure. But a measure it is nonetheless, so let's take a look at it.
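To make the question concrete, here's the kind of check I mean, sketched in Python with made-up numbers (the scores below are purely illustrative, not the real data):

```python
import numpy as np

# Made-up scores for illustration: abstract_score from the review process,
# speaker_score from attendee ratings for the same sessions.
abstract_score = np.array([1250, 1180, 1320, 1100, 1275, 1210])
speaker_score  = np.array([4.5, 3.9, 4.2, 4.4, 3.6, 4.1])

# Pearson correlation: +1 = perfectly correlated, 0 = no linear relationship.
r = np.corrcoef(abstract_score, speaker_score)[0, 1]
print(f"correlation: {r:.2f}")
```

(Since ratings are ordinal rather than truly linear, Spearman's rank correlation via scipy.stats.spearmanr would arguably be a better fit here, but the idea is the same.)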
For #KafkaSummit London, the very best-rated sessions (top 10%) were all good picks based on the abstract score too.
So does a top-rated abstract mean that you're going to get an excellent talk? Well, no, no it doesn't. Even excusing a few outliers and data burps, it's pretty clear that a great abstract is no guarantee of a great talk.
What if we invert this? Are there bad abstracts that end up being great talks? Well, the data here is already biased towards what are hopefully going to be good talks (because why would you build a conference program from abstracts that were crap?).
Of the six abstracts with review scores below the median, three tanked (speaker score in bottom quartile or even bottom 10%) – but one beat the median speaker score and two were in the top quartile!
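For the stats terms above: "median", "quartile", and "bottom 10%" are just percentile cut-offs over the speaker scores. A quick sketch, again with made-up scores, showing how a talk gets bucketed:

```python
import numpy as np

# Made-up speaker scores for illustration
scores = np.array([3.1, 3.4, 3.8, 4.0, 4.1, 4.3, 4.5, 4.6, 4.7, 4.9])

median = np.median(scores)                # half the talks score above this
q1, q3 = np.percentile(scores, [25, 75])  # bottom/top quartile cut-offs
p10 = np.percentile(scores, 10)           # bottom 10% cut-off

def bucket(score: float) -> str:
    """Label a speaker score the way the thread describes."""
    if score <= p10:
        return "bottom 10% (tanked)"
    if score <= q1:
        return "bottom quartile (tanked)"
    if score >= q3:
        return "top quartile"
    return "above median" if score > median else "below median"

for s in (3.2, 4.05, 4.8):
    print(s, "->", bucket(s))
```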
What conclusions are there to draw from this? Firstly, the abstract isn't *everything*. But does that mean you can put in a crap abstract and expect to be accepted because it might turn out to be a 💎diamond in the rough? NO! 🙊
Per the above data, the *really bad* abstracts (bottom quartile) just don't get accepted. Period🛑

Make sure you put your best work into a good abstract because it gives you the best fighting chance. This blog gives you some advice: rmoff.net/2020/01/16/how…
If we don't pick abstracts based on score alone, what else factors in? The screenshot earlier in the thread gives you some clues. For example, is the subject relevant to the audience at the conference? Is there a good representation of different technologies?
Make sure you come along to #Current22 to see what you make of the program that we've got for you. Tickets are on sale now: 2022.currentevent.io
(oh, and do all the speakers and future program committee a favour and *always* leave session ratings for any conference you're at if you can 😁)

