For the last 2.5 years, my daughters and I have been rating breakfast places in the #charlottesville #cville area. We rated 51 restaurants from 1 (worst) to 10 (best) on taste, presentation, menu, ambiance, & service. We also recorded cost-per-person.

Here's what we learned. 1/
Across 51 restaurants we spent $1,625.36 pre-tip, an average cost of $9.91/person (sometimes other family members joined).

Cheapest per person: Duck Donuts $3.10, Sugar Shack $3.41, Bojangles $4.30.

Most expensive per person: The Ridley $27.08, Farm Bell Kitchen $17.81, Fig $17.44.
Averaging all 5 ratings across all raters is one way to determine an overall rating. The grand average is 7.1 out of 10 w/ a range of 4.8 to 9.1. How strongly related are cost per person and overall rating?

r=0.36

Just 13% of the variation in quality is associated with cost (r² = 0.36² ≈ 0.13).
Assuming a linear relationship, this modest association means that a one-point higher quality rating corresponds to a $12.47 greater cost per person.
But, this modest relationship between cost & quality also means that there are restaurants that exceed the expected quality given their cost (good values) or fall short of expected quality given their cost (bad values). The best & worst values (quality vs cost) are pictured.
Regardless of cost, the top rated breakfast places were Thunderbird Cafe, Quirk Cafe, & Croby's Urban Vittles (now closed). And, the bottom rated were Dunkin', Bojangles, and Cavalier Diner.
When considering taste, presentation, menu, ambiance, & service separately, we get some additional insights.

The top row box shows the correlation of each dimension with cost per person. Cost is most strongly associated with presentation & ambiance, and only weakly with taste, menu, & service.
And, among the five ratings, taste & presentation are strongly related, as are ambiance with service and with presentation. The other pairs are only modestly related.

This suggests that personal priorities about dining experience will lead to a different set of top breakfast places.
The Taste Top 10 features Bluegrass (now closed), Charlie and Litza's, Quality Pie, and Oakhurst; the Bottom 10 is anchored by Cav Diner, Taco Bell, Bojangles, and Tip Top.
The Menu Top 10 features Bluegrass, IHOP, Fig, and Thunderbird; the Bottom 10 is anchored by Starbucks, Quality Pie, Bowerbird, and Juice Laundry.

[Our idiosyncratic interests in types of breakfast are most apparent in the menu ratings.]
And, the Presentation, Ambiance, and Service Top and Bottom 10 are pictured. Few places received top or bottom marks on all dimensions.
The variance across dimensions within a single restaurant is also interesting. Quality Pie had the most extreme spread (stdev = 3.0), with among the highest ratings for taste and presentation and the lowest for ambiance and menu. So: great food, but a disaster inside and right on a busy road outside.
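That per-restaurant spread is the standard deviation of one restaurant's five dimension scores. A sketch with an invented profile mimicking the pattern described (the actual scores are in the spreadsheet):

```python
import statistics

# Invented profile mimicking the described pattern: strong taste and
# presentation, weak ambiance and menu (not the actual scores).
profile = {"taste": 9, "presentation": 9, "menu": 4, "ambiance": 3, "service": 7}

# Sample standard deviation across the five dimensions quantifies
# how uneven the restaurant's profile is.
spread = statistics.stdev(profile.values())
print(f"spread = {spread:.1f}")  # -> spread = 2.8
```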
Others with high variation across dimensions were

Taco Bell (2.4): surprisingly good service/ambiance, terrible presentation

Dunkin' (1.9): Good taste, disaster presentation/ambiance

Starbucks (1.9): Good taste, terrible presentation/menu
Finally, there was variation across raters. In blue, all three raters' overall ratings were positively correlated, sharing 36% to 65% of variance -- still lots of idiosyncrasy in our assessments. In red, Joni's quality ratings were the least correlated with cost.
Our individual Top and Bottom 10's are pictured. There was consensus on Thunderbird Cafe being the best breakfast place in the Charlottesville area. And, despite being a donut loving family, we had a terrible (dirty restaurant) breakfast at Dunkin'.
Individual interests also played out in unique ways. If Joni woke up in a bad mood, ratings for that restaurant suffered (sorry Farm Bell Kitchen, Michaels' Diner, and Oakhurst).
Finally, all data are publicly accessible for reanalysis and for creating better visualizations. docs.google.com/spreadsheets/d…

This concludes the first ever breakfast rating open science twitter thread.
A few visual highlights from Haven...

Thread by Brian Nosek (@briannosek@nerdculture.de)
