scales, which formats numbers for presentation, is such an undersung package. I use it all the time. If you make anything in R that's intended for an audience to see (graphs, tables, RMarkdown/Quarto documents), it's perfect.
Neat scales functions, among others:
number() and comma(): format regular ol' numbers
comma(100000) becomes 100,000
number(1.324, accuracy = .1) becomes 1.3 (in a way that's much more reliable for this purpose than round())
number(1000, scale = 1/1000, suffix = 'k') becomes 1k
percent(.123) = 12.3%
dollar(123) = $123
alpha('red', .1) = "#FF00001A" (transparent color)
other handy color functions like grayscale palette grey_pal()
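Consolidated as a runnable snippet (I've pinned accuracy = on percent() here just to guarantee the decimal place):

```r
library(scales)

comma(100000)                               # "100,000"
number(1.324, accuracy = .1)                # "1.3"
number(1000, scale = 1/1000, suffix = 'k')  # "1k"
percent(.123, accuracy = .1)                # "12.3%"
dollar(123)                                 # "$123"
alpha('red', .1)                            # "#FF00001A" (10% opacity red)
```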
Plus the label_ functions for producing labels, like label_percent()/label_number(), which are just versions of the above percent() and number(), etc. Since they return functions rather than values, they can easily slot into ggplot2::scale_something_continuous(labels = ). But ALSO:
label_ordinal()(1) = "1st"
label_parse()('10^2') = expression(10^2) (for math in labels / titles)
label_wrap(10)('big ol long string') = 'big ol\nlong\nstring'
scales::label_date('%d/%m/%y')(as.Date('2020-01-02')) = '02/01/20'
(among many others)
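Because the label_* functions return formatters rather than formatted strings, they get called twice: once to build the formatter, once on the values. A quick sketch:

```r
library(scales)

# Build the formatter once, then apply it to values
ord <- label_ordinal()
ord(1:3)                                       # "1st" "2nd" "3rd"

# Or build and apply in one go
label_wrap(10)('big ol long string')           # "big ol\nlong\nstring"
label_date('%d/%m/%y')(as.Date('2020-01-02'))  # "02/01/20"
```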
How I use all of these:
In Rmarkdown/Quarto:
The average was `r number(mean(dat$X), accuracy = .1)`, which made it the `r label_ordinal()(which(ranklist == 'X'))`-best option.
In making tables:
outputtable$Means = dollar(outputtable$meanincome)
outputtable
and of course in ggplot...
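For instance (a minimal sketch with made-up data; the data frame and column names are hypothetical), a label_* function drops straight into a scale's labels = argument:

```r
library(ggplot2)
library(scales)

# Hypothetical data: a share of something by year
df <- data.frame(year  = 2015:2020,
                 share = c(.10, .12, .15, .14, .18, .21))

p <- ggplot(df, aes(x = year, y = share)) +
  geom_line() +
  # label_percent() returns a function, which is exactly what labels = wants
  scale_y_continuous(labels = label_percent(accuracy = .1))
p
```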
Exam performance is super interesting, but I think it's misinterpreted. Exams are almost pure signal. They're not the actual thing you want to do; they're designed to be something that a *human* could only do if they could actually do the real thing.
i.e. on a good exam, cost of performing well on the exam should drop sharply as your skill on the actual important thing improves.
Example: "what are the proper safety checks to run before letting a plane fly?" is easier for a good plane mechanic to answer than for a bad one. But for an LLM, that cost function should look very different than it does for a human. Exam questions that have you basically recall and repeat back what standard course material said are a clear case of this.
If you teach students to work with data, you're doing them a great disservice if you just teach them how to run models/analyses and not how to clean and manipulate data. If you haven't tested them directly on this, you'll be surprised how unintuitive this is to new users!
In my data viz class, the week 3 assignment has them:
1. Read some data docs
2. Recreate some variables based on the docs ("count the number of peers of each gender in each class, not counting someone as their own peer")
3. Make some tables of the form "average X by group"
4. At a few different places, clean the data up or check it to see if it makes sense
This is for many students an extremely difficult assignment, and the one for which I always get barraged with extension requests and told they spent hours on it.
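For a sense of what trips people up, here's a sketch of the peer-counting task above on hypothetical data (the column names and the dplyr approach are my assumptions, not the actual assignment code). The subtlety is subtracting yourself out of your own group's count:

```r
library(dplyr)

# Hypothetical roster: one row per student
roster <- data.frame(
  class_id   = c(1, 1, 1, 2, 2),
  student_id = 1:5,
  gender     = c('F', 'M', 'F', 'F', 'M')
)

peers <- roster %>%
  group_by(class_id) %>%
  mutate(
    # sum() counts everyone in the class; subtracting the logical
    # (gender == ...) removes you from your own peer count
    n_female_peers = sum(gender == 'F') - (gender == 'F'),
    n_male_peers   = sum(gender == 'M') - (gender == 'M')
  ) %>%
  ungroup()
```

So in class 1 (two F, one M), the male student has two female peers and zero male peers.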
I have recently been following two sources of 90s music reevaluation: (1) The Number Ones and (2) the Woodstock 99 documentary. The things these have, respectively, most noticeably revised my opinion of upward:
1. Mariah Carey
2. The Limp Bizkit song "Break Stuff"
i think i'm previously on record on twitter as rejecting any limp bizkit reevaluation, and i largely stand by that, but break stuff is an exception in the catalogue and dang did you see that crowd
the song is the stupidest and most direct thing in the world but that's absolutely to its advantage as the hallmark greeting card of angry pop music, which is not a slight
Announcement and invitation! A new project that aims to improve the quality of research in applied microeconomics by examining researcher choices. I am hoping to recruit up to *200 researchers* of all kinds (with pay) and hope you will join me! (Thread) nickch-k.github.io/ManyEconomists/
This project, with Claus Portner, is a follow-up to this paper onlinelibrary.wiley.com/doi/full/10.11…, where multiple researchers each replicated the same studies (a “many-analyst study”). Analytic and data-cleaning choices were different, and this really impacted results.
In this new project, a larger number of researchers will independently complete the same research task. Then, there will be several rounds of revision following peer review, or a change in the research task that standardizes some of the choices made.
I've updated my Data Wrangling in the Tidyverse course material and am uploading a 17-part video series. This assumes little previous R knowledge (although some). Covers tidying, manipulating variables, cleaning factors, dates, and strings. Enjoy!
Episode 2: What is data wrangling actually about? What are we trying to do with it? Forget specific languages or codes. How should we be *thinking* about data wrangling and how to do it right?
Episode 3: What is tidy data and why is it important to try to make our data tidy? How can we distinguish between key/identifying variables and value variables?