"It quickly became apparent that daily, close contact with the data was necessary to understand what states were reporting... At first, we just thought you could pull numbers off dashboards. As it turned out, we actually needed to do deep research.”
Valuable data work lesson 1/
"Building automated tooling facilitated our manual data collection and was crucial to our ability to verify data. Ultimately, though, human scrutiny was needed to develop a deep understanding of the data." 2/
"States frequently changed how, what, and where they reported data... Had we set up a fully automated data capture system in March 2020, it would have failed within days." 3/
Human eyes on, and heavy use of, data pages and dashboards enabled informed feedback, including:
- commenting on website accessibility
- requesting more data
- correcting errors 4/
Great post about using automation as a way to support and supplement manual work, not to replace it.
By @JonathanGilmou3 about @COVID19Tracking, although I think the lessons apply more broadly 5/
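A purely illustrative sketch of the pattern this thread describes: automation that flags changes for human review rather than ingesting numbers blindly. Everything here (the URL, the page name, the function names) is hypothetical, not how @COVID19Tracking actually worked.

```python
import hashlib
import urllib.request

# Hypothetical list of state dashboards to watch.
WATCHED_PAGES = {
    "example-state": "https://health.example.gov/covid-dashboard",
}

def page_fingerprint(url: str) -> str:
    """Fetch a page and hash its raw bytes so any change can be detected."""
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_for_changes(previous: dict) -> dict:
    """Flag changed pages for a human to inspect, rather than silently
    parsing numbers from a layout that may have moved or been redefined."""
    current = {}
    for name, url in WATCHED_PAGES.items():
        current[name] = page_fingerprint(url)
        if previous.get(name) != current[name]:
            print(f"[REVIEW NEEDED] {name}: page changed; verify the numbers by hand")
    return current
```

A fully automated pipeline would keep scraping a moved or redefined field without complaint; a change detector like this fails loudly and hands the judgment call to a person.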
"The framing of so-called US-China 'AI arms race' is increasingly used to justify expansion of large tech corporations’ AI capabilities, while acting as a defense against critical work calling for restraint, reflection, & regulation of AI tech & firms."
It is critical to hold both Chinese and US actors accountable for the rights violations they perpetrate. Instead, debates around which state’s technologically-mediated harms are “worse” tend to erase efforts to resist technology-related rights violations within China.
"We often see arguments that portray any check on American tech companies as allowing or endorsing China to grow. In this framing, AI (and the Big Tech companies capable of producing it) is conflated with the national security interests of the US."
Also, let's not forget ventilation as a very powerful and under-utilized tool.
Setting incentives based on getting R < 1 would motivate us to use all our options to get there, not solely vaccines (which are great, but in many places not enough on their own).
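A back-of-envelope illustration of why layering measures matters; every number here is a made-up assumption for the example, not a real estimate:

```python
# All numbers below are assumptions chosen for illustration only.
R0 = 5.0          # assumed basic reproduction number
vax = 0.6 * 0.8   # 60% coverage x 80% assumed efficacy against transmission
vent = 0.4        # assumed transmission cut from better ventilation
masks = 0.4       # assumed transmission cut from masking

print(round(R0 * (1 - vax), 2))                             # 2.6  -> vaccines alone
print(round(R0 * (1 - vax) * (1 - vent), 2))                # 1.56 -> add ventilation
print(round(R0 * (1 - vax) * (1 - vent) * (1 - masks), 2))  # 0.94 -> add masks: R < 1
```

Under these toy assumptions, no single measure reaches R < 1, but stacking them does.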
Stories like this are sadly common. When MDs discourage everyone else (even STEM PhDs) from reading medical literature, this really does NOT help us create a society with higher scientific & medical literacy.
Problems with "appeal to authority" arguments:
- many ppl are more motivated when they know the underlying reasoning/mechanism/data
- doesn't build underlying scientific knowledge or critical thinking skills
- science is messy & always evolving
- discourages interdisciplinary work
I value interdisciplinary work, which entails encouraging others to learn about your field & engage with it, and recognizing that their other perspectives/skills/domains can be relevant & shine new light on your area.
It bothers me when doctors, journalists, & others lump together:
- disabled & chronically ill patients who read papers/research their own medical issues AND
- able-bodied anti-vaxxers/conspiracy theorists who use "do your own research" to justify anti-science 1/ #NEISvoid
Patients w/ chronic illnesses & disabilities have 3 types of expertise:
- physical experience of their illness/disability
- how to navigate the medical system, which is traumatizing
- in many cases: deep scientific knowledge, from keeping up on relevant research 2/
Able-bodied anti-vaxxers/conspiracy theorists typically don't have any of the above 3 types of expertise, which is why I think we need to draw more careful distinctions when talking about "patients doing their own research" in a derisive way. 3/
It is funny when the "only doctors are allowed to speak about public health" crew suggests I wouldn't want outsiders weighing in on AI, because one of my core values/driving motivations is that I want people from all backgrounds involved with AI & AI ethics 1/
Most problems related to AI/automated systems are inherently interdisciplinary, benefiting from having experts from many domains involved, spanning STEM, social sciences, and the humanities.
And those who are most impacted by a system should be centered most. 2/
Too often, there has been unnecessary gatekeeping & credentialism in AI, with narrow definitions of who has the "right" background. I talk about this more here, but our vision for fast.ai was a community for ppl w/ the "wrong" backgrounds. 3/
We need more nuanced ways to talk about medical & public health disagreements, without the simplistic black-and-white framing that you either trust ALL doctors in ALL matters OR you must be anti-science. 1/
"Science is less the parade of decisive blockbuster discoveries that the press often portrays, and more a slow, erratic stumble toward ever less uncertainty." @zeynep 2/