Some thoughts on why this news - the potential for next gen watches (both Samsung Galaxy and Apple) to provide blood glucose readings - could be game-changing (not so much for #T1D folks but for everyone else). macrumors.com/2021/03/05/app…
This tech would almost certainly not be an improvement over existing continuous glucose monitors like what I use, but (I think?) it's rare for people with Type 2 to have insurance coverage for CGMs, especially if you're not on insulin and don't have to worry about lows.
The beauty of continuous monitoring over finger sticks is that you can get DATA. Unless you waste a lot of (expensive!) test strips to try to experiment, you're not going to know e.g. exactly when and how much your blood sugar spikes after meals.
So the accessibility of this tech (and yes, a $400 watch is accessible compared to spending that much PER MONTH for CGM supplies w/o insurance) could really help people without insurance coverage for CGMs. But also...
When I first recognized some symptoms I went to the drugstore and bought a blood glucose monitor over the counter. I'm guessing that was VERY UNUSUAL. A lot of people get so sick they end up in the hospital - what if your watch could alert anyone to high blood sugar instead?
Which, I mean... sigh... like MANY things about healthcare and health tech, this is also yet another avenue towards better health outcomes only for people who can afford tech like this. :(
As you know, I am a fan of @tiktok_us these days, but I need to put them on blast for a bad design choice. Folks interested in content moderation/platform safety, buckle up. This is a story about bad people exploiting a loophole for harassment. We can learn from this. [Thread 🧵]
TikTok has a "block" feature that works similarly to Twitter. If you block someone, you can't see them and they can't see you. This includes comments.
So now we have A (person being harassed) and B (awful person who thinks it's fun to e.g. leave death threats in comments)...
B has figured out that they can comment on A's post and then immediately block A, which then means that A can't see that comment - and in fact doesn't even know it's there since it doesn't show up in their notifications.
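The loophole above can be sketched in a few lines. This is a hypothetical illustration of the symmetric-block rule described in the thread, not TikTok's actual code — the names (`User`, `visible_comments`, etc.) are invented for the sketch:

```python
# Hypothetical sketch of a symmetric "block" rule: a comment is hidden
# whenever EITHER party has blocked the other. Not TikTok's real code.

class User:
    def __init__(self, name):
        self.name = name
        self.blocked = set()  # names of users this account has blocked

    def blocks(self, other):
        return other.name in self.blocked

def visible_comments(viewer, comments):
    """Hide a comment if either side of the viewer/author pair blocked the other."""
    return [
        (author, text) for author, text in comments
        if not author.blocks(viewer) and not viewer.blocks(author)
    ]

a = User("A")  # person being harassed (the post author)
b = User("B")  # harasser
comments = [(b, "threatening comment")]

# B comments, then immediately blocks A:
b.blocked.add("A")

# A (the target) sees nothing -- and gets no notification either --
# while any bystander still sees the threat sitting on A's post.
print(len(visible_comments(a, comments)))
print(len(visible_comments(User("C"), comments)))
```

The fix is equally sketchable: the post author arguably needs an asymmetric view (or at least a notification) so that blocking can't be weaponized to hide abuse from its target.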
Ever thought about how messed up it is from a harm vs benefit perspective that copyright infringement is more heavily moderated/enforced than, say, hate speech and harassment? I was reminded of this by re-listening to this @ThisAmerLife episode. [Thread🧵] thisamericanlife.org/670/beware-the…
The second act is the story of Lenny Pozner, the father of a Sandy Hook victim, who was harassed, threatened, and stalked by Alex Jones fueled conspiracy theorists accusing him of being a "crisis actor." And one tactic was making cruel memes out of photographs of his son.
And after trying to report content and get things taken down for lies and harassment, he finally realized that his best course of action was reporting copyright violations since he owned the photographs which were e.g. used in a YouTube video.
Not that I was *surprised* to see this study about predicting "political orientation," but since I've been talking about the "gaydar" (sigh) algorithm from the same researcher for a while now, here's some reflection. nature.com/articles/s4159…
Given the criticism of the previous paper (which, if you're not familiar, is here: psyarxiv.com/hv28a/ ), I was genuinely expecting to see an ethical considerations section by the end of this paper (since that criticism pretty much wrote one for them!). There is not one.
There is a lengthy "author notes" document linked to from the article that includes FAQs (like "physiognomy????") and twice warns to not "shoot the messenger" so I guess that's the ethics statement.
Hm. I wonder what happens when a community moves off a platform because accounts are getting banned for reasons that conflict with the values of that community?
Or: I'm not saying Trump supporters have a lot in common with fanfiction writers, but remember LiveJournal? [Thread]
In 2007, LiveJournal suspended a bunch of accounts in an attempt to remove certain kinds of objectionable content, and this ended up sweeping up a lot of fanfiction and fan art accounts/communities. People were Not Happy. fanlore.org/wiki/Strikethr…
This policy change by LiveJournal was directly (if of course only partially) responsible for the conceptualization and creation of Archive of Our Own. And the rallying cry was: own the servers!!! cmci.colorado.edu/~cafi5706/CHI2…
A thread on filter bubbles, confirmation bias, design against misinformation, and social media content policy. Or: how can people really think that the U.S. election was rigged, and is it social media's fault? 🧵
If you are reading this tweet, it is possible that you literally don't know a single person who voted for Donald Trump. Meanwhile, I know a couple of people who likely literally don't know a single person who DIDN'T vote for Donald Trump, besides me.
It's not like this is new - 30 years ago the same might have been true just because all your friends lived in your local community - but the internet makes us FEEL like we KNOW so many more people, and that we have a broader view of the world.
In a few hours (evening for me, morning in India!) I'm giving a keynote for the COMPUTE conference on integrating ethics into computer science education. Including some links in this thread to papers and other things I will reference in that talk! Perfect for #CSEdWeek2020. :)
First: Why integrate ethics into technical CS classes? It's one way to change the culture towards recognizing that ethics is an integral part of the practice of computing, and not a specialization. howwegettonext.com/what-our-tech-…