Shreya Shankar
Dec 21, 2019 · 13 tweets · 3 min read
cw: mental illness, hallucinations

today, i painfully resigned from a machine learning educational program for high schoolers that i cofounded with 3 friends (CS students).

i'm sharing my story b/c it's okay to quit things that are bad for mental health: (1/13)
the company started in a coffee shop in Palo Alto in summer 2018. "wouldn't it be cool if we could teach ML to high school kids?" one of us mentioned. "it's fairly easy to learn now, but online curricula can be too complicated."

we decided to make it happen. (2/13)
at the time, i was seriously dating someone. i didn't know it then, but the relationship was incredibly toxic. i had horrible nightmares and cried myself to sleep most nights. i started hallucinating events involving my s/o, family members, and people close to me. (3/13)
of course, i love ML -- but personally, i also helped start the org because i believed that if i spent more time with my closest friends and s/o (also studying ML), the mental problems would go away. i never told anyone about my mental problems. (4/13)
we were incredibly successful in our first year! we worked with over 100 students in the Bay Area and were cash flow positive. but the hallucinations and sleep problems got worse as the org grew. i began to resent everyone close to me for not being able to help. (5/13)
i wanted to leave in may 2019, but i never knew how. i was hallucinating events that caused me to resent my cofounders and closest friends. i saw psychiatrist after psychiatrist and was prescribed > 5 different medications in the past year. i felt so alone. (6/13)
whenever i did work, i couldn't sleep. i knew it wasn't my cofounders' fault, but i didn't know who to blame. i tried blaming my ex, who broke up w/ me when i started seeing a psychiatrist in Jan 2019, but it didn't help that he studied ML and reminded me of the org. (7/13)
to be honest, i couldn't blame anyone. the org reminded me of struggling through mental illness alone. but sometimes shit happens with no one to blame. once i scaled back my work for A4 in Sep 2019, things slowly got better. by Nov 2019, i could sleep for 8 hours at night. (8/13)
i thought i'd contribute to the org in a greater capacity in 2020, but i felt too much stress and dread thinking about the person i used to be when i started the org with my friends. when i think about it, i feel my brain wandering to places i don't want it to go. (9/13)
the reason i hadn't left before now was that the people / work weren't bad; it was just my fault i couldn't deal with it. i wanted to deal with it. but i realized i needed to leave. not because the work itself was bad, but because it reminded me of toxic things. (10/13)
so today i sent that resignation email, and i received nothing but love and support from my cofounders. it hurts to think i left them when they collectively did nothing wrong. sometimes (i mean a lot of times) i feel like it is my fault i can't get over my past. (11/13)
healing from my experiences of the past year will take a long time. i've learned that no job is worth prolonging my symptoms of mental illness. it hurts to know i gave up, but i find comfort in the fact that today i chose to make room for new, positive experiences. (12/13)
if you or someone you know is leaving a job for mental health reasons, know that it is not always because of the environment or people. sometimes it is just hard to deal with mental health issues and a job. thank you for listening 💕 (13/13)


More from @sh_reya

Oct 17, 2023
recently been studying prompt engineering through a human-centered (developer-centered) lens. here are some fun tips i’ve learned that don’t involve acronyms or complex words
if you don’t specify exactly the structure you want the response to take on, down to the headers or parentheses or valid attributes, the response structure may vary between LLM calls, which makes it unsuitable for production
play around with the simplest prompt you can think of & run it a bunch of times on different inputs to build intuition for how LLMs “behave” for your task. then start adding instructions to your prompt in the form of rules, e.g., “do not do X”
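not from the thread, but here is a minimal sketch of what those two tips look like together: pin the structure down in the prompt, then run it over several inputs and count how often the structure actually holds. call_llm, the prompt, and the reviews are all hypothetical placeholders, so swap in your own client and data:

```python
import json

# hypothetical stand-in for whatever LLM client you use; assume it takes
# a prompt string and returns the raw completion text. the canned return
# value just keeps this sketch runnable end-to-end.
def call_llm(prompt: str) -> str:
    return '{"product": "AcmePhone 12", "price": 699}'

# the prompt pins the structure down exactly: keys, types, no extra prose
PROMPT = (
    "Extract the product and price from the review below. Respond with "
    'ONLY a JSON object with exactly two keys: "product" (string) and '
    '"price" (number). No prose, no markdown.\n\nReview: {review}'
)

# made-up inputs; in practice, sample a diverse handful from real data
reviews = [
    "Loved the AcmePhone 12, well worth the $699.",
    "The $25 mug cracked in a week. Avoid.",
    "Great laptop stand for thirty bucks.",
]

failures = 0
for review in reviews:
    raw = call_llm(PROMPT.format(review=review))
    try:
        parsed = json.loads(raw)
        assert set(parsed) == {"product", "price"}
    except (json.JSONDecodeError, AssertionError):
        failures += 1
        print(f"structure drifted on {review!r}: got {raw!r}")

print(f"{failures}/{len(reviews)} responses broke the requested structure")
```

the failure count is the intuition-building signal: if the structure drifts on a handful of toy inputs, it will drift more in production.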
Sep 12, 2023
thinking about how, in the last year, > 5 ML engineers have told me, unprompted, that they want to do less ML & more software engineering. not because it’s more lucrative to build ML platforms & devtools, but because models can be too unpredictable & make for a stressful job
imo the biggest disconnect between ML-related research & production is that researchers aren’t aware of the human-centric efforts required to sustain ML performance. It feels great to prototype a good model, but on-calls battling unexpected failures chip away at this success
imagine that your career & promos are not about demonstrating good performance for a fixed dataset, but about how quickly on average you are able to respond to every issue some stakeholder has with some prediction. it is just not a sustainable career IMO
Mar 29, 2023
Been working on LLMs in production lately. Here is an initial thoughtdump on LLMOps trends I’ve observed, compared/contrasted with their MLOps counterparts (no, this thread was not written by chat gpt)
1) Experimentation is tangibly more expensive (and slower) in LLMOps. These APIs are not cheap, nor is it really feasible to experiment w/ smaller/cheaper models and expect behaviors to stay consistent when calling bigger models
1.5) we know from MLOps research that high experimentation velocity is crucial for putting and keeping pipelines in prod. A fast way to iterate is to collect a few examples, load up a notebook, and try out a heck of a lot of different prompts, which calls for prompt versioning & management systems
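Not from the thread, but a minimal sketch of what the lightest-weight version of that could look like: content-hash each prompt template so every output can be traced back to the exact prompt text that produced it. All names here are hypothetical, not a real tool.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

# hypothetical prompt-versioning sketch: derive a stable id from the
# template text itself, so logged outputs stay tied to the exact prompt
@dataclass
class PromptVersion:
    template: str
    prompt_id: str = field(init=False)
    created_at: str = field(init=False)

    def __post_init__(self):
        self.prompt_id = hashlib.sha256(self.template.encode()).hexdigest()[:12]
        self.created_at = datetime.now(timezone.utc).isoformat()

registry: dict[str, PromptVersion] = {}

def register(template: str) -> PromptVersion:
    v = PromptVersion(template)
    registry[v.prompt_id] = v
    return v

v1 = register("Summarize the following support ticket in one sentence:\n{ticket}")
v2 = register("Summarize the ticket below in one sentence. Be terse:\n{ticket}")

# log the prompt_id next to every call so experiments stay comparable
print(json.dumps({"prompt_id": v2.prompt_id, "input": "ticket-123"}))
```

Hashing the template (rather than hand-numbering versions) means two notebooks trying "the same" prompt with a one-word difference get different ids, which is exactly the drift you want to catch.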
Dec 23, 2022
IMO the chatgpt discourse exposed just about how many people believe writing and communication is only about adhering to some sentence/paragraph structure
I’ve been nervous for some time now, not because I think AI is going to automate away writing-heavy jobs, but because the act of writing has been increasingly commoditized, to the point where I’m not sure people know how to tell good writing from bad writing. Useful from useless.
In my field, sometimes it feels like blog posts (that regurgitate useless commentary or make baseless forecasts about the future) are more celebrated/impactful than tooling and thought. Often such articles are written in the vein of PR or branding
Dec 7, 2022
I want to talk about my data validation for ML journey, and where I’m at now. I have been thinking about this for 6-ish years. It starts with me as an intern at FB. The task was to classify FB profiles by type (e.g., politician, celebrity). I collected training data,
Split it into train/val/test, iterated on the feature set a bit, and eventually got a good test accuracy. Then I “productionized” it, i.e., put it in a dataswarm pipeline (precursor to Airflow afaik). Then I went back to school before the pipeline ran more than once.
Midway through my intro DB course, I realized that all the pipeline was doing was generating new training data and model versions every week. No new labels. So the pipeline made no sense. But whatever, I got into ML research and figured I’d probably never do ML in industry again.
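A hedged aside, not from the thread: the check that would have caught this is to gate retraining on the arrival of genuinely new labels rather than on the calendar. Everything below (fetch_labels, the threshold) is a hypothetical sketch, not a real pipeline API.

```python
from datetime import datetime, timedelta, timezone

# hypothetical accessor for your label store; each record carries the
# timestamp at which a human actually produced the label
def fetch_labels() -> list[dict]:
    return [{"labeled_at": datetime(2016, 8, 1, tzinfo=timezone.utc)}]

def should_retrain(last_run: datetime, min_new_labels: int = 100) -> bool:
    # count labels created since the last training run; retraining on the
    # same supervision just mints model versions that learn nothing new
    new = [r for r in fetch_labels() if r["labeled_at"] > last_run]
    if len(new) < min_new_labels:
        print(f"only {len(new)} new labels since {last_run:%Y-%m-%d}; "
              "skipping retrain")
        return False
    return True

# in the scheduled job, before training:
last_run = datetime.now(timezone.utc) - timedelta(days=7)
if should_retrain(last_run):
    ...  # train_and_publish_new_version()
```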
Sep 20, 2022
Our understanding of MLOps is limited to a fragmented landscape of thought pieces, startup landing pages, & press releases. So we did an interview study of ML engineers to understand common practices & challenges across organizations & applications: arxiv.org/abs/2209.09125
The paper is a must-read for anyone trying to do ML in production. Want us to give a talk to your group/org? Email shreyashankar@berkeley.edu. You can read the paper for the war stories & insights, so I’ll do a “behind the scenes” & “fave quotes” in this thread instead.
Behind-the-scenes: another school invited my advisor to contribute to a repo of MLOps resources. We contributed what we could, but felt oddly disappointed by how little evidence we could point to for support.
