So I use @cloudinary to handle image optimization and transformations on kentcdodds.com (more details: kentcdodds.com/blog/building-…). However, their bandwidth pricing is outrageously high, and with the amount of traffic I get, it's just way more than I'm willing to pay.
So I decided that since @Cloudflare is dirt cheap for bandwidth, I could put it in front of @cloudinary and save big time. One problem here is that @cloudinary's HTTP cache-control header marks its images as "private" and doesn't provide an s-maxage value (which makes sense, because @cloudinary doesn't really want you to do this). Unless I missed something, this means @Cloudflare won't cache these images with those headers. So I decided to proxy @cloudinary behind my @flydotio-hosted node server under the /img route via express-http-proxy (which works great). With that, I can change the cache-control header to exactly what I need: public, max-age=86400, immutable, s-maxage=31536000
... So I got @Cloudflare set up to cache everything on my proxy (/img/*) and things were running great. My bandwidth usage on @cloudinary dropped considerably (as pictured), and I was thinking I could downgrade my account and save a ton of money.
But then I started noticing some images weren't loading on my blog... Cloudinary was sending me jp2 (JPEG 2000) images, which my MacBook doesn't support; if I navigate straight to the URL, the image just downloads rather than displays. Really weird.
I tried messing with my user agent and everything, but I couldn't figure out why cloudinary was sending the wrong image format for my user agent. That format works fine on an iPad and the like, but apparently not on a MacBook.
Finally I figured it out! Here's what happened:
1. An iPad user visits a post.
2. @cloudinary sends them a good image for their device.
3. @Cloudflare caches that image in the globally shared cache.
4. I visit that blog post and get *their* cached version of the image!
The solution was simple. I just needed to add a `Vary` header to tell @Cloudflare to essentially "namespace" its global cache by the "User-Agent".
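That fix amounts to one extra response header on the proxied images. A sketch as a small header decorator (the function name is illustrative; the header values are the ones from this thread):

```javascript
// Adding `Vary: User-Agent` tells the cache that the response body differs
// per user agent, so an iPad's jp2 is never served to a MacBook from the
// shared cache.
function decorateHeaders(upstreamHeaders) {
  return {
    ...upstreamHeaders,
    'cache-control': 'public, max-age=86400, immutable, s-maxage=31536000',
    vary: 'User-Agent',
  };
}
```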
I purged my @Cloudflare Cache and things are working again. Huzzah! The end.
Pushing this change now. You now have to click the login link from the same device that requested the link, or it won't work.
If you want to log in on your mobile device but have trouble accessing the link, you can log in on desktop and scan the QR code on your profile :)
Like I said, I originally had this in place, but I gave in when people complained about not being able to use the login link on a different device. That was the wrong choice.
Now even if someone gets your login link, they won't be able to log in as you, because the link won't work!
Don't believe the FUD. @Tesla cars/solar/batteries are the best in the business and only getting better. And they're doing more to reduce climate change than anyone else.
What's amazing to me is that FSD wasn't even close to being able to do this just a few months ago. This is an illustration of the leaps and bounds improvement the FSD rewrite is over what's in my car right now. Imagine where this will be in the next few months. #exponential
Every car drives itself any time the driver isn't paying attention. It's totally bonkers to me that we drive around trusting ourselves and other drivers to stay focused on the road.
In 2020, there was a driving-related fatality roughly every 26 seconds. We *need* autonomous driving.
I'm still convinced that my kids (8 and younger) will never need to learn to drive a car. FSD will be *really* good by the end of this year and level-5 autonomy will get regulatory approval in the US in the next 2 years. Maybe sooner.
I'm getting a LOT of github issues on my projects/workshops involving npm v7... I'm still on v6. Did v7 just mess up a bunch of stuff or something?
Just realized that v7 has been out for FIVE months. For some reason I thought it was pretty recent 🙃
I guess I should upgrade and see if I can figure out what's going on with it...
Looks like the best way forward is to make sure package-lock.json has lockfileVersion 1 (i.e., it was generated by npm@6) and that the install script uses `--legacy-peer-deps --no-save`.
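For example, an install script carrying those flags could look like this in package.json (the `setup` script name is my assumption, not from the thread; the flags are the ones mentioned above):

```json
{
  "scripts": {
    "setup": "npm install --legacy-peer-deps --no-save"
  }
}
```

The lockfileVersion 1 part comes from generating and committing package-lock.json with npm@6 rather than npm@7.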
I wouldn't say this if this were the first time this sort of thing has happened. But this isn't the only thing that makes them terrible. It's one of a list of things.
To be clear, a bad company can also help a lot of people, but that doesn't mean it's not a bad company. It's exploitative of both learners and educators, implements dark patterns, and expects its users to vet the content for thievery.
We chatted for a bit... And then he had to put me on hold again. 🤦‍♂️
Luckily, the hold music is new and he said if we get disconnected he'll call me right back. I think this journey (of leaving etrade forever) is almost over. 😌