I've had a bit of a breakthrough with this over the past couple of years: maintaining detailed progress notes in a GitHub issues comment thread has dropped my "getting back on track" time down to a fraction of what it was
The reason it takes 25 minutes to spin back up again is that you're holding a ton of stuff exclusively in your own memory - so write it down!
Something I've realized is that 90% of software engineering is research, not typing code - figuring out what the code needs to do, which APIs to use, how best to test it, etc.
So all of that research goes in issue comments. Here's my best recent example: github.com/simonw/s3-cred…
Some reasons I like GitHub issues for this:
- Can be public or private
- Everything is timestamped
- Easy to drag-and-drop in screenshots
- Good syntax highlighted code snippets
- Link to a section of code in the repo and it embeds the code directly
- Easy to cross-reference
... and GitHub Issues has a great API, which means I can extract all of my issue comments into my own searchable database! Demo of that here: github-to-sqlite.dogsheep.net/github/issue_c…
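For anyone curious what that extraction looks like, here's a minimal sketch of pulling a repo's issue comments out of the GitHub REST API with Python and requests - the repo name is just an example, and the real pipeline behind that demo is github-to-sqlite:

```python
import requests

def issue_comments(repo, token=None):
    """Yield every issue comment in a repo via the GitHub REST API."""
    headers = {"Authorization": f"token {token}"} if token else {}
    url = f"https://api.github.com/repos/{repo}/issues/comments?per_page=100"
    while url:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        yield from response.json()
        # GitHub paginates results via the Link header
        url = response.links.get("next", {}).get("url")

# Example repo name - swap in your own
for comment in issue_comments("simonw/datasette"):
    print(comment["created_at"], comment["html_url"])
```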
If I write a couple of KBs of data to the Bitcoin or Ethereum blockchains, that data still gets copied to every single active node, right?
Any estimates as to how much total disk space those 2KB take up worldwide?
Asking because evidently the idea of "storing data on the blockchain" is a frequent point of confusion - I wonder if explaining how many copies that entails would help clarify things at all
In January 2021 there were an estimated 83,000 active full nodes, so presumably any data you write to the blockchain gets duplicated 83,000 times? coindesk.com/tech/2021/01/2…
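Back-of-envelope math, assuming every one of those full nodes keeps a complete copy of the chain:

```python
# 2 KB written to the chain, replicated to ~83,000 full nodes (Jan 2021 estimate)
data_kb = 2
nodes = 83_000
total_mb = data_kb * nodes / 1024
print(f"~{total_mb:.0f} MB of aggregate disk worldwide")  # ~162 MB
```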
One of the biggest productivity tricks I'm using in the Datasette ecosystem is continuous deployment of live demos - every time I push to Datasette (+ a few other repos) it deploys a demo of the latest main branch - it's fantastic both for catching bugs and for linking to from issue comments
I've been working on the datasette-graphql plugin today and the live demo at datasette-graphql-demo.datasette.io/graphql helped me catch a bug where JS files were loading in the wrong order, breaking things - a problem that didn't occur on my laptop
@datasetteproj @EscolaDeDados To save attendees from having to get a working Python environment set up on their laptops, I instead encouraged them to use a free @gitpod account (gitpod.io) - I demonstrated each exercise in GitPod too
Cloud-based development environments are SO GOOD for tutorials
(I had planned to use GitHub Codespaces for this, but then realized it's not yet available for free to users outside of the beta program)
Here's a fun challenge: given an array of datetimes, what's the best way to plot those on a frequency graph over time?
They might all be on the same day, or they might be spread out over several years - so the challenge is automatically picking the most interesting bucket size
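One naive approach (just a sketch, with completely arbitrary thresholds): measure the overall span of the data, map that to a bucket size, then count occurrences per bucket:

```python
from collections import Counter
from datetime import timedelta

def pick_bucket(datetimes):
    """Map the overall span of the data to a strftime bucket format."""
    span = max(datetimes) - min(datetimes)
    if span <= timedelta(days=2):
        return "hour", "%Y-%m-%d %H:00"
    if span <= timedelta(days=90):
        return "day", "%Y-%m-%d"
    if span <= timedelta(days=365 * 2):
        return "month", "%Y-%m"
    return "year", "%Y"

def frequencies(datetimes):
    label, fmt = pick_bucket(datetimes)
    counts = Counter(dt.strftime(fmt) for dt in datetimes)
    return label, sorted(counts.items())
```

Feed the (bucket, count) pairs to whatever charting library you like - the interesting part is really just that threshold table.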
Is there a reliable way to tell search engine crawlers that a site hasn't been updated in X days so they don't need to re-crawl it?
Do they tend to believe the <lastmod> element in sitemap.xml? And can I set that to apply to the whole site, not just an individual page?
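For context, the sitemaps.org protocol only defines <lastmod> per entry - per <url> in a sitemap, or per <sitemap> in a sitemap index - so a "whole site" value would presumably mean stamping every entry with the same date, something like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/some-page</loc>
    <lastmod>2021-06-01</lastmod>
  </url>
</urlset>
```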
Asking because tailing logs shows a vast amount of crawler traffic to Datasette instances that haven't seen any data changes in over a year - I may have to use robots.txt to block crawlers from them to save on costs, but I'd rather tell them "no point in crawling, nothing has changed"
Datasette currently has a plugin for configuring robots.txt, but I'm beginning to think it should be part of core and crawlers should be blocked by default - having people explicitly opt in to having their sites crawled and indexed feels a lot safer datasette.io/plugins/datase…
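("Blocked by default" would amount to Datasette serving the standard deny-all robots.txt unless you opt out - the classic two lines:)

```
User-agent: *
Disallow: /
```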