Fun fact: if you wanted to keep an open-air swimming pool on the surface of Mars, you'd have to keep it at a temperature between 0°C and 0.5°C (about 32°F). Because the atmospheric pressure on Mars is so low, water would boil if its temperature got any higher.
And any lower than that, it would freeze (which would be the default, given that the surrounding atmosphere sits at around -60°C / -76°F).
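A quick back-of-the-envelope check of that number, using the Antoine equation for water's vapor pressure. This is a minimal sketch, not part of the thread: the Antoine coefficients and the ~610 Pa value for Mars' mean surface pressure are assumed reference values.

```python
# Back-of-the-envelope check: at what temperature does water boil
# at Mars' mean surface pressure (~610 Pa)?
# Antoine equation for water, P in mmHg, T in degC (valid roughly 1-100 degC).
import math

A, B, C = 8.07131, 1730.63, 233.426   # Antoine coefficients for water (assumed)
MARS_PRESSURE_PA = 610.0              # assumed mean surface pressure on Mars
PA_PER_MMHG = 133.322

def boiling_point_celsius(pressure_pa):
    """Invert the Antoine equation: temperature at which water's vapor
    pressure equals the given ambient pressure."""
    p_mmhg = pressure_pa / PA_PER_MMHG
    return B / (A - math.log10(p_mmhg)) - C

print(f"Boiling point at {MARS_PRESSURE_PA:.0f} Pa: "
      f"{boiling_point_celsius(MARS_PRESSURE_PA):.2f} degC")
# -> roughly 0.1 degC: barely above freezing, hence the narrow 0-0.5 degC window.
```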
Now, fun medical puzzle: if you took off your spacesuit on the surface of Mars, what would immediately happen to you? Would you...
I'm sure the folks who made The Martian had to figure this one out.

• • •

More from @fchollet

3 Mar
New code walkthrough on keras.io: speech recognition with Transformer. Very readable and concise demonstration of how to build and train a speech recognition model on the LJSpeech dataset.
keras.io/examples/audio…
This example was implemented by @NandanApoorv. Let's take a look at the model architecture.

It starts by defining two embedding layers: a positional embedding for the text tokens, and an embedding for the speech features that uses 1D convolutions with strides for downsampling.
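A minimal Keras sketch of what such a pair of embedding layers can look like. This is illustrative code in the spirit of the example, not a copy of it; the class names and hyperparameters (filter width 11, stride 2, three conv layers) are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers


class TokenEmbedding(layers.Layer):
    """Token embedding plus a learned positional embedding for the text targets."""

    def __init__(self, num_vocab, maxlen, num_hid):
        super().__init__()
        self.emb = layers.Embedding(num_vocab, num_hid)
        self.pos_emb = layers.Embedding(maxlen, num_hid)

    def call(self, x):
        # x: (batch, seq_len) integer token ids
        positions = tf.range(start=0, limit=tf.shape(x)[-1], delta=1)
        return self.emb(x) + self.pos_emb(positions)


class SpeechFeatureEmbedding(layers.Layer):
    """Embeds audio features with strided 1D convolutions that downsample in time."""

    def __init__(self, num_hid):
        super().__init__()
        self.convs = [
            layers.Conv1D(num_hid, 11, strides=2, padding="same", activation="relu")
            for _ in range(3)
        ]

    def call(self, x):
        # x: (batch, time, feature_dim) audio features; each conv halves the time axis
        for conv in self.convs:
            x = conv(x)
        return x
```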
Then it defines a Transformer encoder, which is your usual Transformer block, and a Transformer decoder, which is also your usual Transformer block but with causal attention to prevent later timesteps from influencing the decoding of earlier timesteps.
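And a similarly hedged sketch of the two blocks, using Keras' built-in MultiHeadAttention. The `use_causal_mask` argument (available in recent TF/Keras versions) stands in here for however the original example builds its causal mask; layer names and dimensions are placeholders.

```python
from tensorflow import keras
from tensorflow.keras import layers


class TransformerEncoder(layers.Layer):
    """Standard encoder block: self-attention + feed-forward, each with a residual."""

    def __init__(self, embed_dim, num_heads, ff_dim):
        super().__init__()
        self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.ffn = keras.Sequential(
            [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim)]
        )
        self.norm1 = layers.LayerNormalization(epsilon=1e-6)
        self.norm2 = layers.LayerNormalization(epsilon=1e-6)

    def call(self, x):
        x = self.norm1(x + self.att(x, x))
        return self.norm2(x + self.ffn(x))


class TransformerDecoder(layers.Layer):
    """Decoder block: causal self-attention over the target tokens, then
    cross-attention over the encoder output, then a feed-forward layer."""

    def __init__(self, embed_dim, num_heads, ff_dim):
        super().__init__()
        self.self_att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.cross_att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.ffn = keras.Sequential(
            [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim)]
        )
        self.norm1 = layers.LayerNormalization(epsilon=1e-6)
        self.norm2 = layers.LayerNormalization(epsilon=1e-6)
        self.norm3 = layers.LayerNormalization(epsilon=1e-6)

    def call(self, enc_out, target):
        # The causal mask keeps each position from attending to later timesteps.
        x = self.norm1(target + self.self_att(target, target, use_causal_mask=True))
        x = self.norm2(x + self.cross_att(x, enc_out))
        return self.norm3(x + self.ffn(x))
```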
22 Feb
Seeing lots of takes about nuclear power and its opponents. Yes, nuclear power could be an important element of a climate solution. Yes, the world needs to build more nuclear power plants. But it's absurd to blame environmental activists for the fact that it hasn't happened yet.
The primary reason why countries with large CO2 emissions haven't gone nuclear is economic: the upfront cost of a nuclear plant is a large multiple of that of a coal plant. That's why coal is king in India, for instance. Nothing to do with activists.
Or consider China, the largest emitter of CO2 today. You think environmental activism is why China hasn't built more nuclear plants? Lol. Economically, coal has been "good enough" -- assuming we ignore its health costs and long-term environmental costs.
19 Feb
Interesting analysis by @mhmazur. Human work is driven by clear goals and informed by task-specific context. A model optimized for generating plausible-sounding text, ignoring goals and context, virtually never produces a useful answer (except by random chance).
Reminder: language serves a variety of purposes -- transmitting information, acting on the world to achieve specific goals, serving as a social lubricant, etc. Language cannot be modeled as a statistical distribution independent of these purposes.
This is akin to modeling the appearance of animals as a statistical distribution while ignoring the environment in which they live. You could use such a model to generate plausible-looking animals, but don't expect them to be able to survive in the wild (environmental fitness).
14 Feb
There's a pretty strong relationship between one's self-image as a dispassionate, rational thinker and the degree to which one is susceptible to falling for utterly irrational beliefs presented with some sort of scientific veneer.
The belief in recursive intelligence explosion is a good example: only someone who thinks of themselves as a very-high-IQ hyper-rationalist could be susceptible to buying into it.
If you want to fool a nerd, make long, complex, overly abstract arguments, free from the shackles of reality. Throw equations in there. Use physics analogies. Maybe a few Greek words.
14 Feb
An event that only happens once can have a probability (before it happens): this probability represents the uncertainty present in your model of why that event may happen. It's really a property of your model of reality, not a property of the event itself.
Of course, if the event has never happened before, that implies that your model of how it happens has never been validated in practice. You can model the uncertainty present in what you know you don't know, but you'll miss what you don't know you don't know.
But that doesn't mean your model is worthless. Surely we all have the experience of writing a large piece of code and having it work on the first try.
14 Feb
An under-appreciated feature of our present is how we record almost everything -- far more data than we can analyze. Future historians will be able to reconstruct and understand our time far better than we perceive and understand it right now.
Consider the events of January 6. Future historians will likely know who was there, who said what to whom, who did what, minute by minute. The amount of information you can recover from even a single video is enormous, and we have hundreds of them.
We're recording all of the dots -- our successors will have currently-unimaginable technology to connect them.