Quick tweetorial: using KerasTuner to find good model configs.
Define your model as usual -- but put your code in a function that takes a `hp` (hyperparameters) argument.
Then, instead of using values like "embedding_dim = 512", use ranges: `hp.Int(...)`
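A minimal sketch of what that looks like (the architecture and search ranges here are made up for illustration, not from the Keras docs):

```python
import keras
from keras import layers

def build_model(hp):
    # A hypothetical model: the tuner will call this function with
    # different `hp` values to sample different configurations.
    model = keras.Sequential([
        layers.Embedding(
            input_dim=10000,
            # Instead of a fixed `embedding_dim = 512`, search a range:
            output_dim=hp.Int("embedding_dim", min_value=64, max_value=512, step=64),
        ),
        layers.GlobalAveragePooling1D(),
        layers.Dense(
            hp.Int("hidden_units", min_value=32, max_value=256, step=32),
            activation="relu",
        ),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"]
    )
    return model
```

The function just returns a compiled model; the tuner takes care of supplying `hp` values and tracking results.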
Then, instantiate a tuner and pass it your model-building function. It will need an `objective` to optimize -- this can be the name of any metric found in the model logs. For built-in Keras metrics, the tuner will automatically pick whether to maximize or minimize the metric.
`max_trials` is the maximum number of model configurations to try. The ominous-sounding `executions_per_trial` is the number of model training runs to average for each model config: a higher value reduces variance in the results.
Fun fact: if you wanted to keep an open-air swimming pool on the surface of Mars, you'd have to keep it heated to a temperature between 0°C and 0.5°C (about 32°F). Because the atmospheric pressure on Mars is so low, water would boil if its temperature got any higher.
And any lower than that, it would freeze (which would be the default, given that the surrounding atmosphere would be at around -60°C / -76°F)
Now, fun medical puzzle: if you took off your spacesuit on the surface of Mars, what would immediately happen to you? Would you...
New code walkthrough on keras.io: speech recognition with Transformer. Very readable and concise demonstration of how to build and train a speech recognition model on the LJSpeech dataset. keras.io/examples/audio…
This example was implemented by @NandanApoorv. Let's take a look at the model architecture.
It starts by defining two embedding layers: a positional embedding for text tokens, and an embedding for speech features that uses 1D convolutions with strides for downsampling.
Then it defines a Transformer encoder, which is your usual Transformer block, as well as a Transformer decoder, which is also your usual Transformer block, but with causal attention to prevent later timesteps from influencing the decoding of earlier timesteps.
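The "causal" part is just a lower-triangular attention mask. A sketch of the idea (not the exact code from the example):

```python
import numpy as np

def causal_attention_mask(n_dest, n_src):
    # mask[i, j] == 1 iff decoding timestep i may attend to timestep j,
    # i.e. j <= i -- no peeking at future timesteps.
    i = np.arange(n_dest)[:, None]
    j = np.arange(n_src)[None, :]
    return (j <= i).astype("int32")

mask = causal_attention_mask(4, 4)
# Lower-triangular: row i can only see positions 0..i.
```

In the decoder, this mask is passed to the self-attention layer so each output token is predicted from previous tokens only.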
Seeing lots of takes about nuclear power and its opponents. Yes, nuclear power could be an important element of a climate solution. Yes, the world needs to build more nuclear power plants. But it's absurd to blame environmental activists for the fact that it hasn't happened yet.
The primary reason why countries with large CO2 emissions haven't gone nuclear is economic: the upfront cost of a nuclear plant is a large multiple of that of a coal plant. That's why coal is king in India, for instance. Nothing to do with activists.
Or consider China, the largest emitter of CO2 today. You think environmental activism is why China hasn't built more nuclear plants? Lol. Economically, coal has been "good enough" -- assuming we ignore its health costs and long-term environmental costs.
Interesting analysis by @mhmazur. Human work is driven by clear goals and is informed by task-specific context. A model optimized for generating plausible-sounding text, ignoring goals and context, virtually never produces a useful answer (except by random chance).
Reminder: language serves a variety of purposes -- transmitting information, acting on the world to achieve specific goals, serving as a social lubricant, etc. Language cannot be modeled as a statistical distribution independent of these purposes.
This is akin to modeling the appearance of animals as a statistical distribution while ignoring the environment in which they live. You could use such a model to generate plausible-looking animals, but don't expect them to be able to survive in the wild (environmental fitness)
There's a pretty strong relationship between one's self-image as a dispassionate rational thinker and the degree to which one is susceptible to falling for utterly irrational beliefs presented with some sort of scientific veneer
The belief in recursive intelligence explosion is a good example: only someone who thinks of themselves as a very-high-IQ hyper-rationalist could be susceptible to buying into it
If you want to fool a nerd, make long, complex, overly abstract arguments, free from the shackles of reality. Throw equations in there. Use physics analogies. Maybe a few Greek words
An event that only happens once can have a probability (before it happens): this probability represents the uncertainty present in your model of why that event may happen. It's really a property of your model of reality, not a property of the event itself.
Of course, if the event has never happened before, that implies that your model of how it happens has never been validated in practice. You can model the uncertainty present in what you know you don't know, but you'll miss what you don't know you don't know.
But that doesn't mean your model is worthless. Surely we all have the experience of writing a large piece of code and having it work on the first try.