Feynman once said, "Religion is a culture of faith; science is a culture of doubt." But then I must ask: engineering is a culture of what?
GPT-3 has a very interesting answer. Don't read past this tweet, so that its answers don't bias your own.
Religion is a culture of faith; science is a culture of doubt; engineering is a culture of
1 - Procedure
2 - Proof
3 - Confirmation
4 - Verification
Actually, this question is a good use case for GPT-3. The answers above aren't satisfactory because the prompt wasn't set up with good priming.
Answering this question requires several sub-questions. It's not unlike an analogy test, which takes several intermediate questions to answer correctly. It's like a constraint satisfaction problem.
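To make "good priming" concrete, here is a minimal sketch of building a few-shot prompt. The helper name and the example Q/A pairs are my own illustration (not from the thread), and the actual GPT-3 API call is omitted; the point is only that the worked pairs establish the pattern before the final question.

```python
def build_primed_prompt(question, examples):
    """Prefix a question with worked Q/A pairs ("priming") so the model
    can infer the expected pattern before completing the final answer."""
    lines = []
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")  # the model is asked to complete from here
    return "\n".join(lines)

# Hypothetical priming pairs that establish the "X is a culture of Y" pattern.
examples = [
    ("Religion is a culture of?", "faith"),
    ("Science is a culture of?", "doubt"),
]
prompt = build_primed_prompt("Engineering is a culture of?", examples)
print(prompt)
```

With this prefix in place, the completion is constrained by the pattern the examples set up, rather than by whatever the bare question happens to evoke.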

Thread by Carlos E. Perez (@IntuitMachine).


More from @IntuitMachine

5 Aug
Von Neumann once told a student who was troubled by the counter-intuitiveness of quantum mechanics: [image]
There are many deep learning practitioners who are also lost in mathematics. The difference, of course, is that DL folks don't actually have to perform the computations; the network does that for them.
In fact, with methods like architecture search, they can just use the machine to discover the optimal neural architecture.
4 Aug
What's the logic behind DeepMind's universal Perceiver-IO block? [image]
Perhaps we want to compare this with the original Perceiver architecture: [image]
Note how the input array (in green) is fed back into multiple layers as 'cross attention' in the previous diagram. That cross attention is similar to how you would tie an encoder to a decoder in the standard transformer model: [image]
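To make that coupling concrete, here is a minimal sketch of cross attention in which a small latent array supplies the queries and a much larger input array supplies the keys and values, the same wiring a transformer decoder uses over its encoder. This is my own illustration: random matrices stand in for learned projections, and the shapes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(latent, inputs):
    """Latent array attends over the input array.
    Queries come from the latent; keys and values come from the input.
    The projection weights here are random stand-ins for learned ones."""
    n, d = latent.shape
    W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
    Q, K, V = latent @ W_q, inputs @ W_k, inputs @ W_v
    scores = Q @ K.T / np.sqrt(d)                    # (N, M)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the inputs
    return weights @ V                               # (N, d): fixed-size output

latent = rng.normal(size=(8, 16))    # small latent array
inputs = rng.normal(size=(256, 16))  # large input array (e.g. flattened pixels)
out = cross_attention(latent, inputs)
print(out.shape)
```

Because the output size is fixed by the latent array, the quadratic attention cost is paid only in the small latent dimension; the input array contributes linearly, which is what lets the block ingest very large, modality-agnostic inputs.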
4 Aug
Successful startups boil down to possession of effective mass-persuasion thingamajigs.
This is also why money-printing thingamajigs have so much persuasive value, and hence why cryptocurrencies have their appeal.
People are more easily persuaded to pay for something if they perceive it as an investment. An investment is anything that returns more money than what you originally put in.
3 Aug
In Frank Herbert's Dune, the affairs of the entire universe revolve around a psychedelic drug known as the Spice Melange, which gives beings the ability to fold spacetime and see into the future. dune.fandom.com/wiki/Spice_Mel…
The spice is essential because without it, travel between planets would be practically impossible. The universe is interconnected in commerce through psychedelics.
The spice, however, also allows its consumers to see into the future, and hence to make predictions of what might come. Isn't it odd that success in our modern financial industry also hinges on our ability to see into the future?
2 Aug
Why are models that curve fit not compositional models? medium.com/intuitionmachi…
Why are neural networks unable to nail arithmetic or multiplication? That is, you might be able to ask GPT-3 what 5 plus 7 equals, but it can't calculate 59 + 77 (trust me, it can't). Why is that?
This is because neural networks are unable to formulate compositional models of reality. Would a caveman be able to invent arithmetic or multiplication? I seriously doubt it; it took gifted individuals to invent these from scratch.
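To illustrate the distinction (my own example, not from the thread): a memorized table of sums fails outside what it has seen, while a compositional procedure, digit-wise addition with a carry rule, generalizes to numbers it has never encountered.

```python
def compositional_add(a: str, b: str) -> str:
    """Add two decimal strings digit by digit, composing one carry rule."""
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal length
    carry, digits = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(x) + int(y) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

# "Training set": sums of single digits, like the 5 + 7 case GPT-3 handles.
lookup = {(x, y): x + y for x in range(10) for y in range(10)}

print(lookup.get((59, 77)))           # memorization does not extrapolate
print(compositional_add("59", "77"))  # the carry rule composes
```

The lookup table plays the role of curve fitting: it interpolates within its data and returns nothing for 59 + 77, whereas the compositional procedure reuses the same small rule at every digit position.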
1 Aug
What is wrong with knowledge representations, that they have barely moved the needle in machine understanding? @danbri
Intuitively, KR should be useful in that it diagrammatically records how concepts relate to other concepts. Yet for a reason that is not apparent, it isn't very useful for parsing out new understandings of the concepts in its graph. Where did we go wrong?
Perhaps it's because knowledge graphs are noun-centric and not verb-centric. Reality is verb-centric. To get an intuition for this, watch this explanation of the open-world game NetHack:
