Gazans going crazy when realizing there are Muslims in the IDF
A Thread 🧵👇
"Are these Muslims like us or what is their religion exactly?"
"I knew there were Arabs in their country who were drafted into the occupation army, but I didn't know their families knew about it, and that if they die, their family prays over them like any ordinary Muslim!!!"
"I wish it would turn out that this is AI"
"Glitch in the matrix"
"The worst thing I have ever seen in my life"
"We should distance the Arabs in Israel as far as possible from the story of Palestine; they all serve the Zionist institutions"
"I am sure this year will end and I won't see a stranger sight than this"
"All the logic of the universe cannot explain the contradictions in this picture"
Credit:
All based on the original Hebrew thread by @WachtelDan
I think Claude 3 crossed (or closely approached) an interesting threshold:
The "power users" threshold.
For the first time, a model can help power users complete heavy, complex tasks faster than they could on their own.
This is a controversial topic in AI, so let me try to explain:
I never use GPT-4 for code.
I use it for:
1. Brainstorming ideas.
2. Learning new topics I don't know about.
3. Reading long texts for me (ask-your-pdf).
4. Just easy tasks.
But never for code.
It has never helped me with coding.
Every time I tried to use it for any coding task I found myself wasting way more time than simply doing the actual thing myself from scratch.
And I am not the only one who has reached this conclusion;
I know a number of people who think exactly the same.
Since the new model can run code, many people have already managed to ask it nicely for all the information inside the virtual machine it runs on.
At this stage we have already managed to extract:
- All the packages installed on the machine
- All the OpenAI system files that were sent with the machine
We are currently trying to upload code, a trained embedding model, and some special prompt to let it do retrieval.
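The thread doesn't show the exact code used, but the first two extractions above boil down to ordinary introspection run inside the sandbox. Here is a minimal sketch of what that might look like, assuming a stock Python environment; the `/home/sandbox` path is an assumption about where such a VM keeps its files, not something confirmed in the thread.

```python
import importlib.metadata
import os

def list_installed_packages():
    """Return sorted (name, version) pairs for every installed distribution."""
    return sorted(
        (dist.metadata["Name"], dist.version)
        for dist in importlib.metadata.distributions()
        if dist.metadata["Name"]  # skip rare distributions with missing metadata
    )

def list_files(root="/home/sandbox"):
    """Walk a directory tree and collect file paths.

    `root` is a guess at the sandbox home directory; os.walk simply
    yields nothing if the path does not exist.
    """
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            found.append(os.path.join(dirpath, name))
    return found

packages = list_installed_packages()
print(f"{len(packages)} packages found; first few: {packages[:3]}")
```

Asking the model to run something like this (or describe its output) is all "asking nicely" needs to mean here; the retrieval experiment in the next step would additionally require uploading the embedding model into the same environment.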
I think I get it now. Let's try something out:

Comment on this thread with everything you "just guess" about GPT-4 👇👇
Guess: In the dataset: they went through every undergrad major and included the exams, tests & textbooks of each subject. [To create a "wow" effect for every educated person, no matter the field.]
Guess: GPT-3's batch size is reported misleadingly in the paper (millions) when in reality it is much smaller. This is because, nearly all the time, smaller batches lead to better performance. So the training here was probably done, at some point, in small batches.