How to get URL link on X (Twitter) App
By 1971, about a hundred thousand people had signed up for flights to the moon en.wikipedia.org/wiki/First_Moo…
Link:
https://twitter.com/matei_zaharia/status/1681467961905926144This from a VP at OpenAI is from a few days ago. I wonder if degradation on some tasks can happen simply as an unintended consequence of fine tuning (as opposed to messing with the mixture-of-experts setup in order to save costs, as has been speculated).
https://twitter.com/npew/status/1679538687854661637
Peer review is like democracy: the worst system except for all the ones we've tried before. We can't throw it out yet, but we should be trying our hardest to figure out what comes next. Unlike democracy, we can easily experiment with scientific publishing.
The most dangerous mis- and dis-information today is based on bad data analysis. Sometimes it's deliberately misleading and sometimes it's done by well meaning people unaware that it takes years of training to get to a point where you don't immediately shoot yourself in the foot.
https://twitter.com/justinsulik/status/1669302237326110723Overall it's not a bad paper. They mention in the abstract that they chose an LLM-friendly task. But the nuances were unfortunately but unsurprisingly lost in the commentary around the paper. It's interesting to consider why.
https://twitter.com/VICENews/status/1664366486587154435
For the record, based on the published details this is a mind-bogglingly stupid story even by the standards of the AI doom genre.
A nice prompt injection explainer by @simonw simonwillison.net/2023/May/2/pro…
It's a standard engagement prediction recommendation algorithm. All major platforms use the same well known high-level logic, even TikTok: knightcolumbia.org/blog/tiktoks-s…
https://twitter.com/acgt01/status/1643612079704637440
https://twitter.com/peakcooper/status/1639716822680236032More than a third of people in the US use the Internet to self-diagnose (in 2013; likely much higher now). jamanetwork.com/journals/jama/…
https://twitter.com/florian_tramer/status/1639301437875273749Perhaps people at OpenAI assume that the models are improving so fast that the flaws are temporary. This might be true in some areas, but unlikely in security. The more capable the model, the greater the attack surface. For example, instruction following enables prompt injection.