Josh Whiton
May 30 · 3 tweets · 4 min read
A crazy experience — I lost my earbuds in a remote town in Chile, so I tried buying a new pair at the airport before flying out. But the new wired Lightning headphones for iPhone didn't work. Strange.

So I went back and swapped them for another pair, from a different brand. But those headphones didn't work either. We tried a third brand, which also didn't work.

By now the gift shop workers, their manager, and all the people in line behind me are super annoyed, until one of the girls says in Spanish, "You need to have Bluetooth on." Oh yes, everyone else nods in agreement. Wired headphones for iPhones definitely need Bluetooth.

What? That makes no sense. The entire point of wired headphones is to not need Bluetooth.

So I turn Bluetooth on with the headphones plugged into the Lightning port and sure enough my phone offers to "pair" my wired headphones. "See," they all say in Spanish, like I must be the dumbest person in the world.

With a little back and forth I realize that they don't even conceptually know what Bluetooth is, while I have actually programmed for the Bluetooth stack before. I was submitting low-level bugs to Ericsson back in the early 2000s! Yet somehow I, with my computer science degree, am wrong, and they, having no idea what Bluetooth even is, are right.

My mind is boggled, I'm outnumbered, and my plane is boarding. I don't want wireless headphones. And especially not wired/wireless headphones or whatever the hell these things are. So I convince them, with my last ounce of sanity, to let me try one last thing, a foolproof solution:

I buy a normal, old-school pair of wired mini-stereo headphones and a Lightning adapter. We plug it all in. It doesn't work.

"Bluetooth on", they tell me.

NO! By all that is sacred, my wired Lightning adapter cannot require Bluetooth. "It does," they assure me.

So I turn my Bluetooth on and sure enough my phone offers to pair my new wired Lightning adapter.

Unbelievable.

I return it all, run to catch my plane, and spend half the flight wondering what planet I'm on. Until finally back home, I do some research and figure out what's going on:

A scourge of cheap "Lightning" headphones and accessories is flooding certain markets, unleashed by unscrupulous Chinese manufacturers who have discovered an unholy recipe:

True Apple Lightning devices are more expensive to make. So instead of conforming to Apple's standard, these companies have made headphones that receive audio via Bluetooth, sidestepping the Apple specification entirely, while powering the Bluetooth chip through the wired cable, thereby avoiding any need for a battery.

They have even made Lightning adapters using the same recipe: a fake, cable-powered Lightning dongle that uses Bluetooth to transmit the audio signal literally 1.5 inches from the phone to the other end of the adapter.
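To spell out the trick, here's a toy sketch in Python (purely illustrative; none of these names correspond to real Apple or Bluetooth APIs) of why a "wired" accessory built this way still triggers a pairing prompt: the prompt depends on how the audio travels, not on how the device is powered.

```python
from dataclasses import dataclass

@dataclass
class Accessory:
    """Toy model of an iPhone audio accessory (illustrative only)."""
    name: str
    audio_path: str  # how the audio actually travels: "cable" or "bluetooth"
    power_path: str  # how the accessory is powered: "cable" or "battery"

    def needs_pairing(self) -> bool:
        # The phone only asks to "pair" when the audio rides on Bluetooth,
        # regardless of where the accessory draws its power.
        return self.audio_path == "bluetooth"

# A genuine Lightning accessory: digital audio and power both over the cable.
genuine = Accessory("genuine Lightning earbuds", audio_path="cable", power_path="cable")

# The knock-off recipe: the cable supplies only power; audio goes over Bluetooth.
knockoff = Accessory("counterfeit 'wired' earbuds", audio_path="bluetooth", power_path="cable")

print(genuine.needs_pairing())   # False
print(knockoff.needs_pairing())  # True
```

The cable fools the buyer into thinking the audio is wired, but to the phone the knock-off is just another Bluetooth headset that happens to be tethered for power.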

In these remote markets, these manufacturers have no qualms about slapping a Lightning/iPhone logo on the box while never mentioning Bluetooth, knowing that Apple will never do anything.

From a moral or even engineering perspective, this strikes me as a kind of evil. These companies have made the cheapest iPhone earbuds known to humankind, while still charging $12 or $15 per set and pocketing the profits, all while preying on the technical ignorance of people in remote towns.

Perhaps worst of all, there are now thousands or even millions of people in the world who simply believe that wired iPhone headphones use bluetooth (whatever that is), leaving them with an utterly incoherent understanding of the technologies involved.

I wish @Apple would devote an employee or two to cracking down on such a technological, psychological abomination as this. And I wish humanity would use its engineering prowess for good, and not opportunistic deception.
I ended the last paragraph hastily. What I would say instead is that, despite the manufacturers outright lying on the packaging, Apple did create this mess with Lightning and by removing the headphone port. They should have open-sourced Lightning, and I'm glad it's going away.
Since this went viral I'll add some details:

The shop workers did not know why these devices needed Bluetooth (I do speak some Spanish), or which ones needed Bluetooth (since the packages didn't say so), or when they needed Bluetooth (since most people just leave BT/WiFi/everything on all the time). It was a confusing situation but we were all respectful while trying to figure it out.

It was clear from our conversation that being surrounded by wired headphones and plug-in dongles that need Bluetooth (without any explanation or mention of Bluetooth on their packaging) had left them pretty confused about what Bluetooth is or what it's for. It doesn't mean they're stupid; technology can be confusing enough without being surrounded by crazy hacks like these devices.

If I were writing this again I wouldn't say they had "no idea" what Bluetooth is. I was trying to drive home how knowing a lot about Bluetooth, and technology in general, only made the situation more maddening than it would have been for someone who had never thought about digital wireless communications or analog audio signals at all. The point was to highlight the epistemologically comical setup, not to insult anyone.

As for why I didn't want wired-Bluetooth headphones: simple wired headphones have better sound quality, lower latency, essentially zero EMF emissions, and don't present a potential security risk, something worth considering when a sketchy gadget unexpectedly asks to connect to your phone.

If anyone agrees on anything (a miracle on X), it's that Apple created this problem by removing the mini-stereo port, either out of misplaced minimalism or greed, and making Lightning too costly.

I'll also admit that the comments have made me slightly more appreciative of the clever workaround by these device manufacturers. And I've enjoyed learning the Hindi word and spirit of "jugaad". Still, these "wired" gadgets should disclose the Bluetooth requirement on their packaging.

I do sympathize with business owners and retail workers (both jobs I've had), so I bought a bunch of chocolate bars from the gift shop to give them some profit. And since the shop probably got these wired-wireless devices for under $1 each, I think my chocolate bar purchase ultimately put them ahead. I also think the employees got to keep the headphones, instead of them ending up in the trash or entirely unused as would have happened if I had kept them.


