François Chollet
Jul 23, 2022
I've been asked about this a lot, so let me provide a quick FAQ.

Q: What's the nature of the issue?

A: Anyone who has bought my book from Amazon in the past few months hasn't received a genuine copy, but a lower-quality counterfeit printed by various fraudulent sellers.
Q: How does this even happen?

A: Amazon lets any seller claim that they have inventory for a given book, and then proceeds to route orders (from the book's page) to that seller's inventory. In this case, Amazon even hosts the inventory and takes care of the shipping. (cont.)
(cont.) This has given rise to a cottage industry of fraudsters who "clone" books (which is easy when the PDF is readily available: you just need to contract a printer) and then claim to be selling real copies.

This is endemic for all popular textbooks on Amazon.
Q: How do I know if I'm about to buy a counterfeit copy?

A: Look for the name of the seller. If it's a 3rd party seller (i.e. not Amazon's own inventory) and it isn't a well-known bookstore, then it's a scammer. (cont.)
(cont.) They tend to have names like "Sacred Gamez", "Your Toy Mart", etc. That's because they started out with counterfeit toys, video games, etc. and eventually pivoted to technical books (higher-margin).

They've been operating for years -- it's a highly lucrative model.
Q: How do I know if my copy is counterfeit?

A: The surest way to check is to try to register it with Manning at: manning.com/freebook

Other than that, the fakes have much lower print/make quality.

- Darker cover colors.
- Flimsier paper.
- Poorly bound.
- Cut smaller.
Q: What is Amazon doing about it?

A: Nothing. We've notified them multiple times; nothing has happened. The fraudulent sellers have been operating for years.
The issue affects ~100% of Amazon sales of the book since March or April. That's because, amazingly, since fraudsters are claiming to have inventory, Amazon has stopped carrying its own inventory for the book (i.e. it has stopped ordering new copies from the publisher).
Q: Does this affect any other book?

A: Absolutely. It affects nearly all high-volume technical books on Amazon. If you've bought a technical book on Amazon recently, there's a >50% chance it's a fake.

@aureliengeron's book, for example, is also heavily affected.
Besides books, it also affects a vast number of items across every product category -- vitamins, toys, video games, brand name electronics, brand name clothes, etc. But that's another story.
Q: So where should I buy the book?

A: Buy it directly from the publisher, here: manning.com/books/deep-lea…
Q: What should I do if I already bought a counterfeit copy?

A: Ask for a refund. Maybe this will put pressure on Amazon to look into the issue?
An update: this thread has caused more of a stir than I expected. A positive side effect is that the issue has been escalated and resolved by Amazon (at least in the case of my book). Thanks to all those involved!
Specifically, the default buying option for the 2nd edition of my book is now Amazon itself, rather than any third party seller.

For the 1st edition, the default option is still a counterfeit seller, though. Perhaps this widespread problem needs more than a special-case fix.
If it's impossible for Amazon to ensure the trustworthiness of 3rd party sellers, then perhaps there should be an option for publishers/authors to prevent any 3rd party seller from being listed as selling their book (esp. as the default option for people landing on the page).
It may not be entirely obvious at first that a given seller is selling exclusively counterfeit items, because that seller may appear to have thousands of ratings, 99% positive.

One important reason is that Amazon takes down negative reviews that mention counterfeits.
I spoke way too soon when I said the problem was resolved for my book -- 24 hours later a fraudulent seller is now back as the default buying option for both editions of my book. Sigh...
Exact same situation for @aureliengeron's ML bestseller right now: a fraudulent seller is the default. amazon.com/Hands-Machine-…

The level of mismanagement here is staggering.
This goes far beyond "a 3rd party seller on Amazon is selling counterfeit copies."

The gist of the issue is that for many bestselling books, Amazon is routing people towards counterfeits *by default*. Which is a big deal, because Amazon is the default online bookstore for most people.
If someone wants to buy my book or @aureliengeron's book (etc.), they will search for it on Amazon, find the book's official page, and click "buy".

And Amazon will be routing this purchase intent *by default* towards a seller of counterfeits.
This is hijacking a large fraction of total book sales -- for some books, a majority. This is theft of purchase intent (and that purchase intent typically originates outside of Amazon).

For authors and publishers, this represents a massive loss of revenue.
To use a metaphor -- it's not as if some guy on the street were selling bootleg items next to a massive supermarket that sold genuine ones.

It's as if this guy were empowered by the supermarket to systematically replace the genuine items on the shelves with his own fakes.
An update -- it has been nearly 5 days (and over 2M views) since I posted this thread. I regret to say that both editions of my book are *still* being sold by fraudulent sellers by default (new ones, though).

Do NOT buy my books on Amazon. Buy from the publisher directly.

More from @fchollet

Jun 22
Fact check: my 3-year-old builds Lego sets (age 5+ ones) on his own by following the instruction booklet. He started doing it before he turned 3 -- initially he needed externally provided error correction and guidance, but now he's fully autonomous. He can't handle sets for ages 8+ yet, though. We'll see what he does at 5.
He also builds his own ideas, which feature minor original inventions. Like this "jeep", which has a spare tire on the back -- not something he saw in any official set. Lego is the best toy ever, by the way.
Or this Lego garden (fresh from today). It has a hut with a cool door. It looks chaotic, but everything on here has a purpose. Everything is intended to be something (the tire on a stick is a tree, the tiny cone on the ground is a water sprinkler...)
Jun 11
I'm partnering with @mikeknoop to launch ARC Prize: a $1,000,000 competition to create an AI that can adapt to novelty and solve simple reasoning problems.

Let's get back on track towards AGI.

Website: arcprize.org

ARC Prize on @kaggle: kaggle.com/competitions/a…
I published the ARC benchmark over 4 years ago. It was intended to be a measure of how close we are to creating AI that can reason on its own – not just apply memorized patterns.
ARC tasks are easy for humans. They aren't complex. They don't require specialized knowledge – a child can solve them. But modern AI struggles with them.

Because they have one very important property: they're designed to be resistant to memorization.
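To make that concrete, here is an invented toy example in the spirit of ARC's public JSON task format -- a few demonstration input/output grids plus a held-out test pair. The grids and the rule below are made up for illustration; they are not taken from the actual benchmark.

```python
# Invented toy task in the spirit of ARC's JSON format (not a real benchmark task).
toy_task = {
    "train": [  # demonstration pairs the solver gets to study
        {"input": [[0, 1], [0, 0]], "output": [[1, 0], [0, 0]]},
        {"input": [[0, 0], [0, 1]], "output": [[0, 0], [1, 0]]},
    ],
    "test": [   # held-out pair: infer the rule from the demonstrations, then apply it
        {"input": [[1, 0], [0, 0]], "output": [[0, 1], [0, 0]]},
    ],
}

def solve(grid):
    # A human quickly spots the rule here (mirror each row left-to-right);
    # a system that can only retrieve memorized patterns has nothing to
    # retrieve when the rule is novel.
    return [list(reversed(row)) for row in grid]

assert solve(toy_task["test"][0]["input"]) == toy_task["test"][0]["output"]
```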
May 14
It's amazing to me that the year is 2024 and some people still equate task-specific skill and intelligence. There is *no* specific task that cannot be solved *without* intelligence -- all you need is a sufficiently complete description of the task (removing all test-time novelty and uncertainty), and you can achieve arbitrary levels of skill while entirely bypassing the problem of intelligence. In the limit, even a simple hashtable can be superhuman at anything.
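A minimal sketch of that last point (an invented illustration, not anything from the original thread): a plain lookup table achieves perfect "skill" on any task whose inputs can be enumerated in advance, while having zero ability to handle novelty.

```python
# Hypothetical, fully enumerated "task": every possible input is known in advance,
# so a dict of memorized answers reaches perfect skill with zero intelligence.
answer_key = {
    "2+2": "4",
    "capital of France": "Paris",
    "translate 'chat' to English": "cat",
}

def hashtable_solver(task_input: str) -> str:
    # Perfect on anything seen before; helpless on anything outside the memorized set.
    return answer_key.get(task_input, "unknown")

assert hashtable_solver("2+2") == "4"        # superhuman recall, no reasoning
assert hashtable_solver("3+3") == "unknown"  # fails as soon as there is novelty
```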
The "AI" of today still has near-zero (though not exactly zero) intelligence, despite achieving superhuman skill at many tasks.

Here's one thing that AI won't be able to do within five years (if you extrapolate from the excruciatingly slow progress of the past 15 years): acquire new skills as efficiently as humans, using the same data. The ARC benchmark is an attempt at measuring roughly that.
The point of general intelligence is to make it possible to deal with novelty and uncertainty, which is what our lives are made of. Intelligence is the ability to improvise and adapt in the face of situations you weren't prepared for (either by your evolutionary history or by your past experience) -- to efficiently acquire skills at novel tasks, on the fly.
Apr 28
Many of the people who are concerned with falling birthrates aren't willing to consider the set of policies that would address the problem -- aggressive tax breaks for families, free daycare, free education, free healthcare, and building more/denser housing to slash the price of homes.

Most people want children, but can't afford them.
I've always found it striking how very rich couples (50M+ net worth) all tend to have over 3 children (and often many more). And how young women always say they want children -- yet in practice they delay family building because they are forced to focus on financial stability and therefore career. When money is not an object, families have 3+ children.
For middle incomes (below 1M/year), fertility goes down as income goes up, because *the cost of raising children increases with income* due to *opportunity cost*. If you make $150k and stand to eventually grow to $300k, you are losing a lot of money by quitting your job to raise children (on top of the prohibitive cost of raising children -- which also goes up as your income and thus standards go up). You are thus *more* likely to postpone having children.

Starting at 1M/year, fertility rates rise again. And couples that make 5+M/year get to have the number of children they actually want -- which is almost always more than 3, and quite often 5+.
Mar 31
Memorization (which ML has solely focused on) is not intelligence. And because any task that does not involve significant novelty and uncertainty can be solved via memorization, *skill* is never a sign of intelligence, no matter the task.
Intelligence is found in the ability to pick up new skills quickly & efficiently -- at tasks you weren't prepared for. To improvise, adapt and learn.
Here's a paper you can read about it.

It introduced a formal definition of intelligence, as well as a benchmark to capture that definition in practical terms. Although it was developed before the rise of LLMs, current state-of-the-art LLMs such as Gemini Ultra, Claude 3, or GPT-4 are not able to score higher than a few percent on that benchmark. arxiv.org/abs/1911.01547
Mar 13
We benchmarked a range of popular models (SegmentAnything, BERT, StableDiffusion, Gemma, Mistral) with all Keras 3 backends (JAX/TF/PT). Key findings:

1. There's no "best" backend. The fastest backend often depends on your specific model architecture.

2. Keras 3 with the right backend is consistently a lot faster than reference PT (compiled) implementations. Often by 150%+.

3. Keras 3 models are fast without requiring any custom performance optimizations. It's all "stock" code.

4. Keras 3 is faster than Keras 2.

Details here: keras.io/getting_starte…
Finding 1: the fastest backend for a given model typically alternates between XLA-compiled JAX and XLA-compiled TF. Plus, you might want to debug/prototype in PT before training/inferencing with JAX or TF.

The ability to write framework-agnostic models and pick your backend later is a game-changer.
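For readers who haven't tried it, here is a minimal sketch of how backend selection works in Keras 3 (the tiny model below is an invented illustration, not one of the benchmarked architectures): you pick the backend via the KERAS_BACKEND environment variable before importing keras, and the same model code then runs unchanged on JAX, TensorFlow, or PyTorch.

```python
import os

# Choose the backend *before* importing keras: "jax", "tensorflow", or "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras

# A framework-agnostic model definition: nothing here is backend-specific.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Switching to TensorFlow or PyTorch only requires changing KERAS_BACKEND above
# (or editing ~/.keras/keras.json) -- the model code itself stays the same.
```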
Finding 2: Keras 3 with the best-performing backend outperforms reference native PT implementations (compiled) for all models we tried.

Notably, 5 out of 10 tasks demonstrate speedups exceeding 100%, with a maximum speedup of 340%.

If you're not leveraging this advantage for any large model training run, you're wasting GPU time -- and thus throwing away money.
