There were two patents on ConvNets: one for ConvNets with strided convolution, and one for ConvNets with separate pooling layers.
They were filed in 1989 and 1990 and allowed in 1990 and 1991.
1/N
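For those curious about the technical distinction, here is a rough modern sketch (in PyTorch, purely illustrative, not the original 1989/1990 code) of the two patented variants:

    import torch.nn as nn

    # Variant 1: strided convolution, which does filtering and subsampling in one step.
    strided = nn.Conv2d(1, 6, kernel_size=5, stride=2)

    # Variant 2: stride-1 convolution followed by a separate pooling (subsampling) layer.
    with_pooling = nn.Sequential(
        nn.Conv2d(1, 6, kernel_size=5, stride=1),
        nn.AvgPool2d(kernel_size=2, stride=2),
    )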
We started working with a development group that built OCR systems based on our ConvNets. Shortly thereafter, AT&T acquired NCR, which was building check imagers/sorters for banks. Check images were sent to human operators for transcription of the amount. Obviously, they wanted to automate that.
2/N
A complete check reading system was eventually built that was reliable enough to be deployed.
Commercial deployment in banks started in 1995.
The system could read about half the checks (machine printed or handwritten) and sent the other half to human operators.
3/N
The first deployment actually took place a year earlier, in ATMs for amount verification, initially at the Crédit Mutuel de Bretagne in France.

Then in 1996, catastrophe struck: AT&T split itself up into AT&T (services), Lucent (telecom equipment), and NCR.
4/N
Our research group stayed with AT&T (becoming AT&T Labs-Research), the engineering group went with Lucent, and the product group went with NCR.

The lawyers, in their infinite wisdom, assigned the ConvNet patents to NCR, since NCR was selling products based on them.
5/N
But no one at NCR had any idea what a ConvNet was!

I became a bit depressed: it was essentially forbidden for me to work on my own intellectual production 😭

I was promoted to Dept Head and had to decide what to do next.
This was 1996, when the Internet was taking off.
6/N
So I stopped working on ML. Neural nets were becoming unpopular anyway.
I started a project on image compression for the Web called DjVu with Léon Bottou.
And we wrote papers on all the stuff we did in the early 1990s.
7/N
It wasn't until I left AT&T in early 2002 that I restarted work on ConvNets.

I was hoping that no one at NCR would realize they owned the patents on what I was doing. No one did.

I popped the champagne when the patents expired in 2007! 🍾🥂
8/N
Moral of the story: the patent system can be very counterproductive when patents are separated from the people best positioned to build on them.

Patents make sense for certain things, mostly physical things.
But they almost never make sense for "software", broadly speaking.
9/N, N=9

More from @ylecun

10 Jun
Very nice work from Google on deep-RL-based optimization for chip layout.
Simulated annealing and its heirs are finally dethroned after 40 years.
This uses graph NNs and deConvNets, among other things.
I did not imagine back in the '90s that (de)ConvNets could be used for this.
This is the kind of problem where gradient-free optimization must be applied, because the objectives are not differentiable with respect to the relevant variables. [Continued...]
In this application, RL is used as a particular type of gradient-free optimization to produce a *sequence* of moves.
It uses deep models to learn good heuristics as to what action to take in each situation.

This is exactly the type of setting in which RL shines.
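To make the idea concrete, here is a toy sketch (my own illustration, not Google's actual system) of RL as gradient-free optimization over a sequence of discrete placement moves, where the objective (wirelength) is not differentiable with respect to the decisions:

    import torch
    import torch.nn as nn

    N_MODULES, N_SLOTS = 4, 9  # toy problem: place 4 blocks on a 3x3 grid
    coords = torch.tensor([[i // 3, i % 3] for i in range(N_SLOTS)], dtype=torch.float)

    # A small policy network scores the candidate slots for the current block.
    policy = nn.Sequential(nn.Linear(N_SLOTS + N_MODULES, 64), nn.ReLU(),
                           nn.Linear(64, N_SLOTS))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

    def wirelength(placed):
        # Non-differentiable objective: total Manhattan distance between
        # consecutively placed blocks (a stand-in for real wirelength).
        pts = coords[placed]
        return (pts[1:] - pts[:-1]).abs().sum().item()

    for step in range(200):
        occupied = torch.zeros(N_SLOTS)
        placed, logps = [], []
        for m in range(N_MODULES):
            onehot = torch.zeros(N_MODULES)
            onehot[m] = 1.0
            logits = policy(torch.cat([occupied, onehot]))
            logits = logits.masked_fill(occupied.bool(), float("-inf"))  # one block per slot
            dist = torch.distributions.Categorical(logits=logits)
            a = dist.sample()                    # one discrete move in the sequence
            logps.append(dist.log_prob(a))
            occupied = occupied.clone()
            occupied[a] = 1.0
            placed.append(a.item())
        reward = -wirelength(placed)             # reward only arrives after the full sequence
        loss = -reward * torch.stack(logps).sum()  # REINFORCE: reinforce low-wirelength move sequences
        opt.zero_grad()
        loss.backward()
        opt.step()

The deep model only supplies the heuristic for each move; the non-differentiable objective is used purely as a scalar reward at the end of the sequence.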
Read 4 tweets
12 Mar
@mcCronjaeger @BloombergME The list is much too long for a Twitter thread.
I'll leave that for FB's comm people to do.
@mcCronjaeger @BloombergME More importantly, the whole premise of the article is wrong.
The SAIL / Responsible AI group's role *never* was to deal with hate speech and misinformation.
That's in the hands of other groups with *hundreds* of people in them.
In fact, "integrity" involves over 30,000 people...
@mcCronjaeger @BloombergME So the central theme of the article, that RespAI wasn't given the necessary resources to do its job, is patently false.

Second, AI is heavily used for content moderation: filtering hate speech, polarizing content, violence, bullying, etc...
Read 10 tweets
13 Jan
Electricity production in Europe in 2020.

Right:
Each colored point cloud is a country.
Each point (x, y) is one hour of electricity production, with x = energy produced in kWh and y = CO2 emissions in g/kWh.

Left:
Bar graphs of the mix of production methods for selected countries.

1/N
France: low overall CO2 emissions and low variance in emissions, relying essentially on nuclear energy with a bit of hydro [reminder: nuclear produces essentially no CO2].
2/N
Germany: despite having a large proportion of renewables, has high emissions and a high variance in emissions: when there is no wind or sun, it has to rely on fossil fuels, having phased out nuclear production.
3/N
Read 6 tweets
23 Jun 20
I'm an immigrant.

I first came to work at Bell Labs on a J-1 visa, because I thought I'd stay only a year or two.
But I stayed longer and got an H-1B visa.
Then I got a green card....
1/N

nytimes.com/2020/06/22/us/…
I hesitated to take up citizenship during the GW Bush years, waiting for the country to become respectable again.
But after Bush's re-election, I just wanted to be able to vote and kick out the neocon bastards.
So I became a citizen just in time to vote for Barack Obama.
2/N
As an immigrant, scientist, academic, liberal, atheist, and Frenchman, I am a concentrate of everything the American Right hates.
3/N
Read 8 tweets
22 Jun 20
@timnitGebru If I had wanted to "reduce harms caused by ML to dataset bias", I would have said "ML systems are biased *only* when data is biased".
But I'm absolutely *not* making that reduction.
1/N
@timnitGebru I'm making the point that in the *particular* *case* of *this* *specific* *work*, the bias clearly comes from the data.
2/N
@timnitGebru There are many causes of *societal* bias in ML systems
(not talking about the more general inductive bias here):
1. the data: how it's collected and formatted;
2. the features: how they are designed;
3. the architecture of the model;
4. the objective function;
5. how it's deployed.
Read 17 tweets
5 Feb 20
We often hear that AI systems must provide explanations and establish causal relationships, particularly for life-critical applications.
Yes, that can be useful. Or at least reassuring....
1/n
But sometimes people have accurate models of a phenomenon without any intuitive explanation or causal account that gives an accurate picture of the situation. In many cases of physical phenomena, "explanations" contain causal loops where A causes B and B causes A.
2/n
A good example is how a wing causes lift. The computational fluid dynamics model, based on the Navier-Stokes equations, works just fine. But there is no completely accurate intuitive "explanation" of why airplanes fly.
3/n
Read 9 tweets
