So not only was DUNE fantastic, it was the most talked-about movie on r/movies in 2021.
Here is how I figured that out with the @CohereAI platform, using entity extraction and few-shot learning!
os.cohere.ai/playground/lar…
1/n
Cohere offers a generative language modelling tool, powered by a neural network.
We trained it on tons of text from the web, so it’s great at predicting the next word in a sequence of words.
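A minimal sketch of the few-shot extraction step, assuming the Cohere Python SDK's generate endpoint; the prompt wording, example posts, model name, and parameter values here are illustrative assumptions, not the exact setup from the thread:

```python
# Hedged sketch: few-shot entity extraction by prompting a generative LM.
import cohere  # assumes the Cohere Python SDK is installed

few_shot_prompt = """Extract the movie title mentioned in the post.

Post: Just rewatched Parasite and it still holds up.
Movie: Parasite
--
Post: The sandworm scenes alone make it worth the ticket price.
Movie: Dune
--
Post: {post}
Movie:"""

def extract_movie(post: str, client: cohere.Client) -> str:
    # The model name and sampling parameters below are assumptions.
    response = client.generate(
        model="large",
        prompt=few_shot_prompt.format(post=post),
        max_tokens=10,
        temperature=0.0,          # near-deterministic extraction
        stop_sequences=["--"],    # stop at the end of one few-shot block
    )
    return response.generations[0].text.strip()

# co = cohere.Client("YOUR_API_KEY")
# print(extract_movie("That hallway fight scene is still unmatched.", co))
```

Counting the extracted titles over a year of r/movies posts is then just a groupby over the model's outputs.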
Apr 30, 2020 • 7 tweets • 3 min read
My new paper with @rishabh_467, @geoffreyhinton, Rich Caruana and Xuezhou Zhang is out on arXiv today! It's about interpretability and Neural Additive Models. Don't have time to read the paper? Read this tweet thread instead :)
arxiv.org/abs/2004.13912
1/7
We present Neural Additive Models (NAMs), a simple extension of Generalized Additive Models (GAMs) built with neural nets.
NAMs handle each input dimension separately.
Specifically, we learn
E[y] = g(x) = β + f1(x1) + f2(x2) + ... + fK(xK),
where each f is a neural net.
2/7
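A minimal PyTorch sketch of that equation, assuming a plain MLP for each f_k (the paper's actual sub-networks, e.g. with ExU units, are more carefully designed):

```python
# Hedged sketch of a Neural Additive Model: one small MLP per input feature,
# outputs summed with a learned bias. Sizes here are illustrative.
import torch
import torch.nn as nn

class NAM(nn.Module):
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        # f_k: an independent sub-network for each feature x_k
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))  # β

    def forward(self, x):  # x: (batch, num_features)
        # g(x) = β + Σ_k f_k(x_k); each feature is processed separately
        contributions = [f(x[:, k:k + 1]) for k, f in enumerate(self.feature_nets)]
        return self.bias + torch.stack(contributions, dim=0).sum(dim=0)
```

Because each f_k(x_k) depends on one feature only, its learned shape can be plotted on its own, which is what makes NAMs interpretable.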
Jul 8, 2019 • 8 tweets • 4 min read
@YaoQinUCSD, @colinraffel, @sabour_sara, Gary Cottrell, @geoffreyhinton and I have released a full version of our workshop paper on capsule networks and adversarial attack detection! Check it out, or read this thread if you are busy :) 1/7
arxiv.org/pdf/1907.02957…
The problem with adversarial examples is that they don't look like what they are classified as. Capsule networks output both a classification and a reconstruction of the input conditioned on that classification. A reconstruction of an adversarial example looks different from the input. 2/7
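A minimal sketch of the resulting detection rule, assuming a capsule model that returns both logits and a class-conditional reconstruction; the threshold would be tuned on clean validation data:

```python
# Hedged sketch: flag an input as adversarial if the reconstruction
# conditioned on the predicted class is far from the input itself.
import torch

def detect_adversarial(model, x, threshold):
    """model(x) is assumed to return (class_logits, reconstruction)."""
    logits, reconstruction = model(x)
    # per-example L2 distance between input and class-conditional reconstruction
    error = torch.norm((reconstruction - x).flatten(1), dim=1)
    return error > threshold  # True -> likely adversarial
```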
Feb 7, 2019 • 9 tweets • 4 min read
My new paper with @NicolasPapernot and @geoffreyhinton is out on arXiv today. It's about the similarity structure of representation space, outlier data (e.g. adversarial attacks) and generative models. Don't have time to read the paper? Read this instead! arxiv.org/abs/1902.01889
Our paper focuses on a loss we call the Soft Nearest Neighbor Loss (SNNL). It measures the entanglement of labeled data points: data with high SNNL has muddled-up classes, while the classes of a data set with low SNNL are easy to separate.
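A minimal PyTorch sketch of SNNL over a batch of feature vectors x with labels y at a fixed temperature T, using squared Euclidean distances; numerical-stability details from the paper's implementation are omitted:

```python
# Hedged sketch of the Soft Nearest Neighbor Loss.
import torch

def soft_nearest_neighbor_loss(x, y, T=1.0, eps=1e-8):
    # pairwise squared distances, with the diagonal (i == j) masked out
    d = torch.cdist(x, x).pow(2)
    same_class = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    not_self = 1.0 - torch.eye(len(x), device=x.device)

    sims = torch.exp(-d / T) * not_self
    # per point: fraction of "neighbor mass" that comes from the same class
    ratio = (sims * same_class).sum(1) / (sims.sum(1) + eps)
    return -torch.log(ratio + eps).mean()

# Low loss -> classes are well separated; high loss -> classes are entangled.
```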
Nov 19, 2018 • 7 tweets • 3 min read
1/7 Our new paper on adversarial attack detection and capsule networks with @sabour_sara and Geoff Hinton is out on arXiv today! arxiv.org/abs/1811.06969 It will be presented at the #NeurIPS Workshop on Security. Don't have time to read the paper? Read this thread instead! :)
2/7 The problem with adversarial examples is that they don't look like what they are classified as. Capsule networks output both a classification and a reconstruction of the input conditioned on that classification. A reconstruction of an adversarial example looks different from the input.