Nick Frosst
cofounder @cohere - singer @goodkidband
May 18, 2022 10 tweets 5 min read
We made a website that generates full @wizards_magic Magic: The Gathering cards using @CohereAI and @WOMBO

We made it to the front page of Hacker News yesterday

Try it out at urzas.ai @urzas_ai!

This is a thread about how we did it!

The first thing we did was generate cards with a baseline model using prompt engineering.

It made readable cards, but they were often overpowered or uninterpretable.

If you have a Cohere account, you can see the prompt here!
os.cohere.ai/custom-preset?…
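The actual preset is behind that link, but a minimal sketch of the prompting approach might look like this, assuming the Cohere Python SDK as it existed at the time; the example cards, model name, and sampling parameters are illustrative, not the real preset:

```python
# A minimal sketch of few-shot prompting for card generation with the
# Cohere SDK. The prompt, model name, and parameters are assumptions.
import cohere

co = cohere.Client("YOUR_API_KEY")

# Few-shot prompt: a couple of real cards, then an open slot for the model.
prompt = """Name: Lightning Bolt
Cost: {R}
Type: Instant
Text: Lightning Bolt deals 3 damage to any target.

Name: Llanowar Elves
Cost: {G}
Type: Creature - Elf Druid
Text: {T}: Add {G}.
Power/Toughness: 1/1

Name:"""

response = co.generate(
    model="xlarge",           # assumed model name from that era
    prompt=prompt,
    max_tokens=100,
    temperature=0.9,          # higher temperature gives more varied cards
    stop_sequences=["\n\n"],  # stop after one complete card
)
print("Name:" + response.generations[0].text)
```

Temperature is the main knob here: sampling hotter trades coherence for variety, which is one way a card ends up creative or just overpowered.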
Nov 30, 2021 6 tweets 3 min read
So not only was DUNE fantastic, it was the most talked-about movie on r/movies in 2021.

Here is how I figured that out with the @CohereAI platform, using entity extraction and few-shot learning!

os.cohere.ai/playground/lar…

1/n
Cohere offers a generative language modelling tool, powered by a neural network.

We trained it on tons of text from the web, so it’s great at predicting the next word in a sequence of words.
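As a rough illustration of few-shot entity extraction with a generative model (the post titles and parameters below are made up for the sketch, not taken from the actual playground preset):

```python
# A hedged sketch of few-shot entity extraction: show the model a few
# post-title -> movie-name examples, then have it complete a new one.
import cohere

co = cohere.Client("YOUR_API_KEY")

prompt = """Extract the movie title from each post.

Post: Just rewatched Inception and the ending still gets me
Movie: Inception

Post: Why Parasite deserved Best Picture
Movie: Parasite

Post: Dune's sandworms were the best part of the movie
Movie:"""

response = co.generate(
    model="xlarge",
    prompt=prompt,
    max_tokens=10,
    temperature=0.0,        # deterministic output for extraction
    stop_sequences=["\n"],  # stop after one movie name
)
print(response.generations[0].text.strip())  # -> "Dune"
```

Run a prompt like this over every post title and tally the extracted names, and you have a ranking of movies by mentions.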
Apr 30, 2020 7 tweets 3 min read
My new paper with @rishabh_467, @geoffreyhinton, Rich Caruana, and Xuezhou Zhang is out on arXiv today! It's about interpretability and neural additive models. Don't have time to read the paper? Read this tweet thread instead :)

arxiv.org/abs/2004.13912

1/7
We present Neural Additive Models (NAMs), a simple extension to Generalized Additive Models (GAMs) that makes use of neural nets.

NAMs handle each input dimension separately

Specifically, we learn

E[y] = g(x) = β + f_1(x_1) + f_2(x_2) + ⋯ + f_K(x_K),

where each f_k is a neural net.

2/7
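A minimal PyTorch sketch of that architecture, assuming plain ReLU MLPs for each feature network (the paper also proposes ExU units, omitted here; layer sizes are illustrative):

```python
# A Neural Additive Model sketch: one small MLP per input feature,
# summed together with a learned bias term (the beta above).
import torch
import torch.nn as nn

class NAM(nn.Module):
    def __init__(self, num_features: int, hidden: int = 64):
        super().__init__()
        # One independent feature network f_k per input dimension.
        self.feature_nets = nn.ModuleList(
            nn.Sequential(
                nn.Linear(1, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))  # the beta term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); each f_k sees only its own column.
        contributions = [
            f(x[:, k : k + 1]) for k, f in enumerate(self.feature_nets)
        ]
        return self.bias + torch.stack(contributions, dim=0).sum(dim=0)

model = NAM(num_features=5)
y_hat = model(torch.randn(8, 5))  # (8, 1) predictions
```

Because each f_k depends on only one feature, you can plot every learned f_k directly and read off exactly how that feature moves the prediction; that is where the interpretability comes from.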
Jul 8, 2019 8 tweets 4 min read
@YaoQinUCSD, @colinraffel, @sabour_sara, Gary Cottrell, @geoffreyhinton, and I have released a full version of our workshop paper on capsule networks and adversarial attack detection! Check it out, or read this thread if you are busy :) 1/7

arxiv.org/pdf/1907.02957…

The problem with adversarial examples is that they don't look like what they are classified as. Capsule networks output both a classification and a reconstruction of the input conditioned on that classification. The reconstruction of an adversarial example looks different from the input. 2/7
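The detection rule could be sketched like this; `capsnet` is a hypothetical stand-in for a trained capsule network, and the threshold would be tuned on clean validation data:

```python
# A hedged sketch of reconstruction-based adversarial detection.
# `capsnet` is assumed to return class logits and a reconstruction of
# the input conditioned on the predicted class.
import torch

def detect_adversarial(capsnet, x: torch.Tensor, threshold: float) -> torch.Tensor:
    logits, reconstruction = capsnet(x)
    # L2 distance between each input and its class-conditional reconstruction.
    error = torch.norm((x - reconstruction).flatten(1), dim=1)
    # Inputs that reconstruct poorly are flagged as likely adversarial.
    return error > threshold
```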
Feb 7, 2019 9 tweets 4 min read
My new paper with @NicolasPapernot and @GeoffreyHinton is out on arXiv today. It's about the similarity structure of representation space, outlier data (e.g. adversarial attacks), and generative models. Don't have time to read the paper? Read this instead! arxiv.org/abs/1902.01889

Our paper focuses on a loss we call the Soft Nearest Neighbor Loss (SNNL). It measures the entanglement of labeled data points: data with high SNNL has classes that are muddled together, while the classes of a dataset with low SNNL are easy to separate.
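A simplified PyTorch sketch of the loss, assuming a single fixed temperature (the paper anneals the temperature during training and adds numerical-stability tricks, both omitted here):

```python
# Soft Nearest Neighbor Loss sketch: for each point, the fraction of
# its neighbor mass (under a temperature-scaled Gaussian kernel) that
# shares its label, averaged as a negative log.
import torch

def soft_nearest_neighbor_loss(x: torch.Tensor, y: torch.Tensor,
                               temperature: float = 100.0) -> torch.Tensor:
    # x: (b, d) representations; y: (b,) integer class labels.
    sq_dists = torch.cdist(x, x).pow(2)        # pairwise squared distances
    kernel = torch.exp(-sq_dists / temperature)
    # Exclude each point from its own neighborhood.
    kernel = kernel * (1 - torch.eye(len(x), device=x.device))
    same_class = (y[:, None] == y[None, :]).float()
    # Fraction of each point's neighbor mass that shares its label.
    ratio = (kernel * same_class).sum(dim=1) / kernel.sum(dim=1)
    return -torch.log(ratio + 1e-8).mean()
```

Entangled data gives small same-class ratios and hence a high loss; well-separated classes push the ratio toward 1 and the loss toward 0.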
Nov 19, 2018 7 tweets 3 min read
1/7 Our new paper on adversarial attack detection and capsule networks with @sabour_sara and Geoff Hinton is out on arXiv today! arxiv.org/abs/1811.06969 It will be presented at the #NeurIPS Workshop on Security. Don't have time to read the paper? Read this thread instead! :)

2/7 The problem with adversarial examples is that they don't look like what they are classified as. Capsule networks output both a classification and a reconstruction of the input conditioned on that classification. The reconstruction of an adversarial example looks different from the input.