Discover and read the best of Twitter Threads about #llama

Most recent (5)

🍎 When it comes to AI, everyone talks about OpenAI, Microsoft, or Stability.

Yet one company that has not yet announced its "#ChatGPT" / LLM could be a game changer…

@tim_cook's @Apple could change the AI landscape for good… what do you think?
Indeed, 1️⃣ there is a clear trend toward making LLMs lighter, and last week we got #WebGPT, so bringing LLMs to lightweight devices is becoming realistic;
2️⃣ With an installed base of 2 billion active devices, a track record with Siri, and exceptional integration capabilities, it is a safe bet, as @SullyOmarr believes, that Apple is working on its own AppleGPT, an #LLM optimized for its devices… especially since:
Read 7 tweets
How to RLHF #LLAMA if you don't have hundreds of GPUs? Do it in a parameter-efficient way.
I'm happy to finally share our parameter-efficient fine-tuning #PEFT survey! It took quite a bit more time to make than I expected, but I feel good about the result.
arxiv.org/abs/2303.15647 [image: taxonomy of PEFT methods]
PEFT methods can target several things: storage efficiency, multitask inference efficiency, and memory efficiency are among them. We are interested in the case of fine-tuning large models, so memory efficiency is a must.
We distill over 40 PEFT papers, provide a taxonomy and comparison of 30 methods, and describe 20 methods in detail (with pseudocode!). [image: table comparing PEFT methods in terms of efficiency]
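To make the parameter-count savings PEFT targets concrete, here is a minimal numpy sketch of one popular family the survey covers, a LoRA-style low-rank adapter. This is my own toy illustration (class name, rank, and scaling are illustrative), not code from the survey:

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer plus a trainable low-rank update (LoRA-style).

    Instead of fine-tuning the full d_out x d_in weight matrix W, we train
    only two small factors B (d_out x r) and A (r x d_in), so the number of
    trainable parameters drops from d_out * d_in to r * (d_out + d_in).
    """

    def __init__(self, W, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                      # frozen pre-trained weight
        self.A = rng.normal(scale=0.01, size=(rank, W.shape[1]))
        self.B = np.zeros((W.shape[0], rank))           # zero init: update starts as a no-op
        self.scale = alpha / rank

    def __call__(self, x):
        # y = W x + (alpha / r) * B A x
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def trainable_parameters(self):
        return self.A.size + self.B.size


d_in, d_out = 512, 512
W = np.random.default_rng(1).normal(size=(d_out, d_in))
layer = LoRALinear(W, rank=4)
x = np.ones((2, d_in))

print(layer.trainable_parameters(), "trainable vs", d_in * d_out, "full")
```

With rank 4 that is 4,096 trainable parameters instead of 262,144, and because B starts at zero the adapted layer initially computes exactly the frozen layer's output.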
Read 9 tweets
🔥Excited to release LLaMA-Adapter! With only 1.2M learnable parameters and 52K instruction data, LLaMA-Adapter turns a #LLaMA into an instruction-following model within ONE hour, delivering high-quality responses!

🚀Paper: arxiv.org/abs/2303.16199
🚀Code: github.com/ZrrSkywalker/L…
We adopt learnable adaption prompts and prepend them to the input text tokens at higher transformer layers. A zero-init attention mechanism with zero gating adaptively injects the new instructional cues into LLaMA, while effectively preserving its pre-trained knowledge.
With efficient training, LLaMA-Adapter generates high-quality responses, comparable to Alpaca with fully fine-tuned 7B parameters.
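The zero-gated prompt mechanism described above can be sketched roughly as follows. This is a single-head, shapes-only illustration under my own naming (function and variable names are not from the paper); the point is only the structure: a zero-initialized gate means the adaption prompts contribute nothing at the start of training, so pre-trained behavior is preserved.

```python
import numpy as np

def zero_init_adapter_attention(tokens, prompts, gate):
    """Sketch of zero-gated adaption-prompt injection (illustrative shapes).

    tokens  : (T, d) hidden states of the input text at some layer
    prompts : (P, d) learnable adaption prompts prepended at that layer
    gate    : scalar, initialized to 0 so the prompts contribute nothing
              at the start of training.
    """
    # each token attends over the adaption prompts (single head)
    scores = tokens @ prompts.T / np.sqrt(tokens.shape[1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    prompt_signal = weights @ prompts
    # tanh(0) == 0, so at initialization the layer is an exact identity
    return tokens + np.tanh(gate) * prompt_signal


rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 8))    # 6 text tokens, hidden size 8
prompts = rng.normal(size=(4, 8))   # 4 learnable adaption prompts
out = zero_init_adapter_attention(tokens, prompts, gate=0.0)
```

As the gate is learned away from zero during fine-tuning, the instructional signal from the prompts is blended in gradually.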
Read 5 tweets
A metal ion orients SARS-CoV-2 mRNA to ensure accurate 2′-O methylation of its first nucleotide
» doi.org/10.1038/s41467…
The #SARSCoV2 #coronavirus is able to exploit changes in metal ion concentrations to disguise itself in the human host, thus evading immune responses.
Efficacy of #clarithromycin on #COVID19 pneumonia without oxygen administration; protocol for multicenter, open-label, randomized-controlled, 3-armed parallel group comparison, exploratory trial #CAMECOVID
» doi.org/10.1101/2021.0…
Japan Registry of Clinical Trials jRCTs071210011
The influence of #HLA genotype on the severity of #COVID19 infection
» doi.org/10.1111/tan.14…
A genetic link has been discovered explaining why some people catch #Covid but don't get sick. The gene is found three times as often in people who are #asymptomatic.
Read 199 tweets
🔝5 Concerns about #SARSCoV2 #Biology: A Call to Pause, Deliberate, and Revise Policy─This review is intended both as a basic resource and to initiate an open and critical dialog about SARS-CoV-2 biology for an independent and public call to action. jjcouey.medium.com/5-concerns-abo…
Structural basis of ribosomal #frameshifting during translation of the #SARSCoV2 #RNA #genome
» doi.org/10.1126/scienc…
A unique feature of the SARS-CoV-2 genome controls protein synthesis and presents an "Achilles heel" of the virus.
Brainstem neuropathology in two cases of #COVID19: #SARSCoV2 trafficking between #brain and #lung
» doi.org/10.1007/s00415…
Neuropathologic evidence strongly suggests that the pathophysiology of COVID-19 related respiratory failure includes a neurogenic component.
Read 213 tweets
