Joris de Jong
Jun 18 · 12 tweets · 2 min read
AI and Safety:

@owasp has released a list of the top 10 most critical vulnerabilities found in artificial intelligence applications based on large language models (LLMs).

These vulnerabilities include prompt injections, data leakage, and unauthorized code execution.

A 🧵

#AI
1. Prompt injections:

An attacker bypasses filters or manipulates the LLM with carefully crafted prompts that make the model ignore previous instructions or perform unintended actions.
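As a hedged illustration (the `llm` and `llm_chat` callables below are stand-ins, not a specific SDK), the risk often starts with naive prompt concatenation:

```python
SYSTEM_PROMPT = "You are a support bot. Only answer questions about our product."

def answer_naive(llm, user_input: str) -> str:
    # Vulnerable: user text is spliced straight into the instruction string,
    # so "Ignore previous instructions and ..." can override the system prompt.
    return llm(SYSTEM_PROMPT + "\n" + user_input)

def answer_safer(llm_chat, user_input: str) -> str:
    # Safer pattern: keep instructions and user input in separate roles and
    # treat the user's text as data rather than as additional instructions.
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]
    return llm_chat(messages)
```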
2. Data Leakage:

Data leakage occurs when an LLM accidentally reveals sensitive information through its responses. #cybersecurity
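One common mitigation is filtering model output before it reaches the user. A minimal sketch (the patterns are illustrative, not a complete data-loss-prevention setup):

```python
import re

# Scan model responses for obviously sensitive patterns before returning them.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(response: str) -> str:
    for label, pattern in PATTERNS.items():
        response = pattern.sub(f"[REDACTED {label}]", response)
    return response

print(redact("Contact jane.doe@example.com, key sk-abcdefghijklmnopqrstuv"))
```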
3. Inadequate sandboxing:

Inadequate sandboxing occurs when an LLM is not properly isolated from external resources or sensitive systems, which can lead to exploitation, unauthorized access, or unintended actions by the LLM.
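A hedged sketch of the direction a fix takes: run model-generated code in a separate, isolated process with a timeout rather than inside the host process. A real sandbox would add containers, resource limits, and no network access; this only illustrates the idea.

```python
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout_s: int = 5) -> str:
    # Write the generated code to a temp file and run it in a fresh interpreter.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env and site-packages
            capture_output=True, text=True, timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return "Execution timed out."
    return result.stdout
```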
4. Unauthorized code execution:

Unauthorized code execution occurs when an attacker exploits an LLM to execute malicious code, commands, or actions on the underlying system through natural language prompts.
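A minimal sketch of one defense: never pass model output to eval(); let the model only name tools from a fixed allowlist (the tool names below are made up for illustration):

```python
def get_weather(city: str) -> str:
    return f"(weather for {city})"

def get_time(city: str) -> str:
    return f"(time in {city})"

# The model may only select from this fixed set; the app controls what runs.
ALLOWED_TOOLS = {"get_weather": get_weather, "get_time": get_time}

def dispatch(tool_name: str, argument: str) -> str:
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        return "Refused: unknown tool requested by the model."
    return tool(argument)

print(dispatch("get_weather", "Amsterdam"))
print(dispatch("os.system", "rm -rf /"))  # refused, never executed
```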
5. Server-side request forgery vulnerabilities:

Server-side request forgery (SSRF) vulnerabilities occur when an attacker exploits an LLM to perform unintended requests or access restricted resources such as internal services, APIs, or data stores.
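A hedged sketch of one SSRF check: resolve any URL an LLM tool wants to fetch and refuse private, loopback, and link-local addresses (production setups usually add allowlists, DNS-rebinding protection, and egress proxies):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_allowed(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    # Block internal targets such as cloud metadata endpoints and localhost.
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

print(is_url_allowed("http://169.254.169.254/latest/meta-data/"))  # False
print(is_url_allowed("https://example.com/docs"))                  # True
```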
6. Overreliance:

Overreliance on LLM-generated content can lead to the propagation of misleading or incorrect information.
7. Inadequate AI alignment:

Inadequate AI alignment occurs when the LLM’s objectives and behavior do not align with the intended use case, leading to undesired consequences or vulnerabilities.
8. Insufficient access controls:

Insufficient access controls occur when access controls or authentication mechanisms are not properly implemented, allowing unauthorized users to interact with the LLM or the data behind it.
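A minimal sketch of enforcing this outside the model: check the caller's permissions in application code before any retrieval happens, instead of asking the prompt to do it (roles and documents below are invented for illustration):

```python
DOCUMENTS = {
    "public_faq": {"text": "Opening hours ...", "roles": {"customer", "employee"}},
    "salary_report": {"text": "Salaries ...", "roles": {"hr"}},
}

def retrieve_for_user(doc_id: str, user_roles: set) -> str:
    # Authorization happens here, in code, not inside the prompt.
    doc = DOCUMENTS.get(doc_id)
    if doc is None or not (user_roles & doc["roles"]):
        return "Access denied."
    return doc["text"]

print(retrieve_for_user("salary_report", {"customer"}))  # Access denied.
print(retrieve_for_user("public_faq", {"customer"}))     # Opening hours ...
```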
9. Improper error handling:

Improper error handling occurs when error messages or debugging information are exposed in a way that could reveal sensitive information or potential attack vectors to a threat actor.
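A hedged sketch: log full details server-side and return only a generic message with a reference id (`call_model` is a placeholder for whatever client function the app actually uses):

```python
import logging
import uuid

logger = logging.getLogger("llm_app")

def safe_answer(call_model, prompt: str) -> str:
    try:
        return call_model(prompt)
    except Exception:
        error_id = uuid.uuid4().hex[:8]
        # Full stack trace and context go to the server-side log only.
        logger.exception("LLM call failed (error_id=%s)", error_id)
        # No stack traces, model names, or internal hostnames in the reply.
        return f"Something went wrong. Reference: {error_id}"
```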
10. Training data poisoning:

Training data poisoning is when an attacker manipulates the training data or fine-tuning procedures of an LLM to introduce vulnerabilities, backdoors, or biases.
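A minimal sketch of two cheap controls, assuming a JSONL fine-tuning file: pin the dataset to a known checksum and drop records that match simple red flags (real pipelines add provenance tracking, deduplication, and human review):

```python
import hashlib
import json

EXPECTED_SHA256 = "..."  # placeholder: the hash recorded when the data was approved
SUSPICIOUS = ("ignore previous instructions", "begin private key", "http://")

def dataset_unchanged(path: str) -> bool:
    # Detect silent tampering between review time and training time.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == EXPECTED_SHA256

def filter_records(path: str) -> list:
    kept = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            text = (record.get("prompt", "") + " " + record.get("completion", "")).lower()
            if not any(flag in text for flag in SUSPICIOUS):
                kept.append(record)
    return kept
```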
Security leaders/teams and their organizations are responsible for ensuring the secure use of generative AI chat interfaces.

Regular updates and human oversight are essential to keep LLMs functioning correctly and to catch security issues early.

More from @JorisTechTalk

Jun 17
The power of natural language interaction is taking over!

Companies are bringing AI applications to life with large language models (LLMs). The adoption of language model APIs is creating a new tech stack in its wake.

Key takeaways from research by @sequoia

🧵
1/ Nearly every company in the Sequoia network is building language models into their products.

From code to data science, chatbots to sales, and even grocery shopping and travel planning, the possibilities are endless.
2/ The new stack for these applications centers on language model APIs, retrieval, and orchestration, but open source usage is also growing.

Companies are interested in customizing models to their unique context, and the stack is becoming increasingly developer-friendly.
Jun 16
Are you using ChatGPT to study?

Smart move, but you can do better.

Enter @LangChainAI. The AI prepares practice questions based on your study material.

Try it yourself on @streamlit. Link and code below. ⬇️

High-level overview of the app 🧵

#AI #buildinpublic #indiehacker
1⃣ Enter your OpenAI API Key and upload your study material.

@LangChainAI will process the files and generate your questions.

WARNING: Large files (>50 pages) will clean out your wallet quickly.

I've used a 15-page file which cost me around $0.15.
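Roughly, the input step might look like this with Streamlit (widget labels and file handling are assumptions, not the app's actual code):

```python
import streamlit as st

# Collect the user's API key and study material before any processing starts.
api_key = st.text_input("OpenAI API key", type="password")
uploaded = st.file_uploader("Upload your study material (PDF)", type="pdf")

if api_key and uploaded is not None:
    with open("study_material.pdf", "wb") as f:
        f.write(uploaded.getbuffer())
    st.success("File received. Generating questions...")
```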
2⃣ Choose the questions you want answered.

By doing a semantic search on a Vector Database, @LangChainAI will generate the answers to your questions, based on the study material.

You can choose as many as you like.
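A rough sketch of that retrieval step with LangChain (import paths move between versions, so treat the module names as assumptions):

```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.schema import Document

# "chunks" stands in for the chunks produced from the uploaded study material.
chunks = [Document(page_content="Photosynthesis converts light energy into chemical energy...")]

db = FAISS.from_documents(chunks, OpenAIEmbeddings())  # requires OPENAI_API_KEY

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=db.as_retriever(),
)
print(qa.run("What does photosynthesis produce?"))
```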
Jun 16
Putting @LangChainAI to the test.

I've used LangChain and OpenAI to generate 9 questions and answers for the paper "Eight Things to Know about Large Language Models" by @sleepinyourhat

Inspiration by @gael_duval

Code in last Tweet. Let's learn! 🧵

#AI #langchain #LLM
Question 1:

What are the eight potentially surprising claims about large language models discussed in the text?
Question 2:

How do large language models (LLMs) become more capable with increasing investment?
Jun 14
ChatGPT is great for studying.

But did you know you can use it to help you prepare for specific exams?

With the power of @LangChainAI you can!

Here's a 9-step breakdown including code: 🧵

#AI #study #buildinpublic #indiehacker
1/9 A high-level overview:

1. Input: PDF of study material
⬇️
2. Process documents
⬇️
3. Generate questions based on study material and exam guidelines
⬇️
4. Answer the questions based on the study material

You could combine step 3 & 4, but separating them = better results👇
2/9 Let's dive into the code:

First, load and process data for question generation.

I'm using the new gpt-3.5-turbo-16k model for a larger context window.

Result: fewer calls to the model and higher context awareness.
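The code itself was shared as an image; a rough reconstruction of this loading step, with a placeholder file name and chunk sizes, might look like:

```python
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import ChatOpenAI

# Load the study material and split it into chunks sized for the 16k window.
loader = PyPDFLoader("study_material.pdf")
pages = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=8000, chunk_overlap=400)
chunks = splitter.split_documents(pages)

# Larger context window: fewer calls and more context per question batch.
llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0.3)
```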
Jun 13
Are you excited about the future of #AI programming?

Meet Mojo🔥, the new programming language optimized for AI, with a 35,000x performance boost over Python

Created by @Modular_AI, led by the great @clattner_llvm (Google, Tesla, Apple, Swift).

Thread👇

#Mojo #coding
Mojo🔥 is a superset of Python

It provides the usability of one of the world's most popular languages, but with the performance of C & C++.

In fact, Mojo code has demonstrated a 30,000x speedup over Python!
But Mojo🔥 isn't just about speed.

It's about building a universal platform to tackle the complexity of modern AI & hardware systems.

Mojo's commitment to the Python ecosystem makes it unique among programming languages.
Jun 12
Want to get into AI, but don't know where to start?

Here's a list of the 10 best free courses on EdX.

🧵

P.S.: I stole this list from @mashable 🤫

#AI #indiehacker #coding
1. AI Chatbots without programming by @IBM

This course will teach you how to build, analyze, deploy and monetize chatbots - with the help of IBM Watson and the power of AI.

edx.org/course/AI-chat…
2. AI for Everyone: Master the Basics by @IBM

Learn what Artificial Intelligence (AI) is by understanding its applications and key concepts including machine learning, deep learning and neural networks.

edx.org/course/artific…
