Another week, another set of tools. This week let's look at Snapchat, Google Earth, and YouTube. That includes two #python tools and one web app. Shall we?
(1/5)
The first #OSINT tool is made by @djnemec and it's a #python tool called Snapchat Story Downloader. It lets you build a database of locations of interest and then extract Snapchat stories from those locations indefinitely. There's a classifier too. Great!
The second #OSINT tool is a #python tool I made in response to @raymserrato, who was looking to automate screenshot capture of Google Earth. Earthshot will open and screenshot a list of coordinates you specify in a CSV. It's slow, though!
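The core idea behind Earthshot can be sketched in a few lines. To be clear, this is my own illustrative sketch, not Earthshot's actual code, and the `@lat,lon,<altitude>d` URL format is an assumption based on what the Google Earth web address bar shows:

```python
import csv
import io

def earth_urls(csv_text, altitude_m=1000):
    """Build one Google Earth web URL per lat/lon row in a CSV.

    The "@lat,lon,<altitude>d" format is an assumption; tweak it to
    match what your own Earth address bar shows.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        f"https://earth.google.com/web/@{float(r['lat'])},{float(r['lon'])},{altitude_m}d"
        for r in reader
    ]

sample = "lat,lon\n48.8584,2.2945\n40.6892,-74.0445\n"
for url in earth_urls(sample):
    # Feed each URL to a headless browser (e.g. Selenium) and call
    # save_screenshot() -- loading the 3D view is the slow part.
    print(url)
```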
The third #OSINT tool is a web app called you-tldr, and it lets you quickly scan the contents of a YouTube video, including transcription, summaries, editing, etc. The transcription includes timestamps too!
Remember #OSINT != tools. Tools help you plan and collect data, but the end result of that tool is not OSINT. You have to analyze, verify, receive feedback, refine, and produce a final, actionable product of value before you can call it intelligence.
Thanks for reading!
(5/5)
Another week, another workflow. This week we’re going to look at YouTube and how to deconstruct videos, analyze the outputs, and apply automation and AI-lite to the results at scale.
Let’s go!
(1/10)
Step 1: Select a Video of Interest
Go to YouTube and find any video. Don't pick anything too long for this workflow because it'll be difficult to work with. Keep it under 15 minutes or 200 MB if you can. Great use cases are wartime videos, human rights violations, etc.
(2/10)
Step 2: Download the Video
You can inspect the element and extract the video, use a Python script, or use a web app. I recommend a quick web app called YT1s that lets you paste a YouTube link and download the video.
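If you'd rather script this step, the yt-dlp command-line tool (a maintained youtube-dl fork) does the same job. A minimal sketch that just builds the command; the format selector caps resolution so files stay under the size guideline from Step 1:

```python
def ytdlp_command(url, out_template="%(title)s.%(ext)s", max_height=720):
    """Build a yt-dlp invocation that caps resolution to keep files small."""
    fmt = (f"bestvideo[height<={max_height}]+bestaudio"
           f"/best[height<={max_height}]")
    return ["yt-dlp", "-f", fmt, "-o", out_template, url]

cmd = ytdlp_command("https://www.youtube.com/watch?v=VIDEO_ID")
print(" ".join(cmd))
# import subprocess; subprocess.run(cmd, check=True)  # to actually download
```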
This week we’ll discuss how to find date/time information of web content even if it’s not obvious. This will help you establish a timeline of content or determine if an article has been altered since the original publication.
Let’s get started.
(1/8)
Step 1: Check the URL
This is a no-brainer, but a lot of web content includes the original publication date in the URL. Keep in mind that this could be an updated URL; we'll look at other data to determine that next.
(2/8)
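Step 1 is easy to automate at scale. A minimal sketch covering two common date patterns; real-world URLs vary a lot, so treat these regexes as starting points rather than a complete list:

```python
import re

DATE_PATTERNS = [
    re.compile(r"/(\d{4})/(\d{1,2})/(\d{1,2})/"),  # /2021/05/03/
    re.compile(r"[-/](\d{4})-(\d{2})-(\d{2})"),    # -2021-05-03
]

def date_from_url(url):
    """Return (year, month, day) if the URL embeds a date, else None."""
    for pattern in DATE_PATTERNS:
        match = pattern.search(url)
        if match:
            return tuple(int(g) for g in match.groups())
    return None

print(date_from_url("https://example.com/news/2021/05/03/some-story"))  # (2021, 5, 3)
```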
Step 2: Check the Sitemap
Simply append "/sitemap.xml" to a site's root URL to check if a sitemap is available. The sitemap usually includes a date/time stamp for when each page was last updated. It's great for finding the different URL types on a website too!
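Sitemaps are plain XML, so the Python standard library can parse them. A minimal sketch that maps each `<loc>` to its `<lastmod>` stamp; the sample document here is made up for illustration:

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace from the sitemaps.org protocol.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_dates(xml_text):
    """Map each <loc> in a sitemap to its <lastmod> value (or None)."""
    root = ET.fromstring(xml_text)
    return {
        url.findtext("sm:loc", namespaces=NS): url.findtext("sm:lastmod", namespaces=NS)
        for url in root.findall("sm:url", NS)
    }

sample = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/post-1</loc><lastmod>2021-04-30</lastmod></url>
</urlset>"""
print(sitemap_dates(sample))  # {'https://example.com/post-1': '2021-04-30'}
```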
It’s time for another round of OSINT tools to help you improve your efficiency and uncover new information. A quick overview:
[+] Reversing Information
[+] Automating Searches with #Python
[+] Testing/Using APIs
RT for Reach! 🙏
(1/5) 👇
The first #OSINT tool is called Mitaka. It’s a browser extension that will reverse multiple data points right from your browser. Right-click what you want to reverse and Mitaka will show you what sources are available. Improve your efficiency!
The second #OSINT tool is called Sitedorks from @zarcolio. It's a #Python tool to automate your Google queries. Enter a query and it'll open 25+ tabs, checking your parameters across a variety of website categories.
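The general trick behind Sitedorks can be sketched with the standard library. The site groupings below are hypothetical stand-ins (Sitedorks ships much larger category lists); swap the final `print` for `webbrowser.open_new_tab(url)` to actually launch the tabs:

```python
import urllib.parse
import webbrowser  # noqa: F401  -- used if you uncomment open_new_tab below

# Hypothetical category lists; the real tool covers far more sites.
CATEGORIES = {
    "social": ["twitter.com", "reddit.com"],
    "paste": ["pastebin.com", "gist.github.com"],
}

def dork_urls(query, categories=CATEGORIES):
    """Build one Google search URL per category, OR-ing site: operators."""
    urls = []
    for sites in categories.values():
        dork = f"{query} " + " OR ".join(f"site:{s}" for s in sites)
        urls.append("https://www.google.com/search?q=" + urllib.parse.quote_plus(dork))
    return urls

for url in dork_urls('"john doe"'):
    print(url)  # webbrowser.open_new_tab(url) to open each in the browser
```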
The first #OSINT tool is a bookmarklet I built called Forage, meant to expand your search across popular social media sites. It takes the username from FB/IG, Twitter, and LinkedIn and expands your search across the web. Expect updates!
The second #OSINT tool is a browser extension called TabTrum that takes a snapshot of your active tabs and saves them so you can reopen that same combination again with one click. It's saved me SO much time with work and OSINT investigations.
I think I'm going to stick with 2 tools a week. One web-based, the other script-based. This week it's about archives and scaling your work in search engines.
👇 (1/4)
The first tool is from @ODU and it's a web-based tool that will tell you the date a website started and show the earliest archive from multiple sources.
The next tool, which I found through @s0md3v, is called degoogle, and it lets you extract Google search results directly. I included it because the author claims not to have run into a captcha for weeks, and Somdev said he had zero across 120+ requests.
In addition to OSINT Tool Tuesday, I'm going to start doing Workflow Wednesday where I unpack a process, instead of a tool, for open source intelligence. This week I'm going to talk about how to deconstruct a new social media platform.
👇 (1/9)
Step 1: Map the platform without an account.
You want to see what you can access without registering. Explore the platform from the website but also check out what's indexed by Google and other search engines using site:, -site:, inurl:, intext:, and other operators.
👇 (2/9)
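The operator queries from Step 1 are easy to generate for any new platform. A quick sketch; the inurl: path guesses (user, profile) are illustrative only and should be adapted to the URL patterns you actually observe:

```python
# Operator variants to see what a platform exposes to search engines.
OPERATORS = ["", "inurl:user", "inurl:profile", "filetype:pdf"]

def recon_queries(domain):
    """Build search-operator queries for mapping a platform, no account needed."""
    queries = [f"site:{domain} {op}".strip() for op in OPERATORS]
    queries.append(f'-site:{domain} "{domain}"')  # mentions off-platform
    return queries

for q in recon_queries("newplatform.example"):
    print(q)
```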
Step 2: Understand the platform's privacy policies and other fine print.
You want to see what the risks are for registering an account including what information is collected and shared. You also want to know what other users can view once you've registered.