It’s time for another round of OSINT tools to help you improve your efficiency and uncover new information. A quick overview:
[+] Reversing Information
[+] Automating Searches with #Python
[+] Testing/Using APIs
RT for Reach! 🙏
(1/5) 👇
The first #OSINT tool is called Mitaka. It’s a browser extension that will reverse multiple data points right from your browser. Right-click what you want to reverse and Mitaka will show you what sources are available. Improve your efficiency!
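Under the hood, a tool like Mitaka first has to recognize what kind of indicator you highlighted before it can offer sources. A minimal sketch of that classification step in #Python (the regexes are illustrative stand-ins, not Mitaka's actual matchers):

```python
import re

# Illustrative patterns -- Mitaka's real matchers are far more thorough.
IOC_PATTERNS = {
    "ipv4":   re.compile(r"^(?:\d{1,3}\.){3}\d{1,3}$"),
    "md5":    re.compile(r"^[a-fA-F0-9]{32}$"),
    "sha256": re.compile(r"^[a-fA-F0-9]{64}$"),
    "domain": re.compile(r"^(?:[a-z0-9-]+\.)+[a-z]{2,}$", re.I),
}

def classify_ioc(selection):
    """Return the indicator type for a highlighted string, or 'unknown'."""
    selection = selection.strip()
    for ioc_type, pattern in IOC_PATTERNS.items():
        if pattern.match(selection):
            return ioc_type
    return "unknown"
```

Once the type is known, mapping it to the right lookup services (VirusTotal for hashes, WHOIS for domains, etc.) is just a dictionary away.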
The second #OSINT tool is called Sitedorks from @zarcolio. It’s a #Python tool to automate your Google queries. Enter a query and it can open 25+ tabs, checking your search terms across a variety of website categories.
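The core idea behind Sitedorks can be sketched in a few lines of #Python: build one site:-scoped query per site in a category, then open each in a tab. The category list here is a tiny made-up stand-in for the much larger one Sitedorks ships with:

```python
import urllib.parse
import webbrowser

# Hypothetical category list -- Sitedorks ships its own, much larger one.
CATEGORIES = {
    "social": ["twitter.com", "facebook.com", "linkedin.com"],
    "code":   ["github.com", "gitlab.com"],
}

def build_dork_urls(query, category):
    """Build one Google search URL per site in the chosen category."""
    urls = []
    for site in CATEGORIES[category]:
        q = urllib.parse.quote_plus(f'site:{site} "{query}"')
        urls.append(f"https://www.google.com/search?q={q}")
    return urls

# Opening the tabs (commented out so the sketch stays side-effect free):
# for url in build_dork_urls("jane doe", "social"):
#     webbrowser.open_new_tab(url)
```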
The third #OSINT tool is @getpostman. I recently ran a poll, and 46% of OSINT professionals said they aren’t using or testing APIs in their workflow. Postman has streamlined my API testing and management process. Start by importing a curl command from anywhere.
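If you want to see what a curl import boils down to, here’s a rough stdlib-only #Python equivalent of one imported request; the endpoint and token are placeholders, not a real API:

```python
import json
import urllib.request

# Equivalent of: curl -H "Authorization: Bearer TOKEN" https://api.example.com/v1/me
# (endpoint and token are placeholders -- substitute your own API)
req = urllib.request.Request(
    "https://api.example.com/v1/me",
    headers={"Authorization": "Bearer TOKEN", "Accept": "application/json"},
    method="GET",
)

# Sending is left commented out so the sketch has no network side effects:
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```

Postman gives you the same thing with history, environments, and saved collections on top.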
Remember #OSINT != tools. Tools help you plan and collect data, but a tool’s output alone is not OSINT. You have to analyze, receive feedback, refine, and produce a final, actionable product of value before you can call it intelligence.
Thanks for reading!
(5/5)☝️
As a quick bonus, my #OSINT bookmarklet Forage now checks Reddit and YouTube after extracting the username from the URL bar of most popular social media platforms. Give it a spin and let me know what you think.
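For context, Forage itself is a JavaScript bookmarklet; the username-extraction step it performs can be sketched in #Python like this (the host list is a simplification; real profile URL layouts vary):

```python
from urllib.parse import urlparse

# Platforms where the username is the first path segment
# (a simplification -- real profile URL layouts vary by platform).
PROFILE_HOSTS = {"twitter.com", "www.instagram.com", "www.facebook.com"}

def username_from_url(url):
    """Pull the username out of a social media profile URL."""
    parsed = urlparse(url)
    if parsed.hostname not in PROFILE_HOSTS:
        return None
    segments = [s for s in parsed.path.split("/") if s]
    return segments[0] if segments else None
```

From there, the extracted username gets plugged into search URLs for the other platforms.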
This week we’ll discuss how to find date/time information of web content even if it’s not obvious. This will help you establish a timeline of content or determine if an article has been altered since the original publication.
Let’s get started.
(1/8)
Step 1: Check the URL
This is a no-brainer, but a lot of web content includes the original publication date in the URL. Keep in mind that the URL could have been updated since; we’ll look at other data to check for that next.
(2/8)
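The URL check is easy to script. A small #Python helper that pulls a /YYYY/MM/DD/ pattern out of a URL path (the example URLs are made up):

```python
import re
from datetime import date

# Many CMS URLs embed the publication date in the path, e.g.
# https://example.com/2021/03/14/article-title (URL is illustrative).
DATE_IN_PATH = re.compile(r"/(\d{4})/(\d{1,2})(?:/(\d{1,2}))?/")

def date_from_url(url):
    """Extract a publication date from a URL path, if one is present."""
    m = DATE_IN_PATH.search(url)
    if not m:
        return None
    year, month, day = m.group(1), m.group(2), m.group(3) or "1"
    try:
        return date(int(year), int(month), int(day))
    except ValueError:  # e.g. /1234/99/ is not a real date
        return None
```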
Step 2: Check the Sitemap
Simply add “/sitemap.xml” to a site’s root URL to check if a sitemap is available. The sitemap usually includes a timestamp for when each page was last updated. It’s great for finding the different URL types on a website too!
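Parsing the sitemap is straightforward with the standard library. A sketch that maps each &lt;loc&gt; to its &lt;lastmod&gt;, using a trimmed, made-up sitemap payload (in practice you’d fetch the XML with urllib first):

```python
import xml.etree.ElementTree as ET

# A trimmed sitemap.xml payload (structure per the sitemaps.org protocol;
# the URLs and dates are made up for the example).
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/post-1</loc><lastmod>2021-02-01</lastmod></url>
  <url><loc>https://example.com/post-2</loc><lastmod>2021-03-15</lastmod></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def lastmod_index(xml_text):
    """Map each URL in a sitemap to its <lastmod> timestamp."""
    root = ET.fromstring(xml_text)
    return {
        url.findtext("sm:loc", namespaces=NS): url.findtext("sm:lastmod", namespaces=NS)
        for url in root.findall("sm:url", NS)
    }
```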
The first #OSINT tool is a bookmarklet I built called Forage, meant to expand your search across popular social media sites. It takes the username from FB/IG, Twitter, and LinkedIn profile pages and expands your search across the web. Expect updates!
The second #OSINT tool is a browser extension called TabTrum that takes a snapshot of your active tabs and saves them so you can reopen that same combination again with one click. It's saved me SO much time with work and OSINT investigations.
I think I'm going to stick with 2 tools a week. One web-based, the other script-based. This week it's about archives and scaling your work in search engines.
👇 (1/4)
The first tool is a web-based one from @ODU that will tell you the date a website started and show its earliest archived copy from multiple sources.
The next tool, which I found through @s0md3v, is called degoogle. It lets you extract Google search results directly, no browser needed. I included it because the author claims not to have run into a captcha for weeks, and Somdev said he hit 0 across 120+ requests.
In addition to OSINT Tool Tuesday, I'm going to start doing Workflow Wednesday where I unpack a process, instead of a tool, for open source intelligence. This week I'm going to talk about how to deconstruct a new social media platform.
👇 (1/9)
Step 1: Map the platform without an account.
You want to see what you can access without registering. Explore the platform from the website but also check out what's indexed by Google and other search engines using site:, -site:, inurl:, intext:, and other operators.
👇 (2/9)
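A starter set of those operator queries can be generated programmatically. A small #Python sketch (the platform name is hypothetical; adjust the operators to what the platform's URLs actually look like):

```python
def recon_dorks(platform):
    """A starter set of search-engine queries for mapping a new platform."""
    return [
        f"site:{platform}",                    # everything indexed
        f"site:{platform} inurl:profile",      # profile pages
        f'site:{platform} intext:"joined"',    # account metadata in page text
        f'"{platform}" -site:{platform}',      # mentions elsewhere on the web
    ]

# Hypothetical target -- substitute the platform you're mapping.
queries = recon_dorks("newplatform.example")
```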
Step 2: Understand the platform's privacy policies and other fine print.
You want to see what the risks are for registering an account including what information is collected and shared. You also want to know what other users can view once you've registered.
[#OSINT] You can use Twint to find indirect relationships between users. By matching the “conversation_id” field across multiple queries, you can uncover connections a single search would miss.
For example, say you’re trying to find violent users on Twitter who are threatening an influencer. Twitter’s native search limits you to “influencer name” + “violent keyword”. With Twint, you can search for all of the violent keywords at once and then match the results against mentions of the influencer.
You can also mine replies with Twint. It’s not in the wiki, but setting “c.To” in your Python script will let you pull tweets sent “to” someone. By finding the accounts that mention said influencer most often, you can take this a step further.
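The cross-matching step described above looks roughly like this in #Python. The tweet records here are fabricated samples standing in for two Twint result sets, though the conversation_id field name mimics Twint's output:

```python
# Fabricated sample results -- in practice these come from two Twint searches:
# one for the violent keywords, one for mentions of the influencer.
keyword_hits = [
    {"username": "user_a", "conversation_id": "111", "tweet": "..."},
    {"username": "user_b", "conversation_id": "222", "tweet": "..."},
]
influencer_mentions = [
    {"username": "user_c", "conversation_id": "111", "tweet": "..."},
    {"username": "user_d", "conversation_id": "333", "tweet": "..."},
]

def shared_conversations(results_a, results_b):
    """Conversation IDs appearing in both result sets -- the indirect link."""
    ids_a = {t["conversation_id"] for t in results_a}
    ids_b = {t["conversation_id"] for t in results_b}
    return ids_a & ids_b
```

Any conversation ID in the intersection is a thread where both a flagged keyword and an influencer mention appear, which is exactly the indirect relationship you're after.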