Cyber Detective💙💛
Every day I write about #osint (Open Source Intelligence) tools and techniques, and a little bit about forensics and cybersecurity in general. Work at @netlas_io

Sep 6, 2021, 14 tweets

From this thread you will learn about 12 key #OSINT services for gathering information about a website.

I'll show them using the example of the most famous Russian search engine, yandex.ru, and its subdomains.

Step 1

Collect basic information about the domain:

IP address lookup, WHOIS records, DNS records, ping, traceroute, nslookup.

centralops.net
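If you prefer to script these basic lookups, they take only a few lines. A minimal Python sketch, assuming the system whois and dig utilities are installed:

```python
# Basic domain recon: IP resolution, WHOIS record, common DNS records.
# Assumes the `whois` and `dig` command-line tools are installed.
import socket
import subprocess

domain = "yandex.ru"

# Resolve the domain to its IP addresses
ips = sorted({info[4][0] for info in socket.getaddrinfo(domain, None)})
print("IP addresses:", ips)

# WHOIS record via the system whois client (first 500 characters)
whois_out = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
print(whois_out[:500])

# DNS records of several common types via dig
for rtype in ("A", "MX", "NS", "TXT"):
    out = subprocess.run(["dig", "+short", domain, rtype],
                         capture_output=True, text=True).stdout.strip()
    print(f"{rtype}: {out}")
```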

Step 2

Find out what technologies were used to build the site: frameworks, #javascript libraries, analytics and tracking tools, widgets, payment systems, content delivery networks, etc.

builtwith.com
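BuiltWith relies on a large signature database; you can spot the most obvious hints yourself from response headers and page source. A rough Python sketch with a tiny, purely illustrative signature list:

```python
# Very rough technology fingerprinting: check response headers and HTML
# for a few well-known signatures. Services like BuiltWith use far larger
# signature databases; this only illustrates the idea.
import requests

resp = requests.get("https://yandex.ru", timeout=10)
html = resp.text.lower()

print("Server header:", resp.headers.get("Server"))
print("X-Powered-By:", resp.headers.get("X-Powered-By"))

signatures = {
    "jquery": "jQuery",
    "react": "React",
    "mc.yandex.ru/metrika": "Yandex.Metrika",
    "googletagmanager.com": "Google Tag Manager",
    "google-analytics.com": "Google Analytics",
}
for needle, name in signatures.items():
    if needle in html:
        print("Found:", name)
```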

Step 3

Get a list of sites belonging to the same owner (sites that share the same Yandex.Metrika and Google Analytics counter numbers, as well as other common identifiers)

builtwith.com/relationships/

Find sites with the same Facebook App ID

analyzeid.com
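The pivots these services rely on are tracker identifiers embedded in the page source. A sketch of extracting them yourself (the regexes cover only the most common formats and will miss variants):

```python
# Extract Google Analytics / Yandex.Metrika / Facebook App identifiers
# from a page; these can then be used as pivots in builtwith.com/relationships
# or analyzeid.com. Regexes are simplified and may produce false positives.
import re
import requests

html = requests.get("https://yandex.ru", timeout=10).text

ga_ids = set(re.findall(r"UA-\d{4,10}-\d{1,4}", html))           # legacy Google Analytics
ga4_ids = set(re.findall(r"G-[A-Z0-9]{6,12}", html))              # GA4 measurement IDs
metrika_ids = set(re.findall(r"mc\.yandex\.ru/watch/(\d+)", html))
# fb:app_id meta tag; attribute order can vary, so this may miss some pages
fb_app_ids = set(re.findall(r'fb:app_id["\']\s+content=["\'](\d+)', html))

print("Google Analytics:", ga_ids | ga4_ids)
print("Yandex.Metrika:", metrika_ids)
print("Facebook App IDs:", fb_app_ids)
```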

Step 4

Map subdomains.

dnsdumpster.com/#domainmap
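DNSDumpster builds the map for you; one passive source you can also query directly is certificate transparency logs. A sketch using the unofficial crt.sh JSON output (the endpoint is rate-limited and its behavior may change):

```python
# Passive subdomain discovery from certificate transparency logs (crt.sh).
# The JSON endpoint is unofficial and can be slow or rate-limited.
import requests

domain = "yandex.ru"
resp = requests.get(
    "https://crt.sh/",
    params={"q": f"%.{domain}", "output": "json"},
    timeout=30,
)

subdomains = set()
for entry in resp.json():
    for name in entry.get("name_value", "").splitlines():
        if name.endswith(domain) and "*" not in name:
            subdomains.add(name.lower())

for sub in sorted(subdomains):
    print(sub)
```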

Step 5

Look for email addresses associated with the domain or its subdomains

hunter.io/search/

or

snov.io/email-finder
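Both services also offer APIs. A sketch against what I recall as Hunter's v2 domain-search endpoint (verify the URL and response fields against the current docs; the API key is a placeholder):

```python
# Query Hunter.io's domain-search API for emails tied to a domain.
# Endpoint and response layout follow Hunter's v2 API as documented at the
# time of writing; verify before relying on it. API_KEY is a placeholder.
import requests

API_KEY = "YOUR_HUNTER_API_KEY"  # placeholder, get one from hunter.io
resp = requests.get(
    "https://api.hunter.io/v2/domain-search",
    params={"domain": "yandex.ru", "api_key": API_KEY},
    timeout=15,
)

data = resp.json().get("data", {})
for item in data.get("emails", []):
    print(item.get("value"), "-", item.get("type"))
```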

Step 6

Collect data on search engine rankings and approximate traffic.

alexa.com/siteinfo/
similarweb.com

Step 7

Download documents (PDF, DOCX, XLSX, PPTX) from the site and analyze their metadata. This way you can find the names of the organization's employees, usernames in its systems, and email addresses.

github.com/laramies/metag…
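metagoofil automates the search-download-extract loop; the extraction step alone is short. A sketch that reads author/creator metadata from already-downloaded PDFs with the pypdf library (the downloads directory is a placeholder):

```python
# Extract document metadata (author, creator, producer) from PDFs already
# downloaded from the target site. Requires pypdf (pip install pypdf);
# the "downloads" directory is a placeholder.
from pathlib import Path
from pypdf import PdfReader

for pdf_path in Path("downloads").glob("*.pdf"):
    meta = PdfReader(pdf_path).metadata
    if meta:
        print(pdf_path.name)
        print("  Author:  ", meta.author)
        print("  Creator: ", meta.creator)
        print("  Producer:", meta.producer)
```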

Step 8

Use Google Dorks to look for database dumps, office documents, log files, and potentially vulnerable pages.

dorks.faisalahmed.me
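Dork generators simply assemble query strings, so you can build them yourself. A few classic examples for the target domain (run the queries manually in a browser; automated querying violates Google's terms of service):

```python
# Assemble a few classic Google dorks for a target domain.
# Paste the resulting queries into Google manually.
domain = "yandex.ru"

dorks = [
    f"site:{domain} filetype:pdf OR filetype:docx OR filetype:xlsx",
    f"site:{domain} filetype:sql OR filetype:log",
    f"site:{domain} inurl:admin OR inurl:login",
    f'site:{domain} intitle:"index of"',
]
for dork in dorks:
    print(dork)
```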

Step 9

Calculate a website fingerprint to search for it in Shodan, Censys, BinaryEdge, Onyphe, and other "hacker" search engines.

mmhdan.herokuapp.com
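One widely used fingerprint is the MurmurHash3 of the site's favicon, which Shodan indexes as http.favicon.hash. A sketch of computing it, assuming the mmh3 package (note that the base64 encoding must keep its newline wrapping, since that is what Shodan hashes):

```python
# Compute the favicon hash used by Shodan's http.favicon.hash filter:
# MurmurHash3 of the base64-encoded favicon, with line wrapping preserved.
# Requires the mmh3 package (pip install mmh3).
import base64
import mmh3
import requests

favicon = requests.get("https://yandex.ru/favicon.ico", timeout=10).content
b64 = base64.encodebytes(favicon)       # keeps the newline wrapping
favicon_hash = mmh3.hash(b64)
print("Shodan query: http.favicon.hash:%d" % favicon_hash)
```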

Step 10

Look for old versions of the site in archives and search engine caches (sometimes this way you can find addresses and contact information of the owners that have since been removed from the live site).

cipher387.github.io/quickcacheanda…
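To quickly check whether archived snapshots exist at all, the Wayback Machine exposes a simple availability endpoint. A sketch (it returns only the closest snapshot; the CDX API gives full listings):

```python
# Check the Wayback Machine for the closest archived snapshot of a URL.
import requests

resp = requests.get(
    "https://archive.org/wayback/available",
    params={"url": "yandex.ru"},
    timeout=15,
)

snapshot = resp.json().get("archived_snapshots", {}).get("closest")
if snapshot:
    print("Snapshot:", snapshot["url"], "from", snapshot["timestamp"])
else:
    print("No archived snapshots found")
```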

Step 11

Partially automate the process of finding important data in the archives. Download archived copies of pages from web.archive.org with Waybackpack:
github.com/jsvine/wayback…
Then search them for phone numbers, emails, and nicknames using Grep for OSINT:
github.com/cipher387/grep…
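If you would rather not install another tool, the grep step is easy to reproduce. A sketch that scans Waybackpack's output directory for emails and phone-number-like strings (the directory name is a placeholder and the phone regex is deliberately loose):

```python
# Scan files downloaded by Waybackpack for emails and phone-like strings.
# "archive_dump" is a placeholder for Waybackpack's output directory;
# the phone regex is intentionally loose and will produce false positives.
import re
from pathlib import Path

email_re = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
phone_re = re.compile(r"\+?\d[\d\s().-]{8,}\d")

for path in Path("archive_dump").rglob("*"):
    if not path.is_file():
        continue
    text = path.read_text(errors="ignore")
    for match in set(email_re.findall(text)) | set(phone_re.findall(text)):
        print(f"{path}: {match}")
```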

Step 12

Find out the approximate geographical location of the site

iplocation.net/ip-lookup
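Geolocation data is also available from free JSON APIs. A sketch using ip-api.com (field names follow its documented response; the free tier is for non-commercial use only):

```python
# Approximate geolocation of the site's IP via the free ip-api.com service.
import socket
import requests

ip = socket.gethostbyname("yandex.ru")
geo = requests.get(f"http://ip-api.com/json/{ip}", timeout=10).json()
print(ip, "->", geo.get("country"), geo.get("city"), geo.get("org"))
```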

(There is a separate 12-step thread about gathering information about a place)

This short thread is over.

But there are many times more tools for gathering information about domains. My OSINT collection already includes more than 60 of them:

cipher387.github.io/osint_stuff_to…

Follow @cyb_detective to learn about new tools every day.
