How to automate YouTube video search with Python without using an API key?

Use this package pypi.org/project/youtub…

#osintautomation

⬇️🧵
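The package name is truncated in the tweet above; assuming it is youtube-search-python (pip install youtube-search-python), a minimal sketch of the keyword search looks like this (the query is just a placeholder):

from youtubesearchpython import VideosSearch

# Search YouTube videos by keyword; no API key is needed.
videos_search = VideosSearch("osint tools", limit=5)

for video in videos_search.result()["result"]:
    print(video["title"], video["link"])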
You can also use this package to search for YouTube channels by keywords.

🧵⬆️⬇️ #osintautomation
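A sketch of the channel search, under the same assumption that the package is youtube-search-python:

from youtubesearchpython import ChannelsSearch

# Search YouTube channels by keyword (region is optional).
channels_search = ChannelsSearch("osint", limit=5, region="US")

for channel in channels_search.result()["result"]:
    print(channel["title"], channel["id"])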
You can also search by keyword for videos within a particular channel.

🧵⬇️⬆️ #osintautomation
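And a sketch of searching inside a single channel, again assuming youtube-search-python (the channel ID below is only a placeholder; replace it with the target channel's ID):

from youtubesearchpython import ChannelSearch

# Search for videos matching a keyword within one channel,
# identified by its channel ID.
channel_search = ChannelSearch("python", "UC_x5XG1OV2P6uZZ5FSM9Ttw")

print(channel_search.result())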
And here are pictures of the results of the 3 code samples above.

If you want to run these examples yourself, you can copy them from this repository on GitHub:

github.com/cipher387/Pyth…

🧵⬆️⬇️ #osintautomation

More from @cyb_detective

Sep 7
One of the best ways to improve reverse image search results is to specify a site in the query.

For example, if you know there is a hotel in the picture, add this to the query alongside the uploaded image:

site:hotels.com

Examples of sites to improve reverse image search for different objects 🧵⬇️
HOTELS

hotels com
booking com
tripadvisor com
expedia com

TRAINS

railcolornews com
trains com
rrpicturearchives net
railarchive net

PUBLIC TRANSPORT

busworld org
urban-transport-magazine com
gondolaproject com (for unusual transport)

🧵⬆️⬇️
CARS
kbb com

AIRCRAFT
radarbox com
flightgear org

LITTLE KNOWN ACTORS

imdb com
(any local casting database)

COMPUTERS/SMARTPHONES

pcmag com
gsmarena com

🧵⬆️⬇️
Aug 26
WARC (Web ARChive) for #osint and more

Quick basic guide (1/11) 🧵

(WARC - the general format for web archiving, used by archive.org, the Australian Web Archive, and many other well-known Internet archives)
The WARC (Web ARChive) file format offers a convention for concatenating multiple resource records (data objects), each consisting of a set of simple text headers and an arbitrary data block, into one long file (c).

More details:

iipc.github.io/warc-specifica…

🧵 (2/11)
To find WARC files (for testing the tools in the tweets below), just type this into Google:

inurl:warc.gz site:archive.org

(archive.org can be replaced with the address of another archive or library. Once again, the WARC format is very widespread)

🧵(3/11)
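The specific tools are covered in the later tweets of this thread; as one hedged illustration (not from the thread), a WARC file found this way can be read in Python with the open-source warcio library (pip install warcio):

from warcio.archiveiterator import ArchiveIterator

# Iterate over the records of a (gzipped) WARC file and print
# the target URL of every archived HTTP response.
with open("example.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == "response":
            print(record.rec_headers.get_header("WARC-Target-URI"))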
Aug 22
BazzellPy

github.com/dmw94/bazzellpy

Unofficial(!) #Python library for automating work with the IntelTechniques Search Tools: inteltechniques.com/tools/

Quick review with code examples 🧵⬇️

#osint
First, BazzellPy generates quick links for gathering information about social network accounts:
Twitter (pic 1)
Instagram (2)
Facebook (3)

Bazzellpy 🧵⬇️⬆️
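I have not verified bazzellpy's exact function names, so rather than guess its API, here is a plain-Python illustration of the kind of per-account quick links such a tool generates (the URL patterns are common ones, not taken from the library):

# Illustration only: this does NOT use bazzellpy's actual API.
def account_quick_links(username: str) -> dict:
    return {
        "twitter_profile": f"https://twitter.com/{username}",
        "twitter_posts": f"https://twitter.com/search?q=from%3A{username}",
        "instagram_profile": f"https://www.instagram.com/{username}/",
        "facebook_search": f"https://www.facebook.com/search/top?q={username}",
        "google_mentions": f"https://www.google.com/search?q=%22{username}%22",
    }

for name, url in account_quick_links("example_user").items():
    print(name, url)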
There are functions for generating links to search for a target query in various communities (Discord, Reddit, Hacker News), as well as in search engines (including .onion search engines)

Bazzellpy 🧵⬆️⬇️
Aug 17
Yagooglesearch

(Yet Another Google Search #python library)

"Simulates real human Google search behavior to prevent rate limiting by Google and if HTTP 429 blocked by Google, logic to back off and continue trying" (c)

github.com/opsdisk/yagoog…

Quick manual 🧵⬇️
1. (check that you have Python and pip installed)

2. Install Yagooglesearch:

pip install yagooglesearch

🧵⬇️
3. Create a file main.py and copy the code from README.md into it (see pic)

4. Replace the query with the one you want (query='')

5. Run in command line:

python main.py > results.txt

⬇️🧵
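A hedged sketch of what that main.py can look like, adapted from the yagooglesearch README (the query and limits are placeholders; keyword arguments may differ between versions):

import yagooglesearch

client = yagooglesearch.SearchClient(
    "site:github.com osint",              # replace with your query
    max_search_result_urls_to_return=20,  # cap the number of URLs returned
    verbosity=4,                          # log progress while searching
)
client.assign_random_user_agent()

for url in client.search():
    print(url)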
Aug 15
Stweet

#opensource #python library for scraping tweets (by user, by hashtag, by keyword). NO LOGIN OR API KEY REQUIRED.

github.com/markowanga/stw…

Quick manual 🧵🧵🧵⬇️
1. (check that you have Python and pip installed)

2. Install Stweet:

pip install -U stweet

3. Create a file main.py and copy the code from README.md into it (see pic)

🧵
Let's try to scrape tweets containing a certain hashtag:

Put the target hashtag in "search_tweets_task = st.SearchTweetsTask(all_words='')" (if the tag is popular, scraping can take a long time)

Comment out (#) the calls to try_user_scrap and try_tweet_by_id_scrap

🧵
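A hedged sketch of that hashtag scrape, adapted from the stweet README (class names follow stweet 2.x and may differ in other versions; the hashtag is a placeholder):

import stweet as st

# Search tweets by hashtag and write the raw results to a
# JSON Lines file, also printing them to the console.
def try_search():
    search_tweets_task = st.SearchTweetsTask(all_words="#osint")
    output_jl_tweets = st.JsonLineFileRawOutput("output_raw_search_tweets.jl")
    output_print = st.PrintRawOutput()
    st.TweetSearchRunner(
        search_tweets_task=search_tweets_task,
        tweet_raw_data_outputs=[output_print, output_jl_tweets],
        user_raw_data_outputs=[output_print],
    ).run()

try_search()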
Aug 9
Government data in #OSINT

8 types of open government databases that many countries have and that are useful for investigations.

Each link is just an example. In the last tweet I will explain how to find government resources for a particular country.

(1/10) 🧵🧵🧵
Business Registries

On these sites, you can look up by a person's name the companies that belong to them, along with addresses, phone numbers, occupation, etc. (the data set varies by country).

Ex (Czech Republic): or.justice.cz/ias/ui/rejstri…

(2/10) 🧵🧵🧵
Cadastral maps

Such maps show the boundaries of a land plot, information about its owners (individual, legal entity, state), and the cadastral number (with which you can find additional information about the target in other sources).

Ex (Finland): asiointi.maanmittauslaitos.fi/karttapaikka/

(3/10)🧵🧵🧵
