Getting started on the Open #Bioinformatics Research Project initiative
See thread below
1. Watch the introductory video on the Open Bioinformatics Research Project initiative for:
- Intro to the initiative
- High-level overview of the dataset
- Ideas for which types of analysis to perform
4. Complementary tools
To perform EDA and ML model building, it may be helpful to use @RDKit_org and PaDEL (as well as PaDELPy).
Install them via:
pip install rdkit-pypi
pip install padelpy
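For context, here's a minimal sketch of how these two libraries are typically used together; the SMILES string and the specific descriptors are illustrative assumptions on my part, not part of the thread (note that PaDEL also requires Java to be installed):

```python
# A minimal sketch: compute a few RDKit descriptors plus PaDEL
# descriptors/fingerprints for a single molecule.
from rdkit import Chem
from rdkit.Chem import Descriptors
from padelpy import from_smiles  # PaDELPy wrapper around PaDEL-Descriptor

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"  # aspirin, used here only as an example input

# RDKit: parse the molecule and compute simple descriptors
mol = Chem.MolFromSmiles(smiles)
print("MolWt:", Descriptors.MolWt(mol))
print("LogP:", Descriptors.MolLogP(mol))

# PaDELPy: returns a dict mapping descriptor names to values;
# fingerprints=True also computes fingerprint bits
padel_descriptors = from_smiles(smiles, fingerprints=True)
print(len(padel_descriptors), "PaDEL descriptors/fingerprints computed")
```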
5. Watch related project tutorial videos
- 6 Part #Bioinformatics from Scratch
6. Additional supplemental tutorial videos
- How to use PaDELPy to calculate molecular descriptors and fingerprints
- 2 minute overview of using #machinelearning for #drugdiscovery
7. Need some background knowledge? Here are hour-long lecture and podcast videos
- Computational #DrugDiscovery 101
- How to Build #Bioinformatics Tools
2/ 1. Craft your own personal learning plan
Earlier this year I made a video that details the steps you can take to craft your own personal learning plan for your data journey. Everyone's plan is different, so make your own! Here's how...
3/ 2. Work on data projects using datasets that are interesting to you
When starting out, I found that working on datasets that are interesting to you helps you engage in the process. Be persistent and see the project through to completion (end-to-end).
How? Data → Model → Deployment
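To make that Data → Model → Deployment flow concrete, here's a minimal sketch of one possible end-to-end pass; the dataset, model, and joblib artifact are my own illustrative choices, not something prescribed in the thread:

```python
# A minimal end-to-end sketch: Data -> Model -> deployment-ready artifact.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Data: load a toy dataset and split it
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Model: train and evaluate
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))

# Deployment: persist the trained model so an app or API can serve it later
joblib.dump(model, "model.pkl")
```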
Here's a cartoon illustration I drew a while back:
The #machinelearning learning curve
See thread below
2/ Starting the learning journey
The hardest part of learning data science is taking that first step to actually start the journey.
3/ Consistency and Accountability
After taking that first step, it may be challenging to maintain the consistency needed to push through with the learning process. And that's where accountability steps in.
Hi friends, here's my new hand-drawn cartoon illustration
Quickly deploy #machinelearning models
See thread below
2/ Deployment of machine learning models is often overlooked, especially in academia
- We spend countless hours compiling the dataset, processing the data, fine-tuning the model, and perhaps interpreting and making sense of the model
- Many times we stop at that
- Why not deploy?
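As one possible way to take that last step, here's a minimal sketch of serving a model as a small web app; Streamlit is an assumption on my part (the thread doesn't name a specific tool), and the iris model is purely illustrative:

```python
# app.py - a minimal sketch of serving a model with Streamlit.
import streamlit as st
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

st.title("Iris species predictor")

# Train a small model at startup so the example stays self-contained;
# in practice you would load a pre-trained model (e.g. with joblib.load).
data = load_iris()
model = RandomForestClassifier(random_state=42).fit(data.data, data.target)

# Collect the four input features with sliders
inputs = [
    st.slider(name, float(col.min()), float(col.max()), float(col.mean()))
    for name, col in zip(data.feature_names, data.data.T)
]

# Predict and display the species name
prediction = model.predict([inputs])[0]
st.write("Predicted species:", data.target_names[prediction])
```

Running `streamlit run app.py` starts the app locally in the browser.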
3/ Topics include:
- Overview of Data science
- Probability and Statistics
- Data cleaning
- Feature engineering
- Modeling
- Classical Machine learning
- Deep learning
- SQL
- Python data structures
2/ Why Do We Need Pandas?
The Pandas library provides a large set of features that let you take data from the first intake of its raw form, through cleaning and transformation, to a final curated form ready for hypothesis testing and machine learning model building.
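Here's a minimal sketch of that raw → clean → curated flow; the toy table and the pIC50 transformation are illustrative assumptions, not from the thread:

```python
# A minimal raw -> clean -> curated sketch with Pandas.
import numpy as np
import pandas as pd

# Raw intake: a small table with a missing name and a messy numeric column
raw = pd.DataFrame({
    "compound": ["aspirin", "caffeine", "ibuprofen", None],
    "ic50_nM": ["120", "85", "not measured", "230"],
})

# Cleaning: drop rows without a compound name, coerce IC50 to numeric
clean = raw.dropna(subset=["compound"]).copy()
clean["ic50_nM"] = pd.to_numeric(clean["ic50_nM"], errors="coerce")
clean = clean.dropna(subset=["ic50_nM"])

# Transformation: derive a pIC50 column, the curated form used for modeling
clean["pIC50"] = -np.log10(clean["ic50_nM"] * 1e-9)
print(clean)
```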
3/ Basics of Pandas - 1. Pandas Objects
Pandas allows us to work with tabular datasets. Its basic data structures come in 3 types: Series, DataFrame and Index. The first 2 hold the data, while the latter serves as a point of reference (the labels).
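A minimal sketch of those three object types (the example values are my own, chosen only for illustration):

```python
# Series, DataFrame, and Index in a few lines.
import pandas as pd

# Series: a one-dimensional labeled array
s = pd.Series([0.5, 1.2, 3.4], name="logP")

# DataFrame: a two-dimensional table of labeled columns
df = pd.DataFrame({"compound": ["aspirin", "caffeine"], "logP": [1.2, -0.07]})

# Index: the labels that Series and DataFrame use as a point of reference
df_indexed = df.set_index("compound")

print(s.index)            # default RangeIndex of the Series
print(df_indexed.index)   # compound names now act as row labels
```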
1/ #MachineLearning Crash Course by Google
- Free course
- Learn and apply fundamental machine learning concepts
- 30+ exercises
- 25 lessons
- 15 hours to complete
- Real-world case studies
- Explainers of ML algorithms