This page contains information about all machine learning articles on theJavaGeek. Although this blog has mainly focused on Java-related technologies, we've decided to write about machine learning and artificial intelligence too.
Machine learning and artificial intelligence are hot topics these days, and they are changing the way we program. In the traditional programming approach, we write programs that accept certain inputs and produce outputs based on explicitly coded rules. Machine learning applications, in contrast, are able to gather knowledge from the data given to them. They use complicated mathematics and statistical analysis under the hood.
In this tutorial series, we will focus on applications of machine learning rather than its theoretical aspects. We will use various Python libraries that implement the complicated mathematics and statistics for us.
Setup and Introduction:
In this section we will learn how to download, install, and configure the tools necessary for machine learning with Python. These articles will help you set things up for the awesome learning we are going to have.
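As a rough sketch of what such a setup usually looks like, here is a typical pip-based installation. The package names are the usual scientific-Python ones and are assumptions on our part, not a list taken from the articles themselves:

```shell
# Create an isolated environment for the tutorials (assumes Python 3 is installed)
python3 -m venv ml-env
source ml-env/bin/activate

# Commonly used libraries for this kind of tutorial series (assumed, see lead-in)
pip install numpy pandas matplotlib scikit-learn
```

The articles in this section cover the exact tools and configuration in detail.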
Regression:
Regression is a statistical technique for estimating relationships between variables. In this section we will learn about different types of regression and the Python libraries that implement them. We can use them to predict results from input data.
Regression comes under supervised learning: the machine is trained with both input data and output data. The machine learning algorithm then tries to create a mapping function based on the inputs and outputs given in the training data. This mapping function is then applied to new input data to predict the output.
Regression algorithms are used to predict a numeric value, for example salary, age, or temperature. We will see such examples in the articles listed below.
- Simple Linear Regression
- Multiple Linear Regression
- Backward elimination for multiple linear regression
- Polynomial Regression
- Support Vector Regression
- Decision Tree Regression
- Random Forest Regression
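As a small taste of the first article above, here is a minimal simple linear regression sketch using scikit-learn. The experience and salary figures are made-up illustrative values, not data from the articles:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: years of experience vs. salary
X = np.array([[1], [2], [3], [4], [5]])            # input: years of experience
y = np.array([30000, 35000, 40000, 45000, 50000])  # output: salary

# Fit the mapping function from inputs to outputs
model = LinearRegression()
model.fit(X, y)

# Predict the salary for a new, unseen input (6 years of experience)
pred = model.predict([[6]])
print(pred[0])  # the data is perfectly linear, so this is 55000
```

The articles walk through each regression variant in much more detail.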
Classification:
Classification also comes under supervised learning. The output variable of a classification problem is a label or a category. For example, an email can be spam or not spam. An animal can be a dog or a cat. Classification can be done over multiple categories too: a vehicle can be a train, a bus, a car, or a bike. We will train certain machine learning algorithms with inputs and output labels and predict the class for newly provided input. Let us go through these algorithms one by one.
- Logistic Regression Classification
- K-Nearest Neighbors Classification
- Support Vector Machines
- Naive Bayes Classification
- Decision Tree Classification
- Random Forest Classification
- Confusion Matrix
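To make the idea concrete, here is a minimal sketch of logistic regression classification with a confusion matrix, again using scikit-learn. The single feature and the labels are invented for illustration (think of the feature as, say, a crude spam score):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Invented training data: one numeric feature, labels 0 = not spam, 1 = spam
X = np.array([[1], [2], [3], [10], [11], [12]])
y = np.array([0, 0, 0, 1, 1, 1])

# Train the classifier on inputs and output labels
clf = LogisticRegression()
clf.fit(X, y)

# Predict classes and compare predictions to true labels
preds = clf.predict(X)
cm = confusion_matrix(y, preds)  # rows: true class, columns: predicted class
print(cm)
```

Because the two groups are clearly separated, the confusion matrix here has all counts on the diagonal; the Confusion Matrix article explains how to read it when predictions are imperfect.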
Clustering:
Clustering is a form of unsupervised learning. We don't provide paired inputs and outputs to the machine learning algorithm; we only provide input data, and the algorithm figures out patterns and structures in that data. We will focus only on clustering problems in this section.
Clustering identifies groups, or clusters, in given data. For example, suppose we have data for customers of a shopping mall and we want to group them according to the money they have spent. This is a classic example of a clustering problem because we are not teaching the algorithm which customer belongs to which group; we only provide the customer data, and the algorithm identifies the groups on its own.
Let us learn some clustering algorithms and identify groups in our data.
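The shopping-mall example above can be sketched with k-means clustering from scikit-learn. The spending figures are made up, and note that we pass no group labels at all, only the input data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented annual spend (in some currency) for six mall customers; no labels
spend = np.array([[200], [220], [210], [5000], [5200], [5100]])

# Ask the algorithm to find 2 groups on its own
km = KMeans(n_clusters=2, n_init=10, random_state=0)
km.fit(spend)

# Each customer is assigned a cluster label the algorithm invented itself
print(km.labels_)  # low spenders share one label, high spenders the other
```

Choosing the number of clusters is itself a problem, which the clustering articles discuss.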
Natural Language Processing:
We humans communicate via text and speech, and the way we communicate doesn't follow many strict rules. It takes many years for a human to learn a language well and communicate effectively with other humans.
We are trying to make computers understand this form of communication; this is termed natural language processing. We will try to teach machine learning algorithms about text so that they can find meaning in it.
We will learn a few concepts in natural language processing and try to implement them in this section.
Deep Learning:
The human brain consists of individual neurons connected in huge networks, and it is able to process information accurately and quickly using these neural networks.
Deep learning is arguably the most exciting branch of machine learning. It is inspired by the working of the human brain: the field tries to mimic how our brains work and implements algorithms that behave similarly. We will study what a neuron is in machine learning and how neurons can be combined into huge networks. There is an input layer and an output layer, and all intermediate layers are hidden layers. These layers can stack into a deep network, hence the term deep learning.
We can solve the same kinds of problems, such as regression and classification, using deep learning, often with improved prediction accuracy. Buckle up and get ready to learn the most exciting concepts in deep learning.
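To illustrate the input/hidden/output structure described above, here is a tiny forward pass through a network with one hidden layer, written in plain NumPy. The weights are arbitrary illustrative values, not trained parameters:

```python
import numpy as np

def sigmoid(z):
    """Squash a value into (0, 1); a common neuron activation function."""
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0])                  # input layer: 2 features

W1 = np.array([[0.1, 0.4],                 # weights from input to the
               [-0.2, 0.3]])               # 2-neuron hidden layer
b1 = np.array([0.0, 0.1])                  # hidden-layer biases

W2 = np.array([0.7, -0.5])                 # weights from hidden layer
b2 = 0.2                                   # to the single output neuron

hidden = sigmoid(W1 @ x + b1)              # hidden-layer activations
output = sigmoid(W2 @ hidden + b2)         # output neuron's activation
print(output)                              # a value between 0 and 1
```

Training is the process of adjusting those weights from data; the deep learning articles cover how that works.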
Feature Extraction:
Feature extraction techniques do not remove existing independent variables from the dataset. On the contrary, they create new variables from the existing ones. We will learn about the PCA and LDA feature extraction techniques in this section.
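As a minimal sketch of the idea, here is PCA with scikit-learn reducing two invented, correlated features to a single new variable. Note that the new variable is a combination of both originals rather than one of them dropped:

```python
import numpy as np
from sklearn.decomposition import PCA

# Invented dataset: two strongly correlated features
X = np.array([[2.0, 2.1],
              [3.0, 2.9],
              [4.0, 4.2],
              [5.0, 4.8]])

# Build 1 new variable (principal component) from the 2 existing ones
pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                    # 4 samples, 1 new feature
print(pca.explained_variance_ratio_)      # how much variance is kept
```

Because the two features are so correlated, almost all of the variance survives in the single new variable; the PCA and LDA articles explain when and why this works.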
We would not have been able to learn so much and write all these articles without the machine learning course from Udemy. It is definitely worth the time and money. The course instructors are awesome: they explain every concept with passion, and it shows. We learned a lot from it. Some of the above articles are inspired by their course material, and we would definitely recommend buying it.