In recent years, everyone around us has started talking about machine learning, its applications, its consequences, and so on. Large numbers of students and professionals are drawn to the field, and consequently many online learning platforms have come into the limelight. This raises the question:
When did Machine Learning start?
Ever since humans started thinking about computing, there have been many ups and downs, but the major developments in machine learning began around the middle of the 20th century. Below is a brief summary of the progress of machine learning to date.
In 1943, Warren S. McCulloch and Walter Pitts published the article "A Logical Calculus of the Ideas Immanent in Nervous Activity". In it, they described a simplified model of how neural networks function, often characterized as "mimicking the brain".
In 1950, a major milestone in AI arrived when Alan Turing proposed the "Turing Test" for machines. The test is still discussed today as a benchmark for machine intelligence: if a machine can fool a human into believing it is human, the machine is said to exhibit intelligence.
In 1951, Marvin Minsky and Dean Edmonds built the first artificial neural network machine, the Stochastic Neural Analog Reinforcement Calculator (SNARC). It used 3,000 vacuum tubes to simulate a network of 40 neurons, and the machine was able to learn.
In 1952, IBM's Poughkeepsie Laboratory was working on some of the very first machine learning programs. Arthur Samuel joined the laboratory and, later that year, developed a program that could play checkers and, by interacting with its environment, learn to improve on its own.
The first artificial intelligence program, the Logic Theorist, was developed in 1956 by Allen Newell and Herbert Simon. It successfully proved 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica.
In 1957, Frank Rosenblatt invented the perceptron, a supervised learning algorithm for binary classifiers.
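To make the idea concrete, here is a minimal sketch of a perceptron in modern Python (an illustrative toy, not Rosenblatt's original hardware): it learns the logical AND function, a simple linearly separable task.

```python
# Minimal perceptron sketch (illustrative example): a step-activation
# classifier trained with the classic perceptron update rule.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights and bias for a binary classifier with a step activation."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Step activation: predict 1 if the weighted sum is positive.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Update rule: nudge weights in the direction that reduces the error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]  # logical AND, which is linearly separable
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the update rule finds a separating line in a finite number of steps; on a non-separable task like XOR it would never converge, a limitation famously pointed out by Minsky and Papert.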
In 1959, Arthur Samuel coined the term "machine learning", noting that a machine could learn to play checkers better than the person who wrote the program.
In 1961, Donald Michie built a machine that used reinforcement learning to play tic-tac-toe and improve through repeated games. This result encouraged other researchers to explore machine learning algorithms for similar problems.
In 1967, the nearest neighbor algorithm was developed; it was initially applied to route-mapping problems and later became widely used for basic pattern recognition. Variants of the nearest neighbor algorithm are still used to solve many problems today.
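As a rough illustration of the idea (the data and function names here are made up for the example), a k-nearest-neighbor classifier can be sketched in a few lines: a query point is labeled by majority vote among its k closest labeled neighbors.

```python
# Minimal k-nearest-neighbor sketch (illustrative example):
# classify a point by majority vote among its k closest labeled neighbors.
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of ((features...), label) pairs; returns the predicted label."""
    # Sort training points by Euclidean distance to the query point.
    by_dist = sorted(train, key=lambda p: math.dist(p[0], query))
    # Majority vote among the k nearest labels.
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

points = [((1, 1), "a"), ((1, 2), "a"), ((2, 1), "a"),
          ((8, 8), "b"), ((8, 9), "b"), ((9, 8), "b")]
print(knn_classify(points, (2, 2)))  # → a (all 3 nearest neighbors are "a")
```

Note that the algorithm does no training at all: it simply stores the labeled data and defers all work to query time, which is why it is often called a "lazy" learner.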
In 1970, Seppo Linnainmaa published a general method for automatic differentiation. This method corresponds to the modern form of backpropagation, which is now widely used to train neural networks.
In 1979, the Stanford Cart was developed; it was able to navigate a room on its own and avoid obstacles in its path.
In 1980, Kunihiko Fukushima published a paper on the Neocognitron, a multilayered artificial neural network. The Neocognitron has been widely used for pattern recognition and handwritten character recognition.
In 1981, Gerald Dejong introduced a new concept known as explanation-based learning. In this approach, the algorithm first analyzes the training data, then derives a general rule it can follow, allowing it to discard unimportant data.
In 1982, John Hopfield invented Hopfield networks, a kind of recurrent neural network that serves as a content-addressable memory system.
In 1985, Terry Sejnowski developed NetTalk, a program that learned to pronounce written words in much the same way a baby learns to speak.
In 1989, Christopher Watkins developed Q-learning, a model-free reinforcement learning algorithm. "Model-free" means the agent does not need a model of how the environment works; instead, it learns purely from the rewards it receives after taking actions.
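The reward-driven learning loop can be sketched as follows. This is an illustrative toy, not Watkins's original experiment: the environment (a five-cell corridor with a goal at the right end) and all hyperparameter values are assumptions made for the example.

```python
# Minimal tabular Q-learning sketch (toy example): an agent on a 1-D
# corridor of 5 cells learns to walk right to a goal purely from rewards,
# without any model of the environment's dynamics.
import random

N_STATES = 5            # cells 0..4; reaching cell 4 yields reward 1
ACTIONS = (-1, +1)      # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.3  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy choice: mostly exploit the best-known action,
        # sometimes explore a random one.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s,a) toward the reward plus the
        # discounted value of the best action in the next state.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy in every non-goal cell is "step right".
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent never sees the corridor's transition rules; it only observes which state it lands in and what reward it gets, which is exactly what "model-free" means.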
In 1995, Tin Kam Ho published a paper describing random decision forests, an ensemble algorithm that can be used for classification, regression, and other tasks.
In the same year, the support vector machine algorithm was developed by Corinna Cortes and Vladimir Vapnik.
In 1997, World Chess Champion Garry Kasparov was beaten by IBM's Deep Blue.
Also in 1997, Sepp Hochreiter and Jürgen Schmidhuber developed the LSTM (long short-term memory) architecture, which greatly increased the efficiency and practical usefulness of recurrent neural networks.
In 2006, Netflix launched a competition in which participants had to develop an algorithm that could beat its own recommendation system. Team "The Ensemble" achieved an improvement of 10.09%, but Netflix never put the algorithm into production.
Also in 2006, Geoffrey Hinton popularized the term "deep learning" to describe the use of many-layered artificial neural networks to identify text and objects in images and videos.
In 2009, ImageNet, a large database of labeled images, was created.
In 2010, Kaggle, a website that hosts machine learning competitions, was launched; it is now widely used by data scientists all over the world.
In 2011, IBM's Watson, using natural language processing, various machine learning algorithms, and a vast store of information, beat two human champions at Jeopardy!
In 2012, the Google Brain team, led by Andrew Ng and Jeff Dean, created a neural network that learned to detect cats in YouTube videos. Notably, the network learned to do this from unlabeled data.
In 2014, Facebook published DeepFace, a system able to identify faces with an accuracy of 97.35%.
In 2015, Microsoft and Amazon launched their own machine learning platforms.
In 2016, Google's AlphaGo became the first program to beat a professional human player at the Chinese board game Go.
In 2017, OpenAI's Dota 2 bot won 7,215 games against human players in three days, an overall win rate of 99.4%. Results like this suggest that, in the near future, machines may even be able to pass the Turing Test.
We hope you now have an idea of when machine learning started. But is this the end of the story? No, it is just the beginning of an era in which we may be able to fully mimic the functionality of the brain. Far more data is generated today than was generated in all of prior human history, which makes it ever easier for computer programs to learn. Let's see what exciting developments come next in the field of machine learning.