AI Explained

What is AI?

Artificial Intelligence (AI) is the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition. In the most general terms, AI refers to the area of computer science that makes machines do things that would require intelligence if done by a human: tasks like learning, seeing, talking, socializing, reasoning, problem solving, and even physically moving. This definition is admittedly vague, and for a long time experts have argued about what is and is not AI. We also have to be careful not to confuse AI with automation. What is the difference between the two? Automation is what we can already do with computers, and AI is what we wish we could do. As soon as we figure out how to do something, it stops being AI and starts being automation.

Early Days

During the Second World War, the British computer scientist Alan Turing worked to crack the ‘Enigma’ machine, which German forces used to send messages securely. Turing and his team created the Bombe machine to decipher Enigma’s messages. The Enigma and Bombe machines laid the foundations for machine learning. According to Turing, a machine that could converse with humans without the humans knowing it is a machine would win the “imitation game” and could be said to be “intelligent”.

What most people think of as ‘true AI’ hasn’t experienced rapid progress over the decades. A common theme in the field has been to underestimate the difficulty of foundational problems: significant AI breakthroughs have been promised ‘in 10 years’ for the past 60 years. In addition, there is a tendency to redefine what ‘intelligent’ means after machines have mastered an area or problem.

In 1951, a machine known as the Ferranti Mark 1 successfully used an algorithm to play checkers; programs written for it could play both checkers and chess well enough to beat an amateur. The Manchester Mark 1 served as the prototype for the Ferranti Mark 1, and unlike modern computers it had to be programmed by entering digits directly, with each instruction represented by a single character. Later, students at Manchester University reprogrammed the Ferranti Mark 1 so it could beat the (human) Kentucky state checkers champion. This is still seen as a starting point in the story of AI competing against humans.

These Days

AI, or artificial intelligence, is huge right now. ‘Unsolvable’ problems are being solved, and billions of dollars are being invested. Every aspect of life uses AI more than ever before, and it doesn’t look like that is going to stop any time soon.

AI becomes “smarter” and learns faster with more data, and every day, businesses are generating this fuel for running machine learning and deep learning solutions, whether collected and extracted from a data warehouse like Amazon Redshift, ground-truthed through the power of “the crowd” with Mechanical Turk, or dynamically mined through Kinesis Streams. Further, with the advent of IoT, sensor technology exponentially adds to the amount of data to be analyzed — data from sources and places and objects and events that have previously been nearly untouched.

AI Trends

1. Automated Machine Learning

Machine learning is a collection of algorithms that can learn from and make predictions on recorded data, optimize a given utility function under uncertainty, extract hidden structures from data, and classify data into concise descriptions. Machine learning is often deployed where explicit programming is too rigid or impractical. Unlike regular computer code, which is written by software developers to produce a specific output from a given input, machine learning uses data to generate statistical code (an ML model) that will output the “right result” based on patterns recognized in previous examples of input (and output, in the case of supervised techniques). The accuracy of an ML model depends mainly on the quality and quantity of the historical data.
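
As a concrete illustration, here is a minimal sketch of the workflow described above, using scikit-learn. The toy data and feature names are invented for the example; the point is that the model is not programmed with explicit rules but infers them from labeled examples.

    # Minimal supervised-learning sketch: learn a rule from examples
    # instead of hand-coding it. The data is invented for illustration.
    from sklearn.tree import DecisionTreeClassifier

    # Historical examples: [hours_of_use, error_count] -> needs_service (1 or 0)
    X = [[100, 0], [1500, 4], [200, 1], [3000, 9], [50, 0], [2500, 7]]
    y = [0, 1, 0, 1, 0, 1]

    model = DecisionTreeClassifier()  # the "statistical code" (an ML model)
    model.fit(X, y)                   # learn the pattern from input/output pairs

    # Predict for an input the model has never seen
    print(model.predict([[1800, 5]]))  # -> [1], i.e. likely needs service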

2. Deep Learning

Deep learning is a newer area of machine learning research, introduced with the objective of moving machine learning closer to one of its original goals: artificial intelligence. It develops sets of algorithms called neural networks, which loosely mimic the structure and function of the human brain. With deep learning, algorithms are no longer limited to creating an explainable set of relationships, as a more basic regression would. Instead, deep learning relies on layers of non-linear algorithms to create distributed representations that interact based on a series of factors.
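
To make those stacked non-linear layers concrete, here is a minimal sketch of a small neural network in Keras, trained on the XOR problem (a toy task chosen only because no single linear model can solve it). The layer sizes and epoch count are arbitrary choices for the example.

    # Minimal neural-network sketch: stacked non-linear layers learn XOR,
    # a relationship a single linear model cannot represent.
    import numpy as np
    import tensorflow as tf

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
    y = np.array([[0], [1], [1], [0]], dtype=np.float32)  # XOR labels

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),  # hidden non-linear layer
        tf.keras.layers.Dense(1, activation="sigmoid"),                 # output probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(X, y, epochs=500, verbose=0)

    print(model.predict(X).round())  # -> approximately [[0], [1], [1], [0]]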

3. Facial Recognition

A facial recognition system is a technology capable of identifying or verifying a person from a digital image or a video frame. There are multiple methods by which facial recognition systems work, but in general they compare selected facial features from a given image with the faces in a database. Such a system can also be described as a biometric, AI-based application that uniquely identifies a person by analyzing patterns in the person’s facial texture and shape.
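
As a rough sketch of the compare-against-a-database idea, here is how face verification might look with the open-source face_recognition library. The image file names are placeholders, and each image is assumed to contain exactly one detectable face; the library reduces each face to an embedding vector and compares distances between vectors.

    # Sketch: check whether a probe photo matches a known person by
    # comparing face embeddings. File names are placeholders.
    import face_recognition

    known_image = face_recognition.load_image_file("known_person.jpg")
    probe_image = face_recognition.load_image_file("probe.jpg")

    # Encode each face as a 128-dimensional embedding vector
    # (assumes one detectable face per image)
    known_encoding = face_recognition.face_encodings(known_image)[0]
    probe_encoding = face_recognition.face_encodings(probe_image)[0]

    # True means the two faces likely belong to the same person
    print(face_recognition.compare_faces([known_encoding], probe_encoding))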

Use Cases

Image and Video Classification, Segmentation

Convolutional neural networks outperform humans on many vision tasks, including object classification. Given millions of labeled pictures, the system can learn to identify the subject of an image. Many photo-storage services include facial recognition driven by deep learning.
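
Here is a minimal sketch of that kind of classification, using the ResNet50 network that ships pretrained with Keras ("photo.jpg" is a placeholder for any local image). The network was trained on millions of labeled ImageNet pictures, exactly the setting the paragraph describes.

    # Sketch: classify an image with a CNN pretrained on ImageNet.
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")  # downloads weights on first use

    img = image.load_img("photo.jpg", target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    preds = model.predict(x)
    print(decode_predictions(preds, top=3)[0])  # top-3 (class, label, probability)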

Content Personalization

Content personalization is the act of tailoring different types of content to each individual consumer, based on the personal data collected about them beforehand from the web, phone, and other channels. The most commonly used information, such as location, search queries, ads clicked, website visits, and purchase history, is compared against a set of variables you’ve put into place, including:

  • Sex
  • Age
  • Location (city, country, region)
  • Device (smartphone, tablet, iOS, Android, Windows, Mac, Linux, etc.)
  • Visitor frequency
  • Date and time of day, proximity to payday
  • Referring URL
  • Purchase history (whether they’ve purchased before, what it was, how much it cost)
  • Session behavior (navigation clicks, page views, etc.)

These variables feed predictive analytics models that recommend items or optimize website flow based on prior customer actions, providing a more personalized customer experience; a minimal sketch follows below.

Content personalization is heavily used by big tech companies like Google, Amazon, Facebook, Twitter, and Netflix.
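
As a minimal sketch of the predictive side, here is one common approach: recommend items that a customer’s nearest “look-alike” users have purchased. The purchase matrix is invented for the example, and production systems are far more elaborate.

    # Sketch: recommend items from the purchase history of similar users.
    # Rows are users, columns are items; 1 means "purchased".
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    purchases = np.array([
        [1, 1, 0, 0, 1],   # user 0
        [0, 0, 1, 1, 0],   # user 1
        [1, 1, 1, 0, 1],   # user 2 (similar to user 0)
    ])

    knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(purchases)
    _, neighbors = knn.kneighbors(purchases[0:1])  # nearest users to user 0

    # Recommend items the closest other user bought that user 0 has not
    peer = [u for u in neighbors[0] if u != 0][0]
    print(np.where((purchases[peer] == 1) & (purchases[0] == 0))[0])  # -> [2]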

Anomaly Detection

Identify items, events, or observations that do not conform to an expected pattern or to the other items in a dataset.

Anomaly detection (or outlier detection) is the identification of rare items, events, or observations that raise suspicion by differing significantly from the majority of the data. Typically, anomalous data points are connected to some kind of problem or rare event, such as bank fraud, a medical problem, a structural defect, or malfunctioning equipment. This connection makes it valuable to pick out which data points can be considered anomalies, since identifying these events is typically very interesting from a business perspective.

Anomaly detection is used in many areas:

In banking, for example, a bank might ask a user to authenticate themselves if their card is used to draw cash from a previously unused ATM, far from the places they usually withdraw from, or if their credit card is used to draw money from multiple ATMs within a short amount of time.
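
A minimal sketch of this idea with scikit-learn’s IsolationForest (the withdrawal amounts are invented): the model learns what “normal” looks like and flags the points that differ significantly.

    # Sketch: flag unusual withdrawals with an Isolation Forest.
    # Amounts are invented for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    withdrawals = np.array(
        [[60], [80], [50], [70], [65], [75], [55], [2000]])  # 2000 stands out

    detector = IsolationForest(contamination=0.1, random_state=0)
    labels = detector.fit_predict(withdrawals)  # 1 = normal, -1 = anomaly

    print(withdrawals[labels == -1])  # -> [[2000]]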

Natural Language Processing and Understanding

Natural Language Processing (NLP) is “the ability of machines to understand and interpret human language the way it is written or spoken”. The objective of NLP is to make computers and machines as intelligent as human beings in understanding language.

Natural language processing seeks to teach the system to understand human language, tone, and context, and to make the computer speak whatever we want with dynamic control and a natural, real-person sound.

With NLP, it is possible to perform tasks like automated speech recognition and automated text writing in far less time. With so much text data around, why not use the computer’s untiring willingness and ability to run algorithms and perform these tasks in almost no time?

Examples are Siri by Apple and Alexa by Amazon.
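
As a small sketch of NLP in practice, here is sentiment analysis with the Hugging Face transformers library; the pipeline call downloads a default pretrained model on first use, and the example sentences are invented.

    # Sketch: interpret the tone of raw text with a pretrained NLP model.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # default pretrained model

    results = classifier([
        "I love how easy this was to set up!",
        "The device stopped working after two days.",
    ])
    for r in results:
        print(r)  # e.g. {'label': 'POSITIVE', 'score': 0.99...}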

As we saw above, AI can touch every aspect of our lives, and it can be a tool we use to make life easier.
