

Thursday, October 29, 2020

Artificial Intelligence (AI)




One of the most common terms we hear these days is AI, along with talk of its involvement in various sectors. AI stands for artificial intelligence.
AI is the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving.
In other words, artificial intelligence is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence.
 
History-
 
In the early 20th century, science fiction films gave us the idea of AI robots, often portrayed as humanoid robots.
 
By the second half of the 20th century, scientists, mathematicians, and philosophers familiar with the concept of AI (artificial intelligence) had begun to form a community that shared the same belief.
 
One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so why can’t machines do the same thing? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.
 
At the time of his research, however, leasing a computer was extremely expensive; only prestigious institutions and large companies could afford such expenses. A proof of concept, as well as advocacy from high-profile people, was needed to persuade funding sources that machine intelligence was worth pursuing.
 
Five years later, the proof of concept arrived in the form of Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist. The Logic Theorist was a program designed to mimic the problem-solving skills of a human and was funded by the Research and Development (RAND) Corporation.
 
From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem.
 
In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost in funds. John Hopfield and David Rumelhart popularized “deep learning” techniques, which allowed computers to learn from experience. On the other hand, Edward Feigenbaum introduced expert systems that mimicked the decision-making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was captured for virtually every situation, non-experts could receive advice from that program.
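To get a feel for the idea, here is a minimal, purely illustrative sketch of how such a system could be organised: a knowledge base of situation-to-recommendation rules captured from a human expert, which a non-expert can then query. The domain, rule names, and function are hypothetical and not taken from Feigenbaum's actual systems.

```python
# Toy "expert system": a lookup table of rules an expert has provided.
# Hypothetical plant-care domain, for illustration only.
KNOWLEDGE_BASE = {
    ("leaves_yellow", "soil_wet"): "Reduce watering; the roots may be waterlogged.",
    ("leaves_yellow", "soil_dry"): "Water the plant and check for nutrient deficiency.",
    ("leaves_spotted", "soil_wet"): "Possible fungal infection; improve drainage.",
}

def advise(symptom: str, soil: str) -> str:
    """Return the expert's stored recommendation for the observed situation."""
    return KNOWLEDGE_BASE.get((symptom, soil), "No rule found; consult a human expert.")

if __name__ == "__main__":
    print(advise("leaves_yellow", "soil_dry"))
```

Real expert systems were far larger and used inference engines over many chained rules, but the core idea is the same: the knowledge lives in the rules, not in the program logic.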
 
During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, the reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards artificially intelligent decision-making programs. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward, this time in the direction of spoken language interpretation.
 
What more can we expect?
 
As we all know, AI is already being developed and improved around us. When we call a call centre, the first interaction is often handled by a machine, which at our request switches languages immediately and speaks several languages fluently. We have also heard many times that companies are testing driverless cars.

 

 

Types of AI-
 
Artificial intelligence is commonly categorized into four types.
Reactive Machines- The first category of AI systems is purely reactive: these machines can neither form memories nor use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.
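A minimal sketch can make the "no memory" idea concrete. This is purely illustrative and not how Deep Blue worked: the agent maps the current observation directly to an action, keeps no state, and so cannot adapt based on what happened before. The observation and action names are hypothetical.

```python
# A purely reactive agent: the action depends only on the current observation.
def reactive_policy(observation: str) -> str:
    """Map the current observation directly to an action, with no memory."""
    rules = {
        "obstacle_ahead": "turn_left",
        "clear_path": "move_forward",
        "goal_visible": "move_toward_goal",
    }
    return rules.get(observation, "wait")

# The same observation always produces the same action, regardless of history.
print(reactive_policy("obstacle_ahead"))  # -> turn_left
print(reactive_policy("obstacle_ahead"))  # -> turn_left (no adaptation)
```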

Limited Memory- The second category is defined as limited memory: as the name suggests, these machines can look a short way into the past, using recent observations rather than a lasting memory of past experience to inform their decisions. The same idea is being used and tested in self-driving cars. For example, they observe other cars’ speed and direction. That can’t be done in just one moment; it requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving car’s preprogrammed representations of the world, which also include lane markings, traffic lights, and other important elements, like curves in the road. They are taken into account when the car decides whether to change lanes, so it avoids cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
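As a rough illustration of this transient, limited memory, the sketch below keeps only a short rolling buffer of a nearby car's recent positions, just enough to estimate its speed, and silently discards anything older. The class, window size, and units are hypothetical simplifications, not a real perception stack.

```python
from collections import deque

class LimitedMemoryTracker:
    """Keeps only the last few observations of another car's position."""

    def __init__(self, window: int = 5):
        # Only the most recent `window` positions are retained; older ones are dropped.
        self.recent_positions = deque(maxlen=window)

    def observe(self, position_m: float) -> None:
        self.recent_positions.append(position_m)

    def estimated_speed(self, dt_s: float = 1.0) -> float:
        """Average speed (m/s) over the short observation window."""
        if len(self.recent_positions) < 2:
            return 0.0
        distance = self.recent_positions[-1] - self.recent_positions[0]
        return distance / (dt_s * (len(self.recent_positions) - 1))

tracker = LimitedMemoryTracker(window=3)
for pos in [0.0, 2.0, 4.0, 6.0]:      # positions of a nearby car, one second apart
    tracker.observe(pos)
print(tracker.estimated_speed())       # -> 2.0 m/s, computed from the last 3 readings only
```

Once the buffer rolls over, the older readings are gone for good; nothing is written into a long-term store the system could learn from later, which is exactly the limitation described above.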
 
Theory of Mind- The third category is called theory of mind, and it marks the dividing line between the machines we have today and the machines we will build in the future. To be more precise, it concerns the kinds of representations machines need to form, and what those representations need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind”– the understanding that people, creatures, and objects in the world can have thoughts and emotions that affect their own behavior.

Self-Awareness- The final step of AI development is to build systems that can form representations about themselves. Ultimately, AI researchers will have to not only understand consciousness but build machines that have it.

This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict the feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

For more topics visit www.skillbakery.com 
