Neural Networks and Their Failures and Successes


It's no secret at this point that there are some remarkably capable AIs in today's world, from self-driving cars to a network so simple it takes only nine lines of code. Many AI systems today use something called a Neural Network, which tries to mimic the human brain's cognitive abilities. A human brain consists of roughly 100 billion cells called neurons, connected to one another by synapses. When sufficient synaptic input reaches a neuron, that neuron fires in turn, and cascades of such firing are the physical basis of what we call thinking. This is what Neural Networks aim to imitate, though nine lines of code amounts to only about one neuron. The main goal of Neural Networks and AI is to reach the same level of cognition and learning as a human, to the point where it becomes difficult to distinguish one from the other. Yet for every success in one area, many failures arise, and those failures offer instructive examples of how hard it is to teach a Neural Network to solve problems the way we actually intended.
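
The nine-line network alluded to above is, at its core, a single artificial neuron. As a rough sketch in Python (the weights and inputs here are made up purely for illustration), one such neuron looks like this:

```python
import numpy as np

# One artificial neuron: weighted inputs are summed and passed through
# an activation function, loosely mimicking a biological neuron that
# fires once enough synaptic input arrives.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))  # squashes any input into the range (0, 1)

inputs = np.array([1.0, 0.0, 1.0])    # signals arriving at the "synapses"
weights = np.array([0.5, -0.2, 0.9])  # illustrative synaptic strengths

output = sigmoid(np.dot(inputs, weights))
print(output)  # about 0.80 here: a fairly strong "firing"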

Many Neural Networks are designed to learn different tasks and give consistent results back. This is done through a training process in which, put simply, example inputs are fed in and the network's internal weights are repeatedly adjusted until the correct output emerges. Through this process, Neural Networks can learn to walk, to play games, or even to cheat a system. Neural Networks try to be like the human mind but, much like the human mind, they can learn the wrong things and accomplish tasks in very unexpected ways, which can lead to some very interesting problem solving. One great example is an experiment run in a system called PolyWorld. PolyWorld is an ecological simulator of a simple flat world, possibly divided up by a few impassable barriers, and inhabited by a variety of organisms and freely growing food (Yaeger). During one trial, a configuration mistake meant that, while eating food gave energy, creating a child cost none. This led some of the organisms to conclude that a mostly sedentary lifestyle was the best option, so long as they reproduced and, in very much A Modest Proposal fashion, consumed their offspring for energy. This removed the need to search for food and let the organisms live while expending hardly any energy at all.
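
To make that training process concrete: here is a minimal sketch, assuming the single-neuron network from the previous snippet and a toy dataset, in which "adjusting until the correct output is given" is just a guess-compare-adjust cycle repeated thousands of times:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy task: the correct output happens to equal the first input column.
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

rng = np.random.default_rng(1)
weights = 2 * rng.random((3, 1)) - 1   # start with random weights

for _ in range(10_000):
    prediction = sigmoid(X @ weights)                         # guess
    error = y - prediction                                    # compare
    weights += X.T @ (error * prediction * (1 - prediction))  # adjust

print(sigmoid(np.array([1, 0, 0]) @ weights))  # unseen input -> close to 1
```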

This means that while we can train a Neural Network to create its own solutions to a given problem, in this case survival, we cannot yet teach it a form of morality: eating one's children, while practical, is not ethical, nor is it a real solution to the problem of staying alive. Because these systems essentially teach themselves new solutions after some initial training, they can adapt to new circumstances as they go, which can also lead to some remarkable success stories. In one instance, researchers at Facebook Artificial Intelligence Research (FAIR) designed an AI to learn how to make and carry out deals, initially trained against another AI system, as part of a study on multi-issue bargaining. Two agents were given a set of items and told to split them between themselves. While each agent was programmed with how highly it valued certain items, it did not know the value each item held for its opponent. These interactions had each system building long-term plans in order to meet its needs and extract the best personal value from each exchange. One of the goals of FAIR's systems was to implement an idea known as dialogue rollouts, which let these long-term planners model the flow of a conversation and steer away from any turn deemed uninformative, confusing, or frustrating.
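
The idea behind a dialogue rollout can be pictured as a small planning loop: before speaking, the agent simulates how each candidate utterance might play out and picks the one with the best expected final deal. Below is a toy sketch of that loop; every function, phrase, and value in it is a made-up stand-in for illustration, not FAIR's actual code:

```python
import random

# Hypothetical candidate openers, each with an assumed mean final-deal value.
CANDIDATES = {
    "I'll take the books, you get the hats": 6.0,
    "Give me everything": 2.0,               # likely frustrates the partner
    "You take the balls, I keep the rest": 5.0,
}

def simulate_dialogue(utterance):
    """Roll the conversation forward once and return the final deal's value.
    A real system would sample continuations from a learned dialogue model."""
    return random.gauss(CANDIDATES[utterance], 1.0)

def rollout_value(utterance, n_rollouts=200):
    # Average over many simulated futures: the expected value of saying this now.
    return sum(simulate_dialogue(utterance) for _ in range(n_rollouts)) / n_rollouts

best = max(CANDIDATES, key=rollout_value)
print("Chosen opener:", best)  # steers toward the highest expected final value
```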

Knowledge of these kinds of interactions lets such a system plan future conversations and generate more value for itself in later interactions. One complication in these experiments, however, was that the networks invented their own language, essentially a very streamlined shorthand that still concluded with deals being struck; this was quickly shut down in favor of basic English. Once the system understood what was expected of it, it could achieve favorable deals roughly as often as unfavorable ones, and in other experiments most people did not realize they were negotiating with a system rather than another person. Other systems pick up language very quickly, even to their own detriment. One mishap involving a learning AI was Microsoft's Twitter bot Tay. Released to the public in March 2016, Tay was designed to mimic a 19-year-old girl and learn from interacting with people on Twitter. Because Neural Networks and other AI, just like humans, need some base of experience to learn from, many people abused this and taught her inflammatory remarks. The generally accepted problem with Tay is that she was not designed with any kind of emotional intelligence, which led to her making remarks about Hitler and other controversial statements. While Tay seems like she should have succeeded, like many learning systems before her, she just didn't quite learn as intended.

In one learning system used by Berkeley students, a reward-shaping experiment rewarded a Neural Network every time it touched a soccer ball. To maximize rewards per session, the network learned that it could get next to the ball and vibrate, touching it as many times as possible in as little time as possible and collecting a reward for each touch. In the same article, a Neural Network was rewarded for reaching a goal, and that was all it needed to accomplish. The network discovered that it was never punished for moving away from the goal, so it settled into a stable circular path that looped through one end of the goal again and again, collecting a reward on every pass. It seems that when a system is reward-driven, and no rule explicitly says it cannot do X, Neural Networks find inventive ways of completing the given task for maximum reward without ever accomplishing the real goal of the experiment. Many Neural Network systems are given tasks in which they learn to walk, with various limbs added or subtracted and with different obstacles.
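
The ball-touching exploit falls straight out of the reward's arithmetic: if the reward counts contacts rather than progress toward scoring, constant vibration against the ball beats honest play. A hypothetical sketch of such a flawed reward function, with all numbers invented for illustration:

```python
# A toy version of the misaligned reward: payment per ball contact,
# nothing for actually playing soccer.
EPISODE_STEPS = 1000

def shaped_reward(touches, goals_scored):
    return touches * 1.0        # the flaw: goals_scored earns nothing

# An agent that honestly dribbles toward the goal touches the ball sometimes...
honest = shaped_reward(touches=50, goals_scored=3)
# ...while one that vibrates against the ball touches it every single step.
vibrating = shaped_reward(touches=EPISODE_STEPS, goals_scored=0)

print(honest, vibrating)  # 50.0 vs 1000.0: vibrating "wins" without one goal
```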

Some learn to walk in short pigeon-hops, while others learn to run properly while maintaining their balance. Each system is given a body structure, and each time it fails, a new generation is made with the knowledge of prior generations, so eventually some kind of forward momentum is gained. In other, more extreme cases, such as the one in David Ha's article, the Neural Network agent is allowed to change its own body in order to accomplish goals like reaching the end of an area, and it may devise solutions no one imagined. In one trial, the agent made its back leg more stable, usable as a base, while the front leg let it make short hops around different obstacles. Another agent designed its body with one extremely long leg that allowed it to simply fall over. In these trials the only objective was to get as far toward the goal as possible; the agents were not required to actually reach it. By growing one enormous leg and toppling forward, these systems met every stated requirement and never needed to reach the goal at all.
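
The falling trick comes down to an under-specified objective: if the score is distance covered rather than "goal reached", an extremely tall body that topples forward once can outscore a careful walker. A crude sketch under those assumptions, with the physics reduced to a stand-in:

```python
# Toy fitness function showing why "distance toward the goal" rewards
# a body that just falls over.
def distance_covered(leg_length, can_walk):
    if can_walk:
        return 20 * 0.5        # 20 small strides of half a unit each
    return leg_length          # a rigid body topples roughly its own length

print(distance_covered(leg_length=1.0, can_walk=True))    # 10.0 by walking
print(distance_covered(leg_length=50.0, can_walk=False))  # 50.0 by falling once
# With distance as the sole objective, evolution favors the absurd tall leg.
```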

Neural Network systems are advancing every day, getting smarter with each new iteration. But smarter does not mean they will complete given tasks exactly as intended or meet human standards. In terms of cognition, Neural Networks are in most cases nowhere near the human brain, and can only reason about certain tasks one-dimensionally. In many reward-based tasks, the best way to accomplish the goal is passed over in favor of the best way to accumulate the reward. Language-learning systems, likewise, often accomplish little more than parroting back what they are fed. While these are highly advanced systems, they do not yet match the active cognition of the human mind, though many new programs arrive every year. In the next few years, we may even see systems showing signs of emotions.
