The Potential And Limitations Of Artificial Intelligence

Everyone is excited about artificial intelligence. Great strides have been made in the technology and in the techniques of machine learning. However, at this early stage in its development, we may need to curb our enthusiasm slightly.

Already the value of AI can be seen across a wide range of industries, including marketing and sales, business operations, insurance, banking and finance, and more. In short, it is well suited to a wide range of business activities, from managing human capital to analyzing people’s performance through recruitment and more. Its potential runs through the entire business ecosystem. It is already more than apparent that AI could be worth trillions of dollars to the overall economy.

Sometimes we may forget that AI is still a work in progress. Because the technology is so young, there are still limitations that must be overcome before we truly enter the brave new world of AI.

In a recent podcast published by the McKinsey Global Institute, a firm that analyzes the global economy, Michael Chui, a partner at the institute, and James Manyika, its chairman and director, discussed the limitations on AI and what is being done to alleviate them.

Factors That Limit The Potential Of AI

Manyika noted that the limitations of AI are “purely technical.” He framed them as questions: how do we explain what the algorithm is doing? Why is it making the choices, outcomes and predictions that it does? Then there are practical limitations involving the data as well as its use.

He explained that in the process of machine learning, we are giving computers data not only to program them, but also to train them. “We’re teaching them,” he said. They are trained by feeding them labeled data. Teaching a machine to identify objects in a photograph, or to recognize a variance in a data stream that may indicate a machine is about to break down, is done by feeding it a lot of labeled data: in this batch of data the machine is about to break, in that batch the machine is not about to break, and from those examples the computer figures out whether a machine is about to break.
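The labeled-data training loop described above can be sketched in a few lines. This is a toy illustration, not a real learning system: the sensor readings, the labels, and the single-threshold “model” below are all invented for the example.

```python
# Toy supervised learning from labeled data: each example is a pair
# (vibration_level, label), where label 1 means "machine about to break".
# Training means finding the threshold that best separates the two labels.

def train_threshold(examples):
    """Return the vibration threshold that best separates the labels."""
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in examples):
        preds = [1 if x >= t else 0 for x, _ in examples]
        acc = sum(p == y for p, (_, y) in zip(preds, examples)) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical labeled data: low readings are healthy, high readings fail.
labeled_data = [(0.2, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
threshold = train_threshold(labeled_data)
print(threshold)  # → 0.7
```

From six labeled examples the "model" learns that readings of 0.7 and above mean trouble; real systems do the same thing with millions of examples and far richer models.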

Chui identified five limitations to AI that must be overcome. He explained that today humans are labeling the data. For example, people are going through photos of traffic and tracing out the cars and the lane markers to create the labeled data that self-driving cars can use to build the algorithms needed to drive the cars.

Manyika noted that he knows of students who go to a public library to label art so that algorithms can be produced that the computer uses to make predictions. For example, in the United Kingdom, groups of people are identifying photos of different breeds of dogs, creating labeled data that is used to build algorithms so that the computer can identify the data and know what it is.

This process is also being used for medical purposes, he pointed out. People are labeling photographs of different types of tumors so that when a computer scans them, it can learn what a tumor is and what kind of tumor it is.

The problem is that an enormous amount of data is needed to teach the computer. The challenge is to find ways for the computer to get through the labeled data more quickly.

Tools now being used to do that include generative adversarial networks (GANs). These use two networks: one generates candidates, and the other judges whether what is being generated is right. The two networks compete against each other, pushing the computer toward the right result. This technique allows a computer to generate art in the style of a particular artist, or architecture in the style of buildings it has observed.
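The adversarial back-and-forth can be reduced to a toy sketch. This is not a real neural GAN: the single-parameter “generator” and the hand-written “discriminator” below are assumptions made purely to show the competitive loop, in which the generator keeps adjusting until its output fools the judge.

```python
# Toy sketch of the adversarial idea behind GANs. A real GAN trains two
# neural networks; here the "generator" is one number and the
# "discriminator" is a fixed scoring rule.

REAL_MEAN = 5.0  # stands in for the real data the generator must imitate

def discriminator(sample):
    """Higher score means the sample looks more like the real data."""
    return -abs(sample - REAL_MEAN)

gen_param = 0.0   # the generator's single parameter
step = 0.1
for _ in range(100):
    # Probe both directions and move the way the discriminator rewards.
    if discriminator(gen_param + step) > discriminator(gen_param - step):
        gen_param += step
    else:
        gen_param -= step

print(gen_param)  # lands near REAL_MEAN
```

After a hundred rounds of this competition the generator's output is nearly indistinguishable (by this judge) from the real data, which is the essence of the technique.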

Manyika pointed out that people are currently experimenting with other machine learning techniques. For example, he said that researchers at Microsoft Research Lab are developing in-stream labeling, a process that labels the data through its use. In other words, the computer tries to interpret the data based on how it is being used. Although in-stream labeling has been around for a while, it has recently made major strides. Still, according to Manyika, labeling data is a limitation that needs more development.

Another limitation on AI is insufficient data. To combat the problem, companies that develop AI spend multiple years acquiring data. To cut down on the time needed to gather data, companies are turning to simulated environments. Creating a simulated environment within a computer allows you to run more trials, so the computer can learn far more, far faster.
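Generating training data from a simulated environment can be sketched as follows. The machine, its temperature-driven failure rule, and all the numbers are hypothetical; the point is only that a simulator can emit thousands of labeled examples in seconds rather than waiting years for real failures.

```python
import random

# Toy simulated environment: a hypothetical machine whose failure risk
# depends on operating temperature. Each simulated trial yields one
# labeled training example.

def simulate_trial(rng):
    temperature = rng.uniform(20.0, 120.0)
    fails = temperature > 90.0  # simplified failure rule inside the simulator
    return {"temperature": temperature, "label": int(fails)}

rng = random.Random(42)  # fixed seed so runs are repeatable
dataset = [simulate_trial(rng) for _ in range(10_000)]
failure_rate = sum(ex["label"] for ex in dataset) / len(dataset)
print(len(dataset), round(failure_rate, 2))
```

Ten thousand labeled examples are produced instantly, with roughly the failure rate the simulator's rule implies; a learner trained on them can then be tested against the much scarcer real-world data.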

Then there is the problem of explaining why the computer decided what it did. Known as explainability, this issue matters to regulations and regulators, who may probe an algorithm’s decision. For example, if one person has been let out of jail on bond and another has not, someone is going to want to know why. One could try to explain the decision, but it will certainly be difficult.

Chui explained that a technique is being developed to provide such explanations. Called LIME, which stands for local interpretable model-agnostic explanations, it involves perturbing parts of a model’s inputs and seeing whether the change alters the outcome. For example, if you are looking at a photo and trying to determine whether the item in it is a pickup truck or a car, you can alter the windscreen of the truck or the back of the car and see whether either change makes a difference. If it does, the model is evidently focusing on the back of the car or the windscreen of the truck to make its decision. In effect, experiments are run on the model to determine what makes a difference.
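The perturb-and-observe idea can be sketched with a toy model. The two binary vehicle features and the classifier below are invented for the example, and real LIME goes further (it perturbs many features at once and fits a local surrogate model), but the core move, flip an input and watch the output, is the same.

```python
# Toy perturbation-based explanation: flip each input feature of a
# hypothetical classifier and record whether the prediction changes.

def model(features):
    """Toy classifier: decides 'truck' purely from the windscreen feature."""
    return "truck" if features["tall_windscreen"] else "car"

def explain(model, features):
    base = model(features)
    influence = {}
    for name in features:
        flipped = dict(features)
        flipped[name] = not features[name]
        influence[name] = model(flipped) != base  # True = feature matters
    return influence

sample = {"tall_windscreen": True, "open_cargo_bed": True}
print(explain(model, sample))
# → {'tall_windscreen': True, 'open_cargo_bed': False}
```

The output reveals that this model leans entirely on the windscreen and ignores the cargo bed, which is exactly the kind of insight a regulator probing a decision would want.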

Finally, biased data is also a limitation on AI. If the data going into the computer is biased, then the outcome is biased too. For example, we know that some communities are subject to more police presence than others. Suppose the computer is to determine whether a heavy police presence in a community limits crime, and the data comes from one neighborhood with a heavy police presence and another with little if any police presence. Then the computer’s decision rests on plenty of data from the policed neighborhood and little if any data from the neighborhood without police. The oversampled neighborhood can skew the conclusion, so reliance on AI may mean relying on bias inherent in the data. The challenge, consequently, is to figure out a way to “de-bias” the data.
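The oversampling problem, and one simple way to counter it by reweighting, can be sketched with hypothetical numbers. The neighborhoods, record counts, and rates below are all invented; the idea shown is just that weighting each record by the inverse of its neighborhood’s record count gives both neighborhoods equal influence.

```python
# Toy de-biasing by inverse-frequency reweighting. The heavily patrolled
# neighborhood contributes 90 records, the lightly patrolled one only 10,
# so a naive average is dominated by the oversampled neighborhood.

records = (
    [{"area": "heavy_patrol", "rate": 0.30}] * 90 +
    [{"area": "light_patrol", "rate": 0.10}] * 10
)

counts = {}
for r in records:
    counts[r["area"]] = counts.get(r["area"], 0) + 1

naive = sum(r["rate"] for r in records) / len(records)
# Weight each record by 1 / (records from its area), then average per area.
weighted = sum(r["rate"] / counts[r["area"]] for r in records) / len(counts)

print(round(naive, 2), round(weighted, 2))  # → 0.28 0.2
```

The naive average (0.28) sits close to the oversampled neighborhood’s rate, while the reweighted average (0.20) treats both neighborhoods equally, illustrating how a skewed sample shifts the conclusion and how reweighting pulls it back.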

So, even as we see the potential of AI, we also have to recognize its limitations. Don’t fret: AI researchers are working feverishly on these problems, and some things that were considered limitations on AI a few years ago no longer are, thanks to its rapid development. That is why you need to check regularly with AI researchers on what is possible today.



