Artificial intelligence

Artificial Intelligence: Definition, Advice, Comparisons and Examples of AI Technology

What is Artificial Intelligence?

Artificial intelligence (AI) is the process of mimicking human intelligence through the creation and application of algorithms executed in a dynamic computing environment. Its purpose is to enable computers to think and act like human beings.

To achieve this, three components are needed:
  • Computer systems
  • Data with management systems
  • Advanced AI algorithms (code)

To get as close as possible to human behavior, artificial intelligence needs a large amount of data and processing capacity.

Who uses artificial intelligence?

Automotive, banking and finance, logistics, energy, manufacturing… no sector of activity is untouched by the rise of artificial intelligence. And for good reason: machine learning algorithms can be applied at every level, depending on the business problem at hand.

Comparisons

Machine learning infrastructures and libraries, deep learning frameworks, automated machine learning (AutoML) environments, data science studios… tools abound in the field of artificial intelligence. Hence the importance of comparing the strengths and weaknesses of each one in order to make the right choice.

Types of artificial intelligence

According to Arend Hintze, assistant professor of integrative biology, computer science and engineering at Michigan State University, there are four types of artificial intelligence, some of which do not yet exist. First come the “reactive machines”. A well-known example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify the pieces on the board and make predictions about the game.

However, it has no memory, so it does not learn from past experience: it only analyzes the possible moves and chooses the most strategic one. Deep Blue therefore cannot be applied to other situations. Then come “limited memory” systems: AI systems that can use past experience to inform future decisions. Some decision-making functions of self-driving cars are designed this way. However, these observations are not retained indefinitely.

Next comes “theory of mind”, a type of AI that does not yet exist. This psychology term refers to understanding the beliefs, desires and intentions of others, which influence their decisions. Finally, the last category, “self-awareness”, does not yet exist either: artificial intelligences with a sense of self and consciousness. Such systems would be able to understand their own current state and also infer what others are feeling.

What are the benefits of AI?

In the automotive industry, artificial intelligence drives autonomous vehicles via deep learning models (neural networks). In banking and finance, it estimates investment or trading risks.

In transportation and logistics, it calculates the best routes and optimizes flows within warehouses.

In both energy and retail, it forecasts customer consumption in order to optimize stocks and distribution. Finally, in industry, it makes it possible to anticipate equipment breakdowns (whether of a robot on an assembly line, a computer server, an elevator, etc.) before they occur, so that preventive maintenance can be carried out.

In everyday life, artificial intelligence also powers intelligent assistants (chatbots, callbots, voicebots) and helps smartphone cameras take a good shot in any conditions.

Behind the scenes

Unsurprisingly, the digital giants have wasted no time in exploiting the full potential that artificial intelligence can bring them. With volumes of personal data never before seen in history, they compete in inventiveness in applying learning algorithms built around psychographic segmentation to the most diverse needs: search, advertising targeting, talent detection, voice interfaces…

Examples of AI technology

Artificial intelligence is built into many different types of technology; here are six examples.

1. Automation

Automation is what makes a system or process run on its own. For example, robotic process automation (RPA) software can be programmed to perform repetitive tasks faster than humans can.
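As a rough illustration of the idea (not of any particular RPA product), the short Python sketch below automates one such repetitive task: merging a folder of daily CSV reports into a single consolidated file. The folder and file names are hypothetical.

```python
# Minimal automation sketch: merge every daily CSV report in a folder into
# one consolidated file. The paths are hypothetical placeholders.
import csv
from pathlib import Path

REPORTS_DIR = Path("daily_reports")            # hypothetical input folder
OUTPUT_FILE = Path("consolidated_report.csv")  # hypothetical output file

def consolidate_reports() -> int:
    """Append the rows of every *.csv file in REPORTS_DIR to OUTPUT_FILE."""
    rows_written = 0
    with OUTPUT_FILE.open("w", newline="", encoding="utf-8") as out:
        writer = None
        for report in sorted(REPORTS_DIR.glob("*.csv")):
            with report.open(newline="", encoding="utf-8") as src:
                reader = csv.DictReader(src)
                if writer is None:  # write the header once, from the first file
                    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
                    writer.writeheader()
                for row in reader:
                    writer.writerow(row)
                    rows_written += 1
    return rows_written

if __name__ == "__main__":
    print(f"{consolidate_reports()} rows consolidated")
```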

2. Machine learning

Machine learning is the science of getting a computer to perform tasks without being explicitly programmed to do them. Deep learning is a subset of machine learning and can be thought of as the automation of predictive analytics. There are three main types of machine learning. First, supervised learning, where data sets are labeled so that patterns can be detected and then reused on new data. Then, unsupervised learning, where data sets are not labeled but are sorted according to similarities or differences. And finally, reinforcement learning, where the data sets are not labeled and the AI receives feedback after taking an action.
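As a minimal sketch of the first two settings (reinforcement learning is omitted because it needs an interaction loop with an environment), here is a toy scikit-learn example; the Iris dataset and the model choices are purely illustrative.

```python
# Minimal sketch of supervised vs. unsupervised learning with scikit-learn.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Supervised: the labels y are provided and the model learns to predict them.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels are given; the model groups samples by similarity.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [list(kmeans.labels_).count(c) for c in range(3)])
```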

3. Computer vision

Computer vision is a technology that captures and analyzes visual information using a camera. It is used, for example, in signature identification or the analysis of medical images.
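To make this concrete, here is a minimal sketch using OpenCV: it binarizes a scanned signature image and extracts its stroke contours, the kind of low-level step a signature-analysis pipeline might start with. The input file name is an assumption.

```python
# Minimal computer-vision sketch with OpenCV: load an image, binarize it,
# and extract contours. "signature.png" is a hypothetical input file.
import cv2

image = cv2.imread("signature.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("signature.png not found")

# Invert the threshold so dark ink on a light background becomes white foreground.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

print(f"{len(contours)} stroke contours found")
```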

4. NLP (Natural language processing)

Natural language processing (NLP) is the processing of human language by a computer program. Spam detection is one of the oldest examples. Current approaches, however, are based on machine learning and include text translation, sentiment analysis and speech recognition.
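As a minimal sketch of machine-learning-based spam detection, a bag-of-words model with naive Bayes in scikit-learn might look like this; the tiny hand-written dataset is purely illustrative.

```python
# Minimal spam-detection sketch: bag-of-words features + naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "Win a free prize now", "Lowest price on meds, click here",
    "Meeting moved to 3pm", "Please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["Click here to win a free prize"]))      # likely 'spam'
print(model.predict(["Can we move the meeting to Friday?"]))  # likely 'ham'
```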

5. Robotics

Robotics is the design and manufacture of robots. They are used, for example, on automobile assembly lines, or by NASA to move large objects in space. Researchers are now trying to incorporate machine learning to build robots that can interact in social contexts.

6. Self-driving cars

These vehicles combine computer vision, image recognition and deep learning to build an automated ability to drive a vehicle: staying in a given lane and avoiding unexpected obstacles, such as pedestrians.
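As a deliberately toy illustration of one small piece of this, the sketch below assumes a hypothetical perception module has already estimated the car's lateral offset from the lane centre, and turns that offset into a steering correction with a simple proportional controller. Real driving stacks are vastly more sophisticated.

```python
# Toy lane-keeping sketch: turn a lane-centre offset (estimated elsewhere by a
# hypothetical perception module) into a steering correction.
STEERING_GAIN = 0.5   # radians of steering per metre of offset (made-up value)
MAX_STEERING = 0.35   # steering limit in radians (made-up value)

def steering_command(lane_offset_m: float) -> float:
    """Proportional controller: steer back toward the lane centre."""
    command = -STEERING_GAIN * lane_offset_m
    return max(-MAX_STEERING, min(MAX_STEERING, command))

# The car drifts 0.4 m to the right of centre -> steer slightly left.
print(steering_command(0.4))   # -0.2
# A large drift to the left -> steering is clipped to the physical limit.
print(steering_command(-1.2))  # 0.35
```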

Safety and ethical concerns

Self-driving cars raise questions of safety and ethics: vehicles can be hacked, and in the event of an accident, liability is unclear. In addition, a self-driving car can find itself in a situation where a crash is unavoidable, forcing the AI to make an ethical decision about how to minimize the damage. Another major concern is the risk of misuse of artificial intelligence tools.

Indeed, hackers are beginning to use sophisticated machine learning tools to gain access to sensitive systems, which further complicates the security problem. Finally, deep-learning-based video and audio generation tools were quickly repurposed to create deepfakes, an image-synthesis technique that convincingly swaps one face for another.

Despite the potential risks, there is little regulation specific to artificial intelligence, and where laws exist, they apply to AI only indirectly. For example, the GDPR (General Data Protection Regulation) and rules on security breaches impose strict limits on how companies can use consumer data, which in turn constrains model training and certain consumer-facing AI features.

AI, however, relies on data-intensive algorithms, often involving personal data, and its use therefore requires certain precautions.

Why does an AI make mistakes?

Given the complexity of systems that use artificial intelligence, errors can arise from many different sources.

System design errors

First, there are errors in the design of the system, which can have several causes.

A lack of representativeness
If some real-world cases were not taken into account in the training data, we speak of a lack of representativeness.

Example: some facial recognition algorithms have been trained on datasets in which people of certain ethnic backgrounds were underrepresented.
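One practical precaution is simply to measure group representation in the training data before training. Below is a minimal pandas sketch; the file name, column name and the 5% threshold are hypothetical.

```python
# Sketch: check group representation in a training set before training.
import pandas as pd

df = pd.read_csv("training_faces.csv")  # hypothetical: one row per training image

shares = df["ethnic_background"].value_counts(normalize=True)
print(shares)

# Flag any group that makes up less than 5% of the data (arbitrary threshold).
underrepresented = shares[shares < 0.05]
if not underrepresented.empty:
    print("Warning: underrepresented groups:", list(underrepresented.index))
```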

Overly approximate assumptions

As a mathematical abstraction, an algorithm is based on assumptions, some of which may turn out to be too approximate.

Example: Teacher performance assessment algorithms in the United States caused many complaints because the assumption that student grades were direct evidence of a teacher’s performance was too simplistic.

Wrong criteria used

During training, the algorithm is evaluated on how well it completes a task according to certain criteria, or metrics. The criteria and the decision threshold chosen have important consequences for the quality of the final system.

Example: a low threshold corresponds to a higher rate of a certain kind of error that the system designer deliberately accepts. For a medical diagnostic algorithm, we especially want to avoid false negatives, because in the event of a false positive it is always possible to carry out further tests. The designer may therefore choose to accept more false positives (by lowering the threshold) if doing so reduces the number of false negatives.
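This trade-off can be made concrete with a small scikit-learn sketch: on synthetic data standing in for a medical test, lowering the decision threshold of a probabilistic classifier reduces false negatives at the cost of more false positives.

```python
# Sketch of the threshold trade-off between false positives and false negatives.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced data standing in for a medical test (10% positives).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]  # probability of the positive class

for threshold in (0.5, 0.2):  # default threshold vs. a deliberately low one
    pred = (proba >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```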

Errors related to the conditions of use

Errors can also occur due to the conditions of use of the AI system.

Poor data quality
The quality of the data supplied to the system during operation affects its performance.

Example: this can be observed when using a voice assistant in a noisy environment: the quality of the assistant’s understanding is then reduced.

Defects related to the hardware or its constraints

When the system is dependent on physical components such as sensors, the quality of the system output will depend on the state of these components.

Example: a system for detecting incivility by video surveillance may be subject to more errors if deployed on a fleet of cameras with insufficient resolution.

Other failure risks

Finally, like any complex system, artificial intelligence systems are not exempt from classic computer-system failures, which can occur in the physical infrastructure where the computations run, during the transmission of information, or simply through human error.

Where artificial intelligence systems differ from more traditional computer systems is in how difficult it is to identify the source of a problem: this is the question of explainability. In so-called “deep” systems in particular, such as neural networks, the sheer number of parameters often makes it impossible to understand where an error comes from. To limit this risk, it is recommended to retain certain data useful to the system for a proportionate period: this is traceability.
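As a minimal sketch of traceability (the model name, version and prediction interface are hypothetical), one simple approach is to log every prediction together with its input, the model version and a timestamp, so that errors can be investigated later.

```python
# Minimal traceability sketch: record each prediction with its input,
# model version and timestamp. Names and interfaces are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="predictions.log", level=logging.INFO)

MODEL_VERSION = "credit-scoring-1.4.2"  # hypothetical version identifier

def traced_predict(model, features: dict) -> float:
    """Run the model and append an audit record to the log file."""
    score = model.predict(features)  # 'model' is any object with a predict() method
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "input": features,
        "output": score,
    }))
    return score
```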


Sources: CleverlySmart, PinterPandai, arXiv (Cornell University), Michigan State University
