What is Artificial Intelligence (AI): The Definitive Guide 2022

Artificial intelligence, abbreviated AI, is an emerging technical science that studies and develops the theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence.

1. What is artificial intelligence?

As a branch of computer science, artificial intelligence tries to grasp the nature of intelligence in order to produce a new kind of intelligent machine, one that can respond in ways similar to human intelligence. Speech recognition, robotics, natural language processing, image recognition, and expert systems are all major research areas in the field of artificial intelligence.

Since the birth of artificial intelligence, its theory and technology have steadily matured, and its range of applications keeps expanding. It is easy to imagine that future AI-based technologies will serve as vessels of human intelligence. Artificial intelligence can simulate the information processes of human thinking and knowledge; although it is not human intelligence itself, it may eventually surpass human intelligence because it can think in human-like ways.

2. How does artificial intelligence work?

The principle of artificial intelligence can be summed up very simply: artificial intelligence is mathematical calculation.

“Algorithms” determine how intelligent a machine is. Early on, engineers discovered that 1s and 0s could be represented by switching circuits on and off. Organizing many circuits together, different arrangements could then represent many things, such as colors, shapes, and letters. Combined with logic elements (transistors), this formed the pattern of “input (press the switch button) – calculation (current flows through the circuit) – output (the light turns on)”.
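
To make this concrete, here is a tiny sketch (an illustration of my own, not from the article) of how patterns of on/off states can stand for letters and colors:

```python
# A minimal sketch of how on/off circuit states encode information:
# a letter and a color expressed as patterns of 1s and 0s.

letter = "A"
bits = format(ord(letter), "08b")  # 'A' -> '01000001'
print(f"The letter {letter!r} as switch states: {bits}")

# A color can likewise be three 8-bit numbers (red, green, blue).
purple = (128, 0, 128)
print("Purple as bits:", " ".join(format(c, "08b") for c in purple))
```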

To achieve more complex computation, these circuits eventually grew into “large-scale integrated circuits”: chips.

As circuit logic is nested layer by layer and packaged layer by layer, our way of changing the state of the current becomes the “programming language”. That is what programmers work with.

Whatever the programmer tells the computer to execute, it executes, and the entire process is fixed by the program.

So, for a computer to perform a task, the programmer must first fully understand the flow of the task.
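
As a toy illustration of such a fixed program flow, consider the hypothetical grading routine below: every step and threshold is spelled out by the programmer in advance, and the machine does nothing beyond what is written.

```python
# A minimal sketch of a fully "fixed" program: input -> calculation ->
# output, with every rule decided by the programmer ahead of time.
# The grade thresholds are arbitrary values chosen for illustration.

def grade(score: int) -> str:
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 60:
        return "C"
    return "F"

for s in (95, 70, 42):
    print(s, "->", grade(s))  # the process never varies from the code
```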

3. Is artificial intelligence a software technology?

Artificial intelligence is not a standalone technology. Take an intelligent robot as an example: it must combine several intelligent technologies, such as recognition, judgment, language, and walking. Artificial intelligence is closely tied to everyday life and to many different fields.

4. What technologies does artificial intelligence consist of?

Artificial intelligence has the following five core technologies:

Computer vision: This technology breaks image-analysis tasks down into manageable steps, using pipelines built from machine learning and image-processing operations.

Machine learning: Machine learning refers to the automatic discovery of patterns in data. Once a pattern is found, it can be used to make predictions, and in general a model's predictions improve with the amount of data it processes (see the sketch after this list).

Natural language processing: Natural language processing gives computers a human-like ability to work with text. Automatically tabulating contract terms, or automatically recognizing the people, places, and so on mentioned in a text, are examples of natural language processing.

Robotics: In recent years, core technologies such as algorithms have matured considerably, and important breakthroughs have been made in robotics. Medical robots, household robots, and unmanned vehicles are all important applications of robotics technology.

Biometrics: Biometrics began as a forensic technology. It combines acoustics, computer science, and biostatistics, using human characteristics such as the face, fingerprints, voice, and veins for personal identification.
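
As promised above, here is a minimal sketch of what "discovering a pattern from data" can look like: fitting a straight line to a handful of made-up points by ordinary least squares, then using the learned pattern to predict. Real machine learning uses far more elaborate models, but the idea is the same.

```python
# A minimal sketch of learning a pattern from data: ordinary least
# squares fits y = w*x + b to toy points (data invented for the example).

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - w * mean_x

print(f"learned pattern: y = {w:.2f}x + {b:.2f}")
print("prediction for x=6:", round(w * 6 + b, 2))  # predict from the pattern
```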

5. Types of AI

There are three types: 1. Weak AI; 2. Strong AI; 3. Super AI.

• Weak AI: Replaces humans in a single, narrow field of work.

• Strong AI: Can replace the average person in most of the work of daily life. This is what AI businesses are currently trying to achieve.

• Super AI: Building on strong AI, it learns like a human, upgrades and iterates many times a day, and its level of intelligence would completely surpass that of human beings.

6. What are the advantages and benefits of artificial intelligence?

Advantages:

• In production, human labor can be relieved: low-cost, high-efficiency robots and AI systems can take over many kinds of human work.

• It can alleviate environmental problems to a certain extent, because fewer resources can now meet a greater demand than before.

• Artificial intelligence can broaden humanity's horizons in understanding the world and improve our ability to adapt to it.

Disadvantages:

• As artificial intelligence replaces humans in more and more tasks, the unemployment rate may rise significantly, leaving many people struggling to make a living.

• If artificial intelligence is not used responsibly, it may be exploited by bad actors to commit crimes, throwing people into panic.

• If we cannot control and harness AI well, we may instead be controlled and exploited by it, endangering humanity and throwing the world into panic.

7. Challenges for AI

AI algorithm bias

The quality of data determines the quality of artificial intelligence, because intelligent systems operate on the data they are given. As we push deeper into artificial intelligence, the biases that inevitably come with data become apparent. Bias here can take the form of ethnic, community, gender, or racial prejudice. For example, algorithms today identify suitable candidates for a job interview or individuals who qualify for a loan. If the algorithms making such important decisions pick up bias over time, the consequences can be dire, unfair, and unethical.
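
A hypothetical sketch makes the danger concrete: in the toy loan data below (entirely invented), one group was historically approved far less often, so any model that simply imitates the historical labels reproduces that gap.

```python
# A toy sketch of data bias: a model that mimics skewed historical
# approvals inherits the prejudice baked into the labels.

history = [  # (group, approved) -- hypothetical records
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes)

for g in ("A", "B"):
    print(f"group {g}: learned approval rate = {approval_rate(g):.0%}")
# Output: 75% vs 25%. A model trained on these labels keeps that gap,
# even if group membership says nothing about creditworthiness.
```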

Black box problem

Artificial intelligence algorithms often behave like black boxes: we know very little about their inner workings. For example, we can see what a prediction system predicts, but not how it arrived at those predictions. This makes AI systems somewhat hard to trust.

Some techniques for addressing the black-box problem are being developed, such as LIME (Local Interpretable Model-agnostic Explanations). LIME makes predictions interpretable by attaching an explanation to each final prediction of an algorithm, which helps make the algorithm more trustworthy.
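
As a rough illustration, the snippet below asks the open-source `lime` package to explain a single prediction of a scikit-learn classifier; the dataset and model are arbitrary choices made for this example, not part of the original text.

```python
# A minimal sketch of LIME explaining one prediction
# (pip install lime scikit-learn).

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Which features pushed this one sample toward its predicted class?
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```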

High-efficiency algorithm requirements

Artificial intelligence models require huge computing capacity to train. As learning algorithms become increasingly prevalent, it is crucial to ensure that they run efficiently, which calls for additional GPUs and processor cores. This is one reason AI systems have not yet been deployed in fields such as astronomy, where AI could be used for asteroid tracking.

Sophisticated AI integration

Adding a plug-in to a website or modifying an Excel workbook is far easier than integrating AI with existing enterprise infrastructure. It is especially important that AI integration does not hurt current output and that the AI remains compatible with existing program requirements. In addition, establishing an AI interface simplifies the management of AI infrastructure. Even so, making a seamless transition to AI can be challenging for the parties involved.

Lack of understanding of implementation strategies

While AI is poised to transform industries, a major challenge is that many organizations have only a vague understanding of implementation strategies. To ensure that processes keep improving, companies should set goals that match their actual stage of development in the areas that benefit from artificial intelligence, and feed the results back into the AI system.

Legal Issues

Organizations need to be wary of the legal issues surrounding AI. An AI system that collects sensitive data, however harmless its purpose, has the potential to break the law. Organizations should consider the negative impact of using AI to collect data even when doing so is legal.

8. What is artificial intelligence with example?

Self-driving car

Self-driving cars process large amounts of data to learn how to handle traffic patterns and make decisions in the moment.

These autonomous vehicles do not require passengers to take control at all times; they use machine learning and artificial intelligence to learn how to drive.

Smart assistant

Let’s start with the ubiquitous smart digital assistant. Today that means assistants such as Cortana, Google Assistant, and Siri.

Face Detection and Recognition Technology

The iPhone’s Face ID unlock, which uses face recognition technology, and Snapchat’s virtual filters, which use face detection to locate faces, are both everyday examples of AI applications.

Text editor

Today, many text editors rely on AI to create an optimal writing experience.

For example, NLP algorithms recognize incorrect grammar and flag it in the text editor for correction. Beyond autocorrect, some writing tools also offer readability and plagiarism scores.
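
A real editor relies on trained language models, but a toy rule-based sketch conveys the basic flow. The patterns below are simplified stand-ins made up for illustration, not the rules of any actual product.

```python
# A toy sketch of a writing checker: scan text for known error
# patterns and suggest fixes (real tools use trained NLP models).

import re

RULES = [
    (re.compile(r"\bteh\b", re.I), "the"),              # common typo
    (re.compile(r"\bcould of\b", re.I), "could have"),  # grammar slip
]

def check(text):
    suggestions = []
    for pattern, fix in RULES:
        for match in pattern.finditer(text):
            suggestions.append(f"{match.group()!r} -> {fix!r}")
    return suggestions

print(check("Teh report could of been better."))
# ["'Teh' -> 'the'", "'could of' -> 'could have'"]
```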

Social media

Right now, social media platforms like Instagram and Facebook run on AI. These platforms use AI to identify users’ preferences and serve content tailored to those preferences, keeping users engaged.

Chatbot

Getting answers directly from customer service representatives can be time-consuming. That is where artificial intelligence comes in.

Computer scientists train chatbots to use natural language processing to mimic the conversational style of customer service representatives.
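
Production chatbots are trained on large dialogue datasets, but a minimal keyword-matching sketch (all intents and responses invented for this example) shows the basic request-to-reply flow:

```python
# A toy customer-service chatbot: map keywords in the message to a
# canned reply, falling back to a human agent when nothing matches.

RESPONSES = {
    "refund": "I can help with refunds. Could you share your order number?",
    "shipping": "Standard shipping takes 3-5 business days.",
    "hours": "Our support team is available 9am-6pm, Monday to Friday.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "I'm not sure I understand. Let me connect you to a human agent."

print(reply("What are your support hours?"))
print(reply("My package is lost!"))
```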

Recommendation algorithm

Media platforms like YouTube and Netflix operate with the help of AI-powered intelligent recommendation systems.

The system first gathers data on users’ interests and behavior in various ways. It then predicts those users’ preferences using deep analysis and machine learning algorithms.
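
One simple flavor of this idea is user-based collaborative filtering: recommend to a user the items liked by the users most similar to them. The sketch below uses made-up ratings and cosine similarity; real recommenders use far richer models.

```python
# A minimal sketch of user-based collaborative filtering with
# cosine similarity over toy ratings (data invented for the example).

from math import sqrt

ratings = {  # user -> {item: rating}
    "ann": {"drama": 5, "comedy": 1, "sci-fi": 4},
    "bob": {"drama": 4, "comedy": 2, "sci-fi": 5, "horror": 4},
    "eve": {"comedy": 5, "horror": 1},
}

def cosine(u, v):
    shared = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

target = "ann"
best = max((u for u in ratings if u != target),
           key=lambda u: cosine(ratings[target], ratings[u]))
unseen = set(ratings[best]) - set(ratings[target])
print(f"most similar user: {best}; recommend: {sorted(unseen)}")
# most similar user: bob; recommend: ['horror']
```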

Search algorithm

The search algorithm ensures that the top results on the Search Engine Results Page (SERP) have an answer to our query. But how does this happen?

Search companies tune their algorithms to identify high-quality, relevant data. The engine then presents a ranked list of results intended to answer the query and give the user the best experience.
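
Real ranking algorithms weigh a huge number of signals, but a crude term-overlap sketch (toy documents invented for this example) shows the basic shape of the problem:

```python
# A toy sketch of search ranking: score each document by how often
# it contains the query terms, then sort (real engines use far more).

docs = {
    "page1": "artificial intelligence mimics human intelligence",
    "page2": "intelligence tests for humans",
    "page3": "cooking recipes for busy weeknights",
}

def score(query, text):
    words = text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

query = "artificial intelligence"
for page in sorted(docs, key=lambda d: score(query, docs[d]), reverse=True):
    print(page, score(query, docs[page]))
# page1 ranks first: it matches both query terms.
```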

9. AI Solutions

Generally speaking, artificial intelligence solutions concern the intelligence expressed by man-made machines. As a subject, artificial intelligence can be regarded as an emerging science and engineering discipline that studies the feasibility, theory, methods, technologies, application systems, and ethics of developing intelligence. The core lies in “intelligence”, because the only advanced intelligence humans currently know is humanity itself. Artificial intelligence therefore attempts to create machine systems that “think like a human” or “act like a human” by exploring the nature of human intelligence. The former tackles problems such as logic, reasoning, and finding optimal solutions, while the latter is achieved through learning, cognition, reasoning, decision-making, and other actions.

There is no generally accepted consensus on the precise definition of AI solutions, which itself reflects how early a stage AI development is still in. First, the theoretical foundations of artificial intelligence remain immature and technological breakthroughs are still being explored, because humans understand little about the mechanisms and components of intelligence itself. Second, technological applications run ahead of conceptual theory: barriers remain between different directions and sub-fields, and no single mainstream technical route for artificial intelligence can yet be seen.

AI will provide a powerful impetus for the innovative development of the digital economy. At the level of content production, AI technologies such as machine learning models and generative AI are reshaping how content is made: text, images, audio, video, virtual scenes, and other digital content can all be generated autonomously. With the flourishing development of generative AI, new forms of digital content interaction and generation are emerging. In addition, the changes that generative AI brings to content production will also help make future Internet applications such as VR/AR and the Metaverse a practical reality.

10. Digital avatars and generative AI are a growing trend

Two trends in AI are strongly linked to the development of the digital economy and the Internet.

One direction is generative AI, a technology leading the way for the future of AI. According to Gartner, generative AI is one of the five most impactful technologies of 2022, and the firm predicts that by 2025 generative AI will account for one-tenth of all data produced.

Machine learning algorithms use training data to generate new images, videos, text, and so on; this is generative AI. Generative AI learns the intrinsic patterns in data and outputs new, similar material. At present, generative AI can produce high-quality creative content with almost no human involvement: picture style transfer, text-to-image, image-to-emoticon, image or video inpainting, synthesis of realistic human speech, generation of human faces and other visual objects, creation of 3D virtual environments, and more. Humans only need to set the scene, and the generative AI autonomously outputs the desired result. This not only makes content production possible at near-zero marginal cost but also avoids the biases of human thought and experience.

The second direction is the digital virtual human: a digital humanoid character created with 3D computer graphics software, that is, a digital avatar. Compared with the virtual characters of earlier film and television special effects, such as those in “Avatar”, today’s virtual humans combine technologies such as AI synthesis and real-time motion capture, allowing them to interact with users more intelligently and in real time through language, expressions, and actions. Virtual humans are gradually becoming an interdisciplinary frontier spanning kinematics, AI, VR, and computer graphics, and they are moving from online culture and entertainment into offline settings.

The first trend in virtual-human evolution is integration into conversational AI systems, transforming traditional virtual assistants and chatbots into approachable, no-longer-abstract human figures and enhancing the emotional exchange in their communication with people. The second is that the tools are becoming more varied and simpler: a user can create a unique avatar in about 30 minutes by adjusting parameters on a base image in the system.

11. The history of artificial intelligence

Gestational stage

This stage mainly refers to the period before 1956. Since ancient times, humans have wanted machines to take over some of their mental work, giving them greater power to conquer nature. The great philosopher Aristotle (384-322 BC) set forth the laws of formal logic in his Organon, from which the basic foundation of deductive reasoning was formed. This was a major research achievement with a significant impact on the emergence and development of artificial intelligence. The British philosopher Francis Bacon systematically put forward the inductive method and coined the aphorism that “knowledge is power”. This had important implications for the study of human thought processes and for the shift toward knowledge-centered research in artificial intelligence after the 1970s.

The German mathematician and philosopher G. W. Leibniz proposed the ideas of a universal notation and a calculus of reasoning: he believed that a universal symbolic language could be established, together with a calculus for reasoning in that language. This idea is not only the starting point of modern machine-based thinking but also a foundation of the emergence and development of mathematical logic.

The British logician G. Boole devoted himself to formalizing and mechanizing the laws of thought and created Boolean algebra. In his book “The Laws of Thought”, he first used a symbolic language to describe the basic reasoning laws of thinking.

In 1936, the British mathematician A. M. Turing proposed the Turing machine, an idealized mathematical model of a computer. With it, the electronic digital computer came into the world.

Formation stage

This stage mainly refers to 1956-1969. In the summer of 1956, J. McCarthy, then a young mathematics assistant at Dartmouth College and later a professor at Stanford University; M. L. Minsky, a young mathematician and neuroscientist at Harvard and later a professor at MIT; N. Rochester, head of the IBM Information Research Center; and C. E. Shannon, a mathematics researcher in the Information Department at Bell Labs, jointly initiated a two-month academic seminar at Dartmouth, inviting T. Moore of Princeton University, O. Selfridge of MIT, and A. Newell and H. A. Simon of the RAND Corporation and Carnegie Mellon University, among others, to discuss the issue of machine intelligence. McCarthy has been called the father of artificial intelligence because he proposed the formal adoption of the term “artificial intelligence” at this conference, which marked the true emergence of artificial intelligence as an emerging discipline. It is of historic significance.

Since then, a number of artificial intelligence research groups have formed in the United States, such as Newell and Simon’s Carnegie-RAND collaborative group, Minsky and McCarthy’s MIT research group, and Samuel’s IBM engineering research group.

Development stage

This stage mainly refers to the period after 1970. In the 1970s, many countries carried out research on artificial intelligence, and a large number of results emerged. For example, in 1972 A. Colmerauer of the University of Marseille in France implemented the logic programming language PROLOG, and in the same year E. H. Shortliffe of Stanford University and colleagues began building MYCIN, an American expert system for medical diagnosis and treatment.

However, as with other emerging disciplines, the development of artificial intelligence has not been smooth. The United Kingdom and the United States cut funding for most machine translation projects at the time, and other areas, such as problem solving, neural networks, and machine learning, also ran into difficulties, leaving artificial intelligence research in a predicament for a while.

The pioneers of artificial intelligence research reflected carefully and summed up the experience and lessons of that earlier work. In 1977, Feigenbaum proposed the concept of “knowledge engineering” at the Fifth International Joint Conference on Artificial Intelligence, which played an important role in the research and construction of knowledge-based intelligent systems. Most researchers accepted Feigenbaum’s knowledge-centered view of AI research, and the field has since entered a new era of vigorous, knowledge-centered development.