Basic Introduction to Artificial Intelligence (A Comprehensive Popularization of Artificial Intelligence)
This article is published by NetEase Cloud. Original article: Comprehensive Science Popularization of Artificial Intelligence - NetEase Cloud Blog. People encounter artificial intelligence in daily life more and more often: a JD smart refrigerator can help users buy groceries; machines perform automatic translation; robot assistants such as Siri, Alexa, and Cortana answer questions; and driverless cars, AlphaGo, and the like have made artificial intelligence "visible and tangible".
Artificial intelligence is surging forward with unprecedented momentum: total financing in related fields has grown steadily year by year, reaching several billion dollars by 2016. So what is artificial intelligence? What does the field cover? What goals and tasks does it need to accomplish? The following sections introduce these one by one.
1. What Is Artificial Intelligence?
Alan Turing described artificial intelligence as the science of enabling computers to perform tasks that would require intelligence if done by humans. Scholars at Stanford University regard AI as the science and engineering of making intelligent machines, especially intelligent computer programs. Wikipedia defines AI as the intelligence exhibited by artificially built systems; the term also refers to the scientific field that studies whether such intelligent systems can be realized, and how.
However it is defined, AI is inseparable from intelligence, yet so far humans have not agreed on a unified definition of intelligence. Generally speaking, intelligence refers to the manifestation of human wisdom. This point was first made by Professor Zhong Yixin, Chairman of the Chinese Association for Artificial Intelligence, who held that human intelligence comprises three aspects: discovering problems, defining problems, and solving problems, whereas artificial intelligence so far achieves only problem solving.
The author believes that intelligence is a kind of order, a manifestation of information, the ability to move the world in an orderly direction. Unfortunately, by the principle of entropy increase, no matter how hard intelligent agents try, the universe as a whole always evolves toward greater entropy, that is, toward increasing disorder and chaos. Whether this is a deliberate arrangement by God, or whether there is another world beyond the universe humans can observe, remains unknown.
2. The History of Artificial Intelligence
In the early 1950s, artificial intelligence focused on so-called strong AI, hoping that machines could complete any intellectual task just as humans do. When progress on strong AI stalled, weak AI emerged: applying AI techniques to narrower problems. Before the 1980s, AI research was divided between these two paradigms, and the two camps stood opposed.
Around 1980, however, machine learning became mainstream. It aims to equip computers with the ability to learn and to build models, so that they can make predictions and take other actions in specific domains.
Historically, there were three schools of AI. Symbolism, also known as logicism, the psychology school, or computerism, rests on two main principles: the physical symbol system hypothesis (that is, the symbol-manipulating system) and the principle of bounded rationality.
Connectionism, also known as the bionics or physiology school, is mainly based on the connection mechanisms of neural networks and their learning algorithms. Behaviorism, also known as evolutionism or the cybernetics school, takes cybernetics and perception-action control systems as its principles.
Symbolism holds that artificial intelligence originates from mathematical logic. This school was the first to adopt the term "artificial intelligence", in 1956, and later developed heuristic algorithms, then expert systems, then knowledge engineering theory and technology, making great progress in the 1980s. Connectionism holds that artificial intelligence originates from bionics, in particular research on models of the human brain.
In the 1960s and 1970s, connectionist research on brain models, represented by the perceptron, enjoyed a boom. Owing to the limitations of the theoretical models, biological prototypes, and technology of the time, brain-model research fell into a trough from the late 1970s to the early 1980s.
It was not until Professor Hopfield published two influential papers in 1982 and 1984, proposing the use of hardware to simulate neural networks, that connectionism revived. In 1986, Rumelhart et al. proposed the backpropagation (BP) algorithm for multi-layer networks.
Research on convolutional neural networks (CNNs) followed, and connectionism gained momentum. From models to algorithms, and from theoretical analysis to engineering implementation, this work laid the foundation for neural-network computers to reach the market. In 2006, Hinton published papers in Science and related journals that first put forward the concept of the deep belief network (DBN), pushing deep learning into the academic spotlight as one of the hottest research directions in artificial intelligence.
Behaviorism holds that AI originated from cybernetics. Cybernetic thinking had become an important intellectual current as early as the 1940s and 1950s, influencing early AI workers: Wiener and McCulloch proposed cybernetics and self-organizing systems, while Qian Xuesen and others proposed engineering cybernetics and biological cybernetics, affecting many fields.
Cybernetics linked the working principles of the nervous system with information theory, control theory, logic, and computers. Early work focused on simulating human intelligent behavior and its role in control processes, such as research on self-optimizing, self-adaptive, self-stabilizing, self-organizing, and self-learning control systems, and the development of "cybernetic animals".
By the 1960s and 1970s, research on these cybernetic systems had made some progress and sowed the seeds of intelligent control and intelligent robotics; intelligent control and intelligent robot systems were born in the 1980s. Behaviorism emerged as a new school of artificial intelligence at the end of the 20th century and aroused wide interest.
The representative work of this school is Brooks' six-legged walking robot, regarded as a new generation of "cybernetic animal": a control system that simulates insect behavior based on the perception-action model.
3. The Goals of Artificial Intelligence
AI comprises eight aspects: reasoning, knowledge representation, automatic planning, machine learning, natural language understanding, computer vision, robotics, and strong AI.
Knowledge representation and reasoning include propositional calculus and resolution, and predicate calculus and resolution, which can be used to derive formulas and theorems. Automatic planning includes robot planning, action and learning, state-space search, adversarial search, and planning proper. Machine learning is a research field that grew out of one of AI's sub-goals; it helps machines and software learn on their own to solve the problems they encounter.
Natural language processing is another field that grew out of an AI sub-goal: helping machines communicate with people. Computer vision developed from the goal of enabling machines to identify and recognize the objects they see. Robotics, likewise born from AI's goals, endows machines with a physical form so they can perform real actions.
We often encounter the terms artificial intelligence, machine learning, and data mining, along with many articles discussing the relationships among the three. Broadly speaking, artificial intelligence is the large research field; machine learning serves one of AI's goals and provides many algorithms; and data mining is the part that leans toward applying those algorithms.
The three complement one another and also draw on knowledge from other fields. Please refer to the following diagram for the specific relationships.
4. Methods of Artificial Intelligence
To achieve the goals of artificial intelligence, this section reviews the methods and achievements of academic and industrial research.
4.1 Knowledge Representation and Reasoning
Knowledge representation includes knowledge-based systems and the representation of common-sense and other knowledge. Traditional knowledge representation is quite mature, including description logics as well as the Semantic Web (the Resource Description Framework, RDF).
Knowledge reasoning is grounded in logic. It first requires a large dataset, such as Freebase; second, automated tools for relation extraction; and finally a sensible knowledge storage structure, such as RDF. The knowledge graph proposed by Google is a form of knowledge engineering: a huge knowledge base together with various services built on top of it.
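The triple-based storage behind RDF can be sketched in a few lines. The mini knowledge base and query helper below are invented for illustration only and mimic no real system's API; real stores index triples for scale, but the pattern-matching idea is the same.

```python
# A toy RDF-style triple store: each fact is a (subject, predicate, object)
# tuple. The facts and helper function here are illustrative assumptions.
triples = [
    ("AlphaGo", "developedBy", "DeepMind"),
    ("DeepMind", "ownedBy", "Google"),
    ("AlphaGo", "playsGame", "Go"),
]

def query(s=None, p=None, o=None):
    """Pattern-match over the store; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything the store knows about AlphaGo:
print(query(s="AlphaGo"))
```

A query with all three positions bound checks a single fact; leaving positions as `None` retrieves every matching triple, which is the basic operation relation-extraction pipelines feed into.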
The knowledge ontologies studied by industry in earlier years are also a kind of knowledge engineering; concrete examples include FrameNet, WordNet, and HowNet. Please refer to the following figure.
IBM developed the Watson question-answering system in 2011. Google proposed the Knowledge Graph in 2012; together with deep learning, which grew into Google Brain, it is one of Google's two important technology reserves, used to support next-generation search and the online advertising business. Facebook uses knowledge graph technology to build interest graphs for connecting people and sharing information, and on this basis built Graph Search.
Other industrial applications include Siri, Evi, Google Now, DBpedia, and Freebase. For the basic technical architecture of a general knowledge-engineering system, please refer to the following figure.
4.2 Automatic Planning
First we should mention finite state machines (FSMs), which are widely applied to game bots, network protocols, regular expressions, lexical and syntax analysis, automatic customer service, and so on. The following diagram shows the state transitions and actions of a simple game bot.
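A game-bot FSM of this kind can be sketched as a transition table. The states and events below are made up for illustration, not taken from any particular game:

```python
# Minimal finite state machine for a game bot. States and events are
# hypothetical examples; a dict maps (state, event) -> next state.
transitions = {
    ("patrol", "see_enemy"): "chase",
    ("chase", "in_range"): "attack",
    ("chase", "lost_enemy"): "patrol",
    ("attack", "enemy_dead"): "patrol",
    ("attack", "out_of_range"): "chase",
}

def step(state, event):
    # Events with no defined transition leave the state unchanged.
    return transitions.get((state, event), state)

state = "patrol"
for event in ["see_enemy", "in_range", "enemy_dead"]:
    state = step(state, event)
print(state)  # "patrol": the bot returns to patrolling after the kill
```

The same table-driven shape underlies lexical analyzers and protocol handlers: only the alphabet of events and the set of states change.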
Next comes state-space search. The simplest and crudest form is blind search, much like Tesla's assessment of Edison: asked to find a needle in a haystack, he would not hesitate to examine straw after straw until he found it. The optimized, improved version is heuristic search, with applications such as the A* algorithm, the chess program Deep Blue, and the Go program AlphaGo.
AlphaGo combines deep learning with Monte Carlo Tree Search (MCTS), supervised learning, reinforcement learning, and other methods. Monte Carlo Tree Search is a heuristic search strategy: it expands the search tree by randomly sampling the search space while always selecting the best move found in the current samples, steadily approaching the global optimum and determining how each move can create better opportunities.
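The gap between blind search and heuristic search can be made concrete with a small A* sketch on a grid, using Manhattan distance as the admissible heuristic. The grid and coordinates below are a made-up example, not from the text:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest-path length on a 4-connected grid; 1 marks an obstacle.
    Heuristic: Manhattan distance to the goal (never overestimates here)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]        # entries are (f, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g                          # cost of the shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                if g + 1 < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = g + 1
                    heapq.heappush(open_heap, (g + 1 + h((r, c)), g + 1, (r, c)))
    return None                               # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6: the path must go around the wall
```

Blind search would expand nodes in all directions indiscriminately; the heuristic term `h` biases expansion toward the goal, which is the same idea, at vastly larger scale, behind Deep Blue's evaluation functions.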
In addition, this area also covers planning, action, and learning; adversarial search; logic-based planning methods; the situation calculus; and related topics.
4.3 Machine Learning
In a letter to shareholders, Google CEO Sundar Pichai hailed machine learning as the real future of artificial intelligence and computing; one can imagine how important machine learning is to AI research. Machine learning methods include supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
The algorithms include: regression algorithms (least squares, LR, etc.), instance-based algorithms (KNN, LVQ, etc.), regularization methods (LASSO, etc.), decision tree algorithms (CART, C4.5, RF, etc.), Bayesian methods (naive Bayes, BBN, etc.), kernel-based algorithms (SVM, LDA, etc.), clustering algorithms (K-Means, DBSCAN, EM, etc.), association rules (Apriori, FP-Growth), genetic algorithms, artificial neural networks (PNN, BP, etc.), deep learning (RBM, DBN, CNN, DNN, LSTM, GAN, etc.), dimensionality reduction methods (PCA, PLS, etc.), and ensemble methods (Boosting, Bagging, AdaBoost, RF, GBDT, etc.).
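To give a flavor of the instance-based family above, here is a minimal k-nearest-neighbors classifier. The toy points and labels are invented for illustration:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label). Classify query by majority vote
    among its k nearest neighbors under squared Euclidean distance."""
    dist = lambda item: (item[0][0] - query[0]) ** 2 + (item[0][1] - query[1]) ** 2
    nearest = sorted(train, key=dist)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated clusters of made-up 2D points:
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_predict(train, (1, 1)))  # "A": nearest neighbors are the A cluster
print(knn_predict(train, (5, 5)))  # "B"
```

KNN has no training phase at all, which is exactly what "instance-based" means: the stored examples themselves are the model.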
Students who want to go deeper can consult the "Table of Machine Learning Knowledge" and "Summary of Machine Learning Methods". Deep learning is the extension and development of artificial neural network algorithms within machine learning, and research on it is currently very hot. Let us introduce neural networks and deep learning, starting with a two-layer network as shown in the following figure, where a is the value of a "unit", w is the weight of a "connection", and g is the activation function. The sigmoid function is commonly used because it is easy to differentiate.
Matrix operations simplify the formulas in the figure: a(2) = g(a(1) · W(1)), z = g(a(2) · W(2)). Let the true value of a training sample be y and the predicted value be z, and define the loss function as loss = (z − y)². The goal of optimizing all the parameters W is to make the total loss over the training data as small as possible. The problem thus becomes an optimization problem, usually solved with the gradient descent algorithm.
Typically, the backpropagation algorithm computes the gradients layer by layer from back to front, and finally solves for each parameter matrix.
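The two-layer network, squared loss, and backpropagation just described can be sketched with NumPy. The layer sizes, random toy data, and learning rate below are assumptions chosen for illustration, not values from the text:

```python
import numpy as np

# Two-layer network: a(2) = g(a(1) @ W1), z = g(a(2) @ W2), loss = (z - y)^2.
g = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoid activation
rng = np.random.default_rng(0)
a1 = rng.normal(size=(4, 3))             # 4 made-up samples, 3 inputs each
y = rng.random(size=(4, 1))              # made-up targets in (0, 1)
W1 = rng.normal(size=(3, 5))             # input -> hidden weights
W2 = rng.normal(size=(5, 1))             # hidden -> output weights

lr = 0.1
losses = []
for _ in range(500):
    # Forward pass.
    a2 = g(a1 @ W1)
    z = g(a2 @ W2)
    losses.append(float(np.sum((z - y) ** 2)))
    # Backpropagation: apply the chain rule layer by layer, back to front.
    dz = 2 * (z - y) * z * (1 - z)       # gradient at the output pre-activation
    dW2 = a2.T @ dz
    da2 = (dz @ W2.T) * a2 * (1 - a2)    # propagate through the hidden layer
    dW1 = a1.T @ da2
    # Gradient descent step on both parameter matrices.
    W1 -= lr * dW1
    W2 -= lr * dW2

print(losses[0], "->", losses[-1])       # the loss should shrink with training
```

The factors `z * (1 - z)` and `a2 * (1 - a2)` are the sigmoid derivative g' = g(1 − g), which is exactly the quantity that causes the vanishing-gradient problem discussed below once many layers are stacked.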
Deep learning uses multi-layer neural networks, and the cost of solving for the parameter matrices grows exponentially with the number of layers. Suppose a 300 × 300 pixel image is processed with an 8-layer network of 6 nodes per layer; fully connected, this gives on the order of 300 × 300 × 6^8 parameters to compute. Convolutional neural networks (CNNs) introduce convolution operators and weight sharing to reduce the number of parameters dramatically.
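Weight sharing is easy to quantify. A convolution kernel reuses the same handful of weights wherever it slides over the image, while a dense layer needs one weight per input pixel per unit; the back-of-envelope comparison below mirrors the 300 × 300 example (bias terms ignored, kernel size 3 × 3 chosen as a typical assumption):

```python
# Parameter count for a single layer over a 300x300 grayscale image.
pixels = 300 * 300

# Dense (fully connected): every pixel connects to each of 6 hidden units.
dense_params = pixels * 6

# Convolutional: 6 filters of size 3x3; the same 9 weights per filter are
# shared across every image position (weight sharing).
conv_params = 6 * 3 * 3

print(dense_params)  # 540000
print(conv_params)   # 54
```

A four-orders-of-magnitude reduction in one layer is why convolution made deep networks on images tractable.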
The other problem is vanishing gradients (gradient dispersion). The derivative of the sigmoid function is at most 0.25, and the initial random parameters W drawn from a standard normal distribution mostly lie between 0 and 1. Because gradients are computed layer by layer from back to front, the gradient at an early layer is a product of factors contributed by the later layers, so it decays exponentially: once the factors are less than 1, the product shrinks rapidly across many layers.
An effective remedy is to use ReLU as the activation function. This is only a brief introduction; readers who want to learn more about deep learning can consult "Understanding Deep Learning in One Article".
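The 0.25 bound on the sigmoid derivative, and why ReLU avoids the shrinking product, can be checked numerically. The 8-layer depth is just the example from the preceding paragraphs:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    # g'(x) = g(x) * (1 - g(x)), maximized at x = 0.
    s = sigmoid(x)
    return s * (1.0 - s)

# The sigmoid derivative peaks at x = 0 with value exactly 0.25 ...
print(sigmoid_deriv(0.0))   # 0.25

# ... so backprop through 8 sigmoid layers multiplies in at most 0.25 per
# layer, and the early-layer gradient all but vanishes:
print(0.25 ** 8)            # ~1.5e-05

# ReLU's derivative is 1 for any positive input, so the product of factors
# does not shrink with depth:
relu_deriv = lambda x: 1.0 if x > 0 else 0.0
print(relu_deriv(3.0) ** 8) # 1.0
```

This is the whole argument in miniature: repeated factors ≤ 0.25 decay exponentially with depth, while factors of 1 do not.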
4.4 Natural Language Processing (NLP)
NLP is another goal of artificial intelligence: analyzing, understanding, and generating natural language to ease communication between people and computers, and between people themselves. Its application areas include machine translation; conversion among text, speech, and images; chatbots; automatic summarization; sentiment analysis; text classification; and information extraction.
The following is a brief knowledge architecture diagram of natural language processing
4.5 Robot Vision
Vision is vital to humans: more than 90% of the information humans acquire comes through the eyes. So for robots to acquire human-like information-gathering abilities, the key is to solve robot vision. Machine vision can already do many things: recognizing faces, logos, and text; detecting objects and understanding their environment, as in autonomous driverless vehicles; detecting events, as in video surveillance and people counting; organizing information, such as index databases for images and image sequences; modeling objects or environments, as in medical image analysis systems and terrain models; and automatic inspection, as in manufacturing applications.
4.6 Robotics and Strong Artificial Intelligence
Robotics is an interdisciplinary subject. Its main research topics include environment-adaptive machine bionics, autonomous robot behavior, human-robot cooperation, micro/nano manipulation robots, manufacturing-equipment robots, scientific and engineering robots, service robots, and others. The domestic robot industry has not yet reached scale, although companies such as DJI and Shenyang Siasun Robot & Automation are well commercialized.
Strong artificial intelligence is one of the main goals of AI research. It is also called artificial general intelligence (AGI): the ability to perform general intelligent actions. Strong AI is usually associated with human traits such as consciousness, sentience, knowledge, and self-awareness.
Implementing strong AI requires at least the following abilities: automated reasoning, using strategies to solve problems and make decisions under uncertainty; knowledge representation, including a common-sense knowledge base; automatic planning; learning; communicating in natural language; and integrating all of these toward a common goal. For now, strong AI appears mainly in films and novels, such as the robot boy David in Spielberg's A.I. Artificial Intelligence.
Finally, returning to the discussion of human intelligence versus artificial intelligence: human intelligence is a system of capabilities in which "implicit wisdom" and "explicit wisdom" promote and complement each other. "Implicit wisdom" mainly refers to the human ability to discover problems and define problems, thereby setting the working framework; it is supported by purpose, knowledge, intuition, abstraction, imagination, inspiration, insight, and artistic creativity, and is strongly tacit, so it is hard to understand precisely and even harder to simulate on machines. "Explicit wisdom" mainly refers to the human ability to solve problems within the framework set by implicit wisdom; it depends on collecting information, generating knowledge, creating problem-solving strategies, and converting them into action. Being comparatively explicit, it may gradually be understood and simulated on machines.
At present, almost all artificial intelligence can only imitate the human ability to solve problems; it cannot discover or define problems. The claim that "artificial intelligence will comprehensively surpass human wisdom" therefore has no scientific basis: today's AI is merely a tool that helps humans raise productivity.