What Is Artificial Intelligence? (A Primer Packed with Useful Content)
Author: JSPI_雪哥
"Baidu Baike Encyclopedia" defines artificial intelligence (AI), which is abbreviated as AI. It is a new technological science to research and develop theories, methods, technologies and application systems used to simulate, extend and expand human intelligence. "Wikipedia" defines artificial intelligence (AI), also known as machine intelligence, which refers to the intelligence shown by machines made by people.
Generally speaking, artificial intelligence refers to human-like intelligence realized by means of ordinary computer programs. The term also refers to the scientific field that studies whether such intelligent systems can be built and how. My understanding: artificial intelligence is an interdisciplinary field covering computer science, statistics, neuroscience, social science, and many other areas.
I hope that as you swim in the ocean of artificial intelligence, you will form your own understanding of it. Over the past 70 years, the development of artificial intelligence has gone through repeated ups and downs. Looking back at this history, we can roughly divide it into three cycles.
Ancient Times
In 1942, the science fiction writer Isaac Asimov completed his enduring book I, Robot. He probably did not expect that this work of science fiction would become a principal theoretical source for defining the laws of interaction between humans and robots. In 1943, McCulloch and Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity," marking humanity's first step toward realizing "artificial intelligence."
(Image: Asimov and his novel I, Robot)
In 1950, Turing published "Computing Machinery and Intelligence."
The First Summer
In 1956, six years after Alan Turing published his paper, John McCarthy first proposed the concept of "artificial intelligence" at the Dartmouth College summer workshop on mathematics and computation. In 1957, the psychologist and researcher Frank Rosenblatt built the Mark I Perceptron at the Cornell Aeronautical Laboratory, establishing a simulated neural network that could learn through repeated trials.
In 1958, John McCarthy introduced LISP, a programming language designed specifically for AI, which would remain the mainstream AI programming language for the next three decades. In 1959, Nathaniel Rochester and Herbert Gelernter of IBM developed a geometry theorem-proving program.
In 1960, the first industrial robot appeared: a robotic arm called "Unimate," used for welding on automotive production lines.
(Image: The Mark I Perceptron)
In 1963, programs became able to solve closed-form calculus problems. In 1964, Tom Evans's ANALOGY program solved the kinds of geometric analogy problems found in IQ tests. In 1966, the world's first chatbot, ELIZA, was developed at MIT.
The First Winter
After this brief boom, the actual output of artificial intelligence fell far short of expectations, and investment in the field in the United States and the United Kingdom shrank rapidly, bringing on the first AI winter. From today's perspective, there were three reasons why development entered this winter: first, all research tried to solve problems with "human-style thinking," which was essentially "imitation" rather than "intelligence"; second, researchers did not appreciate the complexity of "intelligence," so algorithms and models were too simple to solve real, complex problems; third, data and computing power were both lacking.
The Second Summer
In 1971, Stanford University began work on the MYCIN system, which could quickly and accurately identify the bacteria that cause sepsis. In 1980, the first convolutional neural network architecture, the "neocognitron," was proposed.
In 1982, artificial intelligence was first applied commercially, handling order processing for Digital Equipment Corporation (DEC); over the following decade, most American companies benefited from such systems. In view of this, the governments of the United States, Europe, and Japan invested tens of billions of dollars to promote the development of artificial intelligence, and the field entered its second summer.
The Second Winter
Despite the heavy investment by the United States, Europe, Japan, and others in the 1980s, most companies failed to reach their goals, to the point that many scientists lost confidence and the scientific community even avoided using the term "artificial intelligence." In the 1990s, artificial intelligence researchers changed course and focused more on theoretical foundations grounded in mature statistical theory, which laid the groundwork for the later boom.
The Third Summer
In 1997, IBM's chess program "Deep Blue" defeated the reigning world champion Garry Kasparov, making the world realize the power of "artificial intelligence" for the first time. In 2011, IBM's Watson defeated the human champions on the popular TV quiz show "Jeopardy!", greatly raising the public's awareness of the state of the art in artificial intelligence.
In March 2016, AlphaGo played the world champion and professional 9-dan Go player Lee Sedol in a human-versus-machine Go match and won with a total score of 4-1. In late 2016 and early 2017, the program played fast games on Chinese Go websites under the registered account "Master" against dozens of Go masters from China, Japan, and South Korea, winning 60 consecutive games without a loss.
In May 2017, at the Wuzhen Go Summit in China, AlphaGo played the world's top-ranked Go player Ke Jie and won with a total score of 3-0. The Go community widely recognizes that AlphaGo's playing strength has surpassed the top level of professional human Go. Returning to the present: intelligent driving, facial recognition, recommendation algorithms, machine translation, robots, and other artificial intelligence products have already entered our daily lives. We believe this is not the end of artificial intelligence, but only the beginning!
The Hierarchy of Artificial Intelligence
Infrastructure
Looking back at the history of artificial intelligence, every advance in infrastructure has significantly driven the evolution of the algorithm layer and the technology layer: the rise of computers in the 1970s, their popularization in the 1980s, the growth of computing speed and storage capacity in the 1990s, and the rise of the Internet, which brought the digitization of data.
In the 21st century this driving effect has become even more pronounced. The emergence of large-scale server clusters on the Internet, the big data accumulated by search and e-commerce businesses, the rise of GPUs (graphics processing units) and heterogeneous/low-power chips, and the resulting leap in computing power gave birth to deep learning and ignited the current wave of artificial intelligence.
To this day, the biggest bottleneck, and also the biggest driver, of artificial intelligence remains computing power + data + algorithms (algorithms being the focus of our study).
Machine learning: using algorithms to enable computers to mine information from data the way humans do. Traditional machine learning proceeds step by step, and the optimal solution at each step does not necessarily yield the optimal final result. Moreover, feature selection is done by humans, which demands strong domain knowledge, takes time and effort, and depends largely on experience and luck.
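To make the step-by-step, hand-engineered nature of traditional machine learning concrete, here is a minimal sketch in Python. It assumes scikit-learn; the iris dataset and the two hand-picked feature columns are illustrative choices of mine, not anything from the original text:

```python
# A sketch of the traditional pipeline: humans pick the features,
# then a classic model (logistic regression) is fitted on them.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# "Feature selection is done by humans": keep only two columns
# (petal length, petal width) that we believe are informative.
X_manual = X[:, [2, 3]]

X_train, X_test, y_train, y_test = train_test_split(
    X_manual, y, test_size=0.25, random_state=0
)

# Each stage is optimized separately; a locally good feature choice
# does not guarantee the best final result.
clf = LogisticRegression(max_iter=200).fit(X_train, y_train)
print(f"test accuracy with hand-picked features: {clf.score(X_test, y_test):.3f}")
```

If the hand-picked columns happen to discard useful information, the later fitting step cannot recover it, which is exactly the dependence on "experience and luck" described above.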
Deep learning: deep learning is a subset of machine learning, but it starts from the raw features and learns higher-level feature combinations on its own. The whole process is end to end, optimizing directly for the final output. However, the hidden layers in the middle are a black box: we do not know what features the machine has extracted from the massive data.
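For contrast, here is a minimal end-to-end sketch. The article names no framework, so PyTorch, the tiny network shape, and the reuse of the iris data are all assumptions of mine:

```python
# An end-to-end sketch: raw features in, predictions out, one loss
# training all layers at once. The framework (PyTorch) is an assumption.
import torch
import torch.nn as nn
from sklearn.datasets import load_iris

X_np, y_np = load_iris(return_X_y=True)
X = torch.tensor(X_np, dtype=torch.float32)  # all four raw features, no manual selection
y = torch.tensor(y_np, dtype=torch.long)

model = nn.Sequential(
    nn.Linear(4, 16),  # hidden layer: feature combinations learned, not designed
    nn.ReLU(),
    nn.Linear(16, 3),  # scores for the three classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

# A single loss at the very end drives every layer: end-to-end training.
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.3f}")
```

No manual feature selection happens here; whatever the hidden layer has learned to extract is precisely the "black box" the text refers to.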
Technical Directions
Computer vision
Speech processing
Natural language processing
Planning and decision systems
What We Can Share
Application layer: big data / statistical analysis (short video, e-commerce, etc.)
Programming fundamentals: programming languages and programming ideas; Python basics; advanced Python programming
Data analysis and processing: NumPy for data analysis; Pandas for data processing; Matplotlib; web crawlers
C++: C++ basics; advanced C++ programming
Computer vision: OpenCV; PIL; basics of image processing; hands-on projects
Natural language processing: HanLP; jieba; gensim; NLP basics; hands-on projects
Machine learning algorithms / mathematics: logistic regression; softmax; SVM; decision trees; random forests; GB/GBDT; XGBoost; CatBoost; LightGBM; CRF
Deep learning: basics; convolutional neural networks; recurrent neural networks; Transformer; hands-on projects in computer vision and natural language processing
Deployment: AI algorithm integration and publishing platform construction; high-performance model deployment in C++

References:
[1] Catherine Clifford. The "oracle of A.I.": These 4 kinds of jobs won't be replaced by robots. https://www.cnbc.com/2019/01/14/the-oracle-of-ai-these-kinds-of-jobs-will-not-be-replaced-by-robots-.html, January 2019. Accessed: 2021-6-29.
[2] Catherine Clifford. Google CEO: A.I. is more important than fire or electricity. https://www.cnbc.com/2018/02/01/google-ceo-sundar-pichai-ai-is-more-important-than-fire-electricity.html, February 2018. Accessed: 2021-6-29.
[3] Shana Lynch. Andrew Ng: Why AI is the new electricity. https://www.gsb.stanford.edu/insights/andrew-ng-why-ai-new-electricity, 2017. Accessed: 2021-6-29.
[4] BBC News. Deepfake queen to deliver Channel 4 Christmas message. https://www.bbc.com/news/technology-55424730, December 2020. Accessed: 2021-5-17.
[5] James Vincent. Tom Cruise deepfake creator says public shouldn't be worried about "one-click fakes". https://www.theverge.com/2021/3/5/22314980/tom-cruise-deepfake-tiktok-videos-ai-impersonator-chris-ume-miles-fisher, March 2021. Accessed: 2021-5-25.
[6] Oxford Languages and Google English: artificial intelligence definition. https://languages.oup.com/google-dictionary-en/, May 2020. Accessed: 2021-4-14.
[7] Michael Aaron Dennis. Marvin Minsky, American scientist. Encyclopedia Britannica. https://www.britannica.com/biography/Marvin-Lee-Minsky, January 2021. Accessed: 2021-6-29.
[8] IBM Cloud Education. What is artificial intelligence (AI)? https://www.ibm.com/cloud/learn/what-is-artificial-intelligence. Accessed: 2021-4-14.
[9] Michael Chui, Martin Harrysson, James Manyika, Roger Roberts, Rita Chung, Pieter Nel, and Ashley Van Heteren. Applying AI for social good. McKinsey & Company, technical report.
[10] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Pearson Series in Artificial Intelligence. Pearson Education Limited, 2021.
[11] Tencent Research Institute / Internet Law Research Center of the China Academy of Information and Communications Technology. "Artificial Intelligence."
[12] Eliza Strickland. "The Turbulent Past and Uncertain Future of Artificial Intelligence."
[13] Amirhosein Toosi, Andrea Bottino, Babak Saboury, Eliot Siegel, and Arman Rahmim. "A Brief History of AI: How to Prevent Another Winter." September 9, 2021.
[14] A. M. Turing. I.—Computing machinery and intelligence. Mind, LIX(236):433–460, October 1950.