Source: TMTPOST.
In a sense, the development of the semiconductor industry over the past 50 years has been the fuel of the computing revolution.
Conceptually, a semiconductor is often equated with a chip: a highly miniaturized electronic component that can perform enormous numbers of mathematical operations very quickly and use those calculations to act on the real, physical world.
In short, chips are the brains of our electronic devices. They help computers and other machines weigh alternatives and provide the computing power behind phones, computers, cars, airplanes, and the Internet.
Semiconductors are extremely complex devices built on silicon wafers. Manufacturing them is very expensive, with initial investments running into billions of dollars. One of the great technological miracles of the past 60 years is that chips have kept shrinking while computing performance has kept improving, a pattern known as Moore's law.
Only a handful of companies can actually manufacture semiconductors, and because the technology is so complex, the cost of building a fab has risen sharply, giving the industry a distinctive business model. Across the whole chain there are essentially two kinds of players: companies such as Intel that both design and manufacture their own chips, and the pairing of chip design companies with contract manufacturers, that is, wafer foundries. The figure below shows the world's top ten foundries in the first half of 2018.
Whether or not Moore's law has run its course, the semiconductor industry keeps moving forward. On the road to the 7nm process, only TSMC, Intel, and Samsung remain, and Intel is running into considerable difficulties of its own. From the PC to the Internet to the smartphone, demand for computing performance has kept rising, and the consolidation of the semiconductor industry into a few hands has essentially become a foregone conclusion.
Over the past two years, a "counter trend" in the semiconductor industry has begun to emerge: self-developed chips.
In smartphones, Apple acquired chip maker P.A. Semi in 2008 and launched its first self-developed processor, the A4, two years later. That processor quickly became standard in the iPhone and iPad, and Apple went on to add its own processors to the Apple Watch, Apple TV, and other products. According to the well-known Apple analyst Ming-Chi Kuo, Apple will also move its Mac computers to its own chips after 2020.
Google, on the other hand, has been pushing chip development in the data center. As of November 2018, it had launched three generations of its Tensor Processing Unit (TPU). These chips target the rapidly growing demand for machine learning and strengthen Google's differentiated capabilities in cloud services.
Artificial intelligence (AI) is also bringing new opportunities to the semiconductor industry.
At the most basic level, AI, or machine learning, is best understood as an advanced form of software that performs large volumes of specialized mathematical calculations. A deep neural network, for example, is essentially a very elaborate "voting" algorithm: it reaches a decision by computing and combining the weight of each input variable.
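As a loose illustration of that "weighted voting" idea, the sketch below scores a single example by multiplying each input feature by a weight and summing the votes; the feature values, weights, and threshold are all made up for illustration, not taken from any real model.

```python
import numpy as np

# Hypothetical input features and weights, purely for illustration.
features = np.array([0.8, 0.1, 0.6])   # e.g. pixel or sensor values
weights  = np.array([1.5, -2.0, 0.7])  # how much each feature's "vote" counts
bias = -0.3

# Weighted vote: each feature votes in proportion to its weight.
score = np.dot(features, weights) + bias

# Squash the score into a 0..1 "confidence" and make a yes/no decision.
confidence = 1.0 / (1.0 + np.exp(-score))
decision = confidence > 0.5

print(f"score={score:.3f}, confidence={confidence:.3f}, decision={decision}")
```

A real deep network stacks many such weighted votes in layers, but the core arithmetic is the same.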
Machine learning, and deep learning in particular, is computation repeated over and over again. How do you speed it up? Through parallel computing, which turns out to look a lot like graphics computing. The underlying principles are not identical, but in practice the GPU, a processor built for graphics, has proven very effective for machine learning, and that is what created NVIDIA's "miracle" of the past four years.
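Much of the overlap comes down to large matrix multiplications, which both graphics pipelines and neural networks lean on and which parallelize naturally. A rough sketch, with invented layer and batch sizes:

```python
import numpy as np

# A made-up "layer": 1,024 inputs mapped to 512 outputs.
weights = np.random.randn(1024, 512).astype(np.float32)

# A batch of 256 examples processed at once.
batch = np.random.randn(256, 1024).astype(np.float32)

# One matrix multiplication computes all 256 x 512 weighted sums.
# Every output element is independent of the others, which is why GPUs,
# built to run thousands of independent operations in parallel, handle
# this kind of workload so well.
outputs = batch @ weights
print(outputs.shape)  # (256, 512)
```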
But no one in the industry, apart from NVIDIA, wants the GPU to be the only processor suited to machine learning. From Intel, a traditional chip company, to Google, Facebook, and Amazon, each has its own agenda.
Functionally, AI chips must serve two main workloads: training and inference. The two are interrelated and together make up the full AI pipeline.
Start with training. Once massive amounts of annotated data have been gathered in the data center, engineers begin to "train" on that data; in short, they search the data for a usable model.
Inference, on the other hand, applies the trained model to produce results, what we often call "machine decisions": given an ambiguous instruction from the user, the machine returns an apparently reasonable answer.
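To make the split concrete, here is a minimal sketch of the two phases: a tiny linear model stands in for a network, it is fitted to made-up labeled data (training), and then applied to a new, unseen input (inference). The dataset, learning rate, and iteration count are all assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training: search the labeled data for usable weights ---
# Made-up dataset: 200 examples, 3 features each, with a 0/1 label.
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.7]) > 0).astype(float)

weights = np.zeros(3)
for _ in range(200):                          # repeated passes over the data
    pred = 1 / (1 + np.exp(-(X @ weights)))   # current model's guesses
    grad = X.T @ (pred - y) / len(y)          # how each weight should change
    weights -= 0.1 * grad                     # gradient-descent step

# --- Inference: apply the trained model to a new, unseen input ---
new_example = rng.normal(size=3)
prob = 1 / (1 + np.exp(-(new_example @ weights)))
print("decision:", prob > 0.5, "confidence:", round(float(prob), 3))
```

Training is the expensive, repeated loop over massive data; inference is a single cheap pass per request, which is why the two workloads end up on different hardware.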
To get these features in Google Photos, you first have to upload your data, that is, your photos, to Google's servers, and only after a while do the recommendations appear. That is because Google's training happens entirely in the cloud, and the inference results need a network connection to be delivered. In other words, you need the Internet to use them.
Apple's approach is completely different. Thanks to its self-developed chips and their neural engine, both the iPhone and the iPad can now run AI computation locally, including the training and inference on photo data. Apple keeps the entire process on the device, as shown in the figure below, yet offers similar features such as photo recommendations and natural-language search.
In truth it is hard to say outright which approach is better; each has its own range of applications. In a self-driving car, for example, the AI processing must happen locally, because only then can the latency of a round trip to the cloud, and the accidents it could cause, be avoided.
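A rough back-of-the-envelope calculation shows why that latency matters; the speed and latency figures below are illustrative assumptions, not measurements of any real system.

```python
# Illustrative latency budget: how far a car travels while waiting for a decision.
speed_kmh = 100
speed_ms = speed_kmh * 1000 / 3600          # metres per second (~27.8 m/s)

cloud_round_trip_s = 0.100                  # assumed 100 ms network round trip
on_device_inference_s = 0.010               # assumed 10 ms local inference

print(f"cloud:     {speed_ms * cloud_round_trip_s:.1f} m travelled before a decision")
print(f"on-device: {speed_ms * on_device_inference_s:.1f} m travelled before a decision")
```

Under these assumptions the car covers almost three metres before a cloud-based answer arrives, versus roughly thirty centimetres for on-device inference.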
Seen this way, the AI chip field has three major markets: data-center training, data-center inference, and device/edge inference.
If the chip industry of the past resembled the automobile industry, leaving no openings for latecomers and entrepreneurs, then the three markets created by AI chips offer plenty of room for imagination and have shown the capital markets new possibilities. The figure below covers data only up to 2017.
Looking at the future opportunities through the lens of these three markets:
First, competition in the data-center AI chip market will be fierce. On one hand, the CPU will not exit the market easily; on the other, the data centers are owned by global cloud computing giants such as Amazon, Google, Microsoft, and Alibaba. Their demand for chips is intense, but as noted above they are also developing their own. That does not mean they will stop buying third-party chips, but it does make this market peculiar.
Second, the device inference market is huge but highly fragmented. Different device forms mean very different application scenarios and power budgets; the inference capability needed in a phone is clearly different from that needed in a car. The result will be a market that is both enormous and complex, where giants and start-ups alike have a chance to carve out a position.
Of course, as in the traditional semiconductor industry, AI chips will ultimately become a game for a handful of oligopolists.