
Which chip has the largest number of transistors at present?

The chips with the most transistors almost all arrived in the second half of this year. Let's take a look at them.
Alibaba Pingtouge Yitian 710 – 60 billion transistors
In October this year, Alibaba released its server-grade chip, the Yitian 710. Built on a 5nm process, it packs 60 billion transistors, making it the chip with the largest transistor count to date!
Architecturally, the Yitian 710 adopts ARM. ARM released the Neoverse processor IP for data-center scenarios in 2018, and Amazon and NVIDIA subsequently launched servers based on the ARM architecture, proving ARM's viability in the server field. Compared with x86, the ARM instruction set is more efficient, instructions execute faster, the decode circuitry is simpler, and cost and power consumption are relatively low. On top of the ARM architecture, Pingtouge also adopted multi-core interconnect and chip-to-chip interconnect technologies, with special optimization of the on-chip interconnect: a new flow-control algorithm reduces system back-pressure, effectively improving system efficiency and scalability and turning high single-core performance into high whole-system performance.
In addition, a single Yitian 710 chip supports 128 cores, a clear advantage over the 64 cores of the Kunpeng 920 and Amazon's AWS Graviton. Its DDR5 memory and PCIe 5.0 support are also industry-leading; note that AMD and Intel have only announced PCIe 5.0 for their next-generation processors, and this interface will greatly improve the Yitian 710's transfer rate and performance.
Combined with Alibaba's accumulated experience in system optimization and hardware-software co-design, this allows the Yitian 710's performance to be fully unleashed, serving the cloud computing market at lower cost and with higher computing power.
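For a sense of what the DDR5 and PCIe 5.0 support mentioned above buys in raw bandwidth, here is a back-of-the-envelope sketch. The per-lane transfer rates and 128b/130b encoding come from the PCIe specs; the x16 link width and the DDR5-4800 / DDR4-3200 speed grades are illustrative assumptions, not published Yitian 710 figures.

```python
# Back-of-the-envelope bandwidth comparison for the interfaces mentioned above.
# The x16 link width and the DDR speed grades are illustrative assumptions.

def pcie_bandwidth_gbs(gt_per_s, lanes, encoding=128 / 130):
    """Usable one-direction bandwidth in GB/s for a PCIe 3.0+ link."""
    return gt_per_s * encoding / 8 * lanes  # GT/s -> GB/s per lane, times lane count

def ddr_bandwidth_gbs(mt_per_s, bus_bytes=8):
    """Peak bandwidth in GB/s for one 64-bit DDR channel."""
    return mt_per_s * bus_bytes / 1000

print(f"PCIe 4.0 x16: {pcie_bandwidth_gbs(16, 16):.1f} GB/s per direction")  # ~31.5
print(f"PCIe 5.0 x16: {pcie_bandwidth_gbs(32, 16):.1f} GB/s per direction")  # ~63.0
print(f"DDR4-3200 per channel: {ddr_bandwidth_gbs(3200):.1f} GB/s")          # 25.6
print(f"DDR5-4800 per channel: {ddr_bandwidth_gbs(4800):.1f} GB/s")          # 38.4
```

PCIe 5.0 roughly doubles the per-lane rate of PCIe 4.0, and DDR5 raises per-channel bandwidth by about half, which is where the "industry-leading" claim comes from.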
Apple M1 Max – 57 billion transistors
Almost at the same time as the Yitian 710, Apple released the M1 Max chip, also on a 5nm process. Its transistor count reaches 57 billion, the CPU still has 10 cores (eight performance cores and two efficiency cores), and the Neural Engine has 16 cores.
The number of GPU cores has doubled again to 32, and performance has roughly doubled as well: 4096 execution units, up to 98,304 concurrent threads, 10.4 TFLOPS of floating-point compute, a texture fill rate of 327 billion texels per second, and a pixel fill rate of 164 billion pixels per second, making it the most powerful chip Apple has designed so far.
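As a quick sanity check on how those headline GPU numbers relate to one another, the sketch below simply divides them out. The 2-FLOPs-per-EU-per-cycle figure (one fused multiply-add) and the implied clock are illustrative assumptions, not Apple-published values.

```python
# Sanity check of how the M1 Max GPU figures above relate to each other.
gpu_cores = 32
execution_units = 4096
concurrent_threads = 98_304
peak_tflops = 10.4

eus_per_core = execution_units / gpu_cores              # 128 EUs per GPU core
threads_per_eu = concurrent_threads / execution_units   # 24 threads in flight per EU
flops_per_eu_per_cycle = 2                               # one FMA counted as 2 FLOPs (assumption)
implied_clock_ghz = peak_tflops * 1e12 / (execution_units * flops_per_eu_per_cycle) / 1e9

print(f"{eus_per_core:.0f} EUs/core, {threads_per_eu:.0f} threads/EU, "
      f"implied clock ~{implied_clock_ghz:.2f} GHz")
```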
NVIDIA A100 Tensor Core GPU – 54.2 billion transistors
Built on TSMC's 7nm N7 process, the GA100 GPU based on the NVIDIA Ampere architecture powers the A100, packing 54.2 billion transistors into a die of 826 square millimeters.
AMD EPYC – tens of billions of transistors
AMD's EPYC series of server processors has already broken through the ten-billion-transistor mark.
Cambricon Siyuan 370 – 39 billion transistors
The Siyuan 370's on-paper specs are excellent: it uses a 7nm process, integrates 39 billion transistors, and delivers up to 256 TOPS (INT8) of compute, twice that of the second-generation Cambricon product, the Siyuan 270.
The Siyuan 370 also claims three "firsts" plus one industry first:
1. The first cloud AI chip in China to support LPDDR5 memory
2. Cambricon's first AI chip to adopt chiplet technology
3. Cambricon's first cloud chip to support mainstream domestic and international encryption standards
4. Its new inference acceleration engine, MagicMind, is the industry's first inference engine based on MLIR graph-compilation technology to reach commercial deployment
Trillion-transistor behemoths
In fact, simply counting transistors is of limited significance; it depends on the specific application scenario. For example, Samsung once produced a large flash memory chip, an eUFS part with 2 trillion transistors. Cerebras also released its first-generation WSE chip in 2019, with 400,000 cores and 1.2 trillion transistors, built on TSMC's 16nm process.
This year, Cerebras announced the Wafer Scale Engine 2, which has 2.6 trillion transistors and 850,000 AI-optimized cores. It is also the largest chip by area, at 46,225 square millimeters: Cerebras quite literally turned essentially an entire 300 mm silicon wafer into a single chip!
The second-generation Wafer Scale Engine is purpose-built for supercomputing tasks; with an area comparable to a thin notebook, it obviously cannot be used in portable electronics or laptops.
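A quick division on the figures quoted in this article shows that the WSE-2's enormous transistor count comes almost entirely from its area rather than from unusual density; its transistor density is in the same ballpark as the A100's, since both are TSMC 7nm parts.

```python
# Transistor-density comparison using the figures quoted in this article.
chips = {
    "Cerebras WSE-2 (7nm)": (2.6e12, 46_225),  # transistors, die area in mm^2
    "NVIDIA A100 (7nm)":    (54.2e9, 826),
}
for name, (transistors, area_mm2) in chips.items():
    density = transistors / area_mm2 / 1e6  # million transistors per mm^2
    print(f"{name}: {density:.0f} M transistors/mm^2")  # ~56 vs ~66
```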
From a performance point of view, more transistors generally means more computing power, but at the chip level performance is still affected by other factors, such as the architecture and the communication latency between multiple dies.
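To make the die-to-die point concrete, here is a toy latency model; the 10 ns local and 40 ns cross-die latencies and the traffic fractions are purely illustrative numbers, not measurements of any real chip. Once a meaningful share of accesses has to cross a die boundary, the effective speedup falls well short of what the raw core or transistor count would suggest.

```python
# A toy model (illustrative numbers only) of why adding dies does not scale
# performance linearly once some traffic must cross die boundaries.

def effective_speedup(n_dies, off_die_fraction, local_ns=10, remote_ns=40):
    """Latency-bound speedup vs a single die, assuming a fraction of accesses
    pays the slower cross-die latency (all latencies are made-up examples)."""
    avg_latency = (1 - off_die_fraction) * local_ns + off_die_fraction * remote_ns
    return n_dies * (local_ns / avg_latency)

for frac in (0.0, 0.1, 0.3):
    print(f"4 dies, {frac:.0%} off-die traffic -> {effective_speedup(4, frac):.1f}x")
    # 0% -> 4.0x, 10% -> 3.1x, 30% -> 2.1x
```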
Of course, we are simply comparing transistor counts here, not chip volume. I say volume rather than area because, with the gradual adoption of 3D stacking technology, transistor counts can be expected to keep climbing. Once the 3nm process reaches mass production, our mobile and desktop SoCs should have no problem breaking through the hundred-billion-transistor mark.