Advancing Artificial Intelligence And Creating The Technology Of The Future
The global artificial intelligence (AI) market is expected to reach the trillion-dollar mark by 2030, and just as it has done with the global automotive industry, Tesla looks set to absorb a considerable amount of market share. This is all thanks to Dojo, the supercomputer set to drive the most sophisticated (and fastest) AI training machine to date.
What is Project Dojo, and why does it matter?
Necessity breeds innovation: Tesla’s million-plus vehicle fleet generates enormous amounts of real-world data, and training the neural networks behind its self-driving systems on that data is hugely computationally demanding. Rather than be limited by the general-purpose graphics processing units (GPUs) available, Tesla decided to build something better.
It would be difficult to overstate the significance of Tesla’s decision to go in-house. Not only is the company now developing bespoke hardware tailored to its specific needs; the move is also a bold statement of intent to the tech monopolies currently dominating the AI hardware market. There’s a new kid on the block, and the speed of its development is a sight to behold.
Dojo was unveiled at Tesla’s AI Day last year, where Elon Musk implied it has the potential to reach exascale: 1 quintillion (10^18) floating-point operations per second (flops), or 1,000 petaflops. In supercomputing terms, it’s the milestone to reach for.
Let’s start crunching numbers.
To put these figures into perspective: the earliest supercomputer, the 1964 Control Data Corporation 6600, could handle up to 3 million flops. Remarkable for its time, yes, but roughly 147 billion times slower than the fastest known supercomputer today, which manages up to 442 petaflops.
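The arithmetic behind these comparisons can be sanity-checked in a few lines (the variable names are just illustrative, not official specs):

```python
# Rough flops arithmetic behind the comparisons above.
EXAFLOP = 1e18    # 1 quintillion flops
PETAFLOP = 1e15

cdc_6600_flops = 3e6        # CDC 6600 (1964): ~3 million flops
fastest_today_flops = 442 * PETAFLOP  # 442 petaflops, as cited above

# Exascale expressed in petaflops
exa_in_petaflops = EXAFLOP / PETAFLOP

# How many times faster today's fastest machine is than the CDC 6600
speedup = fastest_today_flops / cdc_6600_flops

print(f"{exa_in_petaflops:.0f} petaflops per exaflop")
print(f"~{speedup / 1e9:.0f} billion times faster than the CDC 6600")
```

Running this confirms both figures in the text: an exaflop is 1,000 petaflops, and 442 petaflops works out to roughly 147 billion times the CDC 6600's throughput.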