The engineers over at NVIDIA claim to have harnessed the power of Thor. Actually, I made that up. Though a large chunk of NVIDIA’s press release about its new autonomy chip, codenamed Thor, is just as arbitrary (or it was just written by a mischievous Loki to confuse us all). In all fairness, if there is one thing I have noticed that all chip makers are guilty of, whether it be NVIDIA, Tesla, Apple, or probably others, it is that they present the performance of their new products using metrics that make it impossible to truly comprehend how much more useful the new product is compared to what is already on the market or sold by competitors. But more on that later.
First, the announcement: NVIDIA has unveiled that in 2025 it will launch a new autonomy chip called Thor. This chip will offer a staggering 2,000 teraFLOPS of computational power and can replace multiple kinds of processors currently used in a vehicle, including those for infotainment, various vehicle controls, autonomous driving, ADAS, and more.
With the chip shortage that automakers have been suffering from for what feels like years now (oh, it has been years), this is a welcome change that could in theory solve some issues. At the same time, it’s also a rather sneaky move, as it will force NVIDIA’s automaker clients to drop most of their other chip suppliers or pay a lot of money for a redundant second chip. You see, for the sake of safety, redundancy is necessary: that way, if one chip fails, the car can continue to operate normally. In other words, every car will likely need two of these Thor chips, something NVIDIA has taken into account with its “NVLink-C2C chip interconnect technology.” I’m sure NVIDIA is more than pleased that all of its automotive clients will need to buy two of these expensive powerhouse chips for each car they plan to sell.
How fast is this chip really?
All these years, NVIDIA has benchmarked the performance of its autonomy chips in TOPS (tera operations per second) when performing tasks in INT8 (an 8-bit integer number format). This time around, it decided to benchmark Thor in TFLOPS (tera floating-point operations per second) when performing FP8 (an 8-bit floating-point format). I would say that they are comparing apples to oranges, but that would undercut the fact that they changed the scales on both ends of the graph. Instead I will say that they basically switched from one race track to another with a very different shape while also switching from an internal combustion engine vehicle to an electric one.
NVIDIA also published a graph full of inconsistencies to show how much more powerful its new chip is by comparing Thor to NVIDIA’s previous chips. The graph shows the performance of those older chips in TOPS (measured in INT8) alongside the new one, only rather than giving Thor’s figure in INT8 TOPS, it shows the same 2,000 figure that we know was measured in FP8 TFLOPS. Then, for Orin, the bottom scale says 250 while the scale on the left appears to place it around 500. Whoever approved this press release and graph within NVIDIA’s PR and marketing departments deserves a stern talking-to and should perhaps attend some kind of processor terminology seminar.
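To make the mismatch concrete, here is a minimal sketch (using only the figures cited above; the comparison helper is my own illustration, not anything NVIDIA publishes) of why those two columns shouldn’t share an axis:

```python
# Peak-throughput claims as cited in this article. Both are "tera 8-bit
# operations per second" of some kind, but they count different operations.
claims = {
    "Orin": {"peak": 250,  "unit": "INT8 TOPS"},   # integer operations
    "Thor": {"peak": 2000, "unit": "FP8 TFLOPS"},  # floating-point operations
}

def speedup(new, old):
    """Return new/old only when both figures use the same metric."""
    if claims[new]["unit"] != claims[old]["unit"]:
        raise ValueError(
            f"{new} ({claims[new]['unit']}) vs {old} ({claims[old]['unit']}): "
            "different numeric formats, so the ratio is meaningless"
        )
    return claims[new]["peak"] / claims[old]["peak"]

try:
    print(speedup("Thor", "Orin"))
except ValueError as err:
    print(err)  # this is exactly the comparison the graph implies
```

The 8× jump the graph suggests is, in other words, a ratio of two different units, and NVIDIA never states the conversion it is silently assuming.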
The only thing in NVIDIA’s press release that is a somewhat apples-to-apples comparison is the number of transistors. Thor will have 77 billion transistors, roughly 4.5 times the 17 billion in Orin (the chip NVIDIA finally started shipping not long ago).
What about Tesla?
Most of you will likely be wondering how this compares to Tesla. Tesla’s HW3 autonomy chip can do 144 TOPS, and in the smallest footnote in the history of footnotes during the Dojo supercomputer announcement, Elon commented during the Q&A that HW4 will have 4× as much power as HW3 does, which would equal around 576 TOPS. Suffice it to say, when it comes to theoretical compute power benchmarks, I think we can quite safely place HW4 somewhere in between NVIDIA’s current Orin autonomy chip and its future all-in-one Thor chip. That, however, is not a defeat; it’s likely a much more efficient allocation of the necessary resources.
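For what it’s worth, the back-of-the-envelope math behind that placement is simple (a sketch using only the figures above; the HW4 number is an estimate derived from Elon’s 4× comment, not a published spec):

```python
# Peak-throughput figures cited in this article, in INT8 TOPS unless noted.
HW3_TOPS = 144             # Tesla HW3, published figure
HW4_TOPS = 4 * HW3_TOPS    # ~576, estimated from the "4x HW3" Q&A comment
ORIN_TOPS = 250            # NVIDIA Orin
THOR_FP8_TFLOPS = 2000     # NVIDIA Thor, in FP8 TFLOPS (a different metric)

print(f"Estimated HW4: {HW4_TOPS} TOPS")                      # 576
print(f"More than Orin ({ORIN_TOPS} TOPS)? {HW4_TOPS > ORIN_TOPS}")
print(f"Less than Thor's headline number ({THOR_FP8_TFLOPS} FP8 TFLOPS, "
      f"not directly comparable)? {HW4_TOPS < THOR_FP8_TFLOPS}")
```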
In practice
In practice, there are a lot of things about Thor that worry me. The first is that NVIDIA didn’t specify how much power the chip will use. I also worry about how efficiently neural nets running on NVIDIA’s more general-purpose hardware will perform compared to the dedicated NPU design Tesla has gone for, and about how many automakers will have the technical know-how to even take advantage of a magnificent chip like this. You can definitely count out the likes of Ford, VW, GM, and many other legacy automakers, unless there are some very drastic employment and leadership changes before 2025.
One of NVIDIA’s current clients, XPeng, is one of the few that have the software engineering talent required to work with NVIDIA and take advantage of such a sophisticated chip. However, in quite an extraordinary feat of programming, XPeng was already able to introduce City NGP (autonomy software with similar functionality to Tesla’s FSD) on a mere 20 TOPS NVIDIA Xavier chip, something even Tesla was unable to accomplish. Now that XPeng is transitioning to Orin, I can hardly imagine when it will max out the 250 TOPS that chip affords the company.
If anything, NVIDIA should focus on two aspects: making its chips more power-efficient, and writing more software itself so that other automakers can actually utilize its chips. If NVIDIA really wants to succeed, it will have to become more like Intel’s Mobileye, which offers a full suite of autonomy hardware and software that companies like NIO are more than happy to take advantage of*.
*Editor’s note: I’m not sure if NVIDIA is a whole lot different or very far away from that, based on my last interview with Danny Shapiro, Senior Director of Automotive at NVIDIA, but you can listen to our conversation via one of the podcast embeds below and judge for yourself. We’re also ripe for another discussion soon, since that interview is from February 2021!