While NVIDIA's graphics cards ride the AI and deep-learning wave, AMD's GPU accelerator cards still focus on traditional compute, and the latest Instinct MI200 series is no exception. AMD's future RDNA-architecture graphics cards, however, look set for a major change: integrating a dedicated APD unit to accelerate ML performance.
The U.S. Patent and Trademark Office recently published AMD's latest GPU patent, in which AMD describes a new architecture that stacks an additional chip on top of the GPU. This APD (Accelerated Processing Device) is mainly intended to improve ML performance and contains memory plus one or more ML accelerators.
AMD says this scheme yields ML performance benefits: the on-chip memory can be configured either as a cache or as directly addressable memory, and the design can also include machine-learning arithmetic units that execute matrix operations to speed up ML workloads.
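To give a sense of what such matrix units accelerate: the core of most ML workloads is a tiled matrix multiply-accumulate (GEMM), where hardware processes fixed-size blocks per instruction instead of single scalars. The sketch below is our own minimal NumPy illustration of that access pattern; the function name and tile size are illustrative assumptions, not details from the patent.

```python
import numpy as np

def tiled_matmul(A, B, C, tile=4):
    """Accumulate A @ B into C one tile at a time, mimicking how a
    hardware matrix engine consumes fixed-size blocks per operation.
    Assumes matrix dimensions are multiples of the tile size."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2 and C.shape == (n, m)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # One "matrix instruction": multiply a tile pair and accumulate.
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

# Sanity check: tiled result matches a plain matrix multiply.
A = np.random.rand(8, 8)
B = np.random.rand(8, 8)
C = np.zeros((8, 8))
assert np.allclose(tiled_matmul(A, B, C, tile=4), A @ B)
```

A dedicated unit wins here because each tile multiply-accumulate becomes a single fixed-function operation fed from local memory, rather than many general-purpose shader instructions.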

Put simply, AMD's approach to ML acceleration differs from NVIDIA's: NVIDIA runs ML workloads inside the GPU cores themselves, while AMD is pursuing higher performance through dedicated units, which offer a clear performance advantage for such workloads.
Given AMD's deep experience in chip design, stacking more units onto the GPU is an obvious step. AMD has already combined compute and I/O dies in its Ryzen and EPYC processors, and the upcoming 3D V-Cache versions of Ryzen and EPYC add a stacked cache die, so adding ML units to the GPU is a logical next move.
This GPU patent will no doubt be applied to future RDNA graphics cards; it is not yet clear whether it will arrive in time for the RDNA 3 architecture at the end of next year, or wait another generation for RDNA 4.