Amazon Web Services (AWS) plans to use Nvidia’s NVLink Fusion technology in its next-generation AI chip, Trainium4. The move will let AWS build large server clusters whose chips exchange data much faster, and fast inter-chip communication is vital for training complex AI models. AWS announced the plan during its major cloud conference.
Meanwhile, AWS has rolled out new servers powered by Trainium3 chips. Each server packs 144 chips and delivers more than four times the computing power of the previous generation while using 40 percent less electricity. That leap in performance and efficiency should help businesses run AI workloads faster and more cheaply.
With NVLink Fusion on Trainium4, AWS aims to pair the strengths of Nvidia’s chip-linking technology with its own high-performance, cost-efficient AI infrastructure. The upgrade could attract more customers running large AI training workloads. The new Trainium3 servers are already available for use.
In short, Amazon’s move boosts its AI server power, cuts energy costs, and sharpens its competitive position in global cloud-AI services.
#AIinfrastructure
#AmazonAWS
#Nvidia
#NVLinkFusion
#Trainium4
#Trainium3
#AIchips
#AIservers
#CloudAI
#HighPerformanceComputing