San Francisco, January 25: Meta (formerly Facebook) has built an AI supercomputer that, it claims, will be the fastest in the world when it is fully ready in mid-2022. Called the AI Research SuperCluster (RSC), the machine is already being used by Meta researchers to train large models in natural language processing (NLP) and computer vision, with the aim of training models with trillions of parameters in the near future.
“RSC will help Meta’s AI researchers build new and better AI models that can learn from trillions of examples; work across hundreds of different languages; seamlessly analyse text, images, and video together; develop new augmented reality tools; and much more,” Meta engineers Kevin Lee and Shubho Sengupta said in a statement late on Monday.
The first generation of this infrastructure, designed in 2017, has 22,000 NVIDIA V100 Tensor Core GPUs in a single cluster that performs 35,000 training jobs a day.
“We wanted this infrastructure to be able to train models with more than a trillion parameters on data sets as large as an exabyte, which, to provide a sense of scale, is the equivalent of 36,000 years of high-quality video,” said Meta researchers.
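As a rough sanity check on that comparison (my own arithmetic, not from the article or Meta), one can back out the video bitrate implied by equating one exabyte with 36,000 years of footage:

```python
# Back-of-the-envelope check: what sustained video bitrate makes
# one exabyte of data equal 36,000 years of continuous footage?
EXABYTE_BYTES = 10**18
SECONDS_IN_36K_YEARS = 36_000 * 365.25 * 24 * 3600  # ~1.14e12 seconds

bitrate_mbps = EXABYTE_BYTES * 8 / SECONDS_IN_36K_YEARS / 1e6
print(f"Implied bitrate: {bitrate_mbps:.1f} Mbps")
# → Implied bitrate: 7.0 Mbps
```

Roughly 7 Mbps is a typical rate for HD streaming video, so the "high-quality video" framing holds up.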
Early benchmarks on RSC, compared with Meta’s legacy production and research infrastructure, have shown that it runs computer vision workflows up to 20 times faster and trains large-scale NLP models three times faster. That means a model with tens of billions of parameters can finish training in three weeks, compared with nine weeks before.
“RSC is up and running today, but its development is ongoing. Once we complete phase two of building out RSC, we believe it will be the fastest AI supercomputer in the world, performing at nearly 5 exaflops of mixed-precision compute,” said Meta.
Through 2022, Meta will work to increase the number of GPUs from 6,080 to 16,000, which it says will improve AI training performance by more than 2.5x.
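A quick check (my arithmetic, not from the article) shows the planned GPU expansion lines up with the claimed performance gain, assuming near-linear scaling:

```python
# Compare the GPU count growth against the ">2.5x" training-performance claim.
gpus_phase_one = 6_080   # current RSC GPU count
gpus_phase_two = 16_000  # planned count after phase two

scale_factor = gpus_phase_two / gpus_phase_one
print(f"GPU count grows by {scale_factor:.2f}x")
# → GPU count grows by 2.63x
```

A 2.63x increase in GPUs translating to "more than 2.5x" training performance implies Meta expects close to linear scaling efficiency across the enlarged cluster.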
The storage system will have a target delivery bandwidth of 16 TB/s and exabyte-scale capacity to meet increased demand, the company added.
(The above story first appeared on LatestLY on Jan 25, 2022 12:16 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com.)