Modern AI and machine-learning platforms are GPU-intensive and require large data sets to achieve the highest training accuracy. They also require a high-bandwidth, low-latency storage infrastructure to keep a GPU cluster fully saturated with as much data as the application needs. Typical data sets range from terabytes to tens of petabytes, and the data access pattern for each epoch is unique and unpredictable. This calls for a data infrastructure that can instantly and consistently feed large amounts of random data to multiple GPU nodes in real time, all from a single shared data pool. WekaIO Matrix is the world's fastest and most scalable file system for these data-intensive applications, whether hosted on premises or in the public cloud. It has demonstrated scalable performance of over 10 GB/s of bandwidth to a single GPU node, delivering 10x more data than NFS and 3x more than a local NVMe SSD.
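Why is each epoch's access pattern unique and unpredictable? Training frameworks shuffle the sample order every epoch, so the storage system sees a different sequence of random reads on every pass over the data set. The following minimal Python sketch (illustrative only, not WekaIO code; the function name and seeding scheme are assumptions for the example) shows the effect:

```python
import random

def epoch_read_order(num_samples, epoch, seed=0):
    """Return the (hypothetical) order in which samples are read in one epoch.

    Training frameworks reshuffle the data set each epoch, so the sequence
    of reads issued to shared storage changes from one epoch to the next.
    """
    order = list(range(num_samples))
    rng = random.Random(f"{seed}-{epoch}")  # fresh shuffle per epoch
    rng.shuffle(order)
    return order

orders = [epoch_read_order(100_000, epoch) for epoch in range(2)]

# Each epoch touches every sample exactly once...
assert sorted(orders[0]) == sorted(orders[1]) == list(range(100_000))
# ...but in a different order, so reads cannot be prefetched sequentially
# and the storage layer must sustain random access at full bandwidth.
assert orders[0] != orders[1]
```

Because no sequential prefetch can predict the next sample, aggregate random-read bandwidth from a single shared pool becomes the limiting factor for keeping GPUs saturated.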
Press and media coverage
HPCWire: https://www.hpcwire.com/off-the-wire/wekaio-places-in-top-five-of-the-virtual-institutes-io-500-10-node-challenge/
The Register: https://www.theregister.co.uk/2018/11/30/wekaio/
AI Business: https://aibusiness.com/ai-time-market-crucial-heres-get-ahead/
Storage Newsletter: https://www.storagenewsletter.com/2018/08/20/company-profile-wekaio/
Computer Weekly: https://www.computerweekly.com/feature/Hybrid-cloud-file-and-object-pushes-the-frontiers-of-storage
TechTarget-Enterprise Tech: https://www.enterprisetech.com/2018/05/14/accelerating-ai-workloads-with-low-latency-file-access/
The Next Platform: https://www.nextplatform.com/micro-site-content/get-supercharged-ai-ready-infrastructure-with-next-generation-storage-solutions/