
How Moreh’s MoAI Optimizes AMD’s GPU Capabilities for AI

Moreh’s MoAI software extends the AI potential of AMD’s GPUs by enabling seamless adoption, claiming performance beyond NVIDIA’s comparable hardware, and supporting large-scale model training, reshaping perceptions of AMD’s suitability in the AI landscape.
Gangwon Jo – CEO and Co-Founder of Moreh

AMD has been making significant strides in the AI market, especially with the introduction of its MI300X and advances in its ROCm software. A long-standing challenge with AMD GPUs was the lack of a well-established software stack; ROCm aims to address this, making the hardware easier to deploy in larger GPU clusters.

Moreh, a South Korean company, has been working closely with AMD through its flagship AI software, MoAI. The software aims to rival NVIDIA’s CUDA, providing compatibility with major machine-learning frameworks such as PyTorch, TensorFlow, and OpenAI’s Triton. Moreh claims that running its MoAI platform on AMD’s Instinct MI250 accelerator delivered better performance than NVIDIA’s A100, with 116% higher GPU throughput.

According to Moreh’s head of AI, the company has been using AMD GPUs extensively, including more than 400 MI250s and a handful of MI300X accelerators, to train AI models. With Moreh’s software, companies can use AMD GPUs seamlessly with little additional coding, apart from minor differences in code requirements.
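To illustrate what running on AMD GPUs "without additional coding" can look like in practice, the sketch below is ordinary device-agnostic PyTorch; it is a generic example, not MoAI-specific code. On a ROCm build of PyTorch, AMD GPUs are exposed through the familiar `cuda` device name, so the same script can run on AMD or NVIDIA hardware without changes.

```python
# Minimal device-agnostic PyTorch sketch (generic example, not MoAI code).
# On a ROCm build of PyTorch, AMD GPUs are surfaced through the same
# "cuda" device name, so no vendor-specific changes are needed.
import torch
import torch.nn as nn

# Pick whatever accelerator PyTorch can see, falling back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(64, 1024, device=device)
target = torch.randn(64, 1024, device=device)

# One training step: forward, loss, backward, update.
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()

print(f"ran one training step on: {device}")
```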


Moreh’s success lies in software optimized for AMD GPU infrastructure, which the company says delivers better performance in AI model development than NVIDIA GPUs. Moreh prioritizes ease of use, aiming to let customers apply configurations and optimization techniques without programming expertise.

To address the difficulties of training large AI models across many GPUs, Moreh is developing techniques to reduce downtime caused by GPU failures, partitioning the computation into separate fragments so that work can proceed in parallel more efficiently.
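The article does not detail Moreh’s fragment-parallelization approach. As a purely generic illustration of the downtime problem it targets (and not Moreh’s method), the sketch below shows the common checkpoint-and-resume pattern: periodic checkpoints bound how much work is lost when a GPU fails, and an interrupted job restarts from the last saved step on whatever healthy hardware remains. All file names and helpers here are hypothetical.

```python
# Generic checkpoint-and-resume sketch (illustrative only; not Moreh's
# technique). Periodic checkpoints limit the work lost when a GPU fails
# mid-training; the job restarts and resumes from the last saved step.
import torch
import torch.nn as nn

CKPT = "ckpt.pt"  # hypothetical checkpoint path

def pick_device() -> torch.device:
    # In a real cluster a health monitor would exclude failed GPUs;
    # here we simply take the first accelerator PyTorch can see, else CPU.
    return torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

def train(total_steps: int = 500, ckpt_every: int = 100) -> None:
    device = pick_device()
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(),
                          nn.Linear(512, 10)).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    start = 0
    try:  # resume from the last checkpoint if a previous run was interrupted
        state = torch.load(CKPT, map_location=device)
        model.load_state_dict(state["model"])
        opt.load_state_dict(state["opt"])
        start = state["step"] + 1
    except FileNotFoundError:
        pass

    for step in range(start, total_steps):
        # Toy synthetic batch; a real job would pull from a dataloader.
        x = torch.randn(32, 512, device=device)
        y = torch.randint(0, 10, (32,), device=device)
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % ckpt_every == 0:  # bound the work lost to a failure
            torch.save({"model": model.state_dict(),
                        "opt": opt.state_dict(),
                        "step": step}, CKPT)

if __name__ == "__main__":
    train()
```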

Moreover, Moreh has finished training its own Korean-language large language model (LLM) with 221 billion parameters and plans to release upcoming models as open source. Acknowledging the trend toward open-source models, the company intends to provide smaller, accessible models complete with code, weights, inference code, and more.


In October, AMD and the Korean telecommunications company KT jointly invested $22 million in Moreh, lifting its valuation to $30 million. Moreh projects revenue of $30 million by the end of 2023. KT, a long-term partner of Moreh since 2021, has invested in a large AMD GPU cluster, focusing on AI model development and supporting clusters and cloud systems. KT is also venturing into GPU cloud services and language-model APIs, specifically targeting the Korean language.

KT has attested to Moreh’s technology outperforming NVIDIA’s DGX in terms of speed and GPU memory capacity. AMD GPUs are increasingly proving skeptics wrong regarding their suitability for machine learning tasks.

Overall, Moreh’s partnership with AMD, its advanced software solutions, and focus on AI model efficiency seem to be key drivers in reshaping perceptions of AMD’s GPUs in the AI market.

