Microsoft is reportedly developing its own AI chips to train large language models, aiming to reduce its dependence on chip maker Nvidia and avoid high costs.

Microsoft has been developing the chips in secrecy since 2019, and they are now being tested by both Microsoft and OpenAI employees to evaluate their performance with the latest large language models, such as GPT-4, reports The Verge.

The Information first reported the news.

Nvidia is currently the leading supplier of AI server chips, and companies are racing to acquire them. Estimates indicate that OpenAI will need more than 30,000 of Nvidia’s A100 GPUs to commercialize ChatGPT, according to the report.


As Nvidia rushes to produce as many AI chips as possible to meet the increasing demand, Microsoft is reportedly exploring an in-house approach in the hopes of cutting costs on its AI initiatives.

Microsoft has reportedly made significant progress on Athena, the codename for its project to develop its own artificial intelligence chips, the report said.

While it’s unclear whether Microsoft will ever make these chips available to Azure cloud users, the tech giant is said to be seeking to make its AI chips more widely available inside Microsoft and OpenAI as early as next year.

The company also reportedly has a road map for the chips that includes multiple future generations.

Meanwhile, Microsoft has announced the addition of AI-powered Bing capabilities to the SwiftKey app (a third-party keyboard) on iOS and Android.


This new addition will allow users to chat with the AI chatbot directly from their mobile keyboard and search for things without having to switch between apps.