Microsoft has recently been making headlines for its work advancing artificial intelligence (AI). After a huge investment in OpenAI, the creator of ChatGPT, the company is said to be working on its own AI processors to train large language models (LLMs).
According to people working directly on the project, the software giant has been working on the chip since 2019. Internally codenamed Athena, the chip is currently available to a small group of people at Microsoft and OpenAI for testing purposes.
According to reports, the chip developed by Microsoft will perform better than outsourced alternatives and will save time and money. Other major tech companies, including Amazon, Google, and Facebook, are also building their own AI chips.
An OpenAI supercomputer by Microsoft
Microsoft already operates a supercomputer for the AI research start-up OpenAI to train large language models. For this, Microsoft relies heavily on thousands of Nvidia A100 graphics chips strung together, which power ChatGPT and the Bing AI chatbot. The company invested $1 billion in OpenAI in 2019 with the aim of developing a huge, cutting-edge supercomputer.
Microsoft's supercomputer provides enough computing power to train and retrain large AI models on greater volumes of data over longer periods.
According to Nidhi Chappell, Microsoft's head of product for Azure high-performance computing and AI, the key lesson from the ongoing research is that larger models, trained on more data for longer periods, deliver better accuracy.
Google’s AI chip: TPU
Last year, Google announced its own AI chip, the Tensor Processing Unit (TPU), designed specifically for machine-learning tasks. The TPU is claimed to handle trillions of operations per second while consuming little power.
The Tensor Processing Unit is designed to work with TensorFlow, Google's open-source machine-learning software library.