On Tuesday, OpenAI announced GPT-4, the next update to the technology that powers ChatGPT and Microsoft Bing, the search engine that uses the tech.
GPT-4 is faster, larger and more accurate than the existing ChatGPT model, and it has scored well on several top examinations, including the Uniform Bar Exam taken by those who want to practise law in the US.
According to the company’s announcement blog, the new language model is more creative and collaborative than its predecessors. Whereas the GPT-3.5 model that powers ChatGPT accepts only text inputs, GPT-4 can also use images to generate captions and analyses. But let’s go deeper into the technology.
On March 14, OpenAI announced GPT-4 as a large multimodal model that can process not only text but also images. GPT-3 and GPT-3.5, by contrast, accept only a single modality, text, and can respond only when a user types in a question.
GPT-4 has the new ability to process images and exhibits human-level performance on various professional and academic benchmarks. With broader general knowledge and stronger problem-solving abilities, the model can solve difficult problems with greater accuracy, scoring among the top 10 percent of test takers on a simulated bar exam.
GPT-4 can answer tax-related questions, schedule a meeting among several people and even learn a user’s creative writing style. It can handle around 25,000 words of text, enabling long-form content creation, document search and analysis, and extended conversations.
Differences between GPT-4 and GPT-3
GPT-4 can analyse images: GPT-4 is multimodal, meaning it can understand more than one kind of input. GPT-3 and GPT-3.5 could only read and write text, but GPT-4 can be given images and asked for information about them.
This goes beyond Google Lens, which only searches for information related to an image; GPT-4 can also understand and analyse what it is shown.
GPT-4 will be more accurate: ChatGPT and Bing can occasionally go off the rails while generating output, mixing up facts and producing misinformation.
OpenAI spent six months training GPT-4 using lessons from its “adversarial testing program” and from ChatGPT, which it says produced its “best-ever results on factuality, steerability, and refusing to go outside of guardrails.”
GPT-4 can process more information at once: ChatGPT’s GPT-3.5 model could handle 4,096 tokens, or around 8,000 words, while GPT-4 can take up to 32,768 tokens, or around 64,000 words.
GPT-4 is a large language model (LLM), trained with billions of parameters on vast amounts of data.
Where ChatGPT could process around 8,000 words at a time, GPT-4 can maintain coherence over far lengthier conversations.
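The context limits above can be made concrete with a short sketch. This is illustrative only: the token limits come from the article, the model labels in the dictionary are placeholders, and the 1.3-tokens-per-word ratio is a common rough heuristic for English text, not an exact tokeniser.

```python
# Context-window limits as reported in the article (tokens, not words).
CONTEXT_LIMITS = {
    "gpt-3.5": 4_096,     # the model behind the original ChatGPT
    "gpt-4-32k": 32_768,  # GPT-4's larger context variant
}

def estimate_tokens(text: str) -> int:
    """Rough estimate: English averages about 1.3 tokens per word."""
    return int(len(text.split()) * 1.3)

def fits_in_context(text: str, model: str) -> bool:
    """Check whether a prompt fits within a model's context window."""
    return estimate_tokens(text) <= CONTEXT_LIMITS[model]

document = "word " * 10_000  # a 10,000-word document
print(fits_in_context(document, "gpt-3.5"))    # too long for GPT-3.5
print(fits_in_context(document, "gpt-4-32k"))  # fits in GPT-4's window
```

In practice one would count tokens with the model’s actual tokeniser rather than a word-count heuristic, but the comparison of window sizes is the point here.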
GPT-4 has improved accuracy: GPT-4 is still not fully reliable, but it significantly reduces hallucinations relative to previous models and scores 40 percent higher than GPT-3.5 on factual evaluations. It is also harder to push GPT-4 into producing undesirable outputs.
GPT-4 understands languages other than English as well: GPT-4 was tested in other languages and can accurately answer thousands of multiple-choice questions across 26 languages, so users can interact with it in their native language.
When can you try GPT-4?
Products like Duolingo, Stripe, and Khan Academy have already integrated GPT-4 for various purposes. Free users will have to wait, but GPT-4 is available immediately with the $20-per-month ChatGPT Plus subscription; the free tier currently runs on GPT-3.5.
For free use, Microsoft has confirmed that the new Bing search engine is based on GPT-4, and one can access it from bing.com/chat right now.
Developers can gain access to GPT-4 through its API. OpenAI has announced a waitlist for API access that will begin accepting users later this month.
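For developers waiting on the list, a request to the chat API looks roughly like the sketch below. This only assembles the JSON body; actually sending it to OpenAI’s `https://api.openai.com/v1/chat/completions` endpoint requires an API key and, for GPT-4, waitlist approval. The system prompt and user message here are made-up examples.

```python
import json

def build_chat_request(user_message: str, model: str = "gpt-4") -> dict:
    """Assemble the JSON payload for a chat-completion request."""
    return {
        "model": model,
        "messages": [
            # A system message sets the assistant's behaviour; the user
            # message carries the actual question.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Summarise GPT-4's new capabilities in one line.")
print(json.dumps(payload, indent=2))
# Once granted access, this body would be POSTed to the chat completions
# endpoint with an "Authorization: Bearer <API key>" header.
```

Switching the `model` argument is all it takes to move existing GPT-3.5 code over to GPT-4, which is why the waitlist is largely about capacity rather than integration work.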