Over the past few months, there has been a huge amount of hype and speculation about the implications of large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude, Meta’s LLaMA, and, most recently, GPT-4. ChatGPT, in particular, reached 100 million users in two months, making it the fastest-growing consumer application of all time.
It isn’t clear yet just what kind of impact LLMs will have, and opinions vary hugely. Some experts argue that LLMs will have little impact, pointing to early academic research suggesting that their capabilities are limited to formal linguistic competence, or that even a near-infinite volume of text-based training data remains a severe constraint. Others, such as Ethan Mollick, argue the opposite: “The businesses that understand the significance of this change — and act on it first — will be at a considerable advantage.”
What we do know now is that generative AI has captured the imagination of the wider public and that it is able to produce first drafts and generate ideas virtually instantaneously. We also know that it can struggle with accuracy.