What steps are involved in LLMOps?
The LLMOps process bears similarities to MLOps, but building an LLM-powered application involves some unique steps owing to the rise of foundation models. Rather than training LLMs from scratch, the emphasis is on adapting pre-existing LLMs to specific downstream tasks.
Over a year ago, Andrej Karpathy described how the development of AI products would change:
The most significant trend [...] is that the traditional approach of training a neural network from scratch for a specific task [...] is rapidly becoming obsolete due to fine-tuning, particularly with the advent of foundation models like GPT. These foundation models are trained by a limited number of institutions with extensive computational resources. Most applications are achieved via light fine-tuning of parts of the network, prompt engineering, or an optional step of data or model distillation into smaller, purpose-specific inference networks. - Andrej Karpathy (OpenAI Co-founder)
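To make one of the techniques in the quote concrete, here is a minimal sketch of prompt engineering: steering a general-purpose foundation model toward a specific task without any retraining. The sentiment-classification task, template, and few-shot examples below are illustrative, not part of the original text.

```python
def build_prompt(review: str) -> str:
    """Build a few-shot prompt that adapts a general LLM to
    sentiment classification via prompt engineering alone."""
    # Hypothetical few-shot examples; in practice these would be
    # drawn from labeled data for the target task.
    examples = [
        ("The battery died in a day.", "negative"),
        ("Setup took two minutes and it just works.", "positive"),
    ]
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return (
        "Classify the sentiment of each review.\n\n"
        f"{shots}\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

prompt = build_prompt("Arrived broken and support never replied.")
```

The resulting string would be sent to an LLM's completion endpoint; the model's next-token prediction completes the final "Sentiment:" line. The point is that the "training" lives entirely in the prompt text, not in the model's weights.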
At first glance, this quote might seem daunting, but it succinctly captures the ongoing shifts in the field. In the following subsections, we'll break down and explore this quote step by step.