Trying Out ChatGPT Fine-tuning

What is Fine-tuning?
GPT-3 has been pre-trained on a vast amount of text from the open internet. When given a prompt with just a few examples, it can often intuit what task you are trying to perform and generate a plausible completion. This is often called "few-shot learning."
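To make "few-shot" concrete, here is a minimal sketch of an in-prompt example setup. It assumes the OpenAI Python SDK (v1.x) and a made-up sentiment-classification task; the model name and reviews are placeholders, not part of the original post.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot learning: the examples live inside the prompt itself.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Classify the sentiment of the review as Positive or Negative."},
        {"role": "user", "content": "Review: The battery lasts all day."},
        {"role": "assistant", "content": "Positive"},
        {"role": "user", "content": "Review: The screen cracked within a week."},
        {"role": "assistant", "content": "Negative"},
        {"role": "user", "content": "Review: Setup was quick and painless."},
    ],
)
print(response.choices[0].message.content)
```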

Fine-tuning improves on few-shot learning by training on many more examples than can fit in the prompt, letting you achieve better results on a wide range of tasks. Once a model has been fine-tuned, you won't need to provide examples in the prompt anymore. This saves costs and enables lower-latency requests.
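As a rough sketch of what that looks like in practice (again assuming the OpenAI Python SDK v1.x; the file name and model IDs below are placeholders, not values from the post), a fine-tuning run boils down to uploading a JSONL file of example conversations and starting a fine-tuning job:

```python
from openai import OpenAI

client = OpenAI()

# 1) Training data: one JSON object per line, each a complete example conversation, e.g.
#    {"messages": [{"role": "system", "content": "..."},
#                  {"role": "user", "content": "..."},
#                  {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# 2) Start the fine-tuning job on a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)

# 3) Once the job finishes, call the resulting model directly,
#    without packing examples into every prompt:
# client.chat.completions.create(
#     model="ft:gpt-3.5-turbo:my-org::abc123",  # hypothetical fine-tuned model ID
#     messages=[{"role": "user", "content": "Review: Shipping was fast."}],
# )
```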


I'll walk through the process we actually built during our study session :)