
ChatGPT is limited for many practical business use cases outside of code generation. The limitation stems from its training data and the model's propensity to hallucinate. At the time of writing, if you ask ChatGPT about events occurring after September 2021, you will probably receive a response explaining that its knowledge ends at its training cutoff.
This isn't helpful, so how can we rectify it?
Option 1 — Train or fine-tune the model on up-to-date data.
Fine-tuning or training a model can be impractical and expensive. Even putting the compute costs aside, the effort required to prepare the training data sets is reason enough to forgo this option.
Option 2 — Use retrieval augmented generation (RAG) methods.
RAG methods give the large language model access to an up-to-date knowledge base at query time. This is much cheaper than training a model from scratch or fine-tuning one,…
…
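The core RAG loop described above can be sketched in a few lines: retrieve the documents most relevant to the user's question, then prepend them to the prompt sent to the LLM. This is a minimal illustration only; the documents, query, and bag-of-words similarity are stand-ins (a real system would use learned embeddings and a vector store).

```python
import math
from collections import Counter

# Toy knowledge base -- in practice these would be chunks of up-to-date documents
# stored in a vector database. The contents here are invented for illustration.
DOCUMENTS = [
    "The 2023 product launch introduced the v2 API with streaming support.",
    "Our refund policy allows returns within 30 days of purchase.",
    "The Berlin office opened in March 2023 and hosts the ML team.",
]

def _vector(text: str) -> Counter:
    """Bag-of-words term counts; a real system would use embedding vectors."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = _vector(query)
    ranked = sorted(DOCUMENTS, key=lambda d: _cosine(qv, _vector(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When did the Berlin office open?"))
```

The augmented prompt grounds the model's answer in retrieved text, which is how RAG sidesteps both the training cutoff and (partly) hallucination: the model answers from supplied context rather than from memory alone.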
Continue reading this article at:
https://towardsdatascience.com/build-more-capable-llms-with-retrieval-augmented-generation-99d5f86e9779?source=rss—-7f60cf5620c9—4
Feed: Towards Data Science – Medium