I was wondering how this approach could be applied to a RAG system.
For example, could we modify `self_responses` to draw on RAG-retrieved passages, so the consistency check also improves the factuality of the system?
Alternatively, could you suggest any other approaches for verifying that an LLM answer (one already produced from RAG-retrieved context) is still not a hallucinated one?
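To make the question concrete, here is a minimal sketch of what I have in mind: instead of comparing the answer against sampled `self_responses`, compare each answer sentence against the retrieved passages and flag sentences with little support as possible hallucinations. Everything here is hypothetical (the function names and the token-overlap scoring are my own stand-ins; a real check would presumably use an NLI model or embedding similarity):

```python
# Hypothetical sketch: score each answer sentence against the retrieved
# passages (rather than sampled self_responses) and flag sentences with
# little lexical support as possible hallucinations. Token overlap is a
# crude stand-in for an NLI- or embedding-based entailment check.
import re


def support_score(sentence: str, passages: list[str]) -> float:
    """Fraction of the sentence's tokens found in the best-matching passage."""
    tokens = set(re.findall(r"\w+", sentence.lower()))
    if not tokens:
        return 0.0
    best = 0.0
    for passage in passages:
        passage_tokens = set(re.findall(r"\w+", passage.lower()))
        best = max(best, len(tokens & passage_tokens) / len(tokens))
    return best


def flag_unsupported(answer: str, passages: list[str], threshold: float = 0.5):
    """Return (sentence, score) pairs whose support falls below the threshold."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer) if s]
    return [(s, support_score(s, passages))
            for s in sentences
            if support_score(s, passages) < threshold]


passages = ["Paris is the capital of France."]
answer = "Paris is the capital of France. It has a population of 90 million."
print(flag_unsupported(answer, passages))  # flags the unsupported second sentence
```

Would something along these lines fit into the existing pipeline, or is there a better place to hook the retrieved context in?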