Answer question <q> using context <c>.
Suppose <c> is a statement that contradicts what the model learned during training.
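To make the setup concrete, here is a minimal sketch of such a counterfactual-context probe. Everything in it is illustrative: the prompt wording, the fact, and the contradiction are all made up for the example, and no particular model or API is assumed.

```python
# Sketch of the setup described above: pair a question <q> with a context
# <c> that deliberately contradicts widely known training-time knowledge,
# so one can check whether a model answers from the context or from its
# parametric knowledge. All facts and wording here are hypothetical.

def build_prompt(question: str, context: str) -> str:
    """Format a context-grounded QA prompt."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Parametric knowledge says Paris; the counterfactual context says Lyon.
question = "What is the capital of France?"
counterfactual_context = "The capital of France is Lyon."

prompt = build_prompt(question, counterfactual_context)
print(prompt)

# A context-faithful model should answer "Lyon"; a model that falls back
# on its parametric knowledge answers "Paris". Comparing these two
# behaviors is the core of a knowledge-conflict evaluation.
```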
There are situations where one wants the LLM to rely on its parametric (training) knowledge, and others where one wants it to defer to the provided context. Is there any formal research in this area?