Thurs May 23, 2024 4:31am PST
Show HN: Can GPT Be Trained for Truth? Exploring Hallucination Reduction
I've been experimenting with ways to encourage ChatGPT to generate more factual outputs. While we know it excels at creative writing, its factual accuracy can sometimes be... well, imaginative.

My approach involved using specific prompts to steer it towards factual responses.
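
To make "specific prompts" concrete, here's a minimal sketch of the kind of thing I mean, using the OpenAI Python SDK. The system message wording, model name, and temperature=0 are just illustrative choices on my part, not a tested recipe:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # System prompt that explicitly asks for factual answers and gives the
    # model permission to say "I don't know" instead of guessing.
    SYSTEM_PROMPT = (
        "You are a careful assistant. Answer only with facts you are "
        "confident about. If you are not sure, say 'I don't know' rather "
        "than guessing."
    )

    def ask_factual(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # placeholder model name
            temperature=0,          # low temperature to reduce creative drift
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask_factual("What year was the first transatlantic telegraph cable completed?"))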

I'm curious to hear from the HN community:

- Have you explored techniques for prompting factual responses in GPT models?

- Are there interesting applications for a "factually-focused" ChatGPT?

Let's discuss ways to push language models toward more factual output through creative prompting.
