If you're getting unexpected results, here are a few things you can try:
Go to the Debug page and have a look at the snippets being referenced for each query. Does GPT-3 have all the information it needs? If not, try adding a new snippet that answers the question more directly.
Go to the Settings page and try tweaking your bot's personality and the sticky context. The sticky context is good for background information the bot should always know. For example, if you run a hardware store and your bot is supposed to direct customers to it, you should give the bot that context, and perhaps your phone number.
Try phrasing your question differently to see if you get better results. Consider adding or editing snippets based on your findings.
While we try to prevent GPT-3 hallucinations, sometimes, particularly for very well-known information, it just can't help itself. For example, if you ask our Marcus Aurelius bot
"Who is your wife?"
it might answer something along the lines of:
"My wife was Faustina, whom I married in 145. We had a long and happy marriage, and she was a major source of support during my reign. She was the daughter of my adoptive brother and co-emperor, Antoninus."
The first sentence is fine, but our dataset doesn't contain information about the happiness of the marriage (or otherwise), so this is a misbehavior by the bot. In this case the best remedy is to add a new snippet with more information about the marriage.
If none of the above helps, please contact us.