While it can’t “know” its own confidence level, it can distinguish between general knowledge (12” in 1’) and specialized knowledge that requires supporting sources.
At one point, I had a ChatGPT memory designed to make it automatically provide sources for specialized knowledge, and it did a pretty good job.
The AI is confidently wrong; that's the whole problem. If there were an easy way to know when it could be wrong, we wouldn't be having this discussion.
This paper tries to do that: arxiv.org/pdf/2404.04689

There are also several other techniques, I think.
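One of the simplest such techniques is to look at the entropy of the model's next-token distribution: if probability mass is spread across many tokens, the model is less certain at that step. This is just an illustrative sketch on made-up logits (the function names and example values are mine, not from the paper), not a full calibration method:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def token_entropy(logits):
    """Shannon entropy of the next-token distribution (nats).

    Low entropy = probability mass concentrated on few tokens (more
    confident); high entropy = mass spread out (less confident).
    """
    return -sum(p * math.log(p) for p in softmax(logits) if p > 0)

# Hypothetical logits for two decoding steps:
confident = token_entropy([10.0, 0.0, 0.0])  # one dominant token
uncertain = token_entropy([1.0, 1.0, 1.0])   # uniform over 3 tokens
print(confident < uncertain)  # True
```

In practice this per-token signal is noisy, which is why papers like the one linked above build more elaborate estimators on top of it.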