While it can’t “know” its own confidence level, it can distinguish between general knowledge (e.g., that there are 12 inches in a foot) and specialized knowledge that requires supporting sources.
At one point, I had a ChatGPT memory instructing it to automatically provide sources for specialized knowledge, and it did a pretty good job.
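A minimal sketch of that kind of standing instruction, here wired in as a system prompt through the OpenAI Python SDK; the instruction wording and model name are placeholders I made up, not the original memory:

```python
# Sketch: a standing instruction that asks the model to separate general
# knowledge from specialized claims and cite sources for the latter.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical wording, not the original ChatGPT memory.
CITE_INSTRUCTION = (
    "Classify each claim you make as general knowledge or specialized "
    "knowledge. For specialized claims, cite a supporting source. For "
    "general knowledge (e.g., 12 inches in 1 foot), no citation is needed."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": CITE_INSTRUCTION},
        {"role": "user", "content": "How does CRISPR-Cas9 cut DNA?"},
    ],
)
print(response.choices[0].message.content)
```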
They could make Siri change its voice and Genmoji based on the degree of certainty of the response:
They could sell different voice packages. Revive the ringtone market.
The AI is confidently wrong; that’s the whole problem. If there were an easy way to know when it might be wrong, we wouldn’t be having this discussion.
This paper tries to do that: arxiv.org/pdf/2404.04689
There are also several other techniques, I think. One of the simpler ones, self-consistency, is sketched below.
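A toy sketch of self-consistency as a confidence signal, under the assumption that you can resample the same question several times at nonzero temperature: the agreement rate of the majority answer serves as a crude confidence proxy. The sampled answers here are hard-coded stand-ins for real model outputs.

```python
from collections import Counter

def self_consistency_confidence(answers: list[str]) -> tuple[str, float]:
    """Return the majority answer and the fraction of samples that agree with it."""
    counts = Counter(a.strip().lower() for a in answers)
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / len(answers)

# Five hypothetical samples for "What year was the transistor invented?"
samples = ["1947", "1947", "1948", "1947", "1947"]
answer, confidence = self_consistency_confidence(samples)
print(f"answer={answer!r} confidence={confidence:.0%}")  # answer='1947' confidence=80%
```

A low agreement rate doesn’t prove the model is wrong, but it’s a cheap signal that the answer deserves a citation or a caveat.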