Main issue is that Gemini traditionally answers from its training data, while the version answering your search summarises search results, which can vary in quality. And since it's just a predictive text model, it can't really fact-check.
Yeah, when you use Gemini, sometimes it'll just answer based on its training and sometimes it'll cite a source after a search, but you can't seem to control which. It's not like Bing, which always summarizes and links to where it got the information.
I also think Gemini probably uses some sort of knowledge graph under the hood, because it sometimes has very up-to-date information.
I think Copilot is way more usable than this hallucinating Google AI…