I’ve just scanned a section of a book (in French) that unfortunately uses a very fine typeface and a lot of italics that seem to confuse the OCR.
I’m on Linux, so I’m switching between gscan2pdf (which makes use of the remarkable unpaper program) and Master PDF Editor (a proprietary program) to clean up and deskew the scans before OCRing them, since each program has its own strengths and weaknesses. I did this, got the scanned pages looking pretty good, and then OCRed them with Tesseract (which is an option in gscan2pdf). I also tried GOCR, which produced garbage-level results.
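In case it helps anyone, this is roughly what the unpaper step looks like when run by hand (the filenames here are placeholders, and gscan2pdf normally handles the format conversion for you):

```
# unpaper works on PNM images, so convert the scan first (ImageMagick)
convert page.png page.pgm
# deskewing and noise cleanup are enabled by default
unpaper page.pgm page-clean.pgm
```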
Tesseract didn’t do too badly, but it does occasionally mix lines of text together, despite my trying to get them as straight as possible (and doing what I thought was a pretty good job!). It also puts spaces in the middle of words, like this: “J e t’ai m e”, which is annoying to have to go through and fix, especially since there are a lot of them! Can anyone recommend a better approach, some different software maybe, or is this the best I can reasonably hope for?
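For the stray spaces, a quick-and-dirty cleanup pass over the OCR output can at least catch the common two-letter case. This is just a rough heuristic I’d sketch (GNU sed assumed, filenames are placeholders), and it will also merge legitimate single-letter words, so the result still needs proofreading:

```
# collapse a space between two isolated letters: "J e" -> "Je", "m e" -> "me"
# WARNING: rough heuristic; it also merges legitimate single-letter words
# ("il y a" -> "il ya"), so proofread the output afterwards
sed -E ':a; s/\b([[:alpha:]]) ([[:alpha:]])\b/\1\2/g; ta' output.txt > fixed.txt
```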
Thanks to @ZickZack@kbin.social, @brie@venera.social, & @bownage@beehaw.org for their responses. I forgot that Tesseract is mainly used from the command line, something I’m not super proficient with despite being a Linux person. It looks like the gscan2pdf and Master PDF Editor OCR runs produced different results even though, I think, both use the same version of Tesseract.
So the difference must be in the settings, which you can control by running tesseract directly from the command line.
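For example, something like this (the filenames are placeholders, and which --psm value helps will depend on the page layout):

```
# -l fra loads the French language data (tesseract-ocr-fra package)
# --psm 6 treats the page as a single uniform block of text, which can
# help when the default auto-segmentation (--psm 3) merges lines
# --dpi 300 supplies the scan resolution if the image metadata lacks it
tesseract page-clean.png result -l fra --psm 6 --dpi 300
```

That writes the recognized text to result.txt, and you can rerun with different --psm values to see which segmentation mode handles these scans best.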