So what's been happening? Well, a steady stream of reports of the same bug - the app runs out of memory if you try to read PDF documents with hundreds of pages - and some great feedback and suggestions via UserVoice.
I was surprised when people first used VelOCRaptor on large PDF documents, but then I'd had my mind in the world of little scanners, and reckoned without the Internet. So people have been trying to push whole PDF books through the thing, and it breaks.

It took very little work to find the source of the memory leak, but fixing it is another matter. Basically, the RubyCocoa layer we use for the guts of the PDF reading and writing isn't up to the job, and I'm having to rewrite much of that code in Objective-C - the native language of Mac OS X. It's irritating, but that's just a fact of programmer life, so I'm biting the bullet and getting on with it. Wish me luck.