We could use some help. The fess_crawler.queue index has approximately 1.6 million documents that do not appear to be processing. Should this queue decrease in size, and are there settings we need to adjust to make more progress?
The size of the queue depends on the target source being crawled. If you stop a crawler, data remains in the queue. You can delete it using Clear Crawler Indices on the Maintenance page.
Thank you, Shinsuke. It appears that the queue is not resuming. If we delete the crawler queue, will it automatically re-index and start downloading again?
The crawler indices are temporary indices used during crawling, not for searching. So you can delete them on the Maintenance page before starting the crawler; they will be created again when the crawler starts.
To resume a crawl, you need to add a sessionId setting to the crawler job script, as below.
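A minimal sketch of the change, assuming the default crawler job script configured under Scheduler (the exact default script may differ by Fess version, and "previous-session-id" is a placeholder you must replace with the session ID of the crawl you want to resume):

```groovy
// Default Crawler job script with a sessionId added so the crawler
// resumes the existing queue instead of starting a new session.
// Replace "previous-session-id" with the session ID of the stopped
// crawl (it appears in the crawler logs and crawling info).
return container.getComponent("crawlJob")
    .logLevel("info")
    .sessionId("previous-session-id")
    .execute(executor);
```

This script is edited on the Admin > Scheduler page for the Default Crawler job; the crawler then picks up the queued URLs belonging to that session.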