I’m trying to work out how the crawling/indexing process works.
I crawled one site (which, for some reason, never finished crawling) and got 10,000+ results for a certain word.
I then crawled it again, and now it’s down to fewer than 1,250 results for the same word, and the crawler log says nothing is getting indexed: “pprocessing no docs in indexing queue”.
Any idea what’s going on here?
When you re-crawl a site, do all the previously indexed documents get automatically purged?