Suggest Indexer resets suggested words

(from github.com/bhdzllr)
Hello,

thank you for your work on Fess.

I have a problem with the suggest indexer. The schedule */14 * * * * is used, which fires the suggest indexer every 14 minutes. Sometimes when the indexer runs, the words already found are deleted and the indexer starts again from 0.
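
For reference, the step is in the minute field of that cron expression, so the job fires at fixed minutes within each hour:

    # minute hour day-of-month month day-of-week
    */14 * * * *   # runs at minutes 0, 14, 28, 42 and 56 of every hour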

The suggest indexer then sometimes finds fewer words than 14 minutes before. If my app has around 18000 words, after a rerun it sometimes has only about 800 words or, even worse, just 1.

I also noticed that while the indexer runs, it counts up to about 23000 words, but when the indexer is finished the count is lower, at only about 18000 words.

Is there a setting for this behavior, or what could the problem be? The Fess version is 11.1.1.

Thank you.

(from github.com/marevol)
We could not reproduce it on Fess 11.4.
Could you try it on Fess 11.4?

(from github.com/bhdzllr)
Hello,
thank you for your help. I tried updating my Fess installation (via Docker) but could not test the main issue yet, because all my crawlers, labels, schedulers, etc., and the already indexed documents are gone. Is there a way to expose these files and configurations outside of the Docker container so I can keep and reuse them when building a new container from a new image?
Thank you.

EDIT: @marevol

This is my old mapping of volumes in my docker-compose file for version 11.1:

    volumes:
     - /data/fess/data:/opt/fess/es/data
     - /data/fess/config:/opt/fess/es/config

I tried mapping these into the new version 11.4 to reuse the users, crawlers, etc., but maybe I did it wrong, because Fess does not start. Without the mapping it works, but I want to start it with the data (users, crawlers, etc.) from the previous version.

    volumes:
     - /data/fess/data/node_1/nodes:/var/lib/elasticsearch/nodes
     - /data/fess/config:/var/lib/elasticsearch/config
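
What might work instead, assuming the 11.4 image keeps the Elasticsearch data under /var/lib/elasticsearch and its configuration under /etc/elasticsearch (both container paths are only my assumption about the image layout, not verified), is mapping the whole directories:

    volumes:
     # map the complete data directory instead of only the nodes subdirectory
     - /data/fess/data:/var/lib/elasticsearch
     # config is assumed to live in /etc/elasticsearch, not under the data directory
     - /data/fess/config:/etc/elasticsearch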

(from github.com/bhdzllr)
It seems that the main problem with the suggest indexer was caused by too many crawlers crawling large websites at the same time, combined with too little memory on the server. I created different schedules for the crawlers and increased my server's memory, and I have not noticed any problems for a few weeks now.
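
As an illustration, staggering just means giving each crawler job its own cron expression in the Fess scheduler so the heavy jobs never overlap (the job names and times here are only examples, not my actual setup):

    Web Crawler - Site A: 0 1 * * *      # 01:00 daily
    Web Crawler - Site B: 0 3 * * *      # 03:00 daily
    Suggest Indexer:      */14 * * * *   # unchanged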

For the second problem with the volume mapping: I just recreated my crawlers and schedulers manually after the upgrade.