(from github.com/doct)
Setting client.maxContentLength=100000000 in the Config Parameters of a File Crawling Configuration only affects some of the crawled files, but not others. It is not clear when the value is applied and when it is not.
Whether the configured value is applied does not appear to depend on the file type.
This observation is based on entries in the “Failure URL” section of System Info.
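For context, my understanding is that the crawler compares each file's size against this limit and records the file under Failure URL when the limit is exceeded. A minimal sketch of that kind of guard (class name and exception are illustrative only, not the actual fess-crawler API):

```java
// Illustrative sketch: roughly how a content-length cap is typically
// enforced in a crawler client. Names here are hypothetical, not the
// actual fess-crawler API.
public class ContentLengthGuard {

    private final long maxContentLength;

    public ContentLengthGuard(final long maxContentLength) {
        this.maxContentLength = maxContentLength;
    }

    public void check(final String url, final long contentLength) {
        // Files above the cap should be rejected consistently,
        // regardless of type -- which is not what I observe.
        if (contentLength > maxContentLength) {
            throw new IllegalStateException("Content length " + contentLength
                    + " exceeds " + maxContentLength + " for " + url);
        }
    }
}
```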
Edit: reducing client.maxContentLength causes the non-default value to be applied more often. I suspect a Java VM memory issue; however, I can’t find any error log entries that would confirm this.
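If the client buffers a whole file on the heap before the limit is checked, the failures could come from allocation pressure rather than from the configured cap, which would match the correlation with the limit’s size. A standalone probe (not Fess code) that shows the heap cost of reading one large file:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class HeapProbe {

    public static void main(final String[] args) throws IOException {
        // Reading a ~100 MB file into a byte[] needs that much contiguous
        // heap; with a small -Xmx this throws OutOfMemoryError before any
        // length check could run.
        final byte[] data = Files.readAllBytes(Paths.get(args[0]));
        System.out.println("read " + data.length + " bytes, free heap "
                + Runtime.getRuntime().freeMemory() / (1024 * 1024) + " MiB");
    }
}
```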
Edit 2: doubling FESS_MIN_MEM to 512m and quadrupling FESS_MAX_MEM to 4g doesn’t seem to have any impact on the issue.
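Assuming the launch scripts map FESS_MIN_MEM/FESS_MAX_MEM to -Xms/-Xmx, the effective values can be verified from inside a JVM via Runtime; if a check like this still reports the default, the variables never reached the process:

```java
public class HeapReport {

    public static void main(final String[] args) {
        // maxMemory() reflects -Xmx; with FESS_MAX_MEM=4g this should
        // print roughly 4096 MiB. A default value here would mean the
        // environment variables were not picked up.
        final long maxMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("max heap: " + maxMiB + " MiB");
    }
}
```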
Properties for bug report:
file.separator=\
file.encoding=UTF-8
java.runtime.version=9+181
java.vm.info=mixed mode
java.vm.name=Java HotSpot(TM) 64-Bit Server VM
java.vm.vendor=Oracle Corporation
java.vm.version=9+181
os.arch=amd64
os.name=Windows 10
os.version=10.0
user.country=GB
user.language=en
user.timezone=Europe/Berlin
suggest.document=true
purge.searchlog.day=-1
thumbnail.enabled=false
append.query.parameter=false
search.log=false
web.api.popularword=true
purge.userinfo.day=-1
purge.suggest.searchlog.day=30
purge.joblog.day=-1
purge.by.bots=Crawler,crawler,Bot,bot,Slurp,Yeti,Baidu,Steeler,ichiro,hotpage,Feedfetcher,ia_archiver,Y!J-BRI,Google Desktop,Seznam,Tumblr,YandexBot,Chilkat,CloudFront,Mediapartners,MSIE 6
login.link.enabled=true
user.info=false
user.favorite=false
login.required=false
result.collapsed=false
crawling.thread.count=5
ldap.memberof.attribute=memberOf
csv.file.encoding=UTF-8
crawling.incremental=true
web.api.json=true
day.for.cleanup=3
failure.countthreshold=-1
suggest.searchlog=true