The web crawl ends partway through and not all pages are indexed.

(from github.com/yaf123)
Sometimes the web crawl ends prematurely and not all pages are indexed.

If you know the cause and how to deal with it, please let me know.

  • fess-11.1.0

  • The crawl terminates partway through after roughly 15,000 web pages have been crawled.

  • The number of indexed documents ranges from 1,000 to 5,000 (it changes every run).

  • The following output appears in fess.log around the time the web crawl fails.

2017-06-09 11:01:25,594 [job_AVfb9i24B10VHuLJO2w4] WARN  Failed to evalue groovy script: return container.getComponent("crawlJob").logLevel("info").sessionId("AVfb9gM2B10VHuLJO2w0").execute(executor, ["AVfb9gM2B10VHuLJO2w0"] as String[],[] as String[],[] as String[], ""); => {executor=org.codelibs.fess.job.impl.GroovyExecutor@26194ee5}
org.codelibs.fess.exception.FessSystemException: Exit Code: 137
Output:

        at org.codelibs.fess.job.CrawlJob.executeCrawler(CrawlJob.java:393) ~[classes/:?]
        at org.codelibs.fess.job.CrawlJob.execute(CrawlJob.java:223) ~[classes/:?]
        at org.codelibs.fess.job.CrawlJob.execute(CrawlJob.java:154) ~[classes/:?]
        at sun.reflect.GeneratedMethodAccessor152.invoke(Unknown Source) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
        at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSite.invoke(PojoMetaMethodSite.java:192) ~[groovy-all-2.4.8.jar:2.4.8]
        at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56) ~[groovy-all-2.4.8.jar:2.4.8]
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) ~[groovy-all-2.4.8.jar:2.4.8]
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) ~[groovy-all-2.4.8.jar:2.4.8]
        at Script1.run(Script1.groovy:1) ~[?:?]
        at groovy.lang.GroovyShell.evaluate(GroovyShell.java:585) ~[groovy-all-2.4.8.jar:2.4.8]
        at groovy.lang.GroovyShell.evaluate(GroovyShell.java:623) ~[groovy-all-2.4.8.jar:2.4.8]
        at groovy.lang.GroovyShell.evaluate(GroovyShell.java:594) ~[groovy-all-2.4.8.jar:2.4.8]
        at org.codelibs.fess.util.GroovyUtil.evaluate(GroovyUtil.java:41) [classes/:?]
        at org.codelibs.fess.job.impl.GroovyExecutor.execute(GroovyExecutor.java:31) [classes/:?]
        at org.codelibs.fess.app.job.ScriptExecutorJob.run(ScriptExecutorJob.java:91) [classes/:?]
        at org.lastaflute.job.LaJobRunner.actuallyRun(LaJobRunner.java:169) [lasta-job-0.2.5.jar:?]
        at org.lastaflute.job.LaJobRunner.doRun(LaJobRunner.java:154) [lasta-job-0.2.5.jar:?]
        at org.lastaflute.job.LaJobRunner.run(LaJobRunner.java:110) [lasta-job-0.2.5.jar:?]
        at org.lastaflute.job.cron4j.Cron4jTask.runJob(Cron4jTask.java:150) [lasta-job-0.2.5.jar:?]
        at org.lastaflute.job.cron4j.Cron4jTask.doExecute(Cron4jTask.java:137) [lasta-job-0.2.5.jar:?]
        at org.lastaflute.job.cron4j.Cron4jTask.execute(Cron4jTask.java:101) [lasta-job-0.2.5.jar:?]
        at it.sauronsoftware.cron4j.TaskExecutor$Runner.run(Unknown Source) [cron4j-2.2.5.jar:?]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
2017-06-09 11:02:23,056 [elasticsearch[Node 1][scheduler][T#1]] WARN  [gc][old][94413][42] duration [56.1s], collections [1]/[56.1s], total [56.1s]/[2.2m], memory [1.7gb]->[601mb]/[1.9gb], all_pools {[young] [65mb]->[8.8mb]/[266.2mb]}{[survivor] [8.4mb]->[0b]/[33.2mb]}{[old] [1.6gb]->[592.1mb]/[1.6gb]}
2017-06-09 11:02:23,056 [elasticsearch[Node 1][scheduler][T#1]] WARN  [gc][94413] overhead, spent [56.1s] collecting in the last [56.1s]
2017-06-09 11:02:23,335 [elasticsearch[Node 1][clusterService#updateTask][T#1]] WARN  cluster state update task [put-mapping[doc]] took [1.3m] above the warn threshold of 30s

(from marevol (Shinsuke Sugaya))

Exit Code: 137

It's SIGKILL. I think the OOM Killer killed the crawler process because of a memory shortage.
Please check the OS log files.
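
For reference, exit code 137 is 128 + 9, i.e. the crawler JVM was terminated by signal 9 (SIGKILL). Assuming the host runs Linux (the commands below are only a sketch and vary by distribution), an OOM Killer event leaves a record in the kernel log that can be confirmed with something like:

    # check the kernel ring buffer and system logs for OOM Killer messages
    dmesg | grep -i -E 'killed process|out of memory'
    journalctl -k | grep -i -E 'killed process|out of memory'
    grep -i 'killed process' /var/log/messages /var/log/syslog 2>/dev/null

If an entry matches the time of the failure, the usual remedies are to add memory to the host or to lower the heap sizes so the crawler and Elasticsearch together fit in physical RAM; the GC warnings above show the Elasticsearch heap (1.9gb total) already close to its limit. In Fess, the crawler heap can be adjusted via the crawler JVM options (jvm.crawler.options in fess_config.properties, assuming that property is present in this version).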