Does the crawler resume or restart when stopped?

(from github.com/charles-pinkston)
Crawling my site is taking a long time. I'd like to stop the crawler so I can allocate some additional memory. Will the crawler continue indexing documents from where it left off, or will it start over from the very beginning?

(from github.com/marevol)
To resume a crawl, you need to set a fixed session id in the crawler settings.
In the Scheduler, edit the Script of the Default Crawler job, changing

return container.getComponent("crawlJob").logLevel("info").execute(executor);

to

return container.getComponent("crawlJob").logLevel("info").sessionId("SOMETHING").execute(executor);

Crawling information is saved per session id.
The default session id is a random value, so each run is treated as a new session; with a fixed session id, a restarted crawl can pick up the saved state from the previous run.
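For illustration, a complete Default Crawler script with a fixed session id might look like the following ("my-site-crawl" is just a hypothetical placeholder; any stable string unique to this job works):

```java
// Fixing the session id ties every run of this job to the same saved
// crawling state, so a stopped crawl can resume instead of restarting.
// "my-site-crawl" is a placeholder value, not a required name.
return container.getComponent("crawlJob").logLevel("info").sessionId("my-site-crawl").execute(executor);
```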