How to ignore robots.txt?

(from github.com/monaka)
This is not an issue report but a question.

I want to crawl web content that is disallowed by robots.txt.
I googled and found instructions at https://osdn.jp/projects/fess/forums/18580/36349/ , but they seem obsolete.
I'm using Fess 10.0.2. What do I need to set to ignore robots.txt?

(from github.com/marevol)
The .dicon files for S2Robot were replaced with .xml files for Fess Crawler.
s2robot_robotstxt.dicon is now crawler/robotstxt.xml, as shown below:
https://github.com/codelibs/fess-crawler/blob/fess-crawler-parent-1.0.6/fess-crawler-lasta/src/main/resources/crawler/robotstxt.xml
You can put your customized copy at app/WEB-INF/classes/crawler/robotstxt.xml.
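A minimal sketch of such an override, based on the linked robotstxt.xml. The `enabled` property is an assumption here (it presumes `RobotsTxtHelper` exposes a boolean `enabled` setter in your fess-crawler version); verify against the class source before relying on it:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE components PUBLIC "-//DBFLUTE//DTD LastaDi 1.0//EN"
	"http://dbflute.org/meta/lastadi10.dtd">
<components>
	<include path="crawler/container.xml" />

	<!-- Override the default robotsTxtHelper component.
	     Assumption: setting enabled to false makes the crawler
	     skip robots.txt checks entirely. -->
	<component name="robotsTxtHelper" class="org.codelibs.fess.crawler.helper.RobotsTxtHelper">
		<property name="enabled">false</property>
	</component>
</components>
```

Restart Fess after placing the file so the container picks up the override.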

(from github.com/ndrwchn)
Is it still necessary to put container.xml there as well? I saw an `include` in robotstxt.xml, but there is no container.xml under Fess's classes/crawler.