How to ignore robots.txt?

This is not an issue report but a question.

I want to crawl web content that is disallowed by robots.txt.
I googled and found instructions on , but they seem to be obsolete.
I’m using Fess 10.0.2. What do I need to set to make the crawler ignore robots.txt?

The .dicon files for S2Robot have been replaced with .xml files for Fess Crawler.
s2robot_robotstxt.dicon is now crawler/robotstxt.xml.
You can put it at app/WEB-INF/classes/crawler/robotstxt.xml.
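A minimal sketch of what that file could look like, assuming the fess-crawler layout where a `robotsTxtHelper` component exposes an `enabled` property (both the component name and the property are assumptions; check the robotstxt.xml shipped inside the fess-crawler jar for the exact definition):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE components PUBLIC "-//DBFLUTE//DTD LastaDi 1.0//EN"
	"http://dbflute.org/meta/lastadi10.dtd">
<components>
	<!-- Assumed component: overrides the default robots.txt handling.
	     Setting enabled to false would make the crawler skip robots.txt checks. -->
	<component name="robotsTxtHelper" class="org.codelibs.fess.crawler.helper.RobotsTxtHelper">
		<property name="enabled">false</property>
	</component>
</components>
```

Because files under app/WEB-INF/classes take precedence on the classpath, this copy should override the default robotstxt.xml bundled with the crawler jar.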

Is it still necessary to put container.xml there as well? I saw an ‘include’ in robotstxt.xml, but there is no container.xml under Fess classes/crawler.