Robots.txt handling does not read Allow rules by default and does not follow the standard

For some reason, Fess does not read the Allow rules from robots.txt by default.
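
To illustrate why this matters, consider a generic robots.txt (a made-up example, not taken from a real site) that uses Allow to carve an exception out of a broader Disallow:

User-agent: *
Disallow: /private/
Allow: /private/overview.html

A crawler that ignores the Allow line would skip /private/overview.html even though the site owner explicitly permits crawling it.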

Also, in robots.txt handling the most specific (longest) matching rule should take precedence, as described in Google's robots.txt documentation and RFC 9309. With Fess this is not the case, as the following test shows:

public void test_match_google_case1() {
	// The include (allow) pattern is more specific than the catch-all exclude
	// pattern, so under longest-match semantics the page should be crawled.
	urlFilter.addInclude("http://example.com/p.*");
	urlFilter.addExclude("http://example.com/.*");

	final String sessionId = "id1";
	urlFilter.init(sessionId);

	assertTrue(urlFilter.match("http://example.com/page"));
}
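
For reference, here is a minimal standalone sketch (not Fess or fess-crawler code; all names are my own) of the longest-match evaluation that RFC 9309 and Google's robots.txt documentation describe, simplified to plain path prefixes without wildcards:

import java.util.List;

// Standalone sketch of the longest-match rule: when both an Allow and a
// Disallow rule match a path, the rule with the longest path wins.
// If no rule matches, the path is allowed by default.
public class LongestMatchSketch {

    record Rule(boolean allow, String path) {}

    static boolean isAllowed(String path, List<Rule> rules) {
        Rule best = null;
        for (Rule rule : rules) {
            if (path.startsWith(rule.path())
                    && (best == null || rule.path().length() > best.path().length())) {
                best = rule;
            }
        }
        return best == null || best.allow();
    }

    public static void main(String[] args) {
        // Allow: /p  and  Disallow: /  -- the longer Allow rule wins for /page.
        final List<Rule> rules = List.of(new Rule(true, "/p"), new Rule(false, "/"));
        System.out.println(isAllowed("/page", rules));  // true
        System.out.println(isAllowed("/other", rules)); // false
    }
}

The test above expects the same outcome: the more specific include pattern should win over the broader exclude pattern.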

With kind regards,
Ossi

I’ll change it to true.