Handling content length

My environment is Debian 10, using fess.deb.
The following error occurs, and as described in the documentation I set defaultMaxLength in
/usr/share/fess/app/WEB-INF/classes/crawler/contentlength.xml
to 1048576000, but the result does not change.
I also tried adding client.maxContentLength = 10485760000 to
/etc/fess/fess_env_crawler.properties
but the behavior does not change. (Both edits are sketched after the stack trace below.)

Is a different setting required? Any guidance would be appreciated.

org.codelibs.fess.crawler.exception.MaxLengthExceededException: The content length (16702977 byte) is over 10000000 byte. The url is https://drive.google.com/uc?id=1-1Cdw57SLmyO1Ei8Xiu1gg4qs9-OBdfF&export=download
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.processFile(GoogleDriveDataStore.java:289)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.lambda$storeFiles$2(GoogleDriveDataStore.java:220)
at java.base/java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(ThreadPoolExecutor.java:2027)
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.lambda$storeFiles$3(GoogleDriveDataStore.java:220)
at org.codelibs.fess.ds.gsuite.GSuiteClient.getFiles(GSuiteClient.java:177)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.storeFiles(GoogleDriveDataStore.java:218)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.storeData(GoogleDriveDataStore.java:154)
at org.codelibs.fess.ds.AbstractDataStore.store(AbstractDataStore.java:111)
at org.codelibs.fess.helper.DataIndexHelper$DataCrawlingThread.process(DataIndexHelper.java:216)
at org.codelibs.fess.helper.DataIndexHelper$DataCrawlingThread.run(DataIndexHelper.java:202)
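
For reference, the two edits described in the question correspond roughly to the following (a sketch assuming the default component definition in contentlength.xml; the values are the ones quoted above):

contentlength.xml:

<component name="contentLengthHelper"
           class="org.codelibs.fess.crawler.helper.ContentLengthHelper" instance="singleton">
  <property name="defaultMaxLength">1048576000</property><!-- 1,000 MiB, the value from the question -->
</component>

fess_env_crawler.properties:

client.maxContentLength=10485760000

These settings affect the standard web/file crawler; as the reply below suggests, a data store crawl such as the Google Drive plugin takes its size limit from the max_size parameter instead, which would explain why the edits above had no effect.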

Please try specifying it with max_size=... in the Parameters field.
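
For example, in the Parameters field of the data store crawl configuration you would add a line like the following alongside the existing parameters (an illustrative sketch; the value is in bytes, matching the byte counts in the error above, so 52428800 allows roughly 50 MB):

max_size=52428800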

I am using the AWS ElasticSearch service and the GoogleDrive plugin.

For a CSV file of approximately 20 MB: after adding max_size, the org.codelibs.fess.crawler.exception.MaxLengthExceededException error changed to the following error:

java.lang.OutOfMemoryError: Java heap space
at java.base/java.lang.String.<init>(Unknown Source)
at java.base/java.lang.String.<init>(Unknown Source)
at org.apache.lucene.util.BytesRef.utf8ToString(BytesRef.java:138)
at org.opensearch.common.bytes.AbstractBytesReference.utf8ToString(AbstractBytesReference.java:80)
at org.opensearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:262)
at org.opensearch.common.xcontent.XContentHelper.convertToJson(XContentHelper.java:242)
at org.codelibs.fesen.client.action.HttpBulkAction.execute(HttpBulkAction.java:86)
at org.codelibs.fesen.client.HttpClient.lambda$new$23(HttpClient.java:590)
at org.codelibs.fesen.client.HttpClient$$Lambda$289/0x000000080100b260.accept(Unknown Source)
at org.codelibs.fesen.client.HttpClient.doExecute(HttpClient.java:962)
at org.opensearch.client.support.AbstractClient.execute(AbstractClient.java:433)
at org.opensearch.client.support.AbstractClient.execute(AbstractClient.java:419)
at org.opensearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:58)
at org.codelibs.fess.es.client.SearchEngineClient.addAll(SearchEngineClient.java:1115)
at org.codelibs.fess.helper.IndexingHelper.sendDocuments(IndexingHelper.java:68)
at org.codelibs.fess.ds.callback.IndexUpdateCallbackImpl.store(IndexUpdateCallbackImpl.java:133)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.processFile(GoogleDriveDataStore.java:388)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.lambda$storeFiles$2(GoogleDriveDataStore.java:226)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore$$Lambda$956/0x00000008011a0fb8.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.lambda$storeFiles$3(GoogleDriveDataStore.java:226)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore$$Lambda$955/0x000000080129e000.accept(Unknown Source)
at org.codelibs.fess.ds.gsuite.GSuiteClient.getFiles(GSuiteClient.java:177)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.storeFiles(GoogleDriveDataStore.java:224)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.storeData(GoogleDriveDataStore.java:160)
at org.codelibs.fess.ds.AbstractDataStore.store(AbstractDataStore.java:122)
at org.codelibs.fess.helper.DataIndexHelper$DataCrawlingThread.process(DataIndexHelper.java:218)
at org.codelibs.fess.helper.DataIndexHelper$DataCrawlingThread.run(DataIndexHelper.java:204)

And for the same file in Google Sheets format, approximately 7 MB: after adding max_size, the error is the same as before:

org.codelibs.fess.crawler.exception.CrawlingAccessException: Failed to extract a text from 15tE3MLQC6pa7BQIG_8OanYRGqJSoNqCBb2pkBQqqXEM
at org.codelibs.fess.ds.gsuite.GSuiteClient.extractFileText(GSuiteClient.java:192)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.getFileContents(GoogleDriveDataStore.java:507)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.processFile(GoogleDriveDataStore.java:288)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.lambda$storeFiles$2(GoogleDriveDataStore.java:226)
at java.base/java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.lambda$storeFiles$3(GoogleDriveDataStore.java:226)
at org.codelibs.fess.ds.gsuite.GSuiteClient.getFiles(GSuiteClient.java:177)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.storeFiles(GoogleDriveDataStore.java:224)
at org.codelibs.fess.ds.gsuite.GoogleDriveDataStore.storeData(GoogleDriveDataStore.java:160)
at org.codelibs.fess.ds.AbstractDataStore.store(AbstractDataStore.java:122)
at org.codelibs.fess.helper.DataIndexHelper$DataCrawlingThread.process(DataIndexHelper.java:218)
at org.codelibs.fess.helper.DataIndexHelper$DataCrawlingThread.run(DataIndexHelper.java:204)
Caused by: com.google.api.client.http.HttpResponseException: 403 Forbidden
GET https://www.googleapis.com/drive/v3/files/15tE3MLQC6pa7BQIG_8OanYRGqJSoNqCBb2pkBQqqXEM/export?mimeType=text/csv&alt=media
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "exportSizeLimitExceeded",
        "message": "This file is too large to be exported."
      }
    ],
    "code": 403,
    "message": "This file is too large to be exported."
  }
}

at com.google.api.client.http.HttpResponseException$Builder.build(HttpResponseException.java:293)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1118)
at com.google.api.client.googleapis.media.MediaHttpDownloader.executeCurrentRequest(MediaHttpDownloader.java:244)
at com.google.api.client.googleapis.media.MediaHttpDownloader.download(MediaHttpDownloader.java:198)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeMediaAndDownloadTo(AbstractGoogleClientRequest.java:562)
at com.google.api.services.drive.Drive$Files$Export.executeMediaAndDownloadTo(Drive.java:2993)
at org.codelibs.fess.ds.gsuite.GSuiteClient.extractFileText(GSuiteClient.java:189)
... 13 more

Can anything else be done to enable crawling of these files?

java.lang.OutOfMemoryError: Java heap space

You need to increase heap memory in fess_config.properties.
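
A minimal sketch of that change, assuming the heap setting meant here is the jvm.crawler.options property in fess_config.properties (the data store crawl runs in a separate crawler JVM; the options in this property are separated by \n, and only the -Xmx value is raised — 2g is an illustrative size, adjust it to your available memory):

jvm.crawler.options=\
...(existing options, unchanged)...\n\
-Xmx2g\n\
...(remaining options, unchanged)...

Restart Fess after editing the file so the new crawler options are picked up.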

"message": "This file is too large to be exported."

This problem is not caused by Fess; Google seems to check the file size on its side when exporting.
