"is not a target url" error message in fess-crawler.log

For the past week, our Fess has no longer been crawling our website. I set up a test crawl of fess.codelibs.org and that works fine, but for some reason I cannot crawl our own site.

With logging set to debug, the crawl log shows:

2021-04-06 20:48:31,224 [WebFsCrawler] DEBUG Crawling https://blogs.soas.ac.uk/
2021-04-06 20:48:31,248 [IndexUpdater] DEBUG Starting indexUpdater.
2021-04-06 20:48:31,363 [Crawler-AVyq-y1qHWyes_9AjA9x-1-1] DEBUG Queued URL: [UrlQueueImpl [id=AVyq-y1qHWyes_9AjA9x-1.aHR0cHM6Ly9ibG9ncy5zb2FzLmFjLnVrLw, sessionId=AVyq-y1qHWyes_9AjA9x-1, method=GET, url=https://blogs.soas.ac.uk/, encoding=null, parentUrl=null, depth=0, lastModified=0, createTime=1617738511181]]
2021-04-06 20:48:31,397 [Crawler-AVyq-y1qHWyes_9AjA9x-1-1] DEBUG https://blogs.soas.ac.uk/ is not a target url. (0)
2021-04-06 20:48:32,900 [Crawler-AVyq-y1qHWyes_9AjA9x-1-1] DEBUG The url is null. (1)
2021-04-06 20:48:34,402 [Crawler-AVyq-y1qHWyes_9AjA9x-1-1] DEBUG The url is null. (2)
2021-04-06 20:48:35,905 [Crawler-AVyq-y1qHWyes_9AjA9x-1-1] DEBUG The url is null. (3)
2021-04-06 20:48:37,407 [Crawler-AVyq-y1qHWyes_9AjA9x-1-1] DEBUG The url is null. (4)

This contrasts with the test crawl against the fess.codelibs.org site:

2021-04-06 20:54:10,495 [WebFsCrawler] DEBUG Crawling https://fess.codelibs.org/
2021-04-06 20:54:10,510 [IndexUpdater] DEBUG Starting indexUpdater.
2021-04-06 20:54:11,022 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Queued URL: [UrlQueueImpl [id=FjOZhHgB0yl_Gn_vJi1X-1.aHR0cHM6Ly9mZXNzLmNvZGVsaWJzLm9yZy8, sessionId=FjOZhHgB0yl_Gn_vJi1X-1, method=GET, url=https://fess.codelibs.org/, encoding=null, parentUrl=null, depth=0, lastModified=0, createTime=1617738850426]]
2021-04-06 20:54:11,091 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] INFO Crawling URL: https://fess.codelibs.org/
2021-04-06 20:54:11,092 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Searching indexed document: https:%2F%2Ffess.codelibs.org%2F;role=Rguest
2021-04-06 20:54:11,094 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Query DSL:
2021-04-06 20:54:11,105 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Checking the last modified: 1617676130000
2021-04-06 20:54:11,111 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Initializing org.codelibs.fess.crawler.client.http.HcHttpClient
2021-04-06 20:54:11,153 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Accessing https://fess.codelibs.org/
2021-04-06 20:54:11,153 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] INFO Checking URL: https://fess.codelibs.org/robots.txt
2021-04-06 20:54:11,163 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG CookieSpec selected: default
2021-04-06 20:54:11,176 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Connection request: [route: {s}->https://fess.codelibs.org:443][total kept alive: 0; route allocated: 0 of 20; total allocated: 0 of 200]
2021-04-06 20:54:11,192 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Connection leased: [id: 0][route: {s}->https://fess.codelibs.org:443][total kept alive: 0; route allocated: 1 of 20; total allocated: 1 of 200]
2021-04-06 20:54:11,194 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Opening connection {s}->https://fess.codelibs.org:443
2021-04-06 20:54:11,210 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Connecting to fess.codelibs.org/140.227.67.233:443
2021-04-06 20:54:11,210 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Connecting socket to fess.codelibs.org/140.227.67.233:443 with timeout 0
2021-04-06 20:54:11,489 [Crawler-FjOZhHgB0yl_Gn_vJi1X-1-1] DEBUG Enabled protocols: [TLSv1, TLSv1.1, TLSv1.2]

Thanks for your time.

Fin

The crawl config is:

ID: AVyq-y1qHWyes_9AjA9x
Name: 100 SOAS Blogs
URLs: https://blogs.soas.ac.uk/
Included URLs For Crawling: https://blogs.soas.ac.uk/.*
Included URLs For Indexing:
Excluded URLs For Indexing:
Config Parameters:
Depth: 3
Max Access Count:
User Agent: Mozilla/5.0 (compatible; Fess/12.0; +SOAS Blogs)
The number of Thread: 1
Interval time: 1000 ms
Boost: 1.0
Permissions: {role}guest
Virtual Hosts:
Label: Blogs
Status: Enabled
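
For what it's worth, the include pattern itself does match the seed URL. This is just a plain-Java check of the same regex, not Fess's actual filtering code:

import java.util.regex.Pattern;

public class IncludePatternCheck {
    public static void main(String[] args) {
        // Pattern and seed URL taken from the crawl config above.
        Pattern include = Pattern.compile("https://blogs.soas.ac.uk/.*");
        String seed = "https://blogs.soas.ac.uk/";

        // ".*" also matches the empty string, so the bare seed URL matches.
        System.out.println(include.matcher(seed).matches()); // prints: true
    }
}

So I assume the "is not a target url" message is coming from something other than the include pattern.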


2021-04-06 20:48:31,397 [Crawler-AVyq-y1qHWyes_9AjA9x-1-1] DEBUG https://blogs.soas.ac.uk/ is not a target url. (0)

This URL seems to be excluded.
It might be listed under Failure URL and have exceeded the Failure Count Threshold.
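
The easiest check is the Failure URL page in the Fess admin UI. Alternatively, you can query the backing index directly; this is only a rough sketch, and the index name fess_config.failure_url, the errorCount field, and the Elasticsearch endpoint on localhost:9200 are assumptions, so adjust them to your installation:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class FailureUrlCheck {
    public static void main(String[] args) throws Exception {
        // Assumed index name and endpoint; adjust to your Fess/Elasticsearch setup.
        String endpoint = "http://localhost:9200/fess_config.failure_url/_search";
        String query = URLEncoder.encode("url:\"https://blogs.soas.ac.uk/\"", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint + "?q=" + query)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A matching hit with a high error count (errorCount field name is an assumption)
        // would explain why the crawler skips the URL once the Failure Count Threshold
        // is exceeded.
        System.out.println(response.body());
    }
}

If there are matching entries, deleting them on the Failure URL page, or raising the Failure Count Threshold, should allow the URL to be crawled again.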

Thanks, shinsuke. I will look into that.