Hi @OlivierL,
This is log/nginx/kpi.error.log
2024/10/04 10:13:28 [error] 44#44: *316 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.161, server: kf.kobo.local, request: "GET /service_health/ HTTP/1.1", upstream: "uwsgi://192.168.160.7:8000", host: "kf.kobo.local"
This is log/nginx/kobocat.error.log
2024/10/03 06:37:19 [error] 43#43: *36 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.64.3, server: kc.kobo.local, request: "GET /legacy/service_health/ HTTP/1.1", upstream: "uwsgi://192.168.64.7:8000", host: "kc.docker.internal"
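Both nginx errors above are "111: Connection refused" against the uwsgi backend on port 8000. In case it helps, here is a minimal probe I can run against the same health endpoints (a sketch only, assuming kf.kobo.local and kc.kobo.local resolve from wherever the check runs; the hostnames and paths are copied from the log lines):

```python
# Minimal probe of the health endpoints from the nginx errors above.
# Hostnames/paths are taken from the log lines; adjust if your setup differs.
import urllib.error
import urllib.request

ENDPOINTS = [
    "http://kf.kobo.local/service_health/",
    "http://kc.kobo.local/legacy/service_health/",
]

for url in ENDPOINTS:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(url, "->", resp.status)
    except urllib.error.URLError as exc:
        print(url, "-> FAILED:", exc.reason)
```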
This is log/kpi/celery_kpi_worker.log
[2024-10-04 08:52:09,751: INFO/MainProcess] mingle: searching for neighbors
[2024-10-04 08:52:10,770: INFO/MainProcess] mingle: all alone
[2024-10-04 08:52:10,788: INFO/MainProcess] kpi_worker@kpi ready.
[2024-10-04 10:03:32,465: WARNING/MainProcess] /opt/venv/lib/python3.10/site-packages/celery/worker/consumer/consumer.py:507: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-10-04 10:03:32,490: INFO/MainProcess] Connected to redis://:**@redis-main.kobo.private:6379/1
[2024-10-04 10:03:32,491: WARNING/MainProcess] /opt/venv/lib/python3.10/site-packages/celery/worker/consumer/consumer.py:507: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-10-04 10:03:32,493: INFO/MainProcess] mingle: searching for neighbors
[2024-10-04 10:03:33,503: INFO/MainProcess] mingle: all alone
[2024-10-04 10:03:33,524: INFO/MainProcess] kpi_worker@kpi ready.
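As far as I understand, the CPendingDeprecationWarning above is only a deprecation notice and probably not the cause of the problem, but the warning itself points to the setting below if we want to silence it. This is just a sketch against a generic Celery app (the app name is a placeholder, the broker host is taken from the log with the password redacted); the real setting would go wherever the kpi Celery configuration is defined:

```python
# Sketch: opt in to the future Celery 6.x startup behaviour so the
# CPendingDeprecationWarning in celery_kpi_worker.log goes away.
from celery import Celery

# "kobo" is a placeholder app name, not the real module path.
app = Celery("kobo", broker="redis://:<password>@redis-main.kobo.private:6379/1")

# Keep retrying broker connections on startup (current behaviour, made explicit).
app.conf.broker_connection_retry_on_startup = True
```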
This is log/kpi/uwsgi.log
{address space usage: 889958400 bytes/848MB} {rss usage: 40185856 bytes/38MB} [pid: 155|app: 0|req: 25/49] 192.168.1.161 () {50 vars in 1467 bytes} [Fri Oct 4 04:22:44 2024] GET /api/v2/assets/?q=(asset_type%3Aquestion%20OR%20asset_type%3Ablock%20OR%20asset_type%3Atemplate%20OR%20asset_type%3Acollection)%20AND%20parent%3Anull&limit=100&offset=0&ordering=-date_modified&metadata=on&collections_first=true => generated 127 bytes in 29821 msecs (HTTP/1.1 200) 10 headers in 326 bytes (1 switches on core 0)
[busyness] 30s average busyness is at 52%, will spawn 1 new worker(s)
spawned uWSGI worker 4 (pid: 164, cores: 1)
[busyness] 30s average busyness is at 0%, cheap one of 4 running workers
worker 1 killed successfully (pid: 155)
uWSGI worker 1 cheaped.
[busyness] 30s average busyness is at 0%, cheap one of 3 running workers
worker 2 killed successfully (pid: 156)
uWSGI worker 2 cheaped.
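One thing I noticed in uwsgi.log is that the /api/v2/assets/ request returned HTTP 200 but took 29821 msecs. If it is useful, I can re-time that request with a quick check like the one below (KF_URL and the API token are placeholders for this sketch; the query string is copied verbatim from the log entry above):

```python
# Rough timing check for the slow /api/v2/assets/ request seen in uwsgi.log.
import time
import urllib.request

KF_URL = "http://kf.kobo.local"        # placeholder
TOKEN = "<your API token>"             # placeholder
PATH = (
    "/api/v2/assets/?q=(asset_type%3Aquestion%20OR%20asset_type%3Ablock"
    "%20OR%20asset_type%3Atemplate%20OR%20asset_type%3Acollection)"
    "%20AND%20parent%3Anull&limit=100&offset=0&ordering=-date_modified"
    "&metadata=on&collections_first=true"
)

req = urllib.request.Request(
    KF_URL + PATH, headers={"Authorization": f"Token {TOKEN}"}
)
start = time.monotonic()
with urllib.request.urlopen(req, timeout=120) as resp:
    resp.read()
print(f"HTTP {resp.status} in {time.monotonic() - start:.1f}s")
```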
These are our error logs and log reports. Do you see anything suspicious related to the issue? I would greatly appreciate your guidance on resolving it.