My KOBO server keeps on crashing

Hey guys, I have been getting frequent server crashes where I sometimes get a 504 error and people cannot access the server or even upload data. To fix this I have to run python run.py again. Any solution for this?

Maybe this previously discussed post helps you solve your issue:

You might need to fine-tune the PostgreSQL configuration according to the load your server handles.


And how do I do that? @stephenoduor

I am facing the same issue. Kobo crashes periodically. Everything looks pretty OK in the logs and in docker ps. Since the need of the hour is usually to get it working ASAP, I haven't had the opportunity to do extensive debugging. Any pointers about where to look would be great.

@manu_j What are your server's specs? I thought that was the cause, so I added more RAM and processors, but I am not really sure if that is what helped. Also make sure you upgrade your Kobo version. If all that does not work, you could try fine-tuning your Postgres. This is how you can do it, but make sure you have a backup first.

PostgreSQL performance tuning

Tuning PostgreSQL is necessary to achieve a high-performing system but is optional in terms of getting KOBO to run. PostgreSQL is configured and tuned through the postgresql.conf file which can be edited like this:

sudo nano /etc/postgresql/10/main/postgresql.conf

and set the following properties:

max_connections = 200

Determines the maximum number of concurrent connections PostgreSQL will allow.

shared_buffers = 3200MB

Determines how much memory should be allocated exclusively for PostgreSQL caching. This setting controls the size of the kernel shared memory which should be reserved for PostgreSQL. It should be set to around 40% of the total memory dedicated to PostgreSQL.
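As a rough illustration of the 40% rule above (the 8000MB figure is an assumed amount of memory dedicated to PostgreSQL, not a value from this thread):

```shell
# Hypothetical sizing helper: shared_buffers as ~40% of the memory
# dedicated to PostgreSQL (all values in MB).
pg_mem_mb=8000                                  # assumed RAM dedicated to PostgreSQL
shared_buffers_mb=$(( pg_mem_mb * 40 / 100 ))   # 40% of that
echo "shared_buffers = ${shared_buffers_mb}MB"  # prints: shared_buffers = 3200MB
```

With 8000MB dedicated to PostgreSQL this lands on the 3200MB suggested above; scale it to your own machine.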

work_mem = 20MB

Determines the amount of memory used for internal sort and hash operations. This setting is per connection, per query, so a lot of memory may be consumed if it is raised too high. Setting this value correctly is essential for KOBO aggregation performance.
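Because work_mem applies per connection (and per sort or hash operation), it is worth sanity-checking the worst case against max_connections. A rough back-of-the-envelope check using the values suggested in this post:

```shell
# Worst case if every allowed connection runs one sort at full work_mem.
max_connections=200
work_mem_mb=20
worst_case_mb=$(( max_connections * work_mem_mb ))
echo "worst case: ${worst_case_mb}MB"   # 200 connections x 20MB = 4000MB
```

If that worst case plus shared_buffers exceeds your RAM, the kernel OOM killer can take PostgreSQL down under load, which would match the periodic-crash symptom.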

maintenance_work_mem = 512MB

Determines the amount of memory PostgreSQL can use for maintenance operations such as creating indexes, running vacuum, adding foreign keys. Increasing this value might improve performance of index creation during the analytics generation processes.

effective_cache_size = 8000MB

An estimate of how much memory is available for disk caching by the operating system (not an allocation), used by PostgreSQL to determine whether a query plan will fit into memory or not. Setting it higher than what is really available will result in poor performance. This value should be inclusive of the shared_buffers setting. PostgreSQL has two layers of caching: the first layer uses kernel shared memory and is controlled by the shared_buffers setting; the second layer is delegated to the operating system disk cache, and the amount of memory available there is indicated with the effective_cache_size setting.
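Since effective_cache_size is inclusive of shared_buffers, one way to arrive at the 8000MB figure is to add your expected OS disk cache to shared_buffers (the 4800MB OS-cache estimate below is an assumption for illustration, not from this thread):

```shell
# effective_cache_size = shared_buffers + expected OS disk cache (MB).
shared_buffers_mb=3200      # from the shared_buffers setting above
os_cache_mb=4800            # assumed: memory the OS can spare for disk cache
effective_cache_size_mb=$(( shared_buffers_mb + os_cache_mb ))
echo "effective_cache_size = ${effective_cache_size_mb}MB"
```

Remember it is only a planner hint, not a reservation, so overshooting slightly wastes no memory but misleads the query planner.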

checkpoint_completion_target = 0.8

Specifies, as a fraction of the checkpoint interval, the target over which checkpoint writes should be spread. A higher value smooths out disk I/O spikes during checkpoints, which can improve throughput in write-heavy systems.

synchronous_commit = off

Specifies whether transaction commits will wait for WAL records to be written to disk before returning to the client. Setting this to off will improve performance considerably. It also implies a slight delay between a transaction being reported successful to the client and it actually being safe on disk, but the database state cannot be corrupted by a crash, and this is a good trade-off for performance-intensive, write-heavy systems like KOBO.

wal_writer_delay = 10000ms

Specifies the delay between WAL write operations. Setting this to a high value will improve performance on write-heavy systems since potentially many write operations can be executed within a single flush to disk.

random_page_cost = 1.1

SSD only. Sets the query planner’s estimate of the cost of a non-sequentially-fetched disk page. A low value will cause the system to prefer index scans over sequential scans. A low value makes sense for databases running on SSDs or being heavily cached in memory. The default value is 4.0 which is reasonable for traditional disks.

max_locks_per_transaction = 96

Specifies the average number of object locks allocated for each transaction. This is set mainly to allow upgrade routines which touch a large number of tables to complete.
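For reference, the settings discussed above collected together as they would appear in postgresql.conf:

```
max_connections = 200
shared_buffers = 3200MB
work_mem = 20MB
maintenance_work_mem = 512MB
effective_cache_size = 8000MB
checkpoint_completion_target = 0.8
synchronous_commit = off
wal_writer_delay = 10000ms
random_page_cost = 1.1          # SSD only
max_locks_per_transaction = 96
```

Treat these as starting points sized for the example machine discussed here, not universal values.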

Restart PostgreSQL by invoking the following command:

sudo /etc/init.d/postgresql restart

I hope this helps.


I am using docker containers.

Did you make these changes inside the PostgreSQL docker container, and did it help?