Service doesn't start after migrating to a new server

Hi, I ran into a problem while trying to migrate to a new server. The steps were:

In DNS: changed the kf., kc. and ee. records to the new IP address
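After the DNS change, it is worth confirming that all three subdomains actually resolve to the new address before touching the servers. This is only a sketch: the domain and IP below are placeholders, not values from this post.

```shell
#!/bin/sh
# Placeholder values -- substitute your real domain and the new server's IP.
NEW_IP="203.0.113.10"
DOMAIN="example.com"

count=0
for sub in kf kc ee; do
  # getent consults the system resolver, so it reflects what the host sees.
  resolved=$(getent hosts "$sub.$DOMAIN" | awk '{print $1}')
  if [ "$resolved" = "$NEW_IP" ]; then
    echo "$sub.$DOMAIN OK ($resolved)"
  else
    echo "$sub.$DOMAIN resolves to '$resolved', expected $NEW_IP"
  fi
  count=$((count + 1))
done
echo "$count subdomains checked"
```

Remember that old TTLs can keep the previous IP cached for a while, so a stale answer here does not necessarily mean the record change failed.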

On the old server:

  1. Copied /opt/* to the new server via rsync -avzh
  2. Copied kobo-install to the new server via rsync -avzh

On the new server:

  1. Installed the same docker and docker-compose versions as on the old server
  2. Ran ./ --setup and confirmed all the information
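For step 1, it helps to compare the exact version strings between the two machines rather than eyeballing them. A minimal sketch of extracting the version number follows; a sample `docker --version` string is hard-coded so the snippet runs even on a machine without Docker, and the "old" version is an example value, not one from this post.

```shell
#!/bin/sh
# On a real host you would capture: sample=$(docker --version)
sample="Docker version 20.10.12, build e91ed57"
old_version="20.10.12"   # value recorded on the old server (example)

new_version=$(echo "$sample" | sed -E 's/.*version ([0-9.]+),.*/\1/')
echo "new server reports: $new_version"
[ "$new_version" = "$old_version" ] && echo "versions match"
```

The same pattern works for `docker-compose --version`.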

Everything looks good, but then it freezes:

kpi_1             |   Applying registration.0004_supervisedregistrationprofile... OK
kpi_1             |   Applying sessions.0001_initial... OK
kpi_1             |   Applying taggit.0002_auto_20150616_2121... OK
kpi_1             |   Applying taggit.0003_taggeditem_add_unique_index... OK
kpi_1             | Creating superuser...
kpi_1             | Superuser successfully created.
kpi_1             | Copying static files to nginx volume...
kpi_1             | Cleaning up Celery PIDs...
kpi_1             | KoBoForm initialization completed.
enketo_express_1  |
enketo_express_1  |                 To go further checkout:
enketo_express_1  |       
enketo_express_1  |
enketo_express_1  |
enketo_express_1  |                         -------------
enketo_express_1  |
enketo_express_1  | pm2 launched in no-daemon mode (you can add DEBUG="*" env variable to get more messages)
enketo_express_1  | 2023-02-11T12:25:20: PM2 log: Launching in no daemon mode
enketo_express_1  | 2023-02-11T12:25:21: PM2 log: [PM2] Starting /srv/src/enketo_express/app.js in fork_mode (1 instance)
enketo_express_1  | 2023-02-11T12:25:21: PM2 log: App [enketo:0] starting in -fork mode-
enketo_express_1  | 2023-02-11T12:25:21: PM2 log: [PM2] This PM2 is not UP TO DATE
enketo_express_1  | 2023-02-11T12:25:21: PM2 log: [PM2] Upgrade to version 5.2.2
enketo_express_1  | 2023-02-11T12:25:22: PM2 log: App [enketo:0] online
kpi_1             | Running `kpi` container with uWSGI application server.
enketo_express_1  | 2023-02-11T12:25:22: PM2 log: [PM2] Done.
enketo_express_1  | 2023-02-11T12:25:22: PM2 log: ┌─────┬───────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
enketo_express_1  | │ id  │ name      │ namespace   │ version │ mode    │ pid      │ uptime │ ↺    │ status    │ cpu      │ mem      │ user     │ watching │
enketo_express_1  | ├─────┼───────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
enketo_express_1  | │ 0   │ enketo    │ default     │ 2.8.1   │ fork    │ 26       │ 1s     │ 0    │ online    │ 0%       │ 28.8mb   │ root     │ disabled │
enketo_express_1  | └─────┴───────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
enketo_express_1  | 2023-02-11T12:25:22: PM2 log: [--no-daemon] Continue to stream logs
enketo_express_1  | 2023-02-11T12:25:23: PM2 log: [--no-daemon] Exit on target PM2 exit pid=1
enketo_express_1  | 12:25:36 0|enketo  | Worker 6 ready for duty at port 8005! (environment: production)
enketo_express_1  | 12:25:36 0|enketo  | Worker 3 ready for duty at port 8005! (environment: production)
enketo_express_1  | 12:25:37 0|enketo  | Worker 2 ready for duty at port 8005! (environment: production)
enketo_express_1  | 12:25:37 0|enketo  | Worker 1 ready for duty at port 8005! (environment: production)
enketo_express_1  | 12:25:39 0|enketo  | Worker 4 ready for duty at port 8005! (environment: production)
enketo_express_1  | 12:25:40 0|enketo  | Worker 5 ready for duty at port 8005! (environment: production)
kpi_1             | [uWSGI] getting INI configuration from /srv/src/kpi/uwsgi.ini


`KoBoToolbox` has not started yet. This is can be normal with low CPU/RAM computers.

Wait for another 600 seconds?
        1) Yes
        2) No

The new server has 6 cores and 16 GB of RAM: the same CPU as the old server, with double the RAM… Any idea or clue?

I found the solution here (`kobo-install` on a fresh Ubuntu 18.04 LTS VPS - #82 by ramiz): you must reissue the Let’s Encrypt SSL certificates, or it will loop forever!
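If you are unsure whether the certificate you carried over is still the old one, you can inspect it with `openssl` before reissuing. The sketch below generates a throwaway self-signed certificate as a stand-in for the real files (the CN and temp paths are hypothetical); point `openssl x509` at your actual certificate instead.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Throwaway self-signed cert standing in for the real Let's Encrypt files.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=kf.example.com" 2>/dev/null

# Shows who the certificate was issued for and when it expires.
openssl x509 -in "$tmp/cert.pem" -noout -subject -enddate
```

To check what the live site is actually serving, `openssl s_client -connect kf.yourdomain:443 -servername kf.yourdomain` piped into `openssl x509 -noout -enddate` does the same against the running nginx.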


@finlay, :clap: :heart: :partying_face: