Hello,
For a staging instance of KoBo, I need to fully understand how the S3 backup workflow works. For context, here are the relevant settings I have configured on my KoBo instance:
AWS Access Key []: XXXXXX
AWS Secret Key []: YYYYYY
AWS Bucket name []: test-bucket
Would you like to validate your AWS credentials? Yes
AWS credentials successfully validated
Do you want to activate backups? Yes
Do you want to use WAL-E for continuous archiving of PostgreSQL backups? Yes
PostgreSQL backup schedule?
[0 2 * * 0]: 0 21 * * *
MongoDB backup schedule?
[0 1 * * 0]: 0 21 * * *
Redis backup schedule?
[0 3 * * 0]: 0 21 * * *
AWS Backups bucket name []: test-backup-bucket
How many yearly backups to keep? [2]: 5
How many monthly backups to keep? [12]: 36
How many weekly backups to keep? [4]: 52
How many daily backups to keep? [30]: 180
PostgreSQL backup minimum size (in MB)?
Files below this size will be ignored when rotating backups.
[50]: 2
MongoDB backup minimum size (in MB)?
Files below this size will be ignored when rotating backups.
[50]: 1
Redis backup minimum size (in MB)?
Files below this size will be ignored when rotating backups.
[5]: 1
Chunk size of multipart uploads (in MB)?
[15]: 15
Use AWS LifeCycle deletion rule? No
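As a side note on the schedules above: I replaced the weekly defaults with a daily run at 21:00. A quick way to double-check the field order of a cron expression (my own one-liner, not part of kobo-install):

```shell
# Sanity check that "0 21 * * *" means "every day at 21:00":
# the five cron fields are minute, hour, day-of-month, month, day-of-week.
echo "0 21 * * *" | awk '{ printf "minute=%s hour=%s day_of_month=%s month=%s day_of_week=%s\n", $1, $2, $3, $4, $5 }'
# minute=0 hour=21 day_of_month=* month=* day_of_week=*
```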
Questions:
- What type of information is stored in the bucket specified under AWS Bucket name (test-bucket), and how frequently is it written?
- Is it safe to use the same bucket for AWS Bucket name and AWS Backups bucket name?
- With WAL-E enabled, do I still restore PostgreSQL backups with pg_restore -U kobo -d kobotoolbox -c "<path/to/postgres.pg_dump>"?
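To make that question concrete, this is how I currently picture a WAL-E based restore, pieced together from the WAL-E README. The data directory path is an assumption on my part, and I am assuming AWS credentials are already available in the environment (or via envdir):

```shell
# Rough sketch of a WAL-E restore as I understand it -- the data
# directory path is an assumption, and AWS credentials are expected
# to be present in the environment (or supplied via envdir).

# 1. Fetch the most recent base backup into an empty data directory.
wal-e backup-fetch /var/lib/postgresql/data LATEST

# 2. Ask PostgreSQL to replay archived WAL segments on startup
#    (recovery.conf applies to PostgreSQL <= 11).
cat > /var/lib/postgresql/data/recovery.conf <<'EOF'
restore_command = 'wal-e wal-fetch "%f" "%p"'
EOF

# 3. Start the server; it replays WAL up to the last archived segment.
pg_ctl -D /var/lib/postgresql/data start
```

Is this roughly the intended procedure, or does kobo-install wire this up differently?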
- Regarding the backup minimum size: does it mean that once a backup task is triggered and the zip/dump file is created, it won't be sent to the backup bucket on S3 if its size is below the specified value? I noticed that especially the MongoDB and Redis backups are under 1 MB in the beginning, but they should still be sent to the bucket. Can I simply set the minimum to 0 MB to force all backups to be sent to S3?
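For clarity, this is how I currently read the minimum-size rule (my own demo, not kobo-docker's actual code): files below the threshold are simply not counted as valid backups when rotating, roughly like filtering on file size:

```shell
# Demo of my reading of "backup minimum size" (NOT kobo-docker's actual
# code): files below the threshold are ignored during rotation.
demo="$(mktemp -d)"
cd "$demo"
truncate -s 300K mongo-2020-01-01.gz   # a tiny early backup
truncate -s 5M   mongo-2020-02-01.gz   # a normal-sized backup
# List only files of at least 1 MiB -- in my reading, these are the
# only ones the rotation would take into account:
find . -name 'mongo-*.gz' -size +1023k
# ./mongo-2020-02-01.gz
```

Is that interpretation correct, or does the threshold also prevent the upload to S3 in the first place?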
Thanks and Regards