Import and export failures on non-humanitarian server

Hello, everyone, and thank you for your patience. My colleague @nolive and I have been working this morning (and continue to work) to resolve the issue on the non-humanitarian server that has been preventing most imports and exports from completing successfully. It has also delayed the POSTing of most submissions to external REST Services servers.

A brief explanation: requests made by a browser need to return in a relatively short amount of time, but not all tasks can complete that quickly. Some examples include:

  • Clicking the export button, where the server needs to acknowledge quickly that the export request was received, but the export itself may take up to half an hour for a large data set.
  • Receiving a submission, where the server needs to store the data and acknowledge its receipt immediately, but also needs to send a copy to the appropriate REST Services external servers, which may be slow or require multiple retries.
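To illustrate the pattern the examples above describe, here is a minimal, hypothetical sketch of "acknowledge fast, do the work later." The handler names and queue structure are assumptions for illustration, not KoboToolbox's actual code:

```python
# Hypothetical sketch: respond to the browser immediately, defer the
# slow work to a queue that worker processes drain in the background.
from collections import deque

background_queue = deque()  # workers drain this outside the request cycle

def handle_export_request(project_id):
    # Acknowledge quickly; the export itself may take up to half an hour.
    background_queue.append(("export", project_id))
    return {"status": 202, "detail": "export queued"}

def handle_submission(project_id, payload):
    # Store and acknowledge immediately; forward a copy to the
    # configured REST Services endpoints later (with retries).
    background_queue.append(("rest_post", project_id, payload))
    return {"status": 201, "detail": "submission received"}

print(handle_export_request("demo"))  # → {'status': 202, 'detail': 'export queued'}
print(len(background_queue))          # → 1
```

The browser gets its answer within its timeout window either way; only the queued task's duration differs.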

Our infrastructure currently has a single queue for all such tasks, with many worker programs running simultaneously to complete any jobs that appear in it. The issue in this case was a very large project (over 7 million submissions) with a very slow (or even unresponsive) external REST Services server, which overwhelmed the workers and filled the queue with over 70,000 tasks. This effectively crowded out everyone else’s tasks, manifesting as the import and export failures that you’ve likely seen if you use this server.
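The head-of-line blocking described above can be reproduced with a toy model. This is an illustrative sketch, not our production code; the task labels are made up:

```python
# Hypothetical sketch: with one shared FIFO queue, a flood of slow
# tasks from a single project delays every other user's jobs.
from collections import deque

queue = deque()

# One project enqueues a huge batch of slow REST Services deliveries...
for i in range(70_000):
    queue.append(("rest_post", f"big-project-submission-{i}"))

# ...before another user's export request arrives.
queue.append(("export", "small-project-export"))

# Workers pop strictly in FIFO order, so the export must wait behind
# all 70,000 REST deliveries, however slow they are.
position = [task for task, _ in queue].index("export")
print(position)  # → 70000 tasks ahead of the export
```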

We are working first to get through the backlog of tasks, which should be done in the next 15-20 minutes (current time is 15:13 UTC on 8 September) [update: this completed on schedule; all service should now be normal]. Following that, we will urgently work to separate the queues so that any future REST Services issue does not impact other tasks like imports and exports. Once that is complete, we will work on a fairness algorithm to make sure one REST Service cannot dominate the queue and prevent all other REST Services from working properly.
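As a rough sketch of what those two fixes might look like (assumed names and data structures for illustration only; not the actual implementation): separate queues keep REST Services backlogs away from imports/exports, and round-robin draining of per-service sub-queues keeps one slow endpoint from starving the rest:

```python
# Hypothetical sketch of (1) queue separation and (2) per-service fairness.
from collections import deque

# (1) Dedicated queues per task type: a REST backlog can no longer
# block imports and exports.
queues = {"imports_exports": deque(), "rest_services": deque()}
queues["imports_exports"].append(("export", "small-project"))  # unaffected below

# (2) Within REST Services: one sub-queue per external endpoint,
# drained round-robin.
rest_by_service = {}     # endpoint URL -> its own sub-queue
service_order = deque()  # rotation order for round-robin draining

def enqueue_rest(url, submission):
    if url not in rest_by_service:
        rest_by_service[url] = deque()
        service_order.append(url)
    rest_by_service[url].append(submission)

def next_rest_task():
    """Take at most one task per service per turn, so a single slow
    endpoint with a huge backlog cannot starve the others."""
    for _ in range(len(service_order)):
        url = service_order[0]
        service_order.rotate(-1)
        if rest_by_service[url]:
            return url, rest_by_service[url].popleft()
    return None

# The slow endpoint has a huge backlog; a healthy one has a single task.
for i in range(1000):
    enqueue_rest("https://slow.example.com", f"sub-{i}")
enqueue_rest("https://healthy.example.com", "sub-A")

# Despite the backlog, the healthy service is served on the very next turn.
first = next_rest_task()
second = next_rest_task()
print(first[0])   # → https://slow.example.com
print(second[0])  # → https://healthy.example.com
```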

Thanks again,

John Milner
Lead Developer


Sorry, but how can 7 million submissions come from a (private) user project? Shouldn’t we restrict that level of extensive free usage and focus support on the humanitarian projects?

Our own server can be used by everyone else, but we ask each user to stay below the limit of 10,000 submissions as well as 5GB of file uploads per user per month. We might increase this number in the future, but for the time being we want to avoid a small number of very heavy users slowing down the server and degrading the experience for everyone else.
See also the Help Center article:
Which Server Should I Use? — KoboToolbox documentation

Furthermore, in my opinion, users with such heavy demands should use their own server instead of straining shared community resources and KoBo staff time.

Kudos to you @jnm, @nolive, and all of the team :clap: :clap:


@jnm, we sincerely apologize for this outage. We were unable to detect the issue immediately, as this was the first time it had happened. We have been able to resolve the issue on our servers and promise it won’t happen again.


Hello @leksyde,
Would you mind also explaining to the community, please:

  • why you are using the KoBo kc server for such an amount of data (the normal limit is 10,000)?
  • why you are not using your own server directly, to which you seem to copy almost all submissions via REST Services?
  • whether you will change this situation and comply with the suggested KoBo limits of “10,000 submissions as well as 5GB file uploads per user per month”?

Hello @wroos, sincere apologies to everyone and the community.
We are currently working on deploying to our own servers and will be migrating soon; the project is being used to collect vital COVID-19-related data from the field (for which KoBo made some allowance).

Sincere apologies once again.
@jnm, sincere apologies; we got the email and have responded.

Well done @jnm, @nolive, and all of the team!


This queue separation is now complete; see Release Notes - version 2.022.24, 2.022.24a, 2.022.24b, 2.022.24c, 2.022.24d - #10 by nolive.


Hi @wroos,

Thanks for being protective of the time and energy of the community and core team. The following point is valid:

The blanket COVID-related quota-exception offering ended as of March 28, 2022, and owners of future projects planning to exceed the limits do need to make contact in advance to set up an arrangement.

For what it’s worth, the REST Services queue issue has presented itself (to a lesser extent) elsewhere as well, and queue separation was already on our list of necessary improvements. The priority of that task got an abrupt boost!

Best regards to all.