msuchy at redhat.com
Fri Jul 8 14:45:49 CEST 2016
I use cgit as the web UI for the dist-git of Copr. There are 136000 git repositories (and growing).
My problem is that no matter how aggressively caching is configured in /etc/cgitrc, it takes an enormous
amount of time to generate the initial /var/cache/cgit/rc-* file, which holds those "repo.*" configurations. And by enormous I mean 30 minutes.
I came up with one solution: set the TTL to 2 hours and regenerate the cgitrc from cron every hour. This way the cgitrc
will never be generated by a user coming in via an httpd request.
I can generate that cgitrc manually in a cron job by running:
CGIT_CONFIG="/etc/cgitrc" /var/www/cgi-bin/cgit >/tmp/x.html
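The hourly cron job could look along these lines (a sketch; the file name /etc/cron.d/cgit-refresh and the >/dev/null redirect are my assumptions, not anything cgit ships):

```
# /etc/cron.d/cgit-refresh (hypothetical): pre-warm the cgit cache hourly
# so the rc-* file is never regenerated on a user-facing httpd request.
0 * * * * root CGIT_CONFIG="/etc/cgitrc" /var/www/cgi-bin/cgit >/dev/null
```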
The problem is that even with --nocache it does not refresh the existing /var/cache/cgit/rc-* file. The only way to refresh
the cgitrc file is to wait until it becomes older than the TTL, or to delete it. But until it is regenerated, users who access
my server will take it down by filling all the Apache slots with running cgit processes (each of which will traverse all the git repositories).
I am thinking about implementing a new option, e.g. --update-scan-path, which will force cgit to scan 'scan-path', create
the included cgitrc file in a tempfile, and at the end remove the original /var/cache/cgit/rc-* and rename the newly
created cgitrc to that rc-* file. So it will be a nearly atomic operation.
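The tempfile-then-rename step can be sketched in shell (all paths and file names here are illustrative stand-ins; the real rc-* name is derived by cgit itself, and the new option would do this internally in C):

```shell
#!/bin/sh
# Sketch of the atomic-replace pattern the proposed --update-scan-path
# option would use: write the regenerated repo list to a tempfile on the
# same filesystem, then rename it over the live rc-* file, so readers
# always see either the old or the new file, never a half-written one.
set -e
dir=$(mktemp -d)                       # stand-in for /var/cache/cgit
cache="$dir/rc-example"                # hypothetical rc-* cache file
echo 'repo.url=old' > "$cache"         # pretend this is the stale cache

tmp=$(mktemp "$dir/rc.XXXXXX")         # same directory => same filesystem
printf 'repo.url=demo\nrepo.path=/srv/git/demo.git\n' > "$tmp"
mv -f "$tmp" "$cache"                  # rename(2) is atomic on POSIX
```

A concurrent httpd-spawned cgit either reads the complete old file or the complete new one; it never blocks on, or observes, the in-progress scan.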
If you agree, I can prepare patch next week.
Miroslav Suchy, RHCA
Red Hat, Senior Software Engineer, #brno, #devexp, #fedora-buildsys