After a few performance improvements, I ran some load-time tests for #serci between my home network (wifi) and the VPS where the service is running.

echo "html,redirect" | tee timestamps.csv
for i in $(seq 1 1 100); do
    HTML=$(curl -o /dev/null -s -w '%{time_total}' https://search.jayvii.de)
    REDI=$(curl -o /dev/null -s -w '%{time_total}' https://search.jayvii.de?q=test)
    echo "$HTML,$REDI" | tee --append timestamps.csv
done

Across 100 runs, loading the site's HTML (generated from pure #PHP) takes 0.28s on average, while processing the input and issuing the redirect to a chosen service (here the default #MetaGer) takes 0.13s.
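
For the record, the averages can be computed straight from timestamps.csv, for example with awk (a quick sketch, assuming the CSV layout produced by the loop above):

# skip the header row, then average both columns
awk -F, 'NR > 1 { html += $1; redi += $2; n++ }
    END { printf "html: %.2fs, redirect: %.2fs\n", html / n, redi / n }' timestamps.csv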

I am quite happy with this relatively low overhead, although performance may decrease a little as more services are added (currently: 47). At some point an #sqlite database might be more efficient than my pre-constructed #json files, which are loaded on demand.
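
To illustrate what that switch would change (the file name, key and schema below are made up for this sketch, not how #serci actually stores its data): right now a lookup loads and parses a whole #json file, whereas a single indexed #sqlite query would fetch just one row:

# current approach (sketch): read and parse one JSON file per lookup
jq -r '.redirect' services/metager.json

# possible alternative: one indexed query against a single database
sqlite3 services.db "SELECT redirect FROM services WHERE name = 'metager';"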

btw, I did check whether the order of the curl calls in the loop above matters: the results stay the same, so caching is not a factor.
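
Concretely, that check is just the loop from above with the two calls swapped:

# same measurement, redirect first, to rule out caching effects
for i in $(seq 1 100); do
    REDI=$(curl -o /dev/null -s -w '%{time_total}' 'https://search.jayvii.de?q=test')
    HTML=$(curl -o /dev/null -s -w '%{time_total}' 'https://search.jayvii.de')
    echo "$HTML,$REDI"
done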

Also: please do not ping my server with the above scripts yourself, thanks.

The move to a different VPS actually improved the HTML-building performance of #serci substantially, from the 0.28s above down to 0.19s over 100 trials (this might also have to do with #PHP updates and a smarter Apache configuration). Redirection timing does not differ. This is likely close to the minimal overhead I can achieve.