
JayVii

If I were to transform yt2html to pure #PHP (fetching feeds dynamically on each load rather than creating HTML files) instead of taking the shortcut through my years-old #Rstats script, this could also run on a webhost.

The only things I'd need a proper #VPS for would then be ktistec (i.e. social.jayvii.de) and stagit (src.jayvii.de) as well as my rss reader miniflux. All of the rest are either static content or simple PHP scripts (typically even DB-less): yt2rss, yt2html, tw2html, serĉi, pastesrv, ...

On the other hand: if I do need a VPS either way, I might as well throw everything onto it. With the exception of ktistec, those are all super low traffic anyway (and even ktistec's traffic is quite low).

JayVii

I finally migrated almost all my webtools to SimpleCSS, for example the #goaccess dashboard for privacy-preserving web traffic measurements:

https://src.jayvii.de/pub/goaccess_dashboard/ (see it in action here: traffic.jayvii.de)

Or other tools such as a barebones youtube-feed yt2html, twitch-feed tw2html, youtube-to-podcatcher service yt2rss, ...

I really like the idea of simple and classless #CSS. It helped me improve the visuals of my #PHP services quite a lot!

JayVii

After a few performance improvements, I did some loading time tests for #serci, between my home network (wifi) and the VPS where the service is running.

echo "html,redirect" | tee timestamps.csv
for i in $(seq 1 1 100); do
    HTML=$(curl -o /dev/null -s -w '%{time_total}' 'https://search.jayvii.de')
    REDI=$(curl -o /dev/null -s -w '%{time_total}' 'https://search.jayvii.de?q=test')
    echo "$HTML,$REDI" | tee --append timestamps.csv
done


Across 100 runs for loading the site's HTML (generated from pure #PHP) versus a redirect to a chosen service (here the default #MetaGer), I can measure on average 0.28s for loading the frontend (HTML) and 0.13s for processing input and issuing the redirect.
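For reference, those averages can be pulled straight from the resulting CSV, e.g. with awk (a throwaway sketch, not part of the original test):

```shell
# compute mean load times from the CSV produced by the loop above;
# column 1 is the HTML load, column 2 the redirect, header row skipped
awk -F, 'NR > 1 { html += $1; redi += $2; n++ }
         END { printf "html: %.2fs, redirect: %.2fs\n", html / n, redi / n }' timestamps.csv
```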

I am quite happy with this relatively low overhead, although performance may decrease a little as more services are added (currently: 47). At some point, an #sqlite database may be more efficient than my pre-constructed #json files, which are loaded on demand.

JayVii

My keyword-based meta² search serĉi learned some new tricks and is a bit easier to use now!

Besides several categories you can browse to find keywords more easily, the backend has changed quite a bit as well! Both keyword-arrays and category-arrays are now pre-generated from the configuration. This keeps it easy to configure while still maintaining very high performance and few for-loop cycles.

Everything in pure #PHP and without database, logging, cookies, etc. Give it a try and send me some feedback!
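The pre-generation idea translates to other languages, too; here is a rough POSIX-shell sketch (serĉi itself does this with PHP arrays, and the config format, keywords, and URLs below are invented for illustration):

```shell
# pre-generate a keyword -> URL lookup table from a simple config file,
# so each request is a single lookup instead of a loop over the full config
# (config format and entries are illustrative only)
cat > services.conf <<'EOF'
wp|https://en.wikipedia.org/wiki/Special:Search?search=
yt|https://www.youtube.com/results?search_query=
EOF

# "build step": run once at configuration time, not per request
sort services.conf > keywords.lookup

# "request": resolve a keyword with a single lookup
lookup() {
  grep "^$1|" keywords.lookup | cut -d'|' -f2
}

lookup wp
# prints https://en.wikipedia.org/wiki/Special:Search?search=
```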

JayVii

Lately, I've been using @MetaGer #MetaGer as my main #SearchEngine, but had been really missing bangs (known from #DuckDuckGo and other search engines).

So this morning I threw together a short #PHP site that I could use as a search frontend, which handles bangs and redirects me to the correct search engine (is this meta-meta? ;D):
https://search.jayvii.de/
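The core of such bang handling is just splitting off the prefix and redirecting; a minimal shell sketch of the logic (bang names and target URLs are examples, not necessarily what the actual site uses):

```shell
# sketch of bang-style dispatch: "!w foo" -> Wikipedia,
# "!yt foo" -> YouTube, anything else -> the default engine (MetaGer)
bang_redirect() {
  query="$1"
  case "$query" in
    "!w "*)  echo "https://en.wikipedia.org/wiki/Special:Search?search=${query#!w }" ;;
    "!yt "*) echo "https://www.youtube.com/results?search_query=${query#!yt }" ;;
    *)       echo "https://metager.org/meta/meta.ger3?eingabe=$query" ;;
  esac
}

bang_redirect "!w bash"
# prints https://en.wikipedia.org/wiki/Special:Search?search=bash
```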

If you find bugs or want to contribute, feel free to check out my git repo:
https://src.jayvii.de/pub/traserci

JayVii

I was playing with the idea of writing a very simple #selfhost|able webservice in #PHP where you can toss URLs at via a GET request, and which serves all previously collected URLs as an #RSS feed. That way I could add a very simple #ReadItLater functionality to my rss reader.
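A minimal sketch of that idea in shell: collect URLs in a flat file and emit them as a bare-bones RSS feed (file name and feed metadata are invented for illustration):

```shell
# readitlater.sh, illustrative sketch: append a URL to a flat-file store,
# or print the whole collection as a bare-bones RSS 2.0 feed
STORE="${STORE:-urls.txt}"

add_url() {
  echo "$1" >> "$STORE"
}

print_feed() {
  echo '<?xml version="1.0" encoding="UTF-8"?>'
  echo '<rss version="2.0"><channel><title>Read it later</title>'
  while IFS= read -r url; do
    echo "  <item><link>$url</link><title>$url</title></item>"
  done < "$STORE"
  echo '</channel></rss>'
}

# usage: add_url "https://example.com/article"; print_feed
```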

Turns out, the link-collection service I have been using for months (called linkding, check it out, it is great!) can do exactly that 😅