As traffic increased, LFQR and LFNY started attracting more and more malicious users.
As a reminder, LFQR is my small free tool for creating "dynamic" QR codes: the code always points to the same URL, so the destination behind it can be updated without reprinting anything. LFNY is my tool for creating custom shortened links.
The thing is, both services are completely free, and maintaining security on free services... is hard.
First, at signup and login there's a Cloudflare Turnstile challenge, which helps limit bot registrations. Then there's the classic email verification. But none of that stops people who actually want to run phishing campaigns.
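For context, the server-side part of that check is tiny. Here's a minimal sketch of verifying a Turnstile token against Cloudflare's documented siteverify endpoint, assuming a Python backend; the function and variable names are mine, not the actual LFQR/LFNY code.

```python
import requests

TURNSTILE_VERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def verify_turnstile(token: str, secret_key: str, remote_ip: str | None = None) -> bool:
    """Check the Turnstile token the signup form sent against Cloudflare."""
    payload = {"secret": secret_key, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    resp = requests.post(TURNSTILE_VERIFY_URL, data=payload, timeout=10)
    resp.raise_for_status()
    return resp.json().get("success", False)
```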
So at one point I banned the countries the abuse was coming from, and then I spent time every day manually reviewing links for anything malicious. It took forever, but it kept the platform clean. I knew this approach had its limits, though, so I had an idea: what if I used an LLM and gave it the ability to browse a URL to see what it actually contains?
That's exactly what I implemented. Now when a link is added, it enters a queue and gets analyzed by an LLM that can access a headless Chrome browser to see what the target contains.
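To give an idea of what that worker looks like, here's a rough sketch: render the submitted URL in headless Chromium (via Playwright here), then hand the page text to an LLM for a verdict. The queue plumbing is omitted, and the provider, model name, prompt, and function names are all my assumptions, not the exact production setup.

```python
from playwright.sync_api import sync_playwright
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; provider choice is illustrative

def fetch_page_text(url: str, timeout_ms: int = 15000) -> str:
    """Render the target in headless Chromium and grab the visible text."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        try:
            page.goto(url, timeout=timeout_ms, wait_until="domcontentloaded")
            text = page.inner_text("body")
        finally:
            browser.close()
    return text[:8000]  # keep the prompt small

def classify_link(url: str) -> str:
    """Ask the model whether the link looks malicious. Returns 'ALLOW' or 'BLOCK'."""
    try:
        page_text = fetch_page_text(url)
    except Exception:
        page_text = "(page could not be loaded, judge from the URL alone)"
    prompt = (
        "You moderate a free link-shortening service. Given a URL and the text of "
        "the page it points to, answer with exactly ALLOW or BLOCK.\n\n"
        f"URL: {url}\n\nPage text:\n{page_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is a placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```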
Obviously some anti-bot protections block the browser, but honestly, you can often tell from the URL alone whether a link is malicious. For example, a URL that already comes from another link-shortening platform: why would anyone need to shorten that again?
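That kind of check doesn't even need the LLM. A cheap pre-filter along these lines can flag nested shorteners before the deeper analysis runs; the domain list and function name here are just illustrative.

```python
from urllib.parse import urlparse

# Illustrative list only; a real deployment would maintain a much larger one.
KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "is.gd", "cutt.ly"}

def is_nested_shortener(url: str) -> bool:
    """Flag links that just wrap another shortener, a common phishing pattern."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    return host in KNOWN_SHORTENERS

# Example: is_nested_shortener("https://bit.ly/abc123") -> True
```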
The system has been in production for a few weeks now and it's working pretty well. I check in from time to time, and so far no false positives.