Love it, it's brilliant, but I think the rate-limiting logic isn't doing what the author really wants: it actually costs more CPU to detect the limit and produce the error than to return the regular response. (Then my mind goes to how to actually over-optimize this thing, but that's another story :-D)
It looks like it's limited to 10 requests per minute; that's less of a hug and more of a gentle brush past.
It's documented as "Per IP", but I'm willing to bet that either the documentation is wrong or it's picking up the IP address of the reverse proxy (or whatever else is in front of the application server) rather than the originator's IP.
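To show why the check itself should be nearly free, here's a minimal token-bucket sketch in Python; the header handling is an assumption about a typical Cloudflare-style setup, not this project's actual code:

    import time

    RATE = 10 / 60.0   # refill rate: 10 tokens per minute
    BURST = 10         # bucket capacity
    buckets = {}       # ip -> (tokens, last_seen)

    def client_ip(headers, peer_addr):
        # Behind a reverse proxy, peer_addr is the proxy's address, not the
        # client's. Cloudflare sets CF-Connecting-IP; generic proxies prepend
        # the client to X-Forwarded-For.
        forwarded = headers.get("X-Forwarded-For", peer_addr)
        return headers.get("CF-Connecting-IP") or forwarded.split(",")[0].strip()

    def allow(ip, now=None):
        # O(1) per request: one dict lookup and a little arithmetic, which is
        # almost certainly cheaper than building the 429 error response.
        now = time.monotonic() if now is None else now
        tokens, last = buckets.get(ip, (BURST, now))
        tokens = min(BURST, tokens + (now - last) * RATE)
        allowed = tokens >= 1
        buckets[ip] = (tokens - 1 if allowed else tokens, now)
        return allowed

Caveat: trusting X-Forwarded-For is only safe if the proxy is the sole way to reach the app; otherwise clients can spoof it.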
ah, yes, the "memory is no object" way of obtaining a weighted distribution. If you need that sweet sweet O(1) selection time, maybe check out the Alias Method :)
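To make the comparison concrete: the "memory is no object" trick repeats each index weight-many times in a big array and samples it uniformly, while the alias method pays O(n) once to build two small tables and then samples in O(1) with O(n) memory. A sketch of Vose's variant in Python (the weights are just an example):

    import random

    class AliasSampler:
        # Vose's alias method: O(n) setup, O(n) memory, O(1) sampling.
        def __init__(self, weights):
            n = len(weights)
            total = float(sum(weights))
            scaled = [w * n / total for w in weights]
            self.prob, self.alias = [1.0] * n, [0] * n
            small = [i for i, p in enumerate(scaled) if p < 1.0]
            large = [i for i, p in enumerate(scaled) if p >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                self.prob[s], self.alias[s] = scaled[s], l
                scaled[l] -= 1.0 - scaled[s]
                (small if scaled[l] < 1.0 else large).append(l)

        def sample(self):
            # Pick a column uniformly, then flip that column's biased coin.
            i = random.randrange(len(self.prob))
            return i if random.random() < self.prob[i] else self.alias[i]

    sampler = AliasSampler([5, 1, 1])  # index 0 comes up ~5/7 of the time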
A single large file is also sadness for incorporating suggestions from collaborators, as you're always dealing with merge conflicts. Better might be a folder of plain text files, where each file can have multiple lines and they're grouped by theme or contributor or something.
A folder of plain text files will be sadness for performance. It's a file with basically line-wise entries; merge conflicts in that are dead easy to resolve with Git locally. It won't be single-click in GitHub, but it's not too much of a hassle.
In fairness, I doubt most of these kinds of meme projects have a maintainer active enough to be willing to conduct local merges, even if it's "dead easy" to do so.
Maybe then this is really a request for GitHub to get better/smarter merge tools in the web UI, particularly syntax-aware ones for structured files like JSON and YAML. There it would be much easier to guess, or even just present AB and BA as the two concrete options when both changes insert new content at the same point. It could even read your .gitattributes file for supported merge drivers that can telegraph "I don't care about the order" or "Order new list entries alphabetically" or whatever; see the sketch below.
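Some of that already exists in plain Git, to be fair: the built-in union merge driver is exactly the "I don't care about the order" case, keeping both sides' lines on conflicting inserts instead of stopping for a human. A hypothetical .gitattributes for a repo of line-wise text files (the paths are made up, and whether GitHub's web UI would ever honor it is the open question):

    # .gitattributes (paths are made-up examples)
    lines.txt   merge=union
    data/*.txt  merge=union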
I made a lot of things like this as a noob and threw them up on github.
As you gain experience, these projects become a testament to how far you've come.
"An http endpoint that returns a random array element" becomes so incredibly trivial that you can't believe you even made a repo for it, and one day you sheepishly delete it.
I don't think things have to be impressive to be shown. A funny little idea is all you need, no matter how simple the code. Actually I find exactly that quite neat.
Not exactly what you're asking for, but it reminded me that Toxiproxy[0] exists if you want to test your applications, or even HTTP clients, against various kinds of failures.
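For anyone who hasn't tried it: Toxiproxy is a small TCP proxy you drive over a local HTTP API (port 8474 by default), so injecting a failure is a couple of requests. A rough Python sketch; the proxy name, ports, and upstream here are invented for illustration:

    import requests

    API = "http://localhost:8474"  # Toxiproxy's default control port

    # Create a proxy: clients connect to :21212, traffic forwards to the real service.
    requests.post(f"{API}/proxies", json={
        "name": "flaky_api",
        "listen": "127.0.0.1:21212",
        "upstream": "127.0.0.1:8080",
    }).raise_for_status()

    # Add a toxic: ~2s of latency on responses. Point your HTTP client at :21212 to feel it.
    requests.post(f"{API}/proxies/flaky_api/toxics", json={
        "type": "latency",
        "stream": "downstream",
        "attributes": {"latency": 2000, "jitter": 500},
    }).raise_for_status()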
I understand that one wants some rate limiting so that others don't just use this as a backend for their own service, causing every single request to their service to also create an API request here.
But this is about as simple and resource-light as it gets for an HTTP server. 10 requests per minute is just silly.
Also, could it be that the limit isn't enforced against the origin IP address but against the Cloudflare reverse proxy as a whole?
10 requests per minute per IP is plenty to play around with and have a little fun. For anything more than that, you could (should!) host it yourself.
I guess it still works.