Don't feel that way. I initially felt I had to jump on the bandwagon and have a nice, link-filled README on my profile, but I ended up removing it all. The profile page is already good: it has your avatar, a short description from your profile, your group memberships, a list of your repos, and a good graphical representation of your work on GitHub.
It is a GitHub profile, after all. A personal webpage is better for all the other information about you, and you can just put that link in your profile description.
It's a new addition, so I'm giving it time to see how it gets used in the wild. And although I kinda like OP's take, I'm not so sure about the ones with giant images, GIFs, and too much text. Maybe it's just like this now because it's a shiny new toy and people are pushing its boundaries, and it will die down later.
I'm lightly skewing towards just hiding it with a custom rule and forgetting about them. I visit a GitHub profile to quickly see what the developer codes, not a showcase of their personality, which, as you said, would be better served by a personal website.
Well, if you really need the autoscale feature to handle sudden and ludicrous spikes of traffic, you will most likely also use a database that autoscales too, e.g. DynamoDB.
For most applications, it's really not needed though.
Absolutely agree. I tried it out for a short personal project and was disappointed with the poor documentation and the even worse libraries I found. I'm assuming large companies have their own internal libraries/ORMs for this, and that's how it is intended to be used.
In terms of the actual technology itself, it's built in a very interesting way, though, as the poster above me mentioned, with a ton of caveats to the promises it makes.
It is one of the very few technologies where I need to grab pen and paper and some quiet time to decide on a table schema. And most of the time I need to redo it after new query requirements come up.
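As a sketch of what that pen-and-paper exercise tends to produce (this is my own toy illustration, not anything from the parent's project): you enumerate the access patterns first, then derive partition/sort keys from them, because in DynamoDB the keys *are* the query plan.

```python
# Hypothetical single-table design for a tiny shop.
# Access patterns decided up front:
#   1. fetch a customer by id
#   2. list a customer's orders, newest first
# Both map onto one table keyed by (PK, SK).

def customer_item(customer_id: str, name: str) -> dict:
    """Item for access pattern 1: PK = CUSTOMER#<id>, SK = PROFILE."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": "PROFILE", "name": name}

def order_item(customer_id: str, order_id: str, placed_at: str) -> dict:
    """Item for access pattern 2: orders share the customer's PK and
    sort under SK = ORDER#<iso-timestamp>, so a Query on the PK with
    descending sort order returns them newest first."""
    return {
        "PK": f"CUSTOMER#{customer_id}",
        "SK": f"ORDER#{placed_at}#{order_id}",
        "order_id": order_id,
    }
```

The pain the comment describes is exactly that a *new* access pattern (say, "all orders across customers in a date range") often can't be answered by these keys at all, and forces a secondary index or a re-keyed table.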
"DynamoDB uses synchronous replication across multiple data centers for high durability and availability"
This does not seem scalable for OLTP-type load on some busy store. Again, I think you'll be way better off money-wise hosting your own database on real hardware, either on-prem or colocated.
You're forgetting the specifics here and the amount of hardware resources thrown at it. I've already said that I'm not discussing FB/Google/Amazon-scale stuff here. That is their problem, and not one shared by more regular deployments.
is there any transactional storage solution that actually handles a traffic spike well?
my understanding is that most storage solutions scale well when planned for, but scaling during a spike only puts more stress on the current replicas/partitions for the whole time needed to boot and shift data toward the new replicas/partitions, and the new instances don't help carry the load until the process is complete.
so they work if you can predict a spike, but they can't handle a sudden one, especially if the data to replicate/partition is sizeable.
You can look up words on the Kindle. And if you highlight them, there are ways to access the highlighted sentences in a machine-friendly way (e.g. to export them to Anki).
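For anyone curious about the machine-friendly part: highlights made on the device end up in a plain-text file (My Clippings.txt), with entries separated by a line of equals signs. A minimal parsing sketch; the format is undocumented, so treat the exact field layout here as an assumption:

```python
SEPARATOR = "=========="

def parse_clippings(text: str) -> list[dict]:
    """Split a My Clippings.txt dump into entries. Assumed entry shape:
    title line, metadata line (location/date), blank line, highlight text."""
    entries = []
    for raw in text.split(SEPARATOR):
        lines = [line.rstrip() for line in raw.strip().splitlines()]
        if len(lines) < 3:
            continue  # trailing junk between separators
        entries.append({
            "title": lines[0],
            "meta": lines[1],   # e.g. "- Your Highlight on page 12 | Added on ..."
            "text": "\n".join(lines[2:]).strip(),
        })
    return entries
```

From there, turning each entry into an Anki note is just a matter of mapping `title`/`text` onto your note type's fields.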
I did something similar for myself using mpv's scripting interface (for Japanese immersion).
I can press a button and have the current subtitle analyzed for words with mecab, or press another button and have audio + screenshot + text sent to Anki for review.
mpv lets you seek to the previous/next subtitle, so even relistening to a particular phrase is really convenient.
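For the Anki end of a pipeline like this, the usual route is the AnkiConnect add-on, which accepts JSON actions over HTTP on localhost:8765. A sketch of the addNote payload such a button press could send; the deck, model, and field names below are made-up placeholders, not the parent's actual setup:

```python
import json

def build_add_note(sentence: str, audio_filename: str, image_filename: str) -> str:
    """JSON body for AnkiConnect's addNote action (API version 6).
    Deck/model/field names are hypothetical; match your own collection."""
    note = {
        "deckName": "Japanese::Mining",    # assumed deck name
        "modelName": "Japanese Sentence",  # assumed note type
        "fields": {
            "Sentence": sentence,
            "Audio": f"[sound:{audio_filename}]",
            "Screenshot": f'<img src="{image_filename}">',
        },
    }
    return json.dumps({"action": "addNote", "version": 6,
                       "params": {"note": note}})

# The result would be POSTed to http://127.0.0.1:8765 with Anki running;
# the media files themselves go into Anki's collection.media folder.
```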
Indeed, you get high-quality in-context cards, something that is still missing from most Anki decks.
The code is here: https://github.com/pigoz/mpv-nihongo. Keep in mind that a lot of stuff is hardcoded and poorly documented, and it requires an mpv patch to work.
If there's enough interest, I can try and make it simple for someone that's not me to use.