CountGeek's comments

It has that. Select the media you want to delete, tap & hold, then scroll right in the menu and select "Delete from device". At least on Android, that's how it works.


Jellyfin and Jellyseerr on a QNAP TS-464 run perfectly well, even for serving 4K x265.


So could I, in practice, train it on all my psychology books, materials, reports, case studies and research papers, and then run it on demand on a 1xH100 node - https://getdeploying.com/reference/cloud-gpu/nvidia-h100 - whenever I have a specialised question?


You could indeed, but the performance would be abysmal. For this kind of use case, it would be a LOT better to use a small pre-trained model and either fine-tune it on your materials, or use some kind of RAG workflow (possibly both).
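
A minimal sketch of what the RAG side could look like, assuming the sentence-transformers library; the chunks, model name and prompt here are purely illustrative:

  # Embed the corpus once, then retrieve the chunks most relevant
  # to a question and paste them into the prompt of a pre-trained model.
  from sentence_transformers import SentenceTransformer, util

  model = SentenceTransformer("all-MiniLM-L6-v2")

  # In practice: chunks extracted from your books/reports/papers.
  chunks = [
      "CBT shows strong effect sizes for anxiety disorders...",
      "The case study reported improved outcomes after...",
  ]
  chunk_emb = model.encode(chunks, convert_to_tensor=True)

  def retrieve(question, k=3):
      q = model.encode(question, convert_to_tensor=True)
      hits = util.semantic_search(q, chunk_emb, top_k=k)[0]
      return [chunks[h["corpus_id"]] for h in hits]

  context = "\n".join(retrieve("What does the evidence say about CBT?"))
  prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
  # `prompt` then goes to whichever pre-trained model you run.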


> it would be a LOT better to use a small pre-trained model and either fine-tune it on your materials, or use some kind of RAG workflow (possibly both).

I noticed NewRelic has a chat feature that does this sort of thing. It's scoped very narrowly to their website and their analytics DSL, and generates charts/data from their db. I've always wondered how they did that (specifically in terms of setting up the training/RAG + guardrails). It's super useful.


You might be able to figure that out just by asking it - see if you can get it to spit out a copy of the system prompt or tell you what tools it has access to.

The most likely way of building that would be to equip it with a "search_docs" tool that lets it look up relevant information for your query. No need to train an extra model at all if you do that.
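
For illustration, the tool definition could be as simple as the sketch below. The schema follows the OpenAI-style function-calling format; "search_docs" itself and its backing index are hypothetical:

  # The model decides when to call search_docs; your code runs the
  # search and feeds the result back into the conversation.
  tool = {
      "type": "function",
      "function": {
          "name": "search_docs",
          "description": "Search the product docs and the analytics DSL reference.",
          "parameters": {
              "type": "object",
              "properties": {
                  "query": {"type": "string", "description": "Search terms."},
              },
              "required": ["query"],
          },
      },
  }

  def search_docs(query: str) -> str:
      # Would hit your docs index (keyword or vector search) and
      # return the top matching snippets as plain text.
      ...

The guardrails part can then live mostly in the system prompt plus whatever filtering you do on the tool's output.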


You could, but it would be significantly worse than fine-tuning or RAG with a pre-trained model, or than using a smaller model, since your dataset would be so small.


Yes, though it's possible that a more general core model, enhanced with some other way of bringing those texts of interest into the working context, might perform better.

Those other ways to integrate the texts might be some form of RAG or other ideas like Apple's recent 'hierarchical memories' (https://arxiv.org/abs/2510.02375).


You could! But as others have mentioned, the performance would be poor. If you really wanted to see a performance boost from pretraining, you could try to build a larger training set, either by generating synthetic data from your material or by collecting information adjacent to it. Here's a good paper about it: <https://arxiv.org/abs/2409.07431>
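
For a concrete idea of the synthetic-data route, a sketch along these lines; the openai client call exists, but the model name and prompt are just placeholders:

  # Ask a strong model to write Q&A pairs grounded in each chunk of
  # your material, then use those pairs as extra training data.
  from openai import OpenAI

  client = OpenAI()

  def make_qa_pairs(chunk: str, n: int = 5) -> str:
      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{
              "role": "user",
              "content": f"Write {n} question/answer pairs grounded in "
                         f"this passage:\n\n{chunk}",
          }],
      )
      return resp.choices[0].message.content

  # Run this over every chunk of your corpus and save the output
  # as JSONL for training.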


No.



We remove plastic and instead poison ourselves.

  A 2023 Belgian study[0] tested 39 brands of straws (paper, bamboo, glass, stainless steel, and plastic):

  Paper and bamboo straws most frequently contained PFAS, sometimes at high levels.

  Plastic straws also contained PFAS, but less consistently.

  Stainless steel straws were PFAS-free in that study.

[0] https://www.europarl.europa.eu/doceo/document/E-9-2023-00268...


I once drank from a pasta straw; that should also be PFAS-free. Though hot liquids might cook the pasta.


I have tried Kagi for a month and it has been good: clean results, on point for my basic searches compared to Google. Though not much different from using my self-hosted SearXNG...

I am now trying to decide if I want it for the standard AI models, or if I can keep scraping by on the free (restricted) offerings from various providers. There is a difference in response quality between ChatGPT and Kagi. I guess that's what you get when using an agent...


Leaving a negative review is not outright illegal, but it can lead to legal consequences if deemed defamatory or false. If a review contains unsubstantiated claims or personal attacks, the business can pursue legal action for defamation.

The nuances of these laws can vary, so context matters significantly in legal interpretations.


Come if you want:

- 56% income tax

- housing shortage, with landlords charging 1.5k EUR for just a room

- ever-increasing public transport (train) unreliability and degrading conditions (shorter carriages during peak times)

- increasing health insurance costs with fewer benefits

- diversity, inclusion and acceptance by the community

- directness (although it's a fine line between being direct and rude, so YMMV)

- poldermodel (things may take a few iterations to get done)


California has a 45% tax rate (marginal effective combined federal/state, for any nitpickers), an even higher cost of living, and no health insurance at all by default.


Are you trying to keep it to yourself? Obviously there are some very good aspects and people want to live there.


A while back this was posted on HN - https://github.com/docmost/docmost/


