Hacker News | y42's comments

I can't believe that domain trading is still a thing. I'm sitting on a bunch of "nice" domains I could never imagine anyone actually buying for even 100 bucks... and here we are, 30k?

OK.


Reminds me of this:

https://institut-fdh.de/

but yours seems way better implemented - very nice!

(disclaimer: it's my site, shameless plug)


I like it. Yours definitely has that old-school Norton Commander vibe.

https://en.wikipedia.org/wiki/Norton_Commander


I like the clean style. I was working on something similar but never reached that level of readiness.

However, the first screen seems a little counterproductive. The user enters a paragraph and in return gets a "summary" - but it feels so long. It's probably okay... that was just my very first impulse.

And what remains unclear is the meaning of "Team Sync". What happens there? Is it a group chat, or does it send messages?

Speaking of... what about integrations with the likes of Teams, Slack, or Discord? I know that Teams offers a Meeting Summary; it would be great if you could store it directly in _sig_.


The screenshot feedback is fair. Updating it. The capture response should feel like "filed. here's where." not a report back at you. Working on it.

Team Sync isn't a group chat or a message sender. It's more like an approval step: you review what you've captured privately, decide what's actually worth sharing with your team, and publish that specific text to a shared knowledge base. Right now "publishing" means an abstracted git flow that pushes updates to a central repo on GitHub. Nothing goes to the team without you explicitly choosing it. The name could be clearer - that's useful feedback, thanks.
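In pseudo-code, that approve-then-publish gate might look roughly like this (a minimal sketch; the data shape and function names are my assumptions, not Sig's actual code - in the real product the final step is a commit/push to the central GitHub repo rather than an in-memory list):

```python
# Hypothetical sketch of Team Sync's approval step (names are assumptions).
private_captures = [
    {"text": "API rate limit is 100 req/min", "approved": True},
    {"text": "half-formed idea about caching", "approved": False},
]

def publish(captures):
    # Only items the user explicitly approved reach the shared knowledge
    # base; unapproved captures stay private. In the real flow this would
    # translate into a git commit pushed to the central repo.
    return [c["text"] for c in captures if c["approved"]]

shared_knowledge_base = publish(private_captures)
print(shared_knowledge_base)  # only the approved note is published
```

The point of the gate is that publishing is opt-in per item, not a sync of everything you captured.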

On integrations: Slack imports work today (you can pull exports into your context). Teams Meeting Summary is an interesting one. Right now you'd paste it in and Sig routes it. The summary gives you the factual scaffolding, then you add your layer on top. That's the part the transcript can't give you.


I like your blog and I can totally relate to this article - it's something I've wanted to write about for a couple of weeks now. :D

https://thoughts.jock.pl/p/adhd-ai-agent-personal-experience...


https://news.ycombinator.com/item?id=47894155

(I am just learning that "a couple of weeks" apparently means "2 weeks"...)


I've been trying Qwen3.5-9B-Claude-4.6 locally for a couple of days now, coming from OMLX, either via Hermes or Continue in VS Code. It's okay-ish, even performance-wise.

[1] https://huggingface.co/Jackrong/Qwen3.5-9B-Claude-4.6-Opus-R...


I dare to call myself a senior dev, so I don't need a replacement, I need a tool.

That's what I'm thinking, too. It sounds like a conspiracy theory, but in the end Anthropic et al. benefit from models that don't finish their jobs. I recently read about this "over-editing phenomenon": the machine is never done. It doesn't want to be.

It's like dating apps. They don't want you to find a good match, because then you cancel the subscription.


Which works fine, right up until China releases a new DeepSeek model that's 85% as capable as an Anthropic or OpenAI premium model but costs a fraction of what either of those US companies are charging.

Speaking of which:

https://www.cnbc.com/2026/04/24/deepseek-v4-llm-preview-open...


> consumer-grade hardware

Not disagreeing per se, but a quick look at the installation instructions confirms what I assumed:

Yeah, you can run a highly quantized version on your 2020 Nvidia GPU. But:

- When inferencing, it occupies your whole machine. At least you get a modern interactive heating feature for your flat.

- You need to follow eleven-thousand nerdy steps to get it running; my mum is really looking forward to that.

- Not to mention the pain of installing Nvidia drivers; nothing my mum will manage in the near future.

... and all this to get something that merely competes with Haiku.

Don't get me wrong - I am exaggerating, I know. It's important to have competition and the opportunity to run "AI" on your own metal. But this reminds me of the early days of smartphones and my old XDA Neo. Sure, it was damn smart, and I remember all those jealous faces because of my "device from the future." But oh boy, it was also a PITA maintaining it.

Here we are now. Running AI locally is a sneak peek into the future. But as long as you need a CS degree and hardware worth a small car to achieve reasonable results, it's far from mainstream. Therefore, "consumer-grade hardware" sounds like a euphemism here.

I like how we nerds live in our bubble celebrating this stuff while 99% of mankind still doomscrolls through Facebook, laughing at (now AI-generated) brain rot.

(No offense (ʘ‿ʘ)╯)


Why would you want to take your conversations with you? I use multiple different models; I don't care about the history.

My "brain" in terms of projects, is local on my computer. I have a simple set of system rules that I need to copy.

I am not everyone, I understand that. What I'm trying to say is: don't overestimate the lock-in effect of AI. I doubt there is one.


> I don't care about the history.

I've actually been using the Gemini app more because it auto-deletes old history. I like using LLMs without thinking this is going to stick around forever.

Models are relatively interchangeable for day-to-day use anyway.

