"Getting productive" wasn't really my goal when configuring my system. I spend a lot of time in front of the computer and simply prefer using it this way because it feels natural. But, of course, you're entitled to your opinion :)
I'm Swapnil, and I've been building AI agents for the past year. After the 10th time setting up the same infrastructure (Kubernetes, LLM routing, secrets management, monitoring), my co-founder and I decided to build the platform we wished existed.
Phinite is a developer-first platform for building, deploying, and orchestrating AI agents.
*Core features:*
- FlowGen Studio: Visual workflow builder for agent orchestration (think n8n meets LangChain)
- Developer Studio: Full IDE for custom tools with AI copilot assistance
- One-click deployments to managed Kubernetes
- Multi-environment support (DEV/UAT/PROD) with secrets management
- 50+ pre-built integrations (Slack, JIRA, Salesforce, etc.)
- Built-in observability and cost tracking
*Technical architecture:*
- Cloud-agnostic (AWS, GCP, Azure, or on-prem K8s)
- Auto-scaling pod orchestration
- Microservices architecture with service mesh
- WebAssembly-based runtime for tool execution
- Vector DB integration for RAG workflows
*Why we built it:*
Most AI agent frameworks are either too low-level (write everything in Python) or too constrained (no-code boxes that break when you need custom logic). We wanted something that works for both the "I need this done in 5 minutes" and "I need full control" use cases.
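To make the two ends concrete, here's a deliberately hypothetical sketch of a custom tool. This is *not* our actual SDK — every name and shape below is made up for illustration of what the "full control" end looks like:

```python
# Purely hypothetical sketch -- NOT Phinite's real SDK. It only illustrates
# the shape of a custom tool that a sandboxed runtime could execute and a
# visual workflow could wire to, e.g., the JIRA and Slack integrations.
from dataclasses import dataclass


@dataclass
class ToolResult:
    ok: bool
    data: dict


def summarize_ticket(ticket_id: str) -> ToolResult:
    """Custom logic a no-code box can't express: fetch a ticket and
    produce a summary another workflow step could post to Slack."""
    ticket = {"id": ticket_id, "status": "open"}  # stand-in for a real fetch
    summary = f"Ticket {ticket['id']} is {ticket['status']}."
    return ToolResult(ok=True, data={"summary": summary})


if __name__ == "__main__":
    print(summarize_ticket("ENG-123"))
```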
*Pricing:*
- $10 free credits (no credit card)
- Pay-as-you-go: $20 base + usage-based pricing
- Self-hosted option coming soon
*What we're looking for:*
- Feedback on the developer experience
- Performance benchmarks vs. self-hosted solutions
- Use cases you'd build (or wouldn't build, and why)
- Technical deep-dives you'd like to see
We're currently in UAT with 100+ beta testers and opening to the public today.
I love this and do the same thing with Raycast! But mostly with apps that don't have a designated "workspace".
Most of the time, I only have Spotify, chat clients, my browser, and the terminal open. And I do prefer every one of them just having a fixed place behind a shortcut, which at this point is just muscle memory.
All of this "getting productive" with window managers, especially in the context of macOS, is just yak-shaving and, unless you enjoy doing it, a waste of time. The point of macOS is to have a system with tasteful defaults.
You have said it all.
I have kids, but it's basically impossible to age-gate everything. My solution for a long time, besides screen-time restrictions, was having a terrible internet connection.
We were still using Verizon copper wire until three years ago: slow, high latency, dropped service all the time, and we had to restart the router daily.
It worked well enough for one device to watch a movie or three people to do basic tasks, but there was none of that multiple-screens-blaring madness at all times. Apps just failed to load and you had to restart; it was absolutely fabulous.
The big problem is that schools require them to have and use the internet. I have no idea why. It's not better, and it's not like they use Wikipedia; no, it's a mishmash of private companies selling edu-ware to buzzword aficionados in administration.
But having no computers would be fine, at least until high school.
The Times New Roman commentary may have been true back when it was written, but Calibri has been the default for Microsoft Word for a long while now (almost 20 years). So choosing Calibri is the path of least resistance.
Why would you want growable goroutine stacks, cgo FFI overhead, a goroutine scheduler, and asynchronous preemption in a game engine? Odin is better suited than Go for this type of software. Almost any programming language would have been a better choice here. This is rage bait, I'm sure.
I’m actually building the system-level approach this memo hints at.
I’m not from a lab or an academic group, but I’ve been working on a post-transformer inference method where you extract a low-rank “meaning field” from a frozen Llama-70B layer and train a small student model to generate those fields directly.
The idea is similar to what this memo describes, but with an empirical implementation.
It isn’t about bigger models.
It’s about reorganizing the system around meaning and structure, then treating the transformer as a teacher rather than the final destination.
I’d genuinely appreciate critique or replication attempts from people here. HN tends to give the most honest feedback.
I’ve been working independently on a method that replaces full-transformer inference with a low-rank “meaning field” extracted from internal activations.
The core result: a frozen Llama-3.3-70B can be distilled into a 256-dimensional field representation, giving 224× compression and slightly higher accuracy on several benchmarks. A small student model then learns to directly generate these fields from text, removing the transformer from the inference path.
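If it helps, here's a deliberately simplified sketch of the idea — not the actual AN1 code. The extraction layer, mean pooling, the fixed random projection, and the toy student below are all stand-ins for illustration; only the frozen 70B teacher and the 256-dim field come from the result itself:

```python
# Hedged sketch of the described pipeline, not the AN1 implementation.
# Assumptions: the extraction layer, mean pooling, a *fixed random*
# low-rank projection standing in for however the paper derives the
# field, and a toy student architecture.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

TEACHER = "meta-llama/Llama-3.3-70B-Instruct"
LAYER, FIELD_DIM = 40, 256  # layer index is a guess; 256 is from the post

tok = AutoTokenizer.from_pretrained(TEACHER)
tok.pad_token = tok.pad_token or tok.eos_token
teacher = AutoModel.from_pretrained(TEACHER, output_hidden_states=True).eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# Fixed random low-rank projection: teacher hidden size -> 256-dim "field".
proj = nn.Linear(teacher.config.hidden_size, FIELD_DIM, bias=False)
for p in proj.parameters():
    p.requires_grad_(False)

# Small student that maps raw tokens straight to the field, so the 70B
# teacher can be dropped from the inference path once training converges.
student = nn.Sequential(
    nn.Embedding(tok.vocab_size, 512),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
        num_layers=4,
    ),
    nn.Linear(512, FIELD_DIM),
)
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)


def train_step(texts: list[str]) -> float:
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = teacher(**batch).hidden_states[LAYER]  # (B, T, hidden)
        target = proj(hidden).mean(dim=1)               # pooled field (B, 256)
    pred = student(batch["input_ids"]).mean(dim=1)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

The biggest simplification here is pooling each sequence to a single vector; the paper and reference implementation are the source of truth for how the field is actually derived.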
The Zenodo link contains the full paper, statistical results, and methodology.
A reference implementation (non-optimized) is here: https://github.com/Anima-Core/an1-core
Production variants (AN1-Turbo, FPU work, etc.) are not included.
I’m an outsider to academia so I’m posting this openly to get technical feedback, replication attempts, and critique from people who understand this space.
Around May, Altman told the FT that his job was the "most important job maybe in history" (FT: https://www.ft.com/content/a3d65804-1cf3-4d67-ac79-9b78a10b6...). He has come back from the brink of death before as well. But steering OpenAI into an "ecosystem" rather than focusing on the product when you're up against the likes of Google? Seems like cashing in on the hype too early.
This is a complete organizational operating model built from first principles.
It covers worker, manager, and executive workflows; AI-assistant boundaries; governance; scorecards; SOP hardening; alignment cycles; and scaling rules. The goal is to give companies a simple, deterministic structure that avoids complexity and reduces failure points. All material is open and published on GitHub.
If you’re going to vibe code, why not do it in Brainfuck?
Claude refused to rewrite my Rails codebase in Brainfuck… not that I really expected it to. But it went on a hilarious tirade about how doing so was a horrible idea and how I would probably be fired if I did.
Nothing mentioned will help for a website with a Let's Encrypt SSL cert. How can I know with confidence that I can conduct commerce with this website that purports to be the company, and that it's not a typosquatter from North Korea? A Google search doesn't cut it. Nothing in this thread has answered that basic question.
It's a non-issue for DigiCert and Sectigo certs. I can click on the certs and see for myself that they're genuine.
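If you'd rather not click around, here's a quick check using Python's standard library (the hostname below is just an example). An OV/EV cert from DigiCert or Sectigo carries a vetted organizationName in the subject, while a DV cert such as Let's Encrypt's only attests domain control:

```python
# Fetch and flatten a site's certificate subject. The "organizationName"
# field only appears when the CA actually vetted the organization (OV/EV);
# a DV cert will typically show just the commonName.
import socket
import ssl


def cert_subject(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # Flatten the subject RDNs into a plain dict for readability.
    return {k: v for rdn in cert["subject"] for (k, v) in rdn}


print(cert_subject("www.digicert.com"))  # arbitrary example host
```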
OK, that is interesting: separating infrastructure from AI valuation. I can see what you mean, though, because stock prices are volatile and unpredictable, while a datacenter will remain in place even if its owner goes bankrupt.
However, I think the AI datacenter craze is definitely going to experience a shift. GPU chips become obsolete really fast, especially now that we are moving to specialized neural chips. Within a few years, all those datacenters with thousands of GPUs will be outcompeted by datacenters with a quarter of the power demand and a tenth of the physical footprint, thanks to improved efficiency. And if the valuation does collapse and investors pull out of these companies, where are these datacenters supposed to go? Would you buy a datacenter chock-full of obsolete chips?
Hi HN! The idea: you see START and END words, and guess the path that AI chose to connect them. Example: OCEAN → ??? → ??? → ANTENNA Answer: OCEAN → WAVE → SIGNAL → ANTENNA. "Wave" bridges physical (ocean) to abstract (radio), then to antenna. These cross-domain jumps are where it gets tricky. Built with Next.js. Would love feedback on the difficulty curve.
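The guess check itself is conceptually tiny. Here's a simplified Python stand-in (the real game is Next.js, and the function below is made up to mirror the mechanic, not taken from the codebase):

```python
# Hypothetical stand-in for the guess-scoring mechanic.
def score_guess(hidden_path: list[str], guess: list[str]) -> int:
    """Count hidden intermediate words guessed in the right position.
    START and END are shown, so only the middle of the path is scored."""
    middle = hidden_path[1:-1]
    return sum(h == g.upper() for h, g in zip(middle, guess))


# OCEAN -> WAVE -> SIGNAL -> ANTENNA; player guesses "wave", "radio":
assert score_guess(["OCEAN", "WAVE", "SIGNAL", "ANTENNA"], ["wave", "radio"]) == 1
```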
2. Indeed - I'm happy to hear someone knows their Blizzard modding community history. To my knowledge, King Arthur never finished his studies because he was offered a job at Blizzard, where he worked from around 2000 to 2020. He's now at Dreamhaven, it seems, along with former Blizzard CEO Mike Morhaime. And indeed - Jorsys is very much inspired by Camsys =)
3. I think my overplaying of Diablo I on our old 56k modem is what made my parents invest in broadband. I'm happy they didn't make me pay the bills back then.
My long-term goal with Jorsys is to put up tutorials and mods and make the whole thing accessible to people today and tomorrow. It's all pretty arcane, with tools, mods, and instructions barely accessible anymore. Time is limited, though.
I don't anticipate creating a community, but if you have anecdotes or stories to share, or want to help out in any way, don't hesitate to get in touch. My email is on Jorsys.