eddyg's comments

Swish⁽¹⁾ lets you drag the divider to resize multiple windows at once. BentoBox⁽²⁾ is inspired by FancyZones. And Lasso⁽³⁾ is a grid-based window manager with custom layouts. There's also MacsyZones⁽⁴⁾, which appears to resize multiple adjoining windows, but I've never used it (it looks to be open source, with an option to pay to support the author).

⁽¹⁾ https://highlyopinionated.co/swish/

⁽²⁾ https://bentoboxapp.com/

⁽³⁾ https://www.thelasso.app/

⁽⁴⁾ https://macsyzones.com/


Have you tried ElevenLabs' latest "expressive mode" models? ("Von Fusion" is particularly fun!) Models like this (well, not Von Fusion, but some of the others...) in the hands of scammers will be able to fool a lot of people.

https://elevenlabs.io/agents/expressive-mode


Quoting from https://web.archive.org/web/20181118114804/http://imsai.net/...

“Director John Badham states in the commentary that the actor voicing the raw content that was later modified for the computerized effect was John Wood (the Falken character), reading the script word-for-word in reverse order in order to portray a "flat quality" with limited inflection. That raw audio was then edited and re-assembled after being run through audio processing equipment to achieve the desired effect.”


Obsidian with the (core) Daily Notes⁽¹⁾ plugin plus Jump-To-Date⁽²⁾ and Daily Note Navbar⁽³⁾ is a powerful combo for me.

Everything is still searchable (or can be fed into an LLM) since it’s all Markdown text files behind the scenes. (And I can type my thoughts much faster than I can write.)

⁽¹⁾ https://help.obsidian.md/plugins/daily-notes

⁽²⁾ https://github.com/TfTHacker/obsidian42-jump-to-date

⁽³⁾ https://github.com/karstenpedersen/obsidian-daily-note-navba...


Writing with a pen and paper is different from typing on a keyboard at the brain level.

I need to finish that research and write that blog post, apparently.


I suspect it's to do with fluency, at least partially. Have you looked into how it goes for very fluent typists? The problem is, I suspect typing only stops being distracting (compared to handwriting) at speeds significantly faster than what most people would consider the beginning of "fast".

It's not about distraction, actually. The mode of typing vs. writing is different; I can feel that my brain is working differently.

While I don't type lightning fast (~130 WPM and beyond), I already type without thinking about it; I just think and it appears on screen. On the other hand, when I'm writing by hand, there's another sub-process evaluating whether what I'm writing makes sense or works in the real world. That isn't possible while typing: with the freedom the pen provides, the thinking process is completely different. I can also build a better mental model of what I'm writing about. In short, typing lends itself to shallower thinking, while writing allows more depth and exploration of the subject.

This is also evident when I'm writing code. I design it on paper, then type that design into the IDE I use.

The research articles I've started to collect also point to something similar. When using pen and paper, neurons fire differently and in larger networks, pointing to a different mode of thinking. Considering I started using pen/paper and keyboards at almost the same time, and can verify that using a pen really makes my brain work differently, I find "you're typing it wrong" a flawed argument for the most part.


Interesting, thanks. Notwithstanding all I've said, I've also struggled with a nebulous sense that there's something to handwriting, possibly bordering on "sacred", to be a bit dramatic about it. This has resulted in phases of annoying obsession with handwriting where all its technical/practical shortcomings are overwhelmed by this unknown property for a time.

The sub-process thing sounds vaguely plausible, but at the same time, I'm under the impression that I'm already constantly evaluating whether what I'm writing makes sense, its implications and assumptions, etc. At least I feel that I'm attentive to the impression of it not making sense, though I also try not to confuse the feeling that something makes sense with the fact of it being so.

Anyway, what I'm going for here is: did you consciously apply any technique or process to evaluating this, or did you come by this insight naturally?


Curious to know what you actually do with the notes, though. I've tried to get in the habit of keeping daily journals but it ends up being very much write once, read never. Maybe having some kind of fuzzy, semantic search or LLM would unlock their usefulness, but so far I don't find myself ever really using the things I write down.

I use a similar system at work.

Off the top of my head, I have used it to put links together — for example, a Stack Overflow description of some bug, the official documentation, and maybe copying in the exception or the error message.

Then I've sometimes done the same thing when I'm doing ops on a broken system.

Other times it's copying in a specific query or a link to a query in Application Insights.

Other times it's the ticket I was working on, a comment from a coworker, and maybe a few references to either tickets or files. Very rarely is this professional or looks nice. It's just that I need one place where I can put multiple things that fit together.

I find that retrieval does drop off very quickly. But that's just to say most of the value is front loaded. And we should not underestimate the value of being able to answer 'da fuck was I doing yesterday'. Context switching is expensive. But in many ways it is also unavoidable. If you context dump at the end of a workday, it's that much easier to return to it later.

The other thing I do, because the note system I use supports it, is drop in hashtags. Yeah, I know. Not exactly HN friendly. What that means is I can find all the times I ran into the same issue, sort of weaving a meta thread through my work. It's really hard to explain, but it's one way of treating notes as more than just segments of text.


For me it's mostly about being able to find stuff. For example, I save links (with some notes) that I've seen that day, and weeks/months later I'll remember "I read an article about $THING" or "I saw a repo that was similar to $THING" and I'll be able to find it.

Omnisearch is really good: https://publish.obsidian.md/omnisearch


Same - they are too low quality for me to decode more than 8 hours afterwards.

Quoting from the press release: https://www.apple.com/newsroom/2026/01/apple-introduces-new-...

"Designed exclusively for tracking objects, and not people or pets"

(emphasis mine)


That’s just so you don’t sue them over your lost dog

I think it’s also for practical reasons: your dog needs to be near a person with an iPhone. If the dog is in the middle of the woods it won’t show up. Generally most objects require a person to move them and so the chances of them being near an iPhone are much higher.

Or your dog eating the AirTag with the button battery inside it

Somewhat related (and something it seems a lot of folks don't know about) is VS Code's "Code Tour" feature: https://github.com/microsoft/codetour

The best part is the ability to easily edit, delete, and re-order "tour points".

Tours can either be checked into the repo or exported to a "tour file" that allows anyone to replay the same tour (without having to clone any code to do it!)
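
For a rough idea of what gets stored: a tour is just a small JSON file (kept in a .tours/ directory in the repo, if I remember right). Everything in this example — file names, line numbers, step text — is made up purely to show the shape:

    {
      "title": "Architecture overview",
      "steps": [
        {
          "file": "src/server.ts",
          "line": 42,
          "description": "Requests enter here and get dispatched to the services below."
        },
        {
          "file": "src/services/billing.ts",
          "line": 10,
          "description": "Step 2: where invoices get generated."
        }
      ]
    }

Because it's plain JSON sitting next to the code, tours diff and review like any other file.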


The iPhone automatically goes into BFU (Before First Unlock) after 72 hours of inactivity (it actually reboots the phone). This can’t be disabled.

In addition, there are restrictions on when your passcode will be required. For example, if the passcode has not been used to unlock the device in the last six days and Face ID has not unlocked the device in the last eight hours, then you must use the passcode to access the device (in other words, biometric unlock is automatically disabled).

If you've ever wondered why you've had to enter your passcode after a good night's sleep and haven't entered your passcode recently, that's probably why!

Given these built-in precautions, a click-bait headline like this is a bit excessive for most people.


>The iPhone automatically goes into BFU (Before First Unlock) after 72 hours of inactivity (it actually reboots the phone). This can’t be disabled.

But if the threat is from law enforcement, as the beginning of the article implies, how does that help? They just have to scan your face with your phone when they seize it, and slurp up all the data they want.

>In addition, there are additional restrictions where your passcode will be required. For example, if the passcode has not been used to unlock the device in the last six days and Face ID has not unlocked the device in the last eight hours, then you must use a passcode to access the device (in other words, biometric unlock is automatically disabled).

The conditions for triggering this are so unreliable that it probably exists more to prevent people from forgetting their PINs than to meaningfully increase security.


Before Apple changed it again in iOS 26, triple-hitting the side button to bring up emergency also went into BFU. (Can't confirm - screw you, Dexcom.)

>Before Apple changed it again in iOS 26, triple-hitting the side button to bring up emergency also went into BFU

AFAIK that disables biometrics, but that's not the same as BFU.


Interesting - searching says you're right. I thought the enclave discarded the derived decryption keys in those situations. Looks like it just goes extra locked down.

It really seems it should. It's a quick distress signal, no time to ask the user how locked they want their phone to be.

For iPhones your eyes have to be open.

I’ve got to think some cops are good at holding up the phone, saying “look at this text message”, and getting people to open their eyes to see it, though.


Not just open, but (by default) “paying attention” and not actively trying to “look away” from the phone:

The TrueDepth camera will provide an additional level of security by verifying that you are looking at iPhone before unlocking. Some sunglasses may block attention detection.


There are already agents doing this in SRE roles.

Agents are monitoring metrics and logs. A bug is introduced into the system. Error rates go up and the agents notify diagnostic agents. These agents look at recent commits and figure out the problem. They instruct another agent about how to fix the issue and deploy the change. The problem is fixed before an engineer even has time to start looking at logs.
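
To make the hand-off concrete, here's a rough sketch of that loop in Python. Every function is a made-up stand-in for whatever metrics store, VCS, agent framework, CI, and deploy tooling a given team uses; it's illustrative, not a real stack:

    # Stubs standing in for real infrastructure -- all names are invented.
    def fetch_recent_logs(service): return ["TimeoutError in checkout handler"]
    def fetch_recent_commits(service): return ["abc123 tighten http client timeout"]
    def diagnostic_agent(logs, commits): return {"suspect": commits[0], "summary": "timeout too aggressive"}
    def fix_agent(diagnosis): return f"revert {diagnosis['suspect']}"
    def run_tests(patch): return True
    def deploy(patch): print(f"deploying: {patch}")
    def page_oncall(service, diagnosis): print(f"paging on-call for {service}: {diagnosis['summary']}")

    def on_error_rate_alert(service):
        # Monitoring agent saw an error-rate spike and hands off to diagnosis.
        logs = fetch_recent_logs(service)
        commits = fetch_recent_commits(service)
        diagnosis = diagnostic_agent(logs, commits)  # correlate the spike with recent changes
        patch = fix_agent(diagnosis)                 # propose a fix or a revert
        if run_tests(patch):
            deploy(patch)                            # shipped before anyone has read the logs
        else:
            page_oncall(service, diagnosis)          # fall back to a human

    on_error_rate_alert("checkout-service")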

If you aren’t seeing this, you’re not keeping up with what others are already doing. It’s not just people vibe coding ToDo apps.


If you have Apple News, you can open the article in Safari and then "Share" it to the News app.

Have you heard of Switch-Angel? If not, check out https://www.youtube.com/shorts/hbZb1Q0mM7k (to pick one) for a taste of what Strudel is capable of in "real time".


It’s been a lot of fun watching her subscriber count go through the roof. She’s outrageously talented.

It’s also funny because usually it’s hard to reproduce what a musician does. I can listen to someone play guitar, but there’s so much nuance to how it’s played that you need to be pretty good to reproduce it.

But so much of her music is code, and she shows you the code, so she’s really teaching you how to reproduce what she’s doing perfectly. It’s awesome for learning.


Thanks! I saw a few live vids. However, it is cool to see someone explaining what they are doing (guess that would be perfect training material for an LLM ;)). Seriously, I do not think an LLM can replace any artist; it is exactly that live element that makes it cool. However, I remember some research projects that were trying to also reinforce music selection with crowd movements in a club. IMHO it would be fun to actually create some live reinforcement from audience reactions and see where it goes.

