sawyna's comments | Hacker News

I have installed this and configured the API key, and it's been three hours and nothing is happening for some reason. The app doesn't show anything. Is it because I have a multi-monitor setup?


Can you enlighten me with some advanced features that you use? I would love to start using them. I have always used iTerm, but never really used advanced stuff.


Sure, here are some to look at:

https://iterm2.com/features.html

I'll just mention some that I have used and found good.

The drop-down visor like Yakuake is great.

Instant Replay is handy for ephemeral text that gets wiped from the terminal, like TUI apps and scaffolding tools. You can imagine that there's always something like Asciinema recording into a buffer, so you can stop and rewind to catch any output you missed.

The notifications are useful: I can start a long-running task, get on with other things, and get a macOS notification when that terminal rings a bell.

Global search is good, and searches across tabs. I also set a large scrollback buffer, so I can do a reverse incremental search for strings. You can also use the Triggers facility to highlight any string matches (or regex) whenever they occur in the terminal output. This is great when you are tailing a log and want to know immediately when an expression is output, alerting you that a condition has occurred.

Jumping up and down through the command entry points in a session is useful, if there's a lot of output to cut through (I think vscode terminal also does this).

I've also used the toolbelt side-window when I want to repeat verbose commands on a host where I don't want to set up aliases. There is much more you can do with the toolbelt, including automatically capturing text that matches regex patterns.

There's a lot I haven't mentioned, but those are some features I can recall finding useful.


I have always wondered - what if the utf-8 space is filled up? Does it automatically promote to having a 5th byte? Is that part of the spec? Or are we then talking about utf-16?


UTF-8 can represent all 1,114,112 code points in Unicode. Unicode 15.1 (2023, https://www.unicode.org/versions/Unicode15.1.0/) assigns a total of 149,813 characters, which covers most of the world's languages, scripts, and emoji. That leaves roughly 960K code points for future expansion.
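The arithmetic above is easy to sanity-check (a trivial sketch, using the counts from this comment):

```python
# Unicode code space: U+0000 through U+10FFFF
code_space = 0x10FFFF + 1      # 1,114,112 code points
assigned = 149_813             # characters assigned as of Unicode 15.1

print(code_space)              # 1114112
print(code_space - assigned)   # 964299 -- roughly 960K still free
```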

So, it won't fill up during our lifetime I guess.


I wouldn't be too quick to jump to that conclusion, we could easily shove another 960k emojis into the spec!


Black Santa with 1 freckle, Black Santa with 2 freckles…


Wait until we meet another species; then we will not just fill that Unicode space, we will ditch UTF-16 compatibility so fast it will make your head spin on a swivel.

Imagine the code points we'll need to represent an alien culture :).


Nothing is automatic.

If we ever needed that many characters, yes the most obvious solution would be a fifth byte. The standard would need to be explicitly extended though.

But that would probably require having encountered literate extraterrestrial species to collect enough new alphabets to fill up all the available code points first. So seems like it would be a pretty cool problem to have.


UTF-8 is just an encoding of Unicode. It is specified so that it can encode all Unicode code points up to 0x10FFFF, and it doesn't extend further. UTF-16 encodes Unicode in a similar way; it can't encode anything more either.

So what would need to happen first is that Unicode decides to include larger code points. Then UTF-8 would need to be extended to encode them. (But I don't think that will happen.)

It seems like Unicode code points are less than 30% allocated, roughly, so there's over 70% free space.

---

Think of these as three separate concepts to make it clear. We are effectively dealing with two translations: one from the abstract symbol to a defined Unicode code point, and then from that code point via UTF-8 to bytes.

1. The glyph or symbol ("A")

2. The Unicode code point for the symbol (U+0041 Latin Capital Letter A)

3. The UTF-8 encoding of the code point, as bytes (0x41)
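The two translations can be seen directly in Python (a minimal sketch; any language with Unicode strings works similarly):

```python
s = "A"
# 1. the glyph/symbol
print(s)                      # A
# 2. the Unicode code point
print(hex(ord(s)))            # 0x41 (U+0041 Latin Capital Letter A)
# 3. the UTF-8 encoding of that code point, as bytes
print(s.encode("utf-8"))      # b'A' (one byte, 0x41)

# Higher code points take more bytes under UTF-8:
print("€".encode("utf-8"))    # b'\xe2\x82\xac' (3 bytes, U+20AC)
print("😀".encode("utf-8"))   # b'\xf0\x9f\x98\x80' (4 bytes, U+1F600)
```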


As an aside: UTF-8, as originally specified in RFC 2279, could encode code points up to U+7FFFFFFF (using sequences of up to six bytes). It was later restricted to U+10FFFF (RFC 3629) to ensure compatibility with UTF-16.
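The original scheme is simple enough to sketch. This is my own reconstruction (the function name is mine, not from any library), following the usual description: each continuation byte carries 6 bits, so a sequence with n continuation bytes carries 5n+6 bits in total:

```python
def utf8_encode_legacy(cp: int) -> bytes:
    """Encode a code point per the original RFC 2279 scheme,
    which allowed up to six bytes (code points up to U+7FFFFFFF).
    Modern UTF-8 (RFC 3629) stops at four bytes / U+10FFFF."""
    if cp < 0:
        raise ValueError("negative code point")
    if cp < 0x80:
        return bytes([cp])  # ASCII: a single byte, unchanged
    # n continuation bytes of 6 bits each; the lead byte holds the rest
    for n, lead in ((1, 0xC0), (2, 0xE0), (3, 0xF0), (4, 0xF8), (5, 0xFC)):
        if cp < (1 << (5 * n + 6)):
            cont = [0x80 | ((cp >> (6 * i)) & 0x3F) for i in range(n)]
            return bytes([lead | (cp >> (6 * n))] + cont[::-1])
    raise ValueError("code point above U+7FFFFFFF")
```

For code points in the modern range this agrees with Python's built-in encoder (e.g. `utf8_encode_legacy(0x20AC)` gives the same three bytes as `"€".encode("utf-8")`), while `utf8_encode_legacy(0x7FFFFFFF)` produces a six-byte sequence no modern decoder will accept.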


Unpopular opinion: let's say AI reaches general intelligence. We tend to think of the current economy, jobs, and research as a closed system, but it is in fact a very open system.

Humans want to go to space, start living on other planets, travel beyond the solar system, figure out how to live longer, and so on. The list is endless. Without AI, these things would take a very long time. I believe AI will accelerate all of them.

Humans are always ambitious. That ambition will push us to use AI beyond its current capabilities. The AI will get better at these new things, and the cycle repeats. There's so much humans know, and so much more that we don't.

I'm less worried about general intelligence. I'm more worried about how humans are going to govern themselves. That will decide whether we do great things or end humanity. Over the last 100 years, we have come to think more about "how" to do something than about "why", because the "how" keeps getting easier. Today it's much easier than before, and tomorrow it will be easier still, so nobody takes the time to ask "why" we are doing something, only "how" to do it. With AI I can do more. That means everyone can do more, and that means governments can do much more: large-scale things in a short period. If those things are wrong or have irreversible consequences, we are screwed.


To reduce the number of points, I had to do something similar but for an isochrone. There were 2,000 points per isochrone and we had thousands of map markers. I simply picked every 200th point from the isochrone polygon, and it works reasonably well.

Of course, Mapbox provides an API parameter to reduce the number of points using the Douglas-Peucker algorithm. But I didn't want to make an API call every single time, so we stored the result and applied a simple distillation depending on the use case.
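The decimation described here is a one-liner (`polygon[::200]`), and for comparison, here is a minimal Douglas-Peucker sketch (names and the epsilon tolerance are my own, not Mapbox's API):

```python
import math

def _perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Simplify a polyline, keeping points farther than epsilon from
    the line between the current segment's endpoints."""
    if len(points) < 3:
        return points
    # find the point farthest from the endpoint-to-endpoint line
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]  # everything in between is droppable
    # recurse on both halves, splitting at the farthest point
    left = douglas_peucker(points[: idx + 1], epsilon)
    right = douglas_peucker(points[idx:], epsilon)
    return left[:-1] + right  # avoid duplicating the split point
```

Unlike every-Nth decimation, this keeps detail where the polygon actually bends, which matters for the concave edges of an isochrone.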


I wish more people could understand this. Yes, LLMs help me learn faster, brainstorm ideas, and so on, but saying that because they can generate code you can therefore do complex things easily does not make sense.

For me, writing code has never ever been the challenge. Deciding what to write has always been the challenge.


I went from writing zero code, because I'm lazy, to writing zero code because I get the robots to do it.

I have this folder of academic papers from when access was free during covid which is enough to keep me busy for quite a while. Usually I get caught up with the yak shaving and never really progress on whatever I was intending to work on but now I have this super efficient yak shaver so I can, umm, still get caught up with the yak shaving.

But, alas, shaving yaks and arguing with stupid robots makes me happy so...


My personal daily experience with this! I first used the Vertex AI APIs because that's what they suggested, saying the Gemini APIs are not for production use.

Then came google.generativeai. I don't remember the reason, but they were pushing me to start using that library.

Now it's the flashy google.genai library that they are pushing!

I have figured out that this is what I should use and this is the documentation I should look at, because a Google search or an LLM gives me so many confusing results. The only thing that works for sure is reading the library code. That's what I'm doing these days.

For example, the documentation for one of those libraries says that Gemini can read a document from Cloud Storage if you give it the URI. That doesn't work in the google.genai library, and I couldn't figure out why. I imagined Gemini might need access to the Cloud Storage bucket, but I couldn't find any documentation on how to grant that. I finally understood that I need to use the new File API, and that URI works.

Yes, I like the Gemini models; they are really good. But the library documentation could be significantly simpler.


I got really excited to play a custom card game with my friends, who are spread across the world. I was looking to see how I can add my own game, but when I downloaded the app to try some sample games, I saw that all the players have to be connected to the same WiFi. Does this mean the app is only for games played in person, but on phones?


Maybe try some sort of private VPN/LAN network tool?

I remember back in the day I used to use Hamachi to play Xbox LAN with my friends in different universities.


Check out playingcards.io or VirtualTableTop. They do exactly that, although the interface is a bit cumbersome on mobile.


Yes, for now it supports only the local network. But I think I can deploy a server to make it work online as well.


That'd be amazing!


This isn't true at all for OWNERS files. If you try developing a small feature on Google Search, it will require plumbing data through at least four to five layers, and there is a different set of OWNERS for each layer. You'll spend at least three days waiting for code reviews for something as simple as adding a new field.


3 days for a new change on the biggest service on the planet? Not bad.


I agree that it could be worse! Facebook takes a significant (if not longer) amount of time, and I found adding features to News Feed a heck of a lot easier than adding features that interacted with Google Search. A lot of this had to do with the number of people who needed to be involved to ensure the change was safe, which always felt higher at Google.


I'm only an outside observer in this conversation, but could it be that the review process (or lack thereof) and the ease with which you can add new features have had an impact on the quality of the software?

The thing is, in my experience as a user Facebook (the product, not the former company) is absolutely riddled with bugs. I have largely stopped using it because I used to constantly run into severe UI/UX issues (text input no longer working, scrolling doing weird things, abysmal performance, …), loading errors (comments & posts disappearing and reappearing), etc. Looking at the overall application (and e.g. the quality of the news feed output), it's also quite clear that many people with many different ideas have worked on it over time.

In contrast, Google search still works reasonably well overall 25 years later.


There are pretty different uptime and stability requirements for a social product and for web search (or other Google products like Gmail). When News Feed is broken, life moves on; when those products break, many people can't get any work done at all.

One of Google's major cultural challenges, though, is that it imposes the move-slow-and-carefully culture on everything.


It’s not considered ok for newsfeed to break. It would be a massive issue that would command the full attention of everyone.


And yet folks who are on call for it say things like this: https://news.ycombinator.com/item?id=40826497


I have the same background: I find the code quality at G to be quite a lot higher (with a higher test pass rate and a lower bug report rate) than News Feed, which was a total shit-show of anything-goes. I still hold trauma from being on call for Feed. 70 bugs added to my queue per day.

The flip side is of course that I could complete 4 rounds of QuickExperiment and Deltoid to get Product Market Fit, in the time it takes to get to dogfooding for any feature in Google.


I guess it's mostly a matter of what framework society puts in place. We incentivise economic achievements, optimisations, tech. The common theme is making wealth. As long as that's the case, you can't expect these problems to be solved. Humanity's societal frameworks have to start incentivising the right problems.

These so-called billionaires have become billionaires because society and the economy are set up that way and they got good at this economic game.


I mean, do we really need to assume a preconfigured basis in order for wealth and power to be “incentivized”? They’re pretty self-incentivizing and built into human societies. On the other hand, I think you’d have to figure out some pretty radical rules and enforcement strategies to decouple the incentives from wealth and power. That presumably would require both wealth and power. Seems internally inconsistent.


Humanity has come so far, and I'm sure it's possible to do it. I'm not saying wealth and power are bad; the ways of getting them are what's causing this. If you could become a billionaire by solving starvation, that would be a good incentive.

Today you are incentivised to get rich by making a technological innovation like Facebook. Facebook is a great money-making business idea, but is it as much of a life saver as solving clean water access? I guess not. Unfortunately, we prioritised and incentivised business models like Facebook's, and that's where we are.

FYI, I'm not picking on Facebook, it could be any big company today.

