Hacker News | InfinityByTen's comments

I've never understood how their Brave Credits were supposed to work, but I liked the idea that someone wanted to try out a different model to ads as we've known them for about a century.

Ads made magazines, newspapers, news, radio, TV, and now the internet terrible to be around, and I'm honestly curious what can be done to improve the situation.


I'm in a similar camp, where I'm sticking to Windows for that one piece of software: Lightroom Classic (or CC, as they call it). I'm happy to pay for a legitimate replacement that lets me go Linux-native on a laptop. I'm fine even paying the Adobe cancellation tax from the money I save not buying Windows.

On that note, is this supported on Linux?


Yes, DaVinci Resolve is supported on Linux. Unfortunately, the free version of DaVinci Resolve does not include H.264/H.265/AAC support on Linux due to codec licensing issues, though you can transcode footage elsewhere first.

Even the paid version doesn't include AAC support on Linux, so you have to transcode the audio from videos recorded on your phone (with ffmpeg, for example) prior to opening them in Resolve. That's the biggest inconvenience it has for me on Linux. And plugins can't solve that either, because apparently they can only add codecs for encoding, not for decoding.
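As a sketch of that workaround, here is a small hypothetical helper that builds an ffmpeg command re-encoding only the AAC audio track to uncompressed PCM (which Resolve on Linux can decode) while copying the video stream untouched; the function name and the `.mov` output choice are my own, not anything Resolve prescribes:

```python
def resolve_transcode_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command that swaps AAC audio for PCM, copying video."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "copy",       # leave the video stream untouched (fast, lossless)
        "-c:a", "pcm_s16le",  # replace AAC audio with 16-bit uncompressed PCM
        dst,
    ]

# Example: prepare a phone recording for import into Resolve.
print(" ".join(resolve_transcode_cmd("clip.mp4", "clip.mov")))
```

A `.mov` (or `.mkv`) container is used because plain `.mp4` does not carry PCM audio well.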

I think this will be the year of Linux, then.

Native photo editor with decent ux was the missing piece.


I'm so eager to try this out today after work. I heard a lot of things about Darktable, but it didn't really feel like the alternative to Lightroom I'd hoped for.

Have you tried Darktable or Rawtherapee? Both are excellent alternatives to LR.

I'll be honest: it was *long ago* that I made that attempt. Plus, with the new AI denoise, it seemed even harder to move away from Lightroom.

But if there's a battle-tested, mature UI, I'm up for giving it a shot. I have done no video editing, so I have no clue how my experience with DaVinci Resolve is going to go. I might give Darktable another go while I'm at it; I just tend to have a bad gut feeling about it.

Some people love tinkering. I do that as my job, so I don't often have the urge to do it when I just want to get shit done.


Darktable has really improved over the last couple of years. It used to have some pretty confusing workflows and lots of overlapping modules, but it's been getting cleaned up and polished into something of an intuitive app. It is still different, but not so overwhelmed with features that you can't figure it out.

When I see someone just throwing a lot of numbers and graphs at me, I sense that they're in it to win an argument, not to propose an idea.

Of late, I've come across a lot of ideas from Rory Sutherland, and my conclusion from listening to them is that there are some people who are obsessed with numbers, because to them it's a way to find certainty and win arguments. He calls them "Finance People" (him being a Marketing one). Here's an example:

"Finance people don’t really want to make the company money over time. They just thrive on certainty and predictability. They try to make the world resemble their fantasy of perfect certainty, perfect quantification, perfect measurement.

Here’s the problem. A cost is really quantifiable and really visible. And if you cut a cost, it delivers predictable gains almost instantaneously."

> Choosing to spend three weeks on a feature that serves 2% of users is a €60,000 decision.

I'd really want to hire the oracle of a PM/analyst who can give me that 2% accurately even 75% of the time, and promise that nothing non-linear can come from such an exercise.


As with any attempt to become more precise (see software estimation, eg. Mythical Man Month), we've long argued that we are doing it for the side effects (like breaking problems down into smaller, incremental steps).

So when you know that you are spending €60k to directly benefit a small number of your users, and understand that this potentially increases your maintenance burden by up to 10 customer issues a quarter requiring 1 bug fix a month, you will want to make sure you are extracting at least equal value in specified gains, and a lot more in unspecified gains (eg. the fact that this serves your 2% of customers might mean that you'll open up a market where this was a critical need, and suddenly you grow by 25%, with 22% [27/125] of your users making use of it).
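The bracketed 27/125 aside can be sanity-checked with a few lines of arithmetic; the starting population of 100 users is my own illustrative assumption, chosen so that the percentages fall out directly:

```python
# Start with 100 users, of whom 2% use the niche feature.
users, feature_users = 100, 2

# Entering the new market grows the userbase by 25%; assume every
# new user came for that feature.
new_users = users * 25 // 100              # 25

total = users + new_users                  # 125
feature_total = feature_users + new_users  # 27

share = round(100 * feature_total / total) # ~22 (percent)
print(f"{feature_total}/{total} = {share}%")
```

The feature's usage share jumps from 2% to roughly 22%, which is the commenter's point about non-linear, unspecified gains.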

You can plan for some of this, but ultimately when measuring, a lot of it will be throwing things at the wall to see what sticks according to some half-defined version of "success".

But really, you conquer a market by having a deep understanding of a particular problem space, a grand vision of how to solve it, and then actually executing on both. Usually, it needs to be a problem you feel you're best placed to address!


None of his math really checks out. Building a piece of software is, or at least was, orders of magnitude more expensive than maintaining it. But how much money it can make is potentially unbounded (until it gets replaced).

So investing e.g. 10 million this year to build a product that produces maybe 2 million ARR will have amortized after 5 years, if you can reduce engineering spend to zero. You can also use the same crew to build another product instead and repeat that process over and over again. That's why an engineering team is an asset.

It's also a gamble, if you invest 10 million this year and the product doesn't produce any revenue you lost the bet. You can decide to either bet again or lay everyone off.

It is incredibly hard or maybe even impossible to predict if a product or feature will be successful in driving revenue. So all his math is kinda pointless.


> Building a piece of software is or at least was orders of magnitudes more expensive than maintaining it

This feels ludicrously backwards to me, and also contrary to what I've always seen as established wisdom - that most programming is maintenance. (Type `most programming is maintenance` into Google to find page after page of people advancing this thesis.) I suspect we have different ideas of what constitutes "maintenance".


> that most programming is maintenance.

What do you mean by maintenance?

A strict definition would be "the software is shipping but customers have encountered a bug bad enough that we will fix it". Most work is not of this type.

Most work is "the software is shipping but customers really want some new feature". Let us be clear, though: even though it often is counted as maintenance, this is adding more features. If you had decided up front not to ship until all these features were in place, it wouldn't change the work at all in most cases (once in a while it would, because the new feature doesn't fit cleanly into the original architecture, in a way that, had you known in advance, you would have used a different architecture).


> If you had decided up front to not ship until all these features were in place it wouldn't change the work at all in most cases

In my experience (of primarily web dev), this is not true, and the reasons it is not true are not limited to software architecture conflicts like you describe (although they happen too). Instead the problems I usually encounter are that:

* once you have shipped something and users are relying on it, it limits the decisions you are allowed to make about what features the system should have. You may regret implementing feature X because it precludes more valuable features Y and Z, but now that X is there, the cost of ripping it out is very high due to the backlash it will cause.

* once you have shipped an application, most of the time when you add new features you are probably slightly changing at least some UI, and so you need to think about how that's going to confuse experienced users and how to address that in a way you wouldn't have to when implementing something de novo. For an internal LOB app, that might mean creating announcements and demos and internal trainings that wouldn't be necessary for greenfield work.

* the majority of professional web dev involves systems with databases, and adding features frequently involves database migrations, and sometimes figuring out how to implement those database migrations without losing data or causing downtime is difficult and complicated.

* as web applications grow their userbase, the scale of the business often introduces new problems with software performance, with viability of analysing business-relevant data from the system, or with moderation or customer support tasks associated with the system, and these problems often demand new features to keep the broader business surrounding the software afloat that weren't needed at launch.

* software that has actually launched and become embedded in existing business processes inherently tends to have many more stakeholders in the business that care about it than pre-launch software, and those stakeholders naturally want to get involved in decision-making about their tools, and that creates meeting and communication overhead - sometimes to such a degree that stakeholder management and negotiating buy-in ends up being an order of magnitude more work than actually implementing the damn feature being argued about.

To the extent that the amount of work involved in implementing a new feature is inflated by these kinds of factors relative to what would have been involved in doing it de novo, I personally conceive of that as "maintenance" work; and in my experience, my work on big teams at successful businesses has on average been inflated severalfold by those factors. (I also count work mandated by legal/compliance considerations that arise only after a successful launch as "maintenance". My rough conception of "software maintenance" is the delta between "the work involved in building a product de novo with the same customer-pleasing features that ours has" and "the work we actually had to do to incrementally build the product in parallel to it being used".)

Would most people agree with my broad notion of maintenance? I reckon they roughly would, but it's hard to say since people who talk about maintenance rarely attempt to define it with any precision. You give a precise but extremely narrow definition above. Wikipedia likewise gives a precise but extremely broad definition - that maintenance is "modification of software after delivery", under which definition surely over 99.999% of professional software development labour is expended on maintenance! I guess my definition puts me somewhere in the middle.


I like the good ol' "80% of the work in a software project happens before you ship. The other 80% is maintaining what you shipped."

The longer software is sold, the more you need to maintain it. In year one, most of the cost is making it. Over time, other costs start to add up.

As with most things, isn't the truth somewhere in the middle? True cost/value is very hard to calculate, but we could all benefit by trying a bit harder to get closer to it.

It's all too common to frame the tension as binary: bean counters vs pampered artistes. I've seen it many times and it doesn't lead anywhere useful.


Here I think the truth is pretty far to one side. Most engineering teams work at a level of abstraction where revenue attribution is too vague and approximate to produce meaningful numbers. The company shipped 10 major features last quarter and ARR went up $1m across 4 new contracts using all of them; what is the dollar value of Feature #7? Well, each team is going to internally attribute the entire new revenue to themselves, and I don’t know what any other answer could possibly look like.

Even if you could do attribution correctly (I think you can do this partially if you are really diligent about A/B testing), that is still only one input to the equation. The other fact worth considering is the scale factor - if a team develops a widget which has some ARR value today, that same widget has a future ARR value that scales with more product adoption - no additional capital required to capture more marginal value. How do you quantify this? Because it is hard and recursive (knowing how valuable a feature will be in the future means knowing how many users you have in the future which depends on how valuable your features are as well as 100 other factors), we just factor this out and don't attempt to quantify things in dollars and euros.

You’re illustrating one of the points of TFA: over the long term, a team that is equipped with the right tools to measure feature usage (or reliably correlate it with overall userbase growth, or retention) and hold that against sane guardrail metrics (product and technical) is going to outperform the team that relies on a wizardly individual PM or analyst making promises over the wall to engineering.

Feature usage can't tell you that.

There's often a checklist of features management has, and meeting that list gets you in the door, but the features often never get used.


But surely you have to have at least a hypothesis of how the software features you develop will increase revenue or decrease costs, if you want to have a sustainable company?

You don't know something is slow until you encounter a use case where the speed becomes noticeable. Then you see the slowness across the board. If you can notice that a command hasn't completed and you are able to fully process a thought about it, it's slow(er than your mind, ergo slow!).

Usually, a perceptive user/technical mind is able to tweak their usage of their tools around the tools' limitations, but if you can find a tool that doesn't have those limitations, it feels far superior.

The only place where ripgrep hasn't seeped into my workflow, for example, is after the pipe, and that's just out of (bad?) habit. So much so that sometimes I'll foolishly do rg "<term>" | grep <second filter>, then proceed to do a metaphorical facepalm in my mind. Let's see if jg can make me go jg <term> | jq <transformation> :)


Well, grep is just better sometimes. Like when you want to copy some lines: grep at the end of a pipeline is just easier than rg -N to suppress line numbers. Whatever works, no need to facepalm.


I was personally hoping the JV would be spared Honda's strategy shift, but apparently not.


Spot on! I hate being sucked into an "accountability sink", where delays, bad treatment, and tangential answers are OK (somehow acceptable) and justified because it's not personal: "it's just the process".


I was considering trains for a Berlin-Frankfurt trip, and after looking at the performance of the preferred train, I'm not sure I want to still go that way: 25% cancellations :/


If your train is cancelled, you can take the next train or a different train, without buying a new ticket. The app will tell you this.


So 3h of sitting on the floor because there's no seat after a canceled one?

I mean yes, better than no transport, but it's ridiculous. And if you have an appointment in the morning, 2h of delay are a deal breaker.


You surely haven't read the whole of it. There's more!

> Sinderella: She has to leave the ball by midnight — but her last train was cancelled. Now she roams the platform in glass slippers, waiting for a replacement bus.


Hahaha, I missed that. Amazing! Great copywriting right there.


This is an underrated observation. The companies built surveillance as a competitive advantage. The "system" rewards bolstering this advantage.


Distributing surveillance data about your usage is extremely different from understanding how your app is being used.


So, now it's mis-anthropic?

