Hacker News | shin_lao's comments

What is the core use case for this structure? It seems like a very heavy price to pay just to keep a value stable, as opposed to making a copy of that value when you need it.

A stable vector generally improves append performance (because you never have to move the data), allows storing data that can't be moved, and allows for appending while iterating. The downside is slightly worse performance for random lookups (since the data is fragmented across multiple segments) and higher memory usage (depending on implementation).

This "semistable" vector appears to only do the "allow appending while iterating" part, but still maintains a contiguous buffer by periodically moving its contents.


It's basically a form of reference-counted data access as I understand it.

If the code here operates on a piece of data from some container, the container will ensure that piece persists until all references to it are gone, even if it has been removed from the container.

Depending on the data model this may be handy or even required. Consider some sort of hot-swappable code where both the retired and the new code versions are running in parallel at some point. That sort of thing.
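
Roughly like this, in shared_ptr terms (just a sketch of the general idea, not the actual container under discussion; the keys and values are made up):

    #include <iostream>
    #include <memory>
    #include <string>
    #include <unordered_map>

    int main() {
        // The container hands out shared_ptrs, i.e. reference-counted access.
        std::unordered_map<std::string, std::shared_ptr<const std::string>> container;
        container["config"] = std::make_shared<const std::string>("v1 settings");

        // "Old" code grabs the current value and keeps using it.
        std::shared_ptr<const std::string> in_use = container["config"];

        // Meanwhile the entry is replaced (hot swap); it could just as well be erased.
        container["config"] = std::make_shared<const std::string>("v2 settings");

        std::cout << *in_use << "\n";              // prints "v1 settings": still alive
        std::cout << *container["config"] << "\n"; // prints "v2 settings"
    }   // the v1 value is only destroyed once the last reference (in_use) goes away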


Because String Theory hasn't delivered falsifiable predictions, yet keeps expanding to accommodate failure.


The word 'falsifiable' comes from Popper's criterion, which is central to scientific methodology. What it means: if a theory predicts something, and later observations show that prediction doesn't hold, then the theory is incorrect.

String theory doesn't work this way: whatever is measured will be explained as an afterthought by tuning free parameters.


What do you mean by "falsifiable"?

Do you mean predictions that have been falsified? Of course no standing theory delivers falsified predictions; when that happens you throw the theory in the garbage.

Do you mean predictions that can be falsified in principle? In that case String Theory has falsifiable predictions; I gave you one. In principle, we can make experiments that would falsify special relativity. In fact, we've made such experiments in the past, and those experiments have never seen special relativity being violated. The tests of special relativity are the most precise tests in science.


I suspect what they mean is that there is no outcome of an experiment such that, prior to the experiment, people computed that string theory says that the experiment should have such a result, but our other theories in best standing would say something else would happen, and then upon doing the experiment, it was found that things happened the way string theory said (as far as measurements can tell).


But there are such experiments. String theory says that the result of such an experiment is: Lorentz invariance is not violated.

> but our other theories

This is not how scientific research is done. The way you do it is: you have a theory, the theory makes predictions, you make experiments, and if the predictions fail, you reject that theory. The fact that you might have other theories saying other things doesn't matter for that theory.

So string theory said "Lorentz invariance is not violated", we've made the experiments, and the prediction wasn't wrong, so you don't reject the theory. The logic is not unlike that of p-value testing. You don't prove a theory correct if the experiments agree with it. Instead you prove it false if the experiments disagree with it.


There are no such experimental results satisfying the criteria I laid out. You may be right in objecting to the criteria I laid out, but the fact remains that string theory does not satisfy these (perhaps misguided) criteria.

In particular, predicting something different from our best other theories in good standing was one of the criteria I listed.

And, I think it’s pretty clear that the criteria I described, whether good or not, were basically what the other person meant, and should have been what you interpreted them as saying, not as them complaining that it hadn’t been falsified.

Now, when we gain more evidence that Lorentz invariance is not violated, should the probability we assign to string theory being correct increase? Yes, somewhat. But the ratio of the probability it is correct to the probability of another theory we have which also predicts Lorentz invariance does not increase. It does not gain relative favor.

Now, you’ve mentioned a few times youtubers giving bad arguments against string theory, and people copying those arguments. If you’re talking about Sabine, then yeah, I don’t care for her either.

However, while the “a theory is tested on its own, not in comparison to other theories” approach may be principled, I’m not sure it is really a totally accurate description of how people have evaluated theories historically.

And, I think, not entirely for bad reasons?


> But there are such experiments. String theory says that the result of such experiment is: Lorentz invariance not violated.

This is not a new prediction... String theory makes no new predictions, I hear. I don't understand why you need to be told this.

To your point, there exist various reformulations of physics theories, like Lagrangian mechanics and Hamiltonian mechanics, which are both reformulations of Newtonian mechanics. But these don't make new predictions. They're just better for calculating or understanding certain things. That's quite different from proposing special relativity for the first time, or thermodynamics for the first time, which do make novel predictions compared to Newton.


[flagged]


I suppose it's my bad that I've interacted with a troll that might not even be a real human being.


It has delivered falsifiable postdictions though. Like, there are some measurable quantities which string theory says must be in a particular (though rather wide) finite range, and indeed the measured value is in that range. The value was measured to much greater precision than that range before it was shown that string theory implies the value being in that range though.

Uh, IIRC. I don’t remember what value specifically. Some ratio of masses or something? I don’t recall. And I certainly don’t know the calculation.


Seeing a lot of people shit on Paul, which I guess, why not, but it's not super useful or positive.

I think this is a fairly good essay which can be boiled down to "don't do premature optimization" or "don't try to behave like companies much bigger than you".

There are three advantages to this:

1/ As a founder, get your hands dirty, even if in the grand scheme of things it's inefficient. You'll get first-hand experience and feedback.

2/ Avoid the upfront cost of "something that scales", and thus get quicker feedback.

3/ Makes you different, very important in the beginning.

"Do things that don't scale" is a way to drive the point home and must not be taken literally...


Many people are concerned with becoming an overnight success and being unable to withstand the load, and losing the momentum. So they build highly scalable things before the slightest need for horizontal scaling even arises.

I think that vertical scaling is underappreciated. Just throwing more resources at your monolithic backend can buy you quite enough time to understand how to best scale your product. More importantly, it may give you the time to reconsider which key strengths of your product the users actually come for, and thus to understand what needs scaling.

Also, when users really love your product, they will excuse you for its spotty performance. Early Twitter was built on a hugely inadequate architecture (a Rails app), and kept melting and crashing; remember the "fail whale"? Despite that, users were coming to it in droves, because it did the key things right.

To my mind, the early stage is all about easy iteration and finding out what makes users happy and enthusiastic. Ignore scaling; experiment, listen to the users, talk to the users. When it clicks, you can consider scaling. It may happen at a point you did not anticipate, and could not optimize for.

Technology is a tool, an instrument. It's great to have a Stradivarius, but you need some good music first.


Pets.com died because they scaled too fast and couldn't handle the load and didn't have the cashflow to fill orders.

It's a valid concern, but most people radically overestimate the likelihood.


When anything fails (even a business) a suitable excuse is found (maybe a story that executives can sell to investors). If you were there then sometimes you know what the actual hidden reason was (often an intersection of multiple causes).

It's a human pattern for businesses to discover a strawman, build a story about that strawman, then share that story widely.

Not saying the above answer about pets.com is wrong - just that in my experience you need to be cynical enough to ignore the story and then resourceful enough to find a better causal reason.

Edit:

Scaling is NOT given as a reason. "Despite only earning $619,000 in revenue, the business spent more than $70 million on advertising and marketing", "the company sold its pet products under their original purchase price", "bulk items like dog food were expensive to ship".

I guess another reason is that people make up stories like blaming scaling?

I'll make one up: Amazon owned 50% of pets.com and Amazon encouraged it to fail.


"Didn't have the cashflow" sounds more like lack of investment / loans, but I agree, explosive growth that catches you unready can happen. It seems to be an exception rather than the rule though. But everyone strives to be an exception, I know :)


You hire your sales team for their infectious enthusiasm. Then they get some momentum going and you don't want to try to stop that momentum, but now they're bringing you so much fame that it's now tipping into infamy.

And every time you ask them to slow down, they tell you a very convincing story about why if anything we should be going faster.

I worked for a small consulting shop where the founder was one of these people. I had to try three times over about 2 years to quit. The last time I just kept talking over him like a priest doing an exorcism and shoving my 2 weeks' notice at him.


It’s also important to realize that not every successful or worthwhile business has millions or billions of users that require extreme optimization and scalability.

I work on internal tools at my company. We know how big our environment is, there isn’t much sensitivity to performance, and we don’t see random spikes that we don’t cause ourselves. Yet I had someone on my team who was obsessed with optimizing everything, to the point of changing his entire code base to a new language that only he knew, so he could save a few milliseconds on paper. Aside from him, no one noticed, no one cared. His optimizations just introduced risk, as he was the only one who could support what he built. When he left, we threw the whole thing away at management’s demand. Had it been a little more simple and slow, someone else probably could have taken it over without as much effort.


My own ethics is mostly about collaboration and confidence. I make sure that I’m ready to offload work whenever I want, and that I know what I ship is working. Other things are just fun experiments. If something does not positively impact the business/consumers, I’m very happy to not do it. God knows there’s always something to work on that does.


> "don't try to behave like companies much bigger than you"

This is such good advice for organizations at all stages. As a consultant I spend a lot of time talking startups and small companies out of hobbling themselves by adopting policies they think they have to simply because they're a corporation, when those policies only make sense when you have at a minimum hundreds of people involved in the org.

Everything from k8s to nosql to overly restrictive security policies. The Netflix employee handbook/guide really drove this point home to me. When you're small, and you're hiring well, you can afford to actually delegate real responsibility to your staff and let them use their judgement. Not everything needs to be a hard and fast rule until and unless there's an unacceptable level of risk or a demonstrated problem at hand.


This relates to dealing with people too. A few times I've hired people who seemed to have good interpersonal skills, for people-facing roles. But for some reason, as soon as they were sending an email on behalf of the company, ie. to a customer or supplier, suddenly they're communicating like a soulless corporate automaton. Like, you don't have to pretend to be a cog in a massive corporation; no one actually likes receiving that kind of communication! Perhaps at a certain scale, when you're employing thousands of customer service agents, you'll need them to follow a strict script to maintain quality control. (Or maybe not.) But it's certainly not necessary at a company with single or double digit employees.


There is a key difference from "don't do premature optimization." "Premature optimization" suggests the scaled version is optimal. It might not be worth the resource cost to achieve it, but disregarding that, it's the best.

Whereas "Do things that Dont Scale" is suggesting the non-scaled process may be the optimal one. For example (and this sort of thing is in the article IIRC), giving direct contact details to the CEO instead of a generic form that get's sent to some shared customer service/sales inbox. Way better process for selling your product. The inquiry form scales but its in no way a premature "optimization."

Another way of putting it is that SCALING IS BAD. Or to be a bit more nuanced, it's a necessary evil. It's complex. It's resource intensive. It creates distance between you and your customer. Of course business goals and the environment may dictate it, but that doesn't mean the processes aren't degrading in quality. So it's more like don't do "premature process degradation" than "premature optimization", I think.


Or boiled down to "don't solve problems you don't have".


> "don't try to behave like companies much bigger than you"

That's a good point.


A lot of people have not built a successful company either.


>Seeing a lot of people shit on Paul

Hardly "a lot". There's like three negative comments and one of them is strictly criticizing the article itself and not paul. I thought it brought up some good points.


You may be underestimating the depth and scope of SAS. You can't just replace it with "a bunch of R and Python scripts".


Depends on what you're doing, to be fair. R would probably be a better choice as it's more likely to have all the statistical stuff in SAS. Python might make sense if you want to integrate it into web/other systems.

It's definitely not the easy lift-and-shift the OP made it out to be, though. Particularly when you're integrating with other departments, it's vital to have perfect compatibility, as otherwise you'll break the other departments' workflows.


Doesn't mean we shouldn't do it.


Well, sure, but perhaps some kind of plan is warranted?


IIRC, the previous administration did try to do some of the slow, steady, imperfect work of planning to gradually bring back key industries.

See https://en.wikipedia.org/wiki/CHIPS_and_Science_Act and https://www.theverge.com/2024/7/11/24195811/biden-ev-factory...

Of course, the voters wanted something else.


Who is going to commit the resources to make serious money-losing plans vs manufacturing overseas?


Isn't the point of capitalism to not have a plan and let the market figure it out?


How are markets going to figure anything out with tariffs changing every day, depending on the mood of dear leader?


That's a problem for the markets to suss out.


They've sussed out that if you suck up to him he'll give you an exemption. Of course if you are a medium sized business you are screwed and have to wait in line, but you'll get your chance as long as you can hold on through the summer.

In two years of course it won't matter.


> They've sussed out that if you suck up to him he'll give you an exemption

It's a very thinly veiled protection racket. People do tend to repeat the plays that they know.


sounds about right.


Countries that believe that are dominated by those who plan.


Citation needed. How did those five year plans go?



It's a principle of capitalism, but taken to the extreme, it's just a strawman. At this point, I think we are pretty sure that some interventions make capitalism better.

This post is specifically about Industrial Policy: https://en.wikipedia.org/wiki/Industrial_policy

But other effective interventions are anti-trust and demand-inducing regulation (e.g. people want to fly because they know it's safe).


No, of course not. That's oversimplifying to the point of idiocy.

Markets do not mean that an Industrial strategy / Industrial policy is not needed.

Markets respond to incentives created by such a strategy.


The free market (which I think people also include in capitalism) would correctly predict that labour-intensive jobs would be outsourced. This is very much a feature (comparative advantage), not a bug. I've realized a lot of supposedly free market people don't even know the basics of it. Politically, the free market has become an identity associated with national greatness and a sense of control of one's destiny. The dominant feeling seems to be that if you have a free market, you will win everything (which is actually the opposite of the truth).


That's what we did, and it moved everything to China.


China, who do have an industrial strategy. It worked for them.


The point of Capitalism is that Marx needed a straw man to tear down. The world has never seen what he envisioned.

What you might call capitalists very much plan. They don't believe in central planning where one "guy" makes the plans and everyone else implements them, but they do plan.


> they do plan.

I've just sat through a long meeting with lots of Jiras and Q2 objectives. Trust me, there's planning. Lots of planning.


Marx never said that capitalists didn't plan. In fact, the possibility of the transition from late-stage capitalism/imperialism to socialism is based on that very fact: capital got concentrated in very big companies with internal planning. See https://en.wikipedia.org/wiki/The_People%27s_Republic_of_Wal...


I'm sure that the capitalists would disagree in this instance.


Capitalism as such went out the window with tariffs.


No, purely free markets (which weren't free to start off with) went out the window.


Which is why bringing manufacturing back to the US is something we were already doing. It's just unfortunate that instead of continuing that, the current administration is trying to undermine the previous administration's effective efforts that helped bring manufacturing back into the US.


No, it doesn't. There is a presumption that manufacturing is Better, a more ideal way of organizing the economy, based on a false nostalgia for America's past.


Sure, but it will take longer than 4 or 8 years, and everyone in power wants their own thing, not continuity. It cannot happen without a long-term plan, and long-term plans cannot happen if you have, maybe, a year to do things and the rest is election time.


The EU has already been doing that with "fines".


The fines were for breaking EU laws. Don’t break EU law, don’t get fined. It’s quite straightforward


The argument is that it's better handled at the state level, and having a centralized federal authority for that is of little to no value.


Having a patchwork of safety standards seems like a mess versus a federal policy that applies across the country


It'll allow states to race to the bottom in worker safety, which I suppose is the idea.

It might also make compliance overall more expensive by creating a fragmented patchwork of regulations and practices, which increases complexity. Look at the effect of different states having different fuel formulation requirements on the oil refining industry.

A lot of the cost of regulatory compliance comes from complexity, fragmentation, and cognitive load. Simple uniform regulations are cheaper and easier.


Which turns into Florida and Texas making mandatory water breaks illegal. It’s a bad argument that only serves to ruin workers' lives and health.


Not sure why it makes sense to redo the same work 50 times


Nor for companies to have 50 different compliance rulesets to follow.


Not just that, but one central institution can employ experts and do studies due to the larger budget. 50 small ones are just going to scrape by with the minimum.

Maybe there's a middle ground where states can have local overrides, if their residents agree.


What's the most expensive software you bought?


lol, believe it or not this was an interview question one of my Directors of Engineering used to use to suss out people's experience. As I read the parent comment I was thinking the same thing.

Be careful listening to this kind of advice. You never know what ballpark the "CTO" is playing in.


I'm just thinking about a 6-to-7-figure software investment and trying to understand how you could do that without several meetings.


Easy, use JIRA and give your whole company seats. Add on some other Atlassian products and you'll quickly get to 4-5 figures per month.


DB Engine rankings are not very reliable and are constantly gamed, they shouldn't be used for anything serious.


Covid stats skewed everything. This article isn't very rigorous.


The article should be comparing 2023 per-capita mortality rates with 2019.


This. Mortality displacement isn’t a new phenomenon.

