bezout's comments

“Microsoft always lands on top”


The reels look too realistic though. I’m guessing they had the actual celebrities record them and left the AI to take care of the pictures and the convo.


If the US insists on keeping healthcare private, I’d expect these money-sucking “health” insurance companies to sooner or later be replaced by VC-money-sucking tech companies offering a fairly priced subscription for accessing drugs and treatments.

The only obstacle - and it’s a big one - is drug and health insurance companies crying out to the government about how tech companies are stealing their lunch and should be broken up.


I'm now wishing for this to happen. At least for now, tech companies are pricing things a bit more fairly.


To be fair, some nations do have their own software development teams. Italy has Sogei, which is fully controlled by the Ministry of Economy and Finance. They had a reputation for developing crappy software, but I heard they are getting better.

PS: they were not involved in the development of this algorithm


Smals is to Belgium pretty much what Sogei is to Italy: a government-controlled IT sweatshop.


The way it works in Italy: the design and implementation of the algorithm in Java 8 or an even older version of C# was probably outsourced to one of the Big 4 consultancy companies.

A group of underpaid graduates was put together to crack the problem. All of them had crammed for their algo & DS exam, since that’s what the Italian university system incentivises, so none of them actually remembered a thing about algorithm design. They googled a bunch of words and forked the first PoC they found on GitHub.

Everything was wrapped into a nice PowerPoint full of corporate BS and delivered to the government.

Edit: As expected, the algorithm was developed by a company owned by DXC Technology [1] and Leonardo, the Italian defence company. The contract was worth 5 million euros.

[1]: https://www.wired.it/article/algoritmo-scuola-supplenze-mini...


I don’t see what “Java 8 or an even older version of C#” has to do with the correctness of the algorithm or its implementation.


Doesn't have anything to do with correctness.

Has everything to do with "we won't challenge the system and propose changing how we actually build software because doing so will lead to us losing the contract, so we'll build upon these antiquated frameworks that will become harder and harder to support and sweep the problem under the rug long enough for me to buy another $EXPENSIVE_THING"

From a technical perspective, this is a terrible approach.

From a "look, we all just want to make money here, right?" perspective, makes total sense.


It doesn’t. It’s not an attack on the programming languages. It’s just that they have a soft spot for using old-ass versions, which may or may not have known vulnerabilities, and they don’t care about updating them.


FYI, OpenJDK 8 still receives regular security updates and will continue to do so for at least three more years (Temurin, Red Hat) (or, according to Oracle, until the end of 2030). It’s still in production in a lot of places.


5 million, at 100,000 euros of comp per engineer and a 2x multiplier for total cost per head, buys only a team of 8 for three years.
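
Roughly, in code (a sketch of that arithmetic; the 2x multiplier and three-year horizon are assumptions, not reported figures):

    # Back-of-the-envelope budget math; all figures are assumptions.
    contract = 5_000_000                      # EUR, reported contract value
    cost_per_head = 100_000 * 2               # comp x 2 overhead multiplier
    person_years = contract / cost_per_head   # 25 person-years
    print(person_years / 3)                   # ~8.3 engineers over three years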

Not that more money would have fixed it, but good software is not some $200k affair.

This program really could have used a small software verification team.


Software engineers in Italy don't earn 100k.


For a fresh graduate it is less than 30k.


ofc DXC would be behind this.

(DXC was owned by, or spun off from, HP Enterprise, HP's consulting arm.)


It’s not right now but it will be when combined with WEI or if Safari gains more market share. Overall, I agree with the sentiment expressed in this blog post: https://httptoolkit.com/blog/apple-private-access-tokens-att...


Safari has absolute market share.


Submarket share?


How many iOS Safari users are you forgetting about?


Apple Screen Time API is a hot mess. It’s definitely one of the most poorly documented APIs out there. I dare you to take a look at Apple’s documentation. While playing with it, I also found out - and I wasn’t the only one - that some things, such as STWebpageController, do not work correctly in the Simulator and you must use a real device.


All of Apple's documentation is terrible. All of their developer ecosystem is terrible. They don't care about you writing software for their devices if you are not an Apple employee.

Time is better spent writing apps for the web, Android, Linux, and Windows. You'll at least be productive and have access to resources and tools that work.


That also mirrors my experience; the whole dev ecosystem is very messy, and I gave up on the platform.


> Apple Screen Time API is ... definitely one of the most poorly documented APIs out there.

Sadly, there's a lot of competition for this title. Apple's documentation is pretty terrible.

I never liked the Microsoft Windows model or APIs, but their documentation was so good that I felt my opinion was well supported by facts :-).


Same experience. It's half-baked, and you don't know which half until you try to implement it. I have a lot more appreciation for Android documentation after seeing what iOS has to offer.


It’s still called “retweet” on the iOS app though. Even though I understand web and mobile cannot change in lockstep, since the mobile app has an approval process before it goes live, I’d like them to have the same copy.


Sure they can do it - scheduled releases are possible: https://developer.apple.com/help/app-store-connect/manage-yo...

They just decided to YOLO the change without coordination or waiting for all parts to be available, so we've got this messy "random things as they become available" process.


The web still says Retweets too. I'm not sure what this post is showing.


The post states it: this is not a problem because Safari is not the leading web browser, so Apple has very limited power over what they can do with it.


Exactly. Websites will not require this, because they know that Safari has a minority market share and they can't force users to buy an Apple product. However, if this is supported by both Chrome and Safari, the equation suddenly flips, and many sites will feel that they can refuse service to other users.


Safari is not only the leading browser on mobile, it is the only choice iPhone users have, unlike Chrome, which users can choose not to use. I would be more wary of Safari changes than Chrome changes.


> Safari is not only the leading browser on mobile

No it's not? Android has upwards of 70% of the mobile market[0], and Chrome has nearly 65% of the mobile browser market, compared to Safari with under 25%.[1]

> the only choice iPhone users have

Sort of. WebKit is the only choice iOS users have, but there are plenty of browsers available on iOS (including Chrome and Firefox) that use WebKit, not just Safari.

[0]https://gs.statcounter.com/os-market-share/mobile/worldwide

[1]https://gs.statcounter.com/browser-market-share/mobile/world...


Do you really need that many endpoints, even at LinkedIn scale? I’d expect a lot of them are due to engineers reinventing the wheel because of undocumented endpoints.


You never want to deprecate an endpoint, so you end up with /service/v1, /service/v2, /service/v3… /service/v37


Someone whose job it is to oversee development across the company just needs to ensure teams treat internal dependencies like they would external dependencies: allow time to upgrade upstream services as part of the normal dev cycle, never get more than N versions behind, etc.

If you're on v37 of a service and you're forced to continue supporting v1 (and 35 others), there's a problem somewhere.


For one, that there wasn't enough willingness to make backwards-incompatible changes.

If it's internal APIs, they need to get on top of deprecating and removing older ones. This is one of the key points of Google's SWE book (at least the first part) and one of the benefits of a monorepo: if you change an API in a backwards-incompatible way, you're also responsible for making sure every consumer is updated accordingly. If you don't, either you're left maintaining the now-deprecated API, or you're forcing however many teams to stop what they're doing and put time into a change that you decided needed to happen.


> If you're on v37 of a service and you're forced to continue supporting v1 (and 35 others), there's a problem somewhere.

I think you misunderstand.

v23 was built on v5, which was built on v1. Reusing the earlier logic was obviously better than duplicating it. v24 is used by an external system that nobody has any control over, so it’s impossible to change. All the other versions… well, no idea if anyone uses them, but everything works now, so why invite disaster by removing any?


One of the problems with API versioning is that it’s really a contract for the whole shebang and not just a specific endpoint. You almost certainly want them to move in sync with each other.

So if you have an API with 10 endpoints, and one of them changes 10 times… you now have 100 endpoints.
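
A minimal sketch of that blow-up, with made-up endpoint names (nothing here is anyone’s real API):

    # Whole-API versioning: each released version re-exposes every endpoint,
    # so 10 endpoints x 10 versions = 100 routes, even if only one endpoint
    # ever actually changed.
    endpoints = ["users", "posts", "comments", "likes", "follows",
                 "messages", "search", "media", "jobs", "ads"]
    routes = [f"/api/v{v}/{e}" for v in range(1, 11) for e in endpoints]
    print(len(routes))  # 100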


That doesn't make sense; are you implying that every change alters the behaviour of every single endpoint?


No, more like: “For our team’s next microservice epic, are we using API v5 like we have been so far?” … “No, we’ll upgrade to consuming API v10 as part of this epic.”

But maybe the only changes between API v5 and v10 touched 5% of the endpoints, while the other 95% got a new version number too. That way people can refer to “API v10” instead of “here’s a table with a different version number for each of the 19,000 endpoints our microservice consumes in this update”.

It’s an organizational communication thing, not a technical thing. “API v10” implies a single contract. Otherwise, how do you communicate different version numbers for 19,000 endpoints without major miscommunication? You couldn’t even reasonably double-check the spreadsheet sent between teams. Instead it’s “just make sure to use v10”. Communication is clear this way.

Obviously this method has pros and cons; I’ve explained the pros. This is also why chaos engineering can help: intermittently black-holing old API endpoints encourages teams to move to new ones, so the old versions can finally be removed entirely and you never get to 19,000 endpoints, which is the real problem.


Yes, this is what I meant. At the least, a service should be versioned as a single unit: /api/myservice/v2/endpoint. But if you have 10 endpoints in your service and 10 versions, it’s still 10x10 even if most of them don’t change.

It would be a nightmare to consume something like /api/myservice/endpoint/v2. Needing v2 of the create endpoint but v5 of the update? That would be ugly to work against. And there is actually no guarantee that versions are even behaviour-compatible (although it would be stupid to let them wander too far apart). There can be cases where the response objects don’t convey some info you need in certain versions, etc.

I was thinking of a service as the unit of "API" here, rather than an API consisting of multiple services; "each service provides its own API" is how I was thinking of it. But I can see the usage of saying "this is our [overall] public/internal API" too. And I agree /api/v2/myservice would be a bit much if every service moved the global version counter every time a single endpoint changed, lol.

(although I suppose you could make an argument for "timestamp" as a "point in time" API version, if you version the API globally. Sounds like it would cause friction as services try to roll out updates, but it's notionally possible at least.)


I was thinking along the same lines. It is easy to make it sound like you have a lot of endpoints, when the vast majority are likely API mappings pointing to the same underlying service.

"API endpoints" is almost as weird a metric as LoC. It does tell you something, but in a way that can be misleading.


For example, if your URL scheme is /api/{version}/{path}, then any new version will introduce lots of new endpoints. Most of them will work the same way, but without checking the source code you can never be sure.

Because of that, I prefer to version each service instead of versioning the whole API, but both strategies have pros and cons.
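
A sketch of the two schemes side by side (the service and endpoint names, and the route helper, are made up for illustration):

    # Scheme A: version the whole API. One global counter; bumping it anywhere
    # re-mints every route under a new prefix, e.g. /api/v3/billing/invoices.
    # Scheme B: version each service independently. Consumers track one version
    # per service, e.g. /api/billing/v2/invoices.
    def route(service: str, version: int, path: str, per_service: bool) -> str:
        if per_service:
            return f"/api/{service}/v{version}/{path}"
        return f"/api/v{version}/{service}/{path}"

    print(route("billing", 3, "invoices", per_service=False))  # /api/v3/billing/invoices
    print(route("billing", 2, "invoices", per_service=True))   # /api/billing/v2/invoices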


I’m trying to parse this too and am reaching the same conclusion. The “100 endpoints” number seems to imply a scenario where I have 10 endpoints, change endpoint 0v1 into endpoint 0v2, and must then duplicate endpoints 1v1 through 9v1 alongside my endpoint 0v2 so they are all served together. This doesn’t make sense to me: as far as I can tell, there’s no reason to upgrade or duplicate the other nine endpoints just because I updated the first.

It really reaches absurdity when the first endpoint is on its tenth iteration (while the others haven’t changed) and you’re now serving ten duplicate endpoints per version, or 100 total endpoints where 90 of them are duplicates of themselves.


If you're only incrementing when you change a particular call, then you end up with /api/myservice/create/v2 (or sometimes this is done via a header) but v5 for the update call, and understanding which version goes with which becomes cognitive overhead.

(and really the problem isn't basic CRUD endpoints, it's the ones with complex logic and structure where what's being built isn't necessarily the same thing over time.)

It's one thing when v2 and v5 are the latest, but if someone comes through later and wants to bolt a feature onto a service that talks to v2/v5 when v3/v9 are the latest, you have to go back and look up a map of which endpoints are contemporary before you can add a third call (v2/v5/v2) that is supposed to work together.

This can be done via Swagger etc., but you're essentially just rebuilding that service-level versioning with an opaque "API publish date" layered on top.
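
That map might look something like this (a hypothetical compatibility table; keeping it in sync is exactly the bookkeeping a single API version number avoids):

    # Per-endpoint versioning forces consumers to track which endpoint versions
    # were contemporaries; an "API release" degenerates into a lookup table.
    releases = {
        "2023-01": {"create": 2, "update": 5, "archive": 1},
        "2023-09": {"create": 3, "update": 9, "archive": 2},
    }

    def contemporary(release: str, endpoint: str) -> str:
        return f"/api/myservice/{endpoint}/v{releases[release][endpoint]}"

    print(contemporary("2023-01", "update"))  # /api/myservice/update/v5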


Absolutely not.

