Varelion's comments | Hacker News

The need to search for what they are being exposed to already outs them as being out of touch. Coming to the wrong conclusion only cements their misunderstanding of both the industry's status quo and the experiences lived by up-and-comers.


PP is where we managed to do it. Anyone who hasn't had the Gardasil-9 — which actively targets cancer-causing strains — please don't hesitate. It's bad.


Wouldn't blocking IPv6 and using a kill-switch prevent leaking?


In the case of PureVPN, the only way of preventing leaks is by switching to a different provider. There is definitive proof that they keep logs despite their claims to the contrary. I have linked a federal criminal complaint showing that the FBI requested logs after the offense and PureVPN provided them. The relevant portion is on page 22.

https://www.justice.gov/archives/opa/press-release/file/1001...


Block IPv4 as well and you're pretty solid.


No, not in all cases. Imagine your browser gets 0-dayed and just sends every IP it sees to an endpoint.


Noob here. If that happened, wouldn't any layering of network solutions ultimately leak all the same?


No. The browser or torrent process is sandboxed and can only see the VPN network interface; the other interfaces are hidden.


My partner also had a hard time getting it at the age of 26, and so did I at the same age.

People read the recommendation guideline and brainlessly follow it without caring why it's in place. If you haven't had a reason to be exposed by 30, then 30 is as good a time to get the shot as 9.

One of the few times I can say the majority of medical practitioners don't know what they are talking about when they spit dogma instead of life-saving sense.


I think it’s more that people judge and then look at the guidelines and get even more judgey and feel justified in doing so.


Let's break this down carefully, step by step.

Start with Jacob.

Jacob’s son → call him A.

A’s son → call him B.

B’s son → call him C.

C’s son → call him D (this is “the son of Jacob’s son’s son’s son”).

Now the question asks for the paternal great-great-grandfather of D:

D’s father → C

D’s grandfather → B

D’s great-grandfather → A

D’s great-great-grandfather → Jacob

Answer: Jacob


If only they could fix the ecosystem's stability; I feel like anything written with C#'s staple packages becomes outdated considerably faster than with most other options.


TBH ASP.NET Core has been the most stable web framework I've worked with in my life. If you have good test coverage, upgrading projects between major versions often takes minutes — because nothing, or almost nothing, gets broken. Some things might get deprecated, but the old way keeps working for years, so you can chip away at it slowly over the next year or two.

You still need to make sure that everything works, but that's what tests are for, and this has to be checked regardless of your tech stack.

Of course, they had a massive backwards-compat break when moving from the old ASP.NET to ASP.NET Core; here's hoping nothing like that happens in the next 10-15 years.


I've hedged the stability risk by using the narrowest possible slice of the framework.

With enough experience you can accomplish pretty much everything using just the minimal API and router. HttpContext is the heart of AspNetCore. If you can get your hands on instances of it within the appropriate application context, you can do anything you need to. Everything else is dependent upon this. The chances that HttpContext can be screwed with are very, very low. There are billions of dollars riding on the fact that this type & API remains stable for the next decade+.
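As a rough sketch of that "just the router and HttpContext" style (my own illustration, assuming the .NET 6+ web SDK; the route and response are made up):

  var app = WebApplication.Create(args);

  // Everything flows through HttpContext: read the request, write the response.
  app.Map("/ping", async (HttpContext context) =>
  {
      context.Response.ContentType = "text/plain";
      await context.Response.WriteAsync("pong");
  });

  app.Run();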


Do you still use the framework's DI pattern with this approach? I have an older-school .NET app (still Core) that I work on sometimes and haven't gotten much experience with the minimal APIs, although they look attractive to someone who prefers lower-level routers.


I use the AddService() method to inject things on occasion, but it's not used heavily. For lightweight apps I'll spin up something like a shared SQLiteConnection and inject it for all resources to use.

The DI pattern is simple & clean at this scale. In my top-level program I define my routes like:

  app.Map("/", Home.HandleRequest);
  app.Map("/login", Login.HandleRequest);
  app.Map("/account/new", NewAccount.HandleRequest);
  app.Map("/{owner}", OwnerHome.HandleRequest);
And then I have HandleRequest implementations like:

  static async Task HandleRequest(HttpContext context, SQLiteConnection sql)
  static async Task HandleRequest(HttpContext context, SQLiteConnection sql, string owner)
  etc...
The actual HandleRequest() method can do anything, including things like directly accepting and handling WebSocket connections.
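Put together, a minimal self-contained version of this pattern might look like the following. This is a sketch under stated assumptions: it uses the System.Data.SQLite package, .NET 6+ minimal hosting, and AddSingleton to stand in for the injection step; the class and route names are illustrative.

  // Program.cs — sketch only; names and connection string are made up.
  using System.Data.SQLite;

  var builder = WebApplication.CreateBuilder(args);

  // One shared connection, registered once and resolved into every handler.
  builder.Services.AddSingleton<SQLiteConnection>(_ =>
  {
      var sql = new SQLiteConnection("Data Source=app.db");
      sql.Open();
      return sql;
  });

  var app = builder.Build();
  app.Map("/{owner}", OwnerHome.HandleRequest);
  app.Run();

  static class OwnerHome
  {
      // HttpContext and the {owner} route value bind automatically;
      // SQLiteConnection is resolved from the container registered above.
      public static async Task HandleRequest(HttpContext context, SQLiteConnection sql, string owner)
      {
          await context.Response.WriteAsync($"hello, {owner}");
      }
  }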


I saw a lot of breaks between .NET Core 1, 2, and 3... since 3 it's been much steadier, though.


Or hope that key figures or significant portions of the team don't get laid off. I don't trust Google or Microsoft products for this reason alone.


.NET has been powering StackOverflow for decades. All of .NET itself is MIT Licensed.


I haven't written C# professionally since the early 2010s, but back then the language had a big problem: the old Container classes were not compatible with the Container<X> classes added when generics came to C#. This created an ugly split in the ecosystem because if you were using Container<X> you could not pass it to an old API that expected a Container.

Java, on the other hand, had an implementation of generics that made Container<X> just a Container, so you could mix your old containers with generic containers.

Now, Java's approach used type erasure and had some limitations, but the C# incompatibility made me suffer every day; that's the cultural difference between Java and a lot of other languages.
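Concretely, the split looked like this on the C# side (a sketch using ArrayList and List<T> as the illustration):

  using System.Collections;            // ArrayList: the pre-generics container (.NET 1.x)
  using System.Collections.Generic;    // List<T>:   the generic container (.NET 2.0+)

  class Demo
  {
      // An old API written before generics existed.
      static void OldApi(ArrayList items) { /* ... */ }

      static void Main()
      {
          var names = new List<string> { "a", "b" };
          // OldApi(names);               // does not compile: List<string> is not an ArrayList
          OldApi(new ArrayList(names));   // the workaround: copy into the old container
      }
  }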

It's funny because when I am coding Java and thinking just about Java, I really enjoy the type system and rarely feel limited by type erasure, and when I do, I can unerase types easily by:

- statically subclassing GenericType<X> to GenericType<ConcreteClass>

- dynamically by adding a type argument to the constructor

- mangling names (say you're writing out stubs to generate code to call a library, you can't use polymorphism to differentiate between

  Expression<Result> someMethod(Expression<Integer> x)
and

  Expression<Result> someMethod(Expression<Double> x)
since after erasure the signatures are the same, so you just gotta grit your teeth and mangle the method names)

but whenever I spend some time coding hard in a language that doesn't erase generic parameters, I come back and I'm not in my comfortable Java groove, and it hurts.


This hasn't been an issue for a long time because nobody uses the non-generic collections anymore. That doesn't help you with your reliance on type erasure though.

If you're up for it you should give it another try. Your example of subclassing GenericType<X> and GenericType<ConcreteClass> may be supported with covariance and contravariance in generics [1]. It's probably not very well known among C# developers (vs. basic generics) but it can make some use cases a lot easier.

[1] https://learn.microsoft.com/en-us/dotnet/standard/generics/c...
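A small sketch of what variance buys you (Animal and Cat are made-up names here; IEnumerable<T> is declared covariant, as IEnumerable<out T>, in the BCL):

  using System.Collections.Generic;

  class Animal { }
  class Cat : Animal { }

  class Demo
  {
      static void Main()
      {
          // Because IEnumerable<out T> is covariant, a sequence of Cats
          // can stand in wherever a sequence of Animals is expected.
          IEnumerable<Cat> cats = new List<Cat> { new Cat() };
          IEnumerable<Animal> animals = cats;   // legal thanks to covariance
      }
  }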


Yeah, I had a chance a few months back when I was the backup programmer in a game-development hackathon and my team was developing with Unity, which uses C#. It was fun.

People talk about tradeoffs with GC; the worst one I've seen is the occasional game with a terrible GC pause, for instance Dome Keeper, which is based on Godot and also runs on .NET. I used to play a lot of PhyreEngine (also .NET) games on the PlayStation Vita and never noticed GC pauses, but I think those games did a GC on every frame instead of letting the garbage pile up.


Whether you're programming with a GC or without one, it all comes down to profiling and optimization (when required). Using a GC means you need to think about memory allocations and reduce them (when required), because they factor into how long the GC will need to run. C# is a great language for this because it provides a lot of options for removing GC allocations from your code entirely (struct types, Span<T>, etc.).
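For example, a parse that stays allocation-free by slicing spans instead of allocating substrings (a sketch; the input format is made up):

  using System;

  class Demo
  {
      // Parse "x,y" without allocating substrings: span slices are views over the input.
      static (int X, int Y) ParsePoint(ReadOnlySpan<char> input)
      {
          int comma = input.IndexOf(',');
          int x = int.Parse(input[..comma]);
          int y = int.Parse(input[(comma + 1)..]);
          return (x, y);   // value tuple: a struct, no heap allocation
      }

      static void Main() => Console.WriteLine(ParsePoint("12,34"));
  }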

Not all GCs are created equal, either. Unity, for example, is based on an ancient version of Mono, so it uses the Boehm GC, which is significantly slower than the one used by .NET. Godot probably has two GCs, because it primarily runs GDScript (their custom language) and only supports .NET in a separate engine build. They'll all have their own performance characteristics that the developer will need to adjust for.


PhyreEngine doesn't use .NET, and it is written in C++ (with optional Lua for scripting). You might be thinking of PlayStation Mobile, which does use .NET for scripting.


Covariance and contravariance are very useful, but it's quite annoying that they can only be used with interfaces and not classes/structs. I write lots of little monoid-like data wrappers like Named<T>, and I either have to go through the hassle of dealing with interfaces for something that shouldn't need them, or I have to abandon covariance. The .NET maintainers seem to think that nobody wants covariance on classes, but it would be very helpful to me.
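To illustrate the restriction (Named/INamed here are made-up stand-ins for that kind of wrapper):

  // Variance annotations compile on interfaces (and delegates) only.
  interface INamed<out T>
  {
      string Name { get; }
      T Value { get; }
  }

  // class Named<out T> { }   // does not compile: variance modifiers are only
  //                          // allowed on interface/delegate type parameters

  class Named<T> : INamed<T>  // the class itself stays invariant...
  {
      public Named(string name, T value) { Name = name; Value = value; }
      public string Name { get; }
      public T Value { get; }
  }

  class Demo
  {
      static void Main()
      {
          // ...so covariance is only available when callers go through the interface:
          INamed<object> o = new Named<string>("greeting", "hi");
          System.Console.WriteLine(o.Name);
      }
  }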


And yet, Java is trying desperately to fix its generics now with Project Valhalla (although value types are their main goal), but it's almost impossible to do without major breaking changes. Without reified generics there's a major performance cost when primitive (or value) types are used as generic parameters, because of boxing. Also, .NET has had generics since 2005, when .NET Framework 2.0 was released.
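The boxing difference is easy to see on the .NET side (a sketch, with ArrayList standing in for an erased, object-based container):

  using System.Collections;
  using System.Collections.Generic;

  class Demo
  {
      static void Main()
      {
          // Reified generics: List<int> stores the ints inline, unboxed.
          var generic = new List<int> { 1, 2, 3 };

          // The old object-based container boxes every int onto the heap,
          // roughly the cost Java pays today when Integer stands in for int.
          var boxed = new ArrayList { 1, 2, 3 };
      }
  }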


> This created an ugly split in the ecosystem because if you were using Container<X> you could not pass it to an old API that expected a Container.

Correct me if I'm wrong, but allowing this would mean the called API might insert objects wholly unrelated to X. That would break every assumption you make about the container's contents. Why would this ever be allowed or wanted?


lol, whut?

Microsoft created the ".NET Standard" for this. Literally anything that targets .NET Standard 1.0 should work from circa 2001 through modern-day 2025. You still get the (perf) benefits of the runtime upgrade, which is what the blog post is about.


.NET Standard is more for maintaining interoperability between .NET Framework and .NET (Core). At this point only (very) legacy applications should be on Framework. Everything else isn't, so it shouldn't use .NET Standard as a target, because that limits the new features it can use.


I support your statement for custom dev applications. Unfortunately some large enterprise applications like D365 still require the Framework :(.


Can you provide more examples? I've taken a Win32 application from .NET 3.5 and converted it to a .NET console application (it was a server with a GUI) running on .NET 8 with minimal friction; a lot of it wound up being me just importing the old .NET Framework packages anew from NuGet.

What are you looking for out of .NET? The staple packages don't go away as often as in ecosystems like NodeJS.


If you are not on the bleeding edge of whatever new framework Microsoft is promoting today, the ecosystem is incredibly stable.


Yep. A breaking change that makes my code a gajillion times faster is still always just a dirty breaking change that I'll hate.


The breaking changes are very well documented and are esoteric in nature. https://learn.microsoft.com/en-us/dotnet/core/compatibility/...

Scott Hanselman has a very short blog on how 20 year old code is upgraded to the latest .NET in just a few short minutes: https://www.hanselman.com/blog/upgrading-a-20-year-old-unive...


BS, basically. The docs about upgrading between frameworks and about what works with what only really cover the current versions, and older material often disappears after a few years, especially anything about edge cases. Several upgrades also demand that you do the upgrade version by version. It is tedious work if you don't have a full understanding of the app. NuGet has also become a complete dependency hell: today you often have to spell out exactly which package versions to use to get your legacy apps to build with newer .NET versions. You can't just go with the latest.


I just upgraded a .NET Framework 3.5 Windows application to .NET 9 with little to no issues. I even decided to remake it as a WinUI app instead of leaving it in WinForms. The entire process took me less than 2 weeks of just casual development.


"Several upgrades also demands that you do the upgrade version by version"

This seems unlikely. Do you have a source?


I have upgraded hundreds of projects. The docs with breaking changes are organized by version. For smaller solutions you can create a new solution, pull in the code, and try to fix everything that breaks. There is also an upgrade tool that can work for smaller projects. But the fastest way for normal-sized solutions is to update version by version up to the closest LTS; then you can go LTS to LTS. .NET also doesn't only break in code, it breaks in behavior, so you really want to test those changes in isolation.


They almost never make any breaking changes these days. And when they do it's because of something that is absolutely unavoidable. I've never hit one.


Key difference being the social safety nets in place around the jobless and under-employed.


I hope Brazil sorts itself out, economically and in regards of crime and opportunity.


I hope it does that, as I hope the same for the USA.


Depends on whether they support the working class or the ownership class. All politics is about this.


"Oh there you go, bringing class into it again."


Brazil is a melting pot. It's completely impossible.


You imbecile — it was the emails!


"'clearly almost certainly'" !


With a move like this, Android is no longer open source. In the past few years it has become obscene how emboldened the corpos have gotten, how far they are willing to push to mine every last crumb of data, eager to sell it to dangerous government bodies.

