Hacker News | qzzi's comments

I've been hacking professionally for 30 years and I know what to look for. Anthropic's report is garbage. Period.


Compared to what? What gives you all that, and what prevents you from having it with tables?


Javascript arrays have functions for all of that, so if you use something like React and render your table from data arrays, then it's all pretty trivial. I guess the point is that if you have to use JS to do those manipulations anyway, then at some point it's going to be easier to just use the React(/Vue/Svelte/etc) approach than to manipulate the table yourself using the API described in the article.


Frameworks that make development easy are inherently inefficient. If performance is a priority, then you'd better sort it out yourself.


Countless plugins for use in countless frameworks. We should have all that built in by now, like we finally have a functioning date picker.


I'm pretty sure they're talking about reference counting that depends on the arguments, not about optional arguments or invalid argument combinations.


C and Python automatically concatenate adjacent string literals, and Rust has the concat! macro. There's no problem just writing it in a way that works correctly at any indentation. No need for weird strings.

  " one\n"
  "  two\n"
  "   three\n"


Personally, I'd rather prefix with `\\` than have to postfix with `\n`. The `\\` is automatically prepended when I enter a newline in my editor after I start a multiline string, much like editors have done for C-style multiline comments for years.

Snippet from my shader compiler tests (the `\` vs `/` in the paths in params and output is intentional; when compiled, it will generate escape errors, so I'm prodded to make everything `/`):

    test "shader_root_gen" {
        const expected =
            \\// Generated file!
            \\
            \\pub const @"spriteszzz" = opaque {
            \\    pub const @"quadsprite" = @import("src\spriteszzz/quadsprite.glsl");
            \\};
            \\
            \\pub const @"sprites" = opaque {
            \\    pub const @"universalsprite" = @import("src\sprites/universalsprite.glsl");
            \\};
            \\
            \\pub const @"simpleshader" = @import("src/simpleshader.glsl");
            \\
        ;

        const cmdline =
            \\--prefix src -o testfile.zig src\spriteszzz/quadsprite.glsl src\sprites/universalsprite.glsl src/simpleshader.glsl
        ;

        var args_iter = std.mem.tokenizeScalar(u8, cmdline, ' ');
        const params = try Params.parseFromCmdLineArgs(&args_iter);

        var buffer: [expected.len * 2]u8 = undefined;
        var stream = std.io.fixedBufferStream(buffer[0..]);
        try generateSource(stream.writer().any(), params.input_files.items, params.prefix);
        const actual = stream.getWritten();

        try std.testing.expectEqualSlices(u8, expected, actual);
    }


They aren't comparing those crimes. You don't get to pick and choose where laws should matter based on whether you personally like the item being banned. Your "bad guys will do it anyway" argument doesn’t hold.


The point is obviously that the counter is centralized, and it relates to the previous example, where there is no concurrency. The need for synchronization when sharing data across threads is mentioned just below that.


In the right place, but maybe at the wrong time. You can expect +1 and always be right, when the value should be +10.


The __LINE__ macro, like all other macros, is expanded during the preprocessing of the source code and is not handed to the debugger in any way.


Yes... And debuggers that implement line numbers generally work by taking that information as part of the preprocessing stage. And the #line directive and __LINE__ macro were implemented _for debuggers_ when originally created. They were made to be handed over to the debugger.

If you simply compile and run, the debugger won't have __LINE__, no. But it also won't have line numbers, at all. So you might have missed a bit of context to this discussion - how are line numbers implemented in a debugger that does so, without access to the source?


No, the debugger does not get involved in preprocessing. When you write "a = __LINE__;", it expands to "a = 10;" (or whatever number) and is compiled, and the debugger has no knowledge of it. Debugging information, including the mapping of positions in the code to positions in the source, is generated by the compiler and embedded directly into the generated binary or an external file, from which the debugger reads it.

The __LINE__ macro is passed to the debugger only if the program itself outputs its value, and the "debugger" is a human reading that output :)


This is the place where the listening socket is initialized, and you can see that if the port is 0, it doesn't do anything. Are you observing different behavior?


It is a commercial product, and it is their goodwill to offer licenses for free. They don't have to do it at all, Mr. Entitled.


A commercial product that has seen almost no green field development in ages. The only way to make it long-term sustainable is to reduce the barriers to new development.

One thing they could do is to offer it in cloud instances. Let more people play with it, see its strengths compared to Linux, and let it win share on its merits.

The optics of this aren't great - it looks like they aren't fond of people learning its characteristics.


I can't imagine there is much new development. I knew companies using VMS in the casino gaming/lottery industry. They moved on to other platforms, like AIX and Linux, decades ago.


Right. Good luck getting new users.

VMS has been dead for decades. If they want to attract new users, they should make better choices.

They, like ArcaOS and OS 2200, are living in some wild fantasy land. There are, at least in theory, ways to revitalize these products, but it's not going to happen by digging larger moats.

More people are probably still using CP/M than all of those put together.


New VMS users? Why, lord, would anyone want to do that? Some years ago, during the migration trajectory, they floated some crazy idea of VMS on Intel Atom as an IoT platform, as if that made any sense.

Somehow this seems like one of those ideas that many legacy-niche-OS developers talk themselves into: it's old, it uses little memory (and does little), so now it must be feasible as an embedded OS. AmigaOS-oid developers imagined the same in the early 00s...

About the total number of users, however, I don't fully agree: there are still significant deployments of VMS in the infrastructure and finance sectors, although some very high-profile customers have migrated away to Linux using compatibility layers.

And within that subset there are customers that still have high performance requirements, making them willing to invest in VMS on new hardware.

If you ran VMS on a GS1280 (64 Alpha CPUs, split into two 32-CPU partitions), then migrated through several generations of SuperDomes (Itanium), and your workload is still scaling with your wider company demand, bare-metal VMS deployments on the latest x86 hardware can make perfect sense.


Maybe they want to compete for MCP and GECOS users. VMS is the new MULTICS killer.


MULTICS has capability-based security; VMS is stuck in the world of access control lists. (But OpenVMS 7.2 runs just fine in the virtual VAX-11/780 on my smartphone.)


Sorry. I was being sarcastic.


MCP's architecture struck me as absolutely amazing and fascinating when I read about it.

Bull released an MCP VM demo development kit; I tried really hard to set it up and write some simple ALGOL for it.

If anything, it taught me to appreciate how MULTICS/UNIX (and, aside, the PDP-10 world) gave us line-oriented, developer-focused interactive environments that don't make one's eyes bleed.


It's no coincidence the villain in Tron is called MCP. According to a former colleague who managed an A-series mainframe, MCP is actually very user-friendly; it's just extremely picky about its users.

