Browser monoculture is bad for the open web, and if all we have is WebKit (Safari on iOS and Macs) and its fork Blink (all the Chromium browsers), then the web will start becoming a mess of proprietary extensions instead of open standards.
I see this claim often. As someone who learned web dev during the days of IE dominance, I don't understand it.
Internet Explorer never kept up, especially once IE6 reigned supreme. It wasn't just "a little behind", missing some niche APIs, or implementing them in a buggy or proprietary way. It actively ignored standards, it didn't receive real updates for a long time (IE11 being the best they could ultimately offer), and, with few exceptions (namely, the invention of CSS Grid and XMLHttpRequest), it degraded the ecosystem for over a decade. It actively held companies back from adopting new web standards. It's why polyfilling became as widespread as it is now.
Safari / WebKit has caused none of this. Yes, Safari sometimes lags behind in ways that are frustrating. Yes, Apple sometimes refuses to implement an entire API for political rather than technical reasons (see the FileSystem API), but it has largely managed to stay up to date with standards in a reasonable time frame.
While their missing or partially implemented APIs can feel really frustrating, they haven't actively held back anyone's work or the mass adoption of newer browser APIs.
Apple has its faults, but this isn't even close to the drudgery of IE's heyday.
I've not used Claude yet, but why would it be bad if it gains features that people use?
Did people ever complain about Photoshop having too many features and demanding too much cognitive load? Excel? Practically every IDE out there?
There is a reason people use those tools instead of a plain text editor or Paint. They're for power users, and people will become power users of AI as well. Some will forever stick to ChatGPT, and some will use an ever-increasing ecosystem of tools.
Good question. The difference with AI tools is that the interface isn't stable in the way Photoshop or Excel is. With traditional software you learn it once and muscle memory carries you. With LLM tools the model itself changes, the optimal prompting style shifts, and features interact with model behavior in unpredictable ways. So the cognitive load compounds differently. Not saying features are bad, just that the tradeoffs are different.
I don't know; I tend to come across new tools written in Rust, JavaScript, or Python, but relatively few in C. How often I see a "cargo install xyz" in the Git repo of some new tool is definitely noticeable.
I fail to see how it makes it much easier to review a lot of interdependent branches where branch C needs new logic from branch B which needs commits from branch A.
Sounds like a nightmare to me.
Can someone please explain what this means? I'm familiar with agentic development workflows, but I have no clue what this is or what I can do with it.
Is it something like n8n, where you connect agents into a workflow and let the workflow do stuff for you?
In the late 90s and early 2000s there was a bunch of academic research into collaborative multi-agent systems. This included things like communication protocols, capability discovery, platforms, and some AI. The classic and over-used example was travel booking -- a hotel booking agent, a flight booking agent, a train booking agent, etc all collaborating to align time, cost, location. The cooperative agents could add themselves and their capabilities to the agent community and the potential of the system as a whole would increase, and there would perhaps be cool emergent behaviours that no one had thought of.
This appears, to me, like an LLM-agent descendant of these earlier multi-agent systems.
I lost track of the research after I left academia -- perhaps someone here can fill in the (considerable) blanks from my overview?
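For anyone who never ran into that older work, the cooperative flavour is easy to sketch. This is only a toy illustration; the registry API and all the names below are invented for the example, not any real agent platform or protocol (FIPA, KQML, etc.):

```python
# Toy sketch of "capability discovery" in the spirit of the 90s travel-booking
# example. Everything here is invented for illustration.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AgentCommunity:
    """Agents advertise capabilities; others discover and invoke them."""
    registry: Dict[str, List[Callable[[dict], dict]]] = field(default_factory=dict)

    def advertise(self, capability: str, handler: Callable[[dict], dict]) -> None:
        # A new agent joining the community adds its capability here,
        # increasing what the system as a whole can do.
        self.registry.setdefault(capability, []).append(handler)

    def request(self, capability: str, constraints: dict) -> List[dict]:
        # Capability discovery: ask every agent that claims to handle this task.
        return [handler(constraints) for handler in self.registry.get(capability, [])]


community = AgentCommunity()
community.advertise("book_flight", lambda c: {"flight": "XY123", "cost": 220, "date": c["date"]})
community.advertise("book_hotel", lambda c: {"hotel": "Central Inn", "cost": 90, "date": c["date"]})

# A coordinating agent aligns the pieces of a trip by querying the community.
trip = {"date": "2024-06-01"}
offers = community.request("book_flight", trip) + community.request("book_hotel", trip)
print("Total cost:", sum(o["cost"] for o in offers))
```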
As someone who got into multi-agent systems (MAS) research relatively recently (~4 years ago, mostly in distributed optimization), I see two major strands of it, both of which are certainly still in search of the magical "emergence":
There is the formal view of MAS that is a direct extension of older works with cooperative and competitive agents. This tries to model and then rigorously prove emergent properties. I also count "classic" distributed optimization methods with convergence and correctness properties in this area. Maybe the best-known application of this is coordination algorithms for robot/drone swarms.
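To give a flavour of that first strand, here's the textbook average-consensus update, the kind of coordination rule with provable convergence that swarm algorithms build on. A toy sketch only; the ring topology, step size, and what the values represent are arbitrary choices for illustration:

```python
import numpy as np

n = 6                          # number of agents (e.g. drones)
x = np.random.rand(n) * 10.0   # each agent's local value (e.g. target altitude)

# Neighbours on a ring; in a real swarm this would come from communication range.
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
eps = 0.3                      # step size; must be small enough for convergence

for _ in range(100):
    # Synchronous update: each agent nudges its value toward its neighbours'.
    x = np.array([
        x[i] + eps * sum(x[j] - x[i] for j in neighbours[i])
        for i in range(n)
    ])

# All agents converge to the average of the initial values, a property
# one can prove from the spectrum of the communication graph's Laplacian.
print(x)
```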
Then, as a sibling comment points out, there is the influx of machine learning into the field. A large part of this so far has been multi-agent reinforcement learning (MARL). I see it mostly applied to any "too hard" or "too slow" optimization problem, and in some cases it seems to give impressive results.
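As a toy illustration of that second strand, here are two independent Q-learners on a repeated coordination game. Nothing like the industrial-scale systems, and the game and parameters are made up for the example:

```python
import random

# Payoff for a simple coordination game: reward 1 if both agents pick the same action.
payoff = {("A", "A"): 1.0, ("B", "B"): 1.0, ("A", "B"): 0.0, ("B", "A"): 0.0}
actions = ["A", "B"]
q = [{a: 0.0 for a in actions}, {a: 0.0 for a in actions}]  # one Q-table per agent
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    # Epsilon-greedy action selection for each agent.
    chosen = [
        random.choice(actions) if random.random() < epsilon
        else max(q[i], key=q[i].get)
        for i in range(2)
    ]
    reward = payoff[(chosen[0], chosen[1])]
    for i in range(2):
        # Each agent updates as if the environment were stationary -- the classic
        # simplification (and weakness) of independent Q-learning.
        q[i][chosen[i]] += alpha * (reward - q[i][chosen[i]])

print(q)  # both agents typically settle on the same action
```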
Techniques from both areas are frequently mixed and matched for specific applications. Things like agents running a classic optimization, but with some ML-based classification and a local knowledge base.
What I see actually being used in the wild at the moment are relatively limited agents, applied to a single optimization task and with frequent human supervision.
More recently, LLMs have certainly taken over the MAS term and the corresponding SEO. What this means for the future of the field, I have no idea. It will certainly influence where research funding is allocated.
Personally, I find it hard to believe LLMs would solve the classic engineering problems (speed, reliability, correctness) that seem to hold back MAS in more "real world" environments. I assume this will instead push research focus into different applications with higher tolerance for weird outputs. But maybe I just lack imagination.
Maybe this article can help you. It mentions the multi-agent research boom back in the 1990s. Later, reinforcement learning was incorporated, and by 2017, industrial-scale applications of multi-agent reinforcement learning were even achieved. Neural networks were eventually integrated too. But when LLMs arrived, they upended the entire paradigm. The article also breaks down the architecture of modern asynchronous multi-agent systems, using Microsoft's Magentic-One as a key example.
https://medium.com/@openagents/the-end-of-a-15-year-marl-era...
openagents aims to build agent networks with "open" ecosystems. Many agent systems these days are centered around workflows, but a workflow is only possible when you already know what kinds of agents will be on your team. When you allow any agent to join or leave a network, the workflow concept breaks down, so this project helps developers build an ecosystem for open collaboration.
Thanks, but do you realize that you explained it to me in terms of agent systems and ecosystems and open collaboration, and I still don't know what it does for the user?
Can it book flights for me?
Is it supposed to be some kind of autonomous intelligent bot that does "stuff" for me? What stuff? From the sibling comments it sounds like "we" are putting LLMs together and hoping that something emerges? What?
Ultimately, I ask what openagents.org does for me as a user.
But, in my experience, enterprises are moving to Office 365.
Ten years ago I would have bet on the Google suite, but it looks like Microsoft is winning this game in the corporate world.
Google gets all the private users, Microsoft the companies.
So, no, sadly, MS Office is not dying. I wish it would.
What I've occasionally seen pointed out for a whole range of separate Microsoft software is that it's not the merits of an individual piece of software that get enterprises to use it; it's the M365 package deal of all of them, and once you're in that ecosystem you might as well use it. Teams is the common example where this comes up: its shortcomings are well known, but the cost of licensing and setting up an alternative can't get over the threshold.