I never understood the "low code" microbubble that was being inflated.
We went through that era already. We called them RAD tools, and they targeted the same sort of strange, mythical end user profile. Someone so technically capable and apt that they could navigate a dizzying domain of deeply buried checkboxes, property fields, and sprawling relationships & side-effects, but who was also simultaneously unable to understand source code or program structure.
When using them you would quickly hit a point where changing relatively simple things required mounting an archaeological dig through GUI controls, for what would otherwise have been a few simple find & replace operations on code in a regular environment.
RAD is a broadly used term, but tools like Delphi were good at it without being restrictive. You could build anything, and the dream of dragging and dropping little boxes and filling in properties to build applications with the client, while possibly another team built other little boxes to cover the features you couldn't deliver, was a successful way of working.
I would say that holds especially today in some cases: I have not seen anyone happy changing modern code (Next.js or the like) that has not been touched for 5 years. The 'just drop in a new component' approach won't work because 9 billion dependencies have had updates that break everything (modern devs in the npm ecosystem seem to have serious issues keeping things compatible even across minor versions). That issue was never there with Delphi; you just make the change, either in code or in the GUI. I used many components for two decades to create and fix applications without the pain I feel these days. Unlike others, apparently, I have no interest in actually maintaining applications: I want to make them, and if no changes are needed, I don't want to update them. Security fixes are meant to be compatible with what is already there, so that should be just a recompile. It isn't anymore, though, so it causes work, and work costs money. It's not very nice, unless you get paid by the hour, in which case it's brilliant.
That was commenting on your general point about RAD tools; the rest of what you say I agree with. I see (I googled a bit) that things like OutSystems are RAD tools now, and yes, those are hell on earth to work with (we did a massive project with it and everyone basically thought it was terrible).
Then again, I wouldn't say Delphi was (is?) "low code". It was certainly easier to use than some of the alternatives available for building GUI applications at the time (looking at you, "Visual" C++!), but that just took care of the boilerplate; you still had to code the application's business logic.
Yes, I agree, but they were talking about RAD tools, and Delphi was the poster child for RAD tools. I was solely responding to the term RAD tools (and their abuse/misuse).
Oh man, I totally forgot about the Delphi IDE and its drag-and-drop editor for making GUIs. I only ever encountered it in college (early 2000s), and for group projects it was really nice, simply because it allowed you to prototype GUIs in the IDE and then, instead of having to re-implement them in your markup language, simply use those prototypes and build the functionality behind them.
That's a bit of a different perspective from the one you describe.
> I have not seen anyone happy changing modern code (nextjs or so) that has not been touched for 5 years.
Yeah... even if you do faithfully update dependencies it isn't straightforward. The sort of stuff I work on is mostly used internally and is fairly simple as far as the UI goes. So, for a while now, I have done away with most dependencies where I can and switched to vanilla JS, HTML and CSS for this sort of tooling. Not only does this help with future maintenance of these tools, it also makes the whole development process a lot smoother, as there is no build step involved.
I very much realize that I am in a somewhat luxurious position here, as I don't do client-facing applications and most of mine are fairly simple. But that's also my point: all too often I see very simple single-purpose applications that make use of a complete ecosystem of modern frameworks, where the same can easily be achieved without them.
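To make the "no build step" approach concrete, here is a minimal sketch of the kind of vanilla JS this describes: plain functions plus template literals instead of a component framework. The names (`escapeHtml`, `renderRows`) are illustrative, not from any library.

```javascript
// Escape user data before interpolating it into markup.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Turn an array of records into table rows; drop the resulting string
// into an element with `el.innerHTML = renderRows(data)`.
function renderRows(records) {
  return records
    .map((r) => `<tr><td>${escapeHtml(r.name)}</td><td>${escapeHtml(r.count)}</td></tr>`)
    .join("\n");
}
```

No bundler, no transpiler: the file is served as-is and edited in place, which is exactly why maintenance stays cheap for simple internal tools.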
I think you're taking the intent of "low-code" too literally, or have not worked in an organization of sufficient size for its value proposition to be evident. It's not there to solve a solutioning problem; it's there to solve an organizational one.
While any "low-code" platform is marketed as a WYSIWYG, business-friendly solution platform, what it actually is is a way for the business to get access to capabilities IT otherwise gatekeeps as "domain expertise" but fails to actually produce with.
Case in point: IT quotes an organization $75 million for 30 projects in fiscal year 20nn. By 20nn+1, IT has completed 5 projects for $75 million. Sick. The org gets "low-code" on its own dime for $1 million, hires a couple of "business systems analysts" for a little less, and by 20nn+1.5 has completed 25 projects. In 20nn+3, IT looks incompetent, gets pissed, cries foul, the "business systems analysts" are ingested into IT and taught Java and CRUD circa 1998, and the life cycle continues.
My experience of apps built by "the business" is 13 years at UBS and Bank of America. "The business" cannot be trusted to understand regulatory and privacy concerns; they cannot test their apps; they do not concern themselves with vulnerabilities in their dependencies or the licence terms thereof. For those reasons, and more, the ability of the business to deliver apps more cheaply than IT is illusory.
That doesn't stop a cyclical swing towards RAD/no-code/AI when people forget this and then a swing back when we remember.
I have heard this before. But before we assume incompetence, first we need to understand what IT is producing. Anyone in IT can also build the application in a very short time. What the business does not fully understand is the effort required to implement all the other non-functional requirements they need but don't know about yet. Once the quick and dirty solution is done, and they are happy that the features are done, they realize it is not compliant. So they spend some effort on compliance, and after that they realize that there is no backup: if the data is corrupted, all is lost. So then they call up their business analyst to implement that. And after a few such iterations they give up and hand it over to IT. Now IT has a shitty application that is insecure, partially compliant, and has terrible disaster recovery. So it has to be rebuilt, and now it costs much more than if IT had implemented it in the beginning.
The costs of the IT department exist because we have experience with the real costs of implementing production-grade software.
For minor throwaway apps, there is always Excel and MS Access.
100% this! In some companies, the 'simple app' described in this post will get some ridiculous quote from central IT/tech ('it will take our team 4 sprints') and then never get signed off. IT will also ban anyone from spinning up their own servers due to support issues.
No code platforms manage to get around this.
Another use case: I work for a 'non-tech' consultancy. Clients typically won't like paying us to spin up some Flask/Django/Rails app, but are happy to pay us to spin up some sort of no-code thing for them (the perception is that it will be easier to self-support, which is also probably the reality compared to me developing some sort of Rails app and then leaving the company).
In my experience you are right - IT will always deny these requests, so you need to build the solutions in a way that avoids accessing existing DBs.
Usually it's replacing a spreadsheet, so the information can either be manually keyed in or imported from various reports. Sometimes you even get into screen scraping, sometimes scheduled reports being dumped to a drive and imported... basically any way that avoids needing to get permission from the IT team.
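The "scheduled report dumped to a drive" path usually boils down to parsing a flat export into records. A hedged sketch of that import step, assuming a plain comma-separated file with a header row and no quoted fields (real exports often need a proper CSV library); `parseReport` is a hypothetical name:

```javascript
// Parse a simple CSV export into an array of { column: value } records.
// Assumes: header row present, comma-separated, no quoted/escaped fields.
function parseReport(text) {
  const [header, ...rows] = text.trim().split("\n");
  const columns = header.split(",");
  return rows.map((row) => {
    const cells = row.split(",");
    return Object.fromEntries(columns.map((c, i) => [c.trim(), (cells[i] ?? "").trim()]));
  });
}
```

From there the records go into whatever local store the tool uses, with no touchpoint on IT-controlled databases.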
Writing the code is NOT the problem with these enterprise project failures.
Usually decades of problem-solving have led to an absolute mess of blurry ownership and accountability.
This in turn leads to corner cutting and a road completely covered in Chesterton's fences...
Tearing an arbitrary fence down leads to consequences outside the project scope, no one can answer questions, and no one can prioritize. This is a business problem, and no amount of fancy code (lo/hi/full/no, left or right) will help.
If you run a bigger company and rely on IT and ERP flows, well, it’s a part of your core and you’d better treat it as such!
Your first sentence implies you have some working experience with this. What are your thoughts on end-user computing and its longer-term effects on the business?
One of my very first jobs was taking tools that were developed at the team/dept level and scaling them up org-wide if they were useful. Honestly, it was great to have end users already thinking deeply about what's needed by building prototypes themselves. The business was much better for it. Looking back, I was very fortunate to land in a large business that embraced technology as a key differentiator very early on.
Low code tooling is alive and well in the entertainment industry. Node graphs are becoming very popular in game engines, shaders, procedural modeling software etc.
The king of low-code, the spreadsheet, is still quite popular as well.
Well said. You are exactly right. Low-code stuff is usually invented by people with a specific set of criteria that they realize can be generalized and defined by a GUI, but who lack the experience to realize that the entire world of possibilities can't be crammed into their model. Languages (like Python, etc.) are already the most compact way to represent most things, and trying to avoid that fact just makes things even harder.
You're letting perfect be the enemy of good. The low code solutions can simply have a full-code escape hatch with interop. Much like how Python can interop with C.
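A minimal sketch of that escape-hatch idea: a tiny declarative rule engine where most rules are plain data, but any rule may drop down to an arbitrary function. All names here (`builtins`, `validate`) are hypothetical, not from any real low-code product.

```javascript
// The "low-code" vocabulary: a fixed set of named, parameterized rules.
const builtins = {
  required: (value) => value !== undefined && value !== "",
  maxLength: (value, limit) => String(value).length <= limit,
};

// Each rule is either ["builtinName", ...args] (the declarative path)
// or a plain function (the full-code escape hatch).
function validate(value, rules) {
  return rules.every((rule) =>
    typeof rule === "function"
      ? rule(value) // escape hatch: arbitrary code
      : builtins[rule[0]](value, ...rule.slice(1)) // declarative path
  );
}
```

Usage might look like `validate("abc", [["required"], ["maxLength", 5], (v) => v.startsWith("a")])`: the first two rules stay in the GUI-friendly data model, while the last one is ordinary code for whatever the model can't express.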
But people who lack the experience to realize that the entire world of possibilities can't be crammed into their model aren't normally humble enough to design good escape hatches.
I don't think they were strange or mythical users. Excel and Access gave business analysts enough power to make real tools tailored to specific needs. VB would have been the next step. One thing that made this possible was the ubiquity of Windows, though.
I still haven't used anything as easy and powerful as those tools were, even if they were Windows-only and lacked easy distribution.
I see it quite often: at the SaaS platform I work for, the configuration grew into small-time low code because the business wanted to make all kinds of changes while skipping the dev cycle to deliver faster.
"Low code" appeals to people who are not technically capable, whose numbers are big, and who think that if they can get rid of this complex writing stuff they will be able to get things done. Those people exist at all levels of seniority, so if the CEO mandates it, the company will do it.
Unfortunately, the essential complexity of an application does not go away, and I have seen those people struggling, cursing and shooting themselves in the foot.
Proper software dev tooling has all the right solutions for handling complexity: version control, CI/CD, unit/integration testing. No low-code tools implement that.
But when people hear my solution, "let's teach you proper dev tools", they are pretty much uninterested.
It's particularly baffling because there's currently a _competitive_ "do programming without doing programming" bubble; LLMs. Whatever about one at a time, it's odd to have two approaches to the same false promise going at once.
RAD tools had nothing to do with low-code. I think you're confusing something. You still needed seasoned software developers and you weren't restricted by anything.
We inherited some Informatica ETL workflows at work once. Nice at first glance, with good logging, but peel back the surface a little and there was a dizzying level of hidden complexity. Some of this was business logic that was inherently complex, but it was all so deeply buried in menus and abstractions, with no easy diffing or version control...
Like the thread starter asked: who are these tools designed for?
Low-code efforts go back decades. In the eighties there was a whole movement around 4GL languages: relatively simple languages built around databases that enabled fairly quick development of business applications. Before that, Cobol of course was an attempt to come up with a business programming language that was nominally human-readable. In the nineties we got things like Visual Basic, Delphi and a bunch of other tools, which again targeted relatively inexperienced programmers. And then of course there's a long history of creating domain-specific languages for all sorts of things, typically with the goal of letting domain experts define things themselves. Tcl/Tk is a good example for UI applications on X Windows.
Rails built on all of that. Ruby brought two useful things to the table (well, borrowed from Lisp): metaprogramming and the ability to use its syntax to build so-called internal DSLs, domain-specific languages that just build on top of Ruby's own syntax instead of needing a new one. Rails is basically a DSL for building web-based database applications with server-side model-view-controller style UIs.
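The internal-DSL idea can be sketched in JavaScript rather than Ruby: the "DSL" is just host-language method chaining, with no new parser needed. This hypothetical query builder merely echoes how ActiveRecord code reads (e.g. `User.where(...).order(...)`); the real thing leans on Ruby metaprogramming and does far more.

```javascript
// A toy internal DSL for queries: each method records state and returns
// `this`, which is what makes the chained calls read like a small language.
class Query {
  constructor(table) {
    this.parts = { table, where: [], order: null };
  }
  where(condition) {
    this.parts.where.push(condition);
    return this;
  }
  orderBy(column) {
    this.parts.order = column;
    return this;
  }
  toSql() {
    const where = this.parts.where.length
      ? ` WHERE ${this.parts.where.join(" AND ")}`
      : "";
    const order = this.parts.order ? ` ORDER BY ${this.parts.order}` : "";
    return `SELECT * FROM ${this.parts.table}${where}${order}`;
  }
}
```

So `new Query("users").where("age > 21").orderBy("name").toSql()` builds the statement without ever leaving the host language, which is the whole appeal of internal DSLs over external ones.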
Once MVC moved mostly client-side, with single-page JavaScript applications and rich mobile applications, the MVC bits and bobs became somewhat redundant. And the rest of it is basically a nice but otherwise unremarkable ORM framework of the kind you can find for other languages as well. I was never that impressed with it, to be honest, and I'm not a big fan of ORM frameworks in general. Server-side MVC is still somewhat relevant if you are into server-side rendering (which reinvents what world + dog was doing twenty years ago), but otherwise not that relevant for most REST APIs.
IMHO the last two decades have been a bit unremarkable for UI development. It seems a lot of things plateaued in the nineties. The average UI project is still fairly labor-intensive for what it does, which is mostly building a lot of form-based crap to input data into some database. We had perfectly usable and relatively idiot-proof visual UI builders that did that sort of thing thirty years ago. From a functional point of view, the resulting UIs more or less did the same thing. Was that great code? Not necessarily. But it did the job. And most "modern" React/Rails/Django/whatever code isn't a whole lot better. If you discard the lipstick-on-a-pig that is CSS, you are left with essentially the same UI components and primitives (buttons, checkboxes, text fields, etc.). We had all of those decades ago. You don't need a mustache-twirling hipster web ninja to reinvent those wheels.
The "nano" questions are silly and tell you nothing useful about the candidate other than, I guess, their level of candor in submitting to obsequious lines of questioning.
However, the rest of the questions are just as pointless too.
As a hiring manager and product owner, I find that the level of familiarity an engineer has with debugging and diagnosis tools (e.g. something as simple as how to attach and efficiently use a debugger) is 100x more valuable to the predictable delivery and quality of the things they're building than Programming 101 trivia.
Writing code is quite possibly the easiest, least fraught, least time-sucking, least milestone-missing part of software development. The morass of the entire rest of the SDLC is where ambitions and dreams go to die: version control expertise, build system esoterica, correct configuration & setup of dependencies, understanding how to test, being able to do more than printf your way out of a Russian-nesting-doll-inspired paper bag. That sort of thing.
Then a good nano question would be whether they know what a debugger is or how they use one. Or any of those very generic questions that any half-decent programmer will know but others won't.
These questions are made to filter out the pure frauds. People who claim to have 10 years experience but have none. Those who claim to have a CS degree but it’s a fake diploma, etc. They aren’t meant to tell the good developers from the bad.
As said in the article, these questions are risky because they annoy experienced developers. But it's also a waste of time to have a person who has never programmed in their life go through a deep interview about API design or architecture.
Sure, though I'd ask more open ended questions about a problem to see where they go with it.
Maybe they land on using something like a debugger or Wireshark or strace or whatever makes sense to dig into whatever horrible voodoo is plaguing them. The important thing is that they are creative and experienced at questioning or confirming their priors and at eliminating thousands of paper cuts and yak barber shops for themselves, their team, and their organization, so that collectively everyone is enabled to operate at a high level instead of constantly bushwhacking toward eventual failure.
That’s the recruiter’s job, not the interviewing engineer’s. Once a candidate gets time with an engineer, the assumption should be that they are at least kind of good, and not an outright fraud. A good enough fraud might get through this, but a genuinely good engineer might not get through the nano questions.
I have never had any success working with recruiters. I have only had success when doing everything myself, from screening CVs to the end. I think having a "chain" with HR, recruiters, etc. is an antipattern.
They never had any meaning to begin with. Outside of ostensibly knowing how to program, the title never carried any firmly held, measured, or maintained baseline expertise of any kind. It's always been a hodgepodge of ad hoc criteria on a job-by-job and hype-cycle-by-hype-cycle basis.
I wouldn't write it off as a bubble, since that usually implies little to no underlying worth. Even if no future technical progress is made, it has still taken a permanent and growing chunk of the use case for conventional web search, which is an $X00bn business.
A bubble doesn't necessarily imply no underlying worth. The dot-com bubble hit legendary proportions, and the same underlying technology (the Internet) now underpins the whole civilization. There is clearly something there, but a bubble has inflated the expectations beyond reason, and the deflation will not be kind on any player still left playing (in the sense of AI winter), not even the actually-valuable companies that found profitable niches.
Sam Altman is not a hapless victim at the mercy of the isolating effects of his financial success.
He was an opportunistic, amoral sociopath before he was rich, and the system he reaps advantage from strongly selects for hucksters of that particular ilk more than anything else.
He's just another Kalanick, Neumann, Holmes or Bankman-Fried.
Yeah, until it materializes in lower rates of people actually going to college, folks at minimum seem pretty confident that it has a return on investment.
Also, I'm honestly not sure I care about or put any weight on the opinions of random adults whose views of higher education are basically a reflection of how universities are portrayed in their news bubble. The fact that political affiliation not only matters but matters a great deal means it has little to do with the institutions themselves. I bet you could get the same results with "confidence in science", which is just as vague and nonsensical.
We reached 'peak university' (in terms of enrollment) in 2011. [1] I also would not say political affiliation matters, as confidence is plummeting for all groups. As for science, they seem to have stopped asking this question after 2021 (perhaps to avoid the temporary biases caused by COVID?), but Gallup has indeed had science as one of its 'confidence in institutions' series of questions. [2] As of 2021 it had a total of 64%, leaving it the ~3rd highest rated institution. That's contrasted against 36% for higher education, which sits somewhere between the church and the medical system.
Don't think covid affected university reputations that much. Slogans like "decolonize maths" and skin colour based recruitment and award of degrees give me very little confidence even in modern STEM degrees from formerly prestigious universities.
These are niche talking points if you aren't terminally online. Most people probably aren't even considering politics; it's just an issue of cost and ROI. Degrees are oversaturated and insanely expensive.
Honestly, I think universities took their good reputations for granted, and so chose to pursue other goals than maintain them.
I don't think any institution can maintain the confidence of the general public without being scrupulously neutral on controversial things (or at least scrupulously respectful of all common perspectives) and staying focused on widely-shared values.
Probably has more to do with birth rates, but nevertheless it's a good thing, since these institutions of higher learning will be more accessible to people who are actually passionate about what they're learning, rather than a bunch of people trying to check a box.
Fertility rates are an interesting hypothesis, but looking at the data I think we can definitely say that's not the driver. In 2011 there was total enrollment of about 21 million. Today we're down to around 19 million. [1] Fertility rates have only recently cratered, and from 1990-2010 we were even pretty close to replacement. That's relevant, because that's when most or all of the current student body would have been born. So there are definitely fewer children, as can be clearly seen in this population pyramid [2], but it can also be seen that the difference is, at most, in the low hundreds of thousands. And we're talking about a difference on the order of millions fewer students.
An open question would also be the overall shift (if any) in international enrollment. If international enrollment has stayed the same (or even increased) then it means the decline in American enrollment could be even more extreme. By contrast if international enrollment has completely plummeted, it could go some way towards mitigating these numbers.
Once you're running a fleet of autonomous vehicles, whatever century that finally becomes a reality, then it makes sense to optimize out of the operation a few people who make at or near minimum wage.