I think “middleware” is a bit of a misnomer in Next.js. It’s really an edge function that runs before your request hits the app -- quick header checks, routing, and other lightweight guards. It runs on the edge runtime, not on the app server.
The post's author seems to conflate the edge runtime with the server runtime. They’re separate environments with different constraints and trade-offs.
I struggled with Next.js at first for the same reason: you have to know what runs where (edge, server, client). Because it’s all JavaScript, the boundaries can blur. So having a clear mental model matters. But blaming Next.js for that complexity is like blaming a toolbox for having more than a hammer.
> But blaming Next.js for that complexity is like blaming a toolbox for having more than a hammer.
The biggest issue is that the complexity is self-inflicted. The term middleware has a pretty well-understood meaning if you've worked with basically any other framework in any language: it's a function or list of functions called at runtime before the request handler, and it's assumed those functions run in the same process. Next.js putting it on the edge and only allowing one breaks that assumption, and further, most applications don't need the additional complexity. To go back to your toolbox analogy: more tools mean more complexity (and money), so you don't get a new tool simply because you might need it, you get it because you do need it, and the same applies to edge functionality. If Next.js wants to let you run code on the edge before your app is called, that's fine, but it should be opt-in, so you don't need to worry about it when you don't need it, and it shouldn't be called "middleware".
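To make that conventional meaning concrete, here is a minimal, framework-agnostic sketch in Python; the handler and the two middleware functions are made up for illustration, and the point is only the composition: a list of functions wrapping the request handler, all in the same process, each able to pass through or short-circuit.

```python
# Framework-agnostic sketch of the conventional "middleware" idea:
# a chain of functions composed around the request handler, all
# running in the same process. Handler/middleware names are made up.
from typing import Callable, Dict

Request = Dict[str, str]
Response = Dict[str, str]
Handler = Callable[[Request], Response]

def logging_middleware(next_handler: Handler) -> Handler:
    def wrapped(request: Request) -> Response:
        print(f"-> {request.get('path')}")        # runs before the handler
        response = next_handler(request)
        print(f"<- {response.get('status')}")     # runs after the handler
        return response
    return wrapped

def auth_middleware(next_handler: Handler) -> Handler:
    def wrapped(request: Request) -> Response:
        # Hypothetical check: short-circuit the chain without reaching the handler.
        if request.get("authorization") != "secret-token":
            return {"status": "401", "body": "unauthorized"}
        return next_handler(request)
    return wrapped

def handler(request: Request) -> Response:
    return {"status": "200", "body": f"hello from {request.get('path')}"}

# Compose the chain; every layer runs in the same process as the handler.
app: Handler = logging_middleware(auth_middleware(handler))
print(app({"path": "/dashboard", "authorization": "secret-token"}))
```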
Yes, the term "middleware" is unfortunate; that much is abundantly clear.
> you wouldn't get a new tool simply because you might need it
No but you get a framework precisely because it's "batteries included": many apps will need those tools. You don’t have to use all of them, but having them available reduces friction when you do.
> If Next.js wants to allow you to run code on the edge before your app is called, that's fine, but it should be opt-in
It already is. Nothing runs at the edge unless you add a middleware.ts. You can build a full app without any middleware. I'm surprised the author of the article fails to acknowledge this, given how much time was spent on finding alternative solutions and writing the article.
> If you learn what a package/module is in Python and try to apply that to Go without any thought, you will complain that Go is bad. If you are using any technology, you should have some knowledge of that technology.
I don’t really think the analogy holds up. Middleware is an established term in web frameworks, the same space Next.js operates in, and the thing looks a lot like middleware but violates some of the core assumptions people make about what middleware is. It’s not really surprising that it’s a point of confusion.
I'm also working with Next.js, app router, and like it very much.
The problem is probably that Next.js makes it very easy to move between front and back end, but people think this part is abstracted away.
It's actually a pretty complex system, and you need to be able to handle that complexity yourself. But complexity does not mean it makes you slower or less productive.
A system with a clearly separated front- and back-end is easier to reason about, but it's also more cumbersome to get things done.
So to anyone who knows React and wants to move to Next.js, I would warn that even though you know React, Next.js has a pretty steep learning curve, and some things you will have to experience and figure out for yourself. But once you do, it’s a convenient system for moving between front- and back-end without too much hassle.
I wouldn't consider this a misnomer, but a really big misuse of the term. Middleware has a long established definition in web applications, so they really should not use the term if they mean something entirely different.
If you learn what a package/module is in Python and try to apply that to Go without any thought, you will complain that Go is bad. If you are using any technology, you should have some knowledge of that technology.
It’d be hard to draw any conclusion. A whistleblower must be under extreme stress and pressure, which in itself will in one way or another increase the risk of death, so that has to be taken into account before saying the plausible cause of the excess deaths is assassination.
- A company I worked for wanted to be 100% focused on doing one thing. It was spending 10x more than it was making in revenue. It went bankrupt.
- Another company I worked for always insisted on not having all eggs in one basket. There was one big revenue maker that dwarfed the others though. The company is still around and doing well.
I have quite a scattered brain too, so I get the appeal of "choosing to focus". But watching others do it, I see the risks: refusing to experiment and learn new stuff, or to find new opportunities.
EDIT: I'd like to add that focusing or not focusing is not a useful dichotomy, it's more about finding the right "exploration vs exploitation" balance.
Some of the replies here are pretty good, I basically agree with “if it works for your data scientists then why not”.
I’m actually a software developer with 10 years of experience, and I also happen to do data science. I’ve found myself in situations where I parametrized a notebook to run in production, so it’s not that I can’t turn it into plain Python. The main reasons are:
1. I prototype in a notebook. Translating to python code requires extra work. In this case there’s no extra dev involved, it’s just me. Still it’s extra work.
2. You can isolate the code out of the notebook, and in theory you’ve just turned your notebook into plain .py. You could even log every cell output to your standard logging system. But you lose the context of every log. Some cells might output graphs. The notebook just gives you a fast and complete picture that might be tedious to put together otherwise.
3. The saved notebook also acts as versioning. In DS work you can end up with lots of parameters or small variations of the same thing. In the end, what has few variations I put in plain Python code; what’s more experimental and subject to change I keep in the notebook. In certain cases it’s easier than going through commit logs.
4. I’ve never done this, but a notebook is just JSON, so in theory you could further process the output with prestodb or similar (see the sketch below).
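As a rough illustration of point 4 (my own hypothetical sketch, with a made-up file name), an .ipynb is ordinary JSON in the nbformat layout, so a saved, executed notebook can be post-processed like any other document:

```python
# Hypothetical sketch for point 4: an .ipynb file is plain JSON following
# the nbformat layout (top-level "cells", each code cell with "outputs"),
# so a saved, executed notebook can be post-processed like any document.
# "analysis.ipynb" is a made-up path.
import json

with open("analysis.ipynb", encoding="utf-8") as f:
    notebook = json.load(f)

for i, cell in enumerate(notebook.get("cells", [])):
    if cell.get("cell_type") != "code":
        continue
    for output in cell.get("outputs", []):
        # Stream outputs (print statements) carry their text directly...
        if output.get("output_type") == "stream":
            print(f"cell {i} [stream]: {''.join(output['text']).strip()}")
        # ...while rich outputs (tables, figures) are keyed by MIME type.
        elif output.get("output_type") in ("execute_result", "display_data"):
            print(f"cell {i} [{output['output_type']}]: {list(output.get('data', {}))}")
```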
I guess it won’t take very long before you can do PPC (or similar paid advertising) in LLM results. Google had to turn to paid advertising for profitability. OpenAI and the likes might have to do the same: competition is fierce and prices are dropping, so I’m not sure users will continue to accept paying a subscription.
Pandas has been working fine for me. The most powerful feature that makes me stick to it is the multi-index (hierarchical indexes) [1]. Can be used for columns too. Not sure how the cool new kids like polars or ibis would fare in that category.
Multi-indexes definitely have their place. In fact, I got involved in pandas development in 2013 as part of some work I was doing in graduate school, and I was a heavy user of multi-indexed columns. I loved them.
Over time, and after working on a variety of use cases, I personally have come to believe the baggage introduced by these data structures wasn't worth it. Take a look at the indexing code in pandas, and the staggering complexity of what's possible to put inside square brackets and how to decipher its meaning. The maintenance cost alone is quite high.
We don't plan to ever support multi-indexed rows or columns in Ibis. I don't think we'd fare well _at all_ there, intentionally so.
> and the staggering complexity of what's possible to put inside square brackets and how to decipher its meaning
I might not be aware of everything that’s possible -- my usage of it doesn’t give me an impression of staggering complexity. In fact I’ve found the functionality quite basic, and I’ve been using pd.MultiIndex.from_* quite extensively for anything slightly more advanced than selecting a bunch of values at some level of the index.
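For anyone who hasn’t used it, this is roughly the kind of basic usage being described; the names here are invented for illustration:

```python
# Small sketch of that kind of usage, with invented names: a hierarchical
# column index built via pd.MultiIndex.from_product, then level-based selection.
import numpy as np
import pandas as pd

columns = pd.MultiIndex.from_product(
    [["sensor_a", "sensor_b"], ["temperature", "humidity"]],
    names=["device", "metric"],
)
df = pd.DataFrame(np.random.rand(4, 4), columns=columns)

# Every "temperature" column, regardless of device:
temps = df.xs("temperature", axis=1, level="metric")

# Equivalent with IndexSlice, which also allows slicing on both levels at once:
idx = pd.IndexSlice
temps_too = df.loc[:, idx[:, "temperature"]]
```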
Complicated code is (probabilistically) slow, buggy, infrequently updated code. By all means, if it looks like a good enough tool for the job (especially if the alternatives don't) then use it anyway, but that's slightly different from it not being your concern.
I've seen enough projects need "surprise" major revisions because some team tried to sneak a dataframe into a 10M QPS service that my default is keeping pandas far away from anything close to a user-facing product.
I've also seen costs balloon as the data's scale grows beyond what pandas can handle, but basically all the alternatives suck for myriad reasons, so I don't try to push "not pandas" in the data backend. People can figure out what works for themselves, and I kind of like just writing it from scratch in a performant language when I personally hit that bottleneck.
I work a lot with IoT data, where basically everything is multi-variate time-series from multiple devices (at different physical locations and logical groupings). Pandas multi index is very nice for this, at least having time+space in the index.
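Something like the following sketch is presumably what that looks like; the device names and columns are invented, but the time+space index is the point:

```python
# Rough sketch of the time+space layout (device and column names invented):
# tidy records from several devices, re-indexed by (timestamp, device_id).
import pandas as pd

records = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2024-01-01 00:00", "2024-01-01 00:00",
         "2024-01-01 00:10", "2024-01-01 00:10"]),
    "device_id": ["dev-1", "dev-2", "dev-1", "dev-2"],
    "temperature": [21.5, 19.8, 21.7, 19.9],
    "humidity": [40.0, 55.0, 41.0, 54.0],
})

ts = records.set_index(["timestamp", "device_id"]).sort_index()

# All measurements for one device, as a plain time-indexed frame:
dev1 = ts.xs("dev-1", level="device_id")

# Per-device hourly means, keeping the (hour, device) structure:
hourly = ts.groupby(
    [pd.Grouper(level="timestamp", freq="1h"), pd.Grouper(level="device_id")]
).mean()
```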
Sorry I don't know what to answer. I don't think what I do qualifies as "workload".
I have a process that generates lots of data. I put it in a huge multi-indexed dataframe that luckily fits in RAM. I then slice out the part I need and pass it on to some computation (at which point the data usually becomes a numpy array or a torch tensor). Core-count is not really a concern as there's not much going on other than slicing in memory.
The main gain I get of this approach is prototyping velocity and flexibility. Certainly sub-optimal in terms of performance.
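As a loose sketch of that workflow (all names invented, with a tiny frame standing in for the huge one):

```python
# Loose sketch of that workflow, with made-up names and a tiny frame:
# a multi-indexed frame in RAM, slice out the piece you need with IndexSlice,
# then hand a plain array to the downstream numeric code.
import numpy as np
import pandas as pd

index = pd.MultiIndex.from_product(
    [["exp_a", "exp_b"], range(3), range(4)],
    names=["experiment", "run", "step"],
)
df = pd.DataFrame(np.random.rand(len(index), 2),
                  index=index, columns=["loss", "accuracy"])

# Slice one experiment and a range of runs (label-based, inclusive)...
idx = pd.IndexSlice
subset = df.loc[idx["exp_a", 0:1, :], "loss"]

# ...then leave pandas behind: the computation only sees a numpy array.
values = subset.to_numpy()
```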
I’m pretty sure the thumbs up emoji or the slightly smiling emoji won’t get different interpretations based on font.
And here the issue is not in font differences (or different pictures of the same thing getting different interpretations): it’s the thing represented that’s actually different.
Your question sounds like you want to know how the word is spelled, and no one would put two r’s in “straw”, so the model could be assuming that you’re asking whether it’s strawbery or strawberry.
What happens if you ask the total number of occurrences of the letter r in the word? Does it still not get it right?
Spelling out the number (a hundred eighty eight) and also writing the digits (188), like on checks, is quite foolproof too.
The paragraph you mention seems to worry more about mistakes than about tampering attempts, though (taking 10x the prescribed dose could indeed be problematic).
The important part there is the termination characters: "Dollars" / "Pounds only" etc., and the decimal point (though that's easier to turn into a comma and make into thousands... Funnily enough, the German convention of using a comma as the decimal separator avoids this).
Very cool. When I was living in Paris I wanted to do something similar (but without the driving / directions part). I just liked walking randomly in the city and wished someone could tell me about all the interesting little bits of culture and history of the places I strolled past.
Could probably do it with ChatGPT. Take picture of building. Ask what it knows about it. Haven’t tried, but may work for some buildings. No GPS though.