1. It may be legally permissible, but it is impolite, to change the license away from the well-known Apache software license towards something which has not been legally vetted, and is in fact generated entirely by AI with minimal oversight.
2. There is an open question of what the supposed value add is here from the Pear team, that could not have been achieved by the people whose work they are co-opting.
3. Without a clear value proposition, the oversight given to projects by YC is called into question. I think this is the point most people are concerned by.
Your first point is not true. If Continue wanted a copyleft license, they would have chosen one. Continue has basically said they are fine with people forking and changing the license.
This seems like too much effort has been put into it for it to be a purely satirical exercise, but I really cannot see why anyone would prefer to use this within Python, rather than (say) reaching for a proper type-safe functional language and either using that directly, or calling out to it from Python code.
As a person who teaches functional programming at degree level, this is the kind of thing that would put people off FP before they even get in the door. It is obviously less ergonomic than standard Python, and the code you end up with is no safer or more abstracted than what you started with.
That said, if the authors really do think it's a better way for programming in Python, and it works for them, then more power to 'em.
I completely agree with you. As a Haskeller, I find it immensely irritating when people try to force ‘FP’ into programming languages where it doesn’t fit. It just ends up putting people off functional programming, which is a very nice paradigm in languages which properly support it. I feel that the best code is that which adheres to the general paradigm and style of the language it is written in, and these kinds of libraries go against that.
What’s worse, these versions of ‘FP’ often bear very little resemblance to actual functional programming and its advantages. Rather, people seem to fall into the trap of confusing functional programming with ‘those fancy words I hear from Haskell’. Now, I may like Haskell, but its concepts are only useful because of the rest of the language — you can’t just port ‘monads’ and ‘IO’ into some random language and expect them to be usable. It looks like this library partly avoids that fate, in that it does have more than just ‘monads’… but yes, the monads are still there, and they’re just as clunky as you’d expect from Python.
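To make the clunkiness concrete, here's a minimal hand-rolled sketch of a Maybe-with-bind in Python next to the plain-Python equivalent. The names are hypothetical and this is not the API of any particular library:

    # A hand-rolled Maybe, purely for illustration -- not any real library's API.
    from dataclasses import dataclass
    from typing import Callable, Generic, Optional, TypeVar

    T = TypeVar("T")
    U = TypeVar("U")

    @dataclass
    class Maybe(Generic[T]):
        value: Optional[T]

        def bind(self, f: Callable[[T], "Maybe[U]"]) -> "Maybe[U]":
            # Propagate the 'nothing' case, otherwise apply f to the wrapped value.
            return Maybe(None) if self.value is None else f(self.value)

    def parse_int(s: str) -> Maybe[int]:
        try:
            return Maybe(int(s))
        except ValueError:
            return Maybe(None)

    # Monadic pipeline: every step is wrapped and chained through bind.
    result = parse_int("42").bind(lambda n: Maybe(n + 1)).bind(lambda n: Maybe(n * 2))

    # The idiomatic Python version of the same logic is shorter and reads better.
    def plain(s: str) -> Optional[int]:
        try:
            return (int(s) + 1) * 2
        except ValueError:
            return None

Most of the friction is in the lambdas and the wrapping; without do-notation or equivalent syntax there is no lightweight way to chain these.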
I mean, you can... But only with great discipline and experience. It just (probably) won't work if you're having to mentor more junior engineers with an OOP background.
Do you think that style of error handling works in Scala? It certainly seems possible to get right but the quality of Scala code I've worked with in different roles has frequently been atrocious, I'd almost rather dig into some old ColdFusion or PHP.
(Funny, I was downright terrified 15 years ago about ever mentioning "ColdFusion" or "Sharepoint" in a forum because I'd get contacted by recruiters about the first and salespeople about the second. I'd always tell the salespeople that we had no budget at all for Sharepoint and just had one for our dev team because you could get the license for free with an MSDN subscription.)
To be perfectly honest, I don't know Scala very well.
I'm more interested in "pure FP" languages than in multi-paradigm languages, because the former seem much more coherent in design to me. IMO, functional programming isn't about adding extra degrees of freedom (or "extra features"), it's about working within very well-defined and rigid constraints (what one might call safeguards).
As a bit of an observation, and I suppose a judgement, Python developers will seemingly do everything possible to avoid using another language. It is a strange phenomenon, but it partially explains why there are so many libraries. There is a lot of reinventing the wheel, or at least trying to.
I have a FastAPI server that gets and mangles data and then calls NumPy with it. The data is badly formatted and I need to fix it up; functional code is better for that. I do not want to add yet another service now, in TypeScript with Effect-TS.
There are no APIs for Haskell or other functional languages for the data I need.
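Roughly the kind of thing I mean, as a sketch with made-up field names rather than my actual code:

    # Pure, composable cleanup steps applied before the data reaches NumPy.
    import numpy as np

    def drop_empty(records: list[dict]) -> list[dict]:
        return [r for r in records if r.get("value") not in (None, "")]

    def coerce_value(record: dict) -> dict:
        # Return a new dict instead of mutating the input.
        return {**record, "value": float(str(record["value"]).replace(",", "."))}

    def to_array(records: list[dict]) -> np.ndarray:
        return np.array([r["value"] for r in records])

    def clean(records: list[dict]) -> np.ndarray:
        return to_array([coerce_value(r) for r in drop_empty(records)])

    # Inside a FastAPI handler, something like: np.mean(clean(payload))

Each step is a pure function over the records, so it is easy to test and reorder without spinning up another service.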
Why not? A TL;DR is a summation of the article. The title tells what the article's about.
So just reading the title and the tl;dr would leave one with no actual idea of the antipattern. What if the article was about something for which the imperative voice was an accurate description?
Describing something in the imperative voice doesn't make sense.
Why would you ever say "TL;DR perform this action" instead of "TL;DR performing this action", when trying to describe something that people should not do?
Why? Elder Millennials and younger Gen Xers (born in the late '70s and early '80s) were arguably the first generation to grow up in a world where video games were mainstream, and that's just a little older than that age bracket. I'm in my early forties, and I've enjoyed everything from the original Final Fantasy to Baldur's Gate 3 over the years.
And I'm sure there are folks older than me who picked them up in their 20s and 30s.
It seems suspect in the sense that we'd expect gaming to be at least as mainstream in the subsequent generation, right? The median age of gamers should be approaching the median age of the population from below; I'm fine with believing it is pretty close, but how did it get higher?
At 58, and having played 'video' games since the late 70s, I'm happily pulling the average age of gamers higher every year. Better than that, at 58 we've just started work on our first video game, one we have talked about making for over 20 years.
> It has 995 open issues in its Github repository.
This is not a sensible metric for code quality. For one thing, only about 20% of the currently open issues are tagged as bugs - more than that are suggested improvements.
> Haskell program was supposed to work right if it compiles, wasn't it?
No. Especially for tasks like string manipulation and format munging, you cannot capture the complexity of the domain into types.
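The same point can be made in any typed setting; here is a deliberately buggy sketch in Python, with hypothetical names:

    # The annotation str -> str says nothing about the formats involved.
    def normalize_date(s: str) -> str:
        """Convert 'DD/MM/YYYY' to ISO 'YYYY-MM-DD'."""
        day, month, year = s.split("/")
        # Typechecks fine, but nothing in the types stops us from transposing
        # fields or mishandling single-digit days -- the bug below is invisible
        # to the checker.
        return f"{year}-{day}-{month}"  # bug: day and month are swapped

    print(normalize_date("31/12/2024"))  # '2024-31-12' -- well-typed, still wrong

You can push more of the constraints into the types (newtypes, refinement types, parser combinators), but at some point the munging logic itself is the spec, and no compiler checks that for you.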
I would say we passed the tipping point quite some time ago, to where they are seen as a cheap salve for numerous problems rather than as part of actually tackling the growth in the reasons people need them. “My life is getting me down, Doctor”/“Here, take some of these”.
> Even a P=NP result doesn't tell us that NP problems have efficient solutions.
Yes it does. That is literally exactly what it means. The class P is the class of problems which are considered theoretically "tractable"/"efficiently solvable"/"feasibly solvable" (Cobham-Edmonds thesis). Hence, if NP=P, then that same definition extends to all problems in NP.
There are very obviously algorithms in P which are not "efficient". For example, an algorithm in O(n^(10^10^10)) is not efficient in any reasonable sense. It is in fact much, much less efficient than an O(e^n) algorithm for any n small enough that the exponential one would finish in under a year or so.
In practical terms, the class of efficient algorithms is probably O(n^3) at best, and even then assuming no huge constant factors.
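To put rough numbers on that, assuming on the order of 10^9 operations per second:

    % e^n overtakes n^c only once n exceeds roughly c*ln(c)
    \[ e^{n} < n^{c} \iff n < c \ln n \]
    % With c = 10^{10^{10}}, the exponential algorithm is therefore faster for
    % every input size up to roughly c ln c. For comparison, at n = 38:
    \[ e^{38} \approx 3 \times 10^{16} \text{ steps (about one year at } 10^{9} \text{ ops/s)}, \qquad
       38^{10^{10^{10}}} \approx 10^{\,1.6 \times 10^{10^{10}}} \text{ steps.} \]

So for any input the exponential algorithm could finish within a year, the n^(10^10^10) algorithm is hopeless.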
I am not enough of a mathematician to work it out myself, but I would love to know the relationship between the constant factor and the exponent in matrix multiplication research papers.
P vs NP is not an "in practical terms" question. It is a theoretical question with theoretical definitions of theoretical terms, including "efficient", which directly corresponds to the class P by definition.
Ok. When I say efficient, I mean "produces efficient code on near-term hardware". I understand that complexity theorists have a different definition of "efficient"-- they also have a different definition of "important" too.
The question being asked was "what would proving P=NP mean for us in practical terms". The fact that mathematicians call all polynomial-time algorithms efficient is irrelevant to this question.
It wasn't exactly a question, but the thread started by discussing practical implications:
> That said, there is definitely potential practical implications for this. Even if it means we can know np problems do not have efficient solutions [emphasis mine]
So, this was about efficiency in the practical sense, not some largely useless definition of efficiency by which galactic algorithms are "efficient".
I chased some links from Wikipedia and you're right that Edmonds uses "efficiently solvable" to mean P. However, he does not take "efficiently solvable" to mean "feasible" for basically the same reasons as I've said in this thread. From section 2 of Edmonds' "Paths, Trees, and Flowers" (1965):
> An explanation is due on the use of the words "efficient algorithm." First, what I present is a conceptual description of an algorithm and not a particular formalized algorithm or "code." For practical purposes computational details are vital. However, my purpose is only to show as attractively as I can that there is an efficient algorithm. According to the dictionary, "efficient" means "adequate in operation or performance." This is roughly the meaning I want—in the sense that it is conceivable for maximum matching to have no efficient algorithm. Perhaps a better word is "good."
> It is by no means obvious whether or not there exists an algorithm whose difficulty increases only algebraically with the size of the graph. The mathematical significance of this paper rests largely on the assumption that the two preceding sentences have mathematical meaning.
> ...
> When the measure of problem-size is reasonable and when the sizes assume values arbitrarily large, an asymptotic estimate of FA(N) (let us call it the order of difficulty of algorithm A) is theoretically important. It cannot be rigged by making the algorithm artificially difficult for smaller sizes. It is one criterion showing how good the algorithm is—not merely in comparison with other given algorithms for the same class of problems, but also on the whole how good in comparison with itself. There are, of course, other equally valuable criteria. And in practice this one is rough, one reason being that the size of a problem which would ever be considered is bounded.
You should read the rest of section 2; it's short and very clear. Calling P the class of "efficiently solvable" problems is a completely reasonable framing for this paper, considering that this was written at a time when we had fewer tools to formally compare algorithms. Edmonds correctly does not claim that all P algorithms are feasible, and my opinion that P=NP is not important is based on the 60 years of feasible non-P computations we've had since.
The value of purely functional programming languages, as opposed to functional programming languages like lisps, is that you get referential transparency, which means that when you define `a = b`, you know that you can always replace any instance of `a` with `b` and get the same answer. This is a very natural property in mathematics (algebraic rewritings are basically just this property writ large) and so it helps to draw nice parallels between the familiar notation of functions from mathematics and the "new" and "confusing" notion of functions in functional programming and other declarative languages.
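Here is a small Python sketch of that contrast, since the substitution property is easier to see in code than in prose (a hypothetical example):

    import random

    # Pure definition: 'a = b' really does make a and b interchangeable.
    b = 2 + 3
    a = b
    assert a * 10 == b * 10  # substituting one for the other never changes the result

    # Side-effecting definition: the 'equation' no longer supports substitution.
    def roll() -> int:
        return random.randint(1, 6)  # not referentially transparent

    x = roll()
    # x == roll() may well be False: replacing x with its 'definition' roll()
    # re-runs the effect and can produce a different value.

In a purely functional language the second case cannot arise without the effect showing up in the type, which is what makes equational reasoning safe everywhere.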
As other posters have said, strong typing is also a nice property for lots of reasons, most notably it gives a platform to talk about ad-hoc and parametric polymorphism.
(I lecture on Functional Programming at the University of Warwick, where we use Haskell.)
The first problem with this argument is that referential transparency is a property of syntactic positions, not of languages.
The second is that languages like Lisp, SML, C, Pascal and BASIC all have referentially transparent and referentially opaque positions in exactly the same way that languages like Haskell do.
This means that all these languages enjoy referential transparency in the same way, because when you unpack the notion of equivalence, referential transparency itself is within a whisker of being a tautology: if a is equivalent to b, then you can substitute a for b or b for a. The relevant sense for "is equivalent to" can really only be contextual equivalence, which is all about meaning-preserving substitutability.
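To make the point about positions concrete, here is the classic quotation example transplanted into Python (just an illustration; the analogous split between transparent and opaque positions shows up in the other languages too):

    a = 1 + 1
    b = 2

    # Referentially transparent position: a name and its definition are interchangeable.
    print(a + 3)        # 5
    print((1 + 1) + 3)  # 5 -- the substitution preserves meaning

    # Referentially opaque position: inside a quotation (here, a string literal),
    # substituting the 'equal' expression for the name changes the result.
    print(len("a"))      # 1
    print(len("1 + 1"))  # 5 -- the substitution is not meaning-preserving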
That said, not having to reason about effects within one's program equivalence sure makes things simpler in a pedagogical setting. But that's not to do with referential transparency per se.
For those of us who are unfamiliar with Lisps, can you expand on how they break referential transparency (and how Standard ML contrasts in that regard)?
More importantly, there are functions (using Scheme as an example) like set! and set-cdr! that mutate existing values and totally break referential transparency.
This isn't just user-facing: for example, let* kind of depends on creating bindings up front so they work across clauses, and then mutating them afterwards.
let* permits expressions on the right to refer to arbitrary other symbols bound by the let*. In particular, it allows for the construction of recursive lambdas that may not be linearizable.
> let* permits expressions on the right to refer to arbitrary other symbols bound by the let*
In what language? I just checked Elisp, SBCL, and Guile, and they all error out if you refer to a variable not previously defined by a left-to-right traversal of the varlist:
(let* ((a (+ b 1)) (b 1)) a)
Edit: This doesn't work either:
(let* ((a (lambda () (+ b 1))) (b 1)) (funcall a)) ; (funcall a) -> (a) for Schemes
Ah, that does look to be the case. I didn't know about that one.
Signature
(letrec BINDERS &rest BODY)
Documentation
Bind variables according to BINDERS then eval BODY.
The value of the last form in BODY is returned.
Each element of BINDERS is a list (SYMBOL VALUEFORM) that binds
SYMBOL to the value of VALUEFORM.
The main difference between this macro and let/let* is that
all symbols are bound before any of the VALUEFORMs are evalled.
Lisp allows you to mutate, but you can certainly write non-mutating code in Lisp. Why do you think you need to use mutation with let*? let* is just a sequential let.
Subtraction is truth-preserving on the sign bit. It's not truth-preserving in the actual subtractive bits.
(I disagree with their claim that the subtractive bit is functionally complete on its own. You're right: since it's truth-preserving, it clearly is not functionally complete, because no composition of truth-preserving functions can express NOT; every such composition still maps the all-true input to true.)
Without trying to defend this particular carve-out, I would suggest that things like computers and video game consoles are improving in capability over a much faster time scale than TVs and video cameras. Hence there is much less of an expectation of longevity / relevance than with other tech goods.
That said, the same argument could be made for mobile phones as well, so it's clearly spurious.
>Hence there is much less of an expectation of longevity / relevance than with other tech goods.
These kinds of arguments are hollow. Especially in gaming, if you make a good console with good games, people will want to hang on to them and play them for literally decades. But even ignoring that specific aspect of gaming culture, it really should not be up to some top-down, self-serving analysis of what most consumers should expect. Otherwise it's just a race to making the least consumer-friendly product, so you can make legal/political arguments that consumers obviously want to buy expensive garbage they expect to break beyond repair within a few years.
That argument made sense 10 years ago, but since then we've seen a lot of slowdown in computers, consoles and mobile phone progress, while TVs have overcome the LCD slump.
The difference in value between a 10-year-old console (the PS4!) and a new one can be smaller than that between a 10-year-old LCD and a new OLED.
> > Without trying to defend this particular carve-out, I would suggest that things like computers and video game consoles are improving in capability over a much faster time scale than TVs and video cameras. Hence there is much less of an expectation of longevity / relevance than with other tech goods.
I disagree with your point, but I'll reply to this one:
> That argument made sense 10 years ago, but since then we've seen a lot of slowdown in computers, consoles and mobile phone progress
That argument made even less sense 10 years ago, in my opinion. When things are moving fastest (e.g., are most profitable) is exactly when parts must be made available for consumers to repair things themselves. When things are moving slower, then the IP/schematics should absolutely be provided if nobody is willing to make the parts.
Honestly though: how long something lasts shouldn't matter, companies should still be forced to provide support for things they sell, or else to provide their IP/schematics so that other people can support the trash that was sold.
This is absolutely true when you look at hardware from today vs 10 years ago, then do the same comparison between the 90s and 80s or even 00s and 90s. People are playing basically the same manner of game now and 10 years ago, but between the 80s and 90s there was radical change in technology in a way that shaped the development of entirely new video game genres. Video game development since about the early to mid 00s has been mostly a matter of refinement, very little has been truly revolutionary.
It makes less sense. Video game consoles typically run 5-10 year cycles. If anything, supporting repair on them should be easier, because a console plays the same games at its very first release as one sold right before it is discontinued. PCs and phones get updates yearly, and a 10-year-old PC certainly can't play the same games as a brand new one.