Hacker News | niam's comments

Honestly that just seems like normal English.

To refer to something as "team-level" seems so absurdly unspectacular, relative to the other kinds of signals that exist for sussing out AI writing, that I'm surprised it was worthy of mentioning at all.


It's proper grammar, but I have only ever seen journalists use it. It is a strong signal in my experience.

Perhaps this is a joke but I'd truly love to sic a fleet of asshole AIs on my drafts.

Grant us the ability to reply to bots in a public and timestamped manner so that, if ever a human makes a similarly ridiculous response, we can just point at our response there. It'd free up space in the piece itself that'd otherwise go to asterisks and other preemptions of armchair dweebery.


It fits in nicely imo. It's plausible (services re-appear on hn often enough), and hilarious because it implies the protracted importance of Leetcode.

Though I agree that the LLM perhaps didn't "intend" that.


If the limit of someone's behavior winds up making everyone happier-off, I don't understand why I ought to care. In that sense, calling it "manipulative" seems either inappropriate or not very useful.

At least with something like adultery, there's a pretty obvious ill consequence of someone finding out what's going on behind the scenes. But if I looked behind the curtains of someone like OP and found out that the reason they're so charming is that they thought about people a bunch: I couldn't be bothered to care.


> If the limit of someone's behavior winds up making everyone happier-off, I don't understand why I ought to care.

I guess I don’t believe this behavior actually leaves the targets better off.

Doing a lot of experiments where you feign connections and openness with other people is going to leave a lot of the targets feeling unhappy when they realize they were tricked into opening up to someone who was just using them as a target for their experiments.

Take, for example, the section of the post where he talks about getting someone to open up into “cathartic sobbing” but displays zero interest in the person’s problems, only wondering how he managed to trigger it through yet another technique.

My takeaway about the net effects of these social connection experiments was distinctly different. It was fine in the context of waiting tables, where everyone knows the interaction is temporary and transactional, but the parts where it expanded into mind-reading people’s weaknesses and insecurities, and then leveraging that into “connections” he later laments not actually wanting, were another matter.


The assumption is that it’s feigned. Frankly you do not develop these skills to this degree if you are inauthentic.

Even the “zen openness” bit is mimicry of people whose vibe they liked, and they were surprised by the results.


"Money is speech" is kind of a misleading interpretation because it comes with all sorts of baggage that people typically infer from a thing "being speech".

Phrased another way: the argument is that limiting one's ability to spend is practically a limitation on their speech (or their ability to reach an audience, which is an important part of speech). If some president can preclude you from buying billboards, or web servers, or soapboxes on which to stand: he has a pretty strong chokehold on your ability to disseminate a political message.

I'm not defending that argument, only saying what it is as I understand it.


My arguments are as bad-faith as the arguments that lead to corporate personhood and citizens united. Fight fire with fire.


This looks great.

I do yearn for a day though when we're using something like Marimo over Jupyter as a default for these kinds of things. Particularly in GIS where there's more utility in being able to use a notebook-like interface for an executable routine (rather than an analysis or experiment, which is (and should probably remain) the primary use case for Jupyter).


Calling hooks "traditional" in relation to signals seems fine, lest that word be relegated in time to whatever you in particular care about at this juncture.

I'm sure some 80 year olds before us thought the same of us using that term.


> Calling hooks "traditional" in relation to signals seems fine, lest that word be relegated in time to whatever you in particular care about at this juncture.

"Traditional" hooks are 6 years old. I think it's to early to call it traditional. Given that literally everyone else looked at this "tradition" and chose differently. Namely, signals.

Signals were popularized by SolidJS, but SolidJS's Ryan Carniato will keep telling you that what everyone calls signals now has its roots in libs like KnockoutJS from 2010. And everyone has been busy using signals for the past three years.

Given the amount of frameworks that implement signals today (including monsters like Angular), it's React who's not following tradition.


> I think it's to early to call it traditional.

I guess the bigger point is that we could offer the same charity to the OP that I'm lending you here in your use of the wrong "to".

There are meaningful critiques elsewhere in the comments about the piece with some semblance of charitable interpretation. We've elected instead to manufacture a snide, pedestrian haughtiness about wording.


First-class signals came from functional reactive programming back in 1997. Even JS implementations of signals existed before React's hooks.


These "signals" don't seem like FRP signals (which are time-varying values, like a "signal" to an EE but not limited to real numbers) but more like Qt signals or MVC "model has updated" notifications.


Original FRP was "behaviours" (continuous time-varying values) and "events" (discrete events, which map to current signals).


Yes, you're right. I think I read some later FRP papers that used the term "signal" instead of "behavior", and I thought that was what you were talking about. FRP "events" are kind of like the kind of signals being discussed here, but there are still big differences.


> FRP "events" are kind of like the kind of signals being discussed here, but there are still big differences.

I don't think the differences are that significant. JS signals are basically `latch(frp-event-stream)`: FRP events yield edge-triggered systems, JS signals yield level-triggered systems, and latches transform edge-triggered into level-triggered.

I understand why people can see JS signals as FRP behaviours though, as both have defined values at all times t, but the evaluation model is more like FRP events (push-based reactivity), so I think edge vs. level triggered is the real difference, and these are interconvertible without loss of information.

IIRC, the FRP literature calls both of them "signals" as a general category, just two different types.
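That latch idea fits in a few lines. This is a hypothetical sketch (all names are made up): an edge-triggered event stream only does anything at the instant of emission, while latching its last value yields something readable at any time t, like a JS signal.

```typescript
// Hypothetical sketch of latch(frp-event-stream):
// edge-triggered in, level-triggered out.
type Listener<T> = (value: T) => void;

class EventStream<T> {
  private listeners: Listener<T>[] = [];
  subscribe(fn: Listener<T>): void {
    this.listeners.push(fn);
  }
  // Edge-triggered: listeners fire only at the moment of emission.
  emit(value: T): void {
    this.listeners.forEach((fn) => fn(value));
  }
}

// Level-triggered: the latched value is defined and readable at all times.
function latch<T>(stream: EventStream<T>, initial: T): { get(): T } {
  let current = initial;
  stream.subscribe((v) => {
    current = v;
  });
  return { get: () => current };
}
```

Going the other way (level to edge) just means emitting an event whenever the latched value changes, which is why the two views are interconvertible.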


Whether LLMs are "intelligent" seems a wholly uninteresting distinction, resembling the internet ceremony surrounding whether a hotdog is a sandwich.

There's probably very interesting discussion to be had about hotdogs and LLMs, but whether they're sandwiches or intelligent isn't a useful proxy to them.


I disagree completely. Many people take for granted that the expression of intelligence/competence is the same as actual intelligence/competence, and many people are acting accordingly. But a simulacrum is definitively NOT the thing itself. When you trust fake intelligence, especially as a way to indulge mental laziness, your own faculties atrophy, and then in short order you can't even tell the difference between a real intelligence bomb and a dumb empty shell that has the word "intelligent" written on it.


I'm not even taking for granted what it means. Can you define it in a way that your neighbor will independently arrive at? It's an incredibly lossy container for whatever meaning people want to pack into it, more so than other words.

Is a hotdog a simulacrum of a sandwich? Or a fake sandwich? I have no clue and don't care because it doesn't meaningfully inform me of the utility of the thing.

An LLM might be "unintelligent" but I can't model what you think the consequences of that are. I'd skip the formalities and just talk about those instead.


It sounds like you are [dis]interested in a philosophical discussion about epistemology. So it seems that you've skipped the inquiry yourself and have short-circuited to "don't care". Which is kind of "utilitarian". For other perspectives[0]:

> The school of skepticism questions the human ability to attain knowledge, while fallibilism says that knowledge is never certain. Empiricists hold that all knowledge comes from sense experience, whereas rationalists believe that some knowledge does not depend on it. Coherentists argue that a belief is justified if it coheres with other beliefs. Foundationalists, by contrast, maintain that the justification of basic beliefs does not depend on other beliefs. Internalism and externalism debate whether justification is determined solely by mental states or also by external circumstances.

For my part, I do believe that there is non-propositional knowledge. That a person can look at a set of facts/experiences/inputs and apply their mind towards discerning knowledge (or "truth"), or at least the relative probability of knowledge being true. That while this discernment and knowledge might be explained or justified verbally and logically, the actual discernment is non-verbal. And, for sure, correctness is not even essential--a person may discern that the truth is unknowable from the information at their disposal, and they may even discern incorrectly! But there is some mental process that can actually look behind the words to their "meaning" and then apply its own discernment to that meaning. (Notably, this is not merely aggregating everyone else's discernment!) This is "intelligence", and it is something that humans can do, even if many of us often don't even apply this faculty ourselves.

From discussions on HN and otherwise I gather this is what people refer to by "world-modeling". So my discernment is that language manipulation is neither necessary nor sufficient for intelligence--though it may be necessary to communicate more abstract intelligence. What LLM/AGI proponents are arguing is that language manipulation is sufficient for intelligence. This is a profound misunderstanding of intelligence, and one that should not be written off with a blithe and unexamined "but who knows what intelligence is anyway".

[0] https://en.wikipedia.org/wiki/Epistemology


I'm not discounting the philosophy, just the language.

I don't mean to sound blithe. If I do, it's not out of indifference but out of active determination that these kinds of terminological boundary disputes quickly veer into pointlessness. They seldom inform us of anything other than how we choose to use words.


What are your thoughts on the Chinese room thought experiment?


See my other comment above. Language manipulation is not sufficient for intelligence and understanding. There is no one in the Chinese Room who understands the questions and answers; there is no understanding in the system; there is no understanding at all.


I'll die on this hill.

Google Sheets was phenomenal for prototyping apps and getting quick feedback from users back when I used it in 2015-2020. Back then they had this janky implementation of Mozilla Rhino underpinning their "Apps Script" engine and it still beat the pants off of anything else you could use for free.

Certainly you can shoot yourself in the foot with the various spreadsheet-isms, but if you're diligent about keeping raw data pure (preferably in a completely different sheet inaccessible to users), it does a bang-up job of quickly shoving a UI in front of users and letting them realize what they want and iterate on it before calcifying it into a more rigid system.


Exactly this. I worked for a startup whose leaders were dogmatic about "using the best tool" and "spreadsheets are bad" (a trope they just picked up from others, not having used spreadsheets themselves). They ended up spending thousands on consultants to build reporting etc. that needed to be changed after 6 months because of business/personnel changes.

Spreadsheets are the best tool to quickly spin up and make changes to data.

I've always thought about a tool that makes a "front-end" version of spreadsheets for end users, where the layout can be a bit more freeform (i.e. build reports and dashboards in a spreadsheet, then select those reports and paste them into a front-end WYSIWYG tool).


Check out Airtable; it's exactly this. (I've mentioned it in two comments now, but I'm not affiliated with the company, just a big fan.)


I mostly agree with the comments here saying it's kind of silly to malign Google for this.

I'm grateful for the piece regardless, if only to inform me that the service exists (and, well, now doesn't for some countries).

