This is the conclusion I come to whenever I try to grasp the works of Nagel, Chalmers, Goff, Searle et al. They're just linguistically chasing their own tails. There's no meaningful insight beneath it all. All of their arguments, however complex, rely on poorly defined terms like "understand", "subjective experience", "what it is like", "qualia", etc. And when you try to follow the arguments with the definitions of these terms left open, you realise they only make sense when the terms smuggle into their definitions the supposition that the argument is true. It's all just circular reasoning.
“The Feeling of What Happens” by Antonio Damasio, a book by a neuroscientist from some years ago [0], does an excellent job of building a framework for conscious sensation from its parts. As I recall, it constructs a theory of “mind maps” from various nervous system structures, and it left me with the sense that I could actually understand them afterwards.
As a radical materialist, I'd say the problem with ordinary materialism is that it boils down to dualism: some types of matter (e.g. the human nervous system) give rise to consciousness and other types of matter (e.g. human bones) do not.
Ordinary materialism is mind-body/soul-substance dualism with a hat and lipstick.
Human bones most definitely do contribute to feeling, but not through logos. The book expands upon the idea of mind-body duality to merge proprioception and general perception.
I’d bet bats would enjoy marrow too if they could.
So how does a radical materialist explain consciousness? That it too is a fundamental material phenomenon? If so, are you stretching the definition of materialism?
I find myself believing Idealism or monism to be the most likely fundamental picture
well the hard problem of consciousness gets in the way of that
I assume that as a materialist you mean our brain carries consciousness as a field of experience arising out of neural activity (i.e. neurons firing, some kind of information processing leading to models of reality simulated in our mind, leading to us feeling aware), i.e. that our awareness is the 'software' running inside the wetware.
That's all well and good, except that none of it explains the 'feeling of it': there is nothing in that third-person material activity that correlates with first-person feeling. The two are categorically different; reductionist physical processes cannot substitute for the feeling you and I have as we experience.
This hard problem is difficult to surmount physically. Either you say it's an illusion (but how can the primary thing we are, what we experience as the self, be an illusion?), or you say that somewhere in fields, atoms, molecules, cells, in 'stuff', is the redness of red or the taste of chocolate.
whenever I see the word 'reductionist', I wonder why it's being used to disparage.
a materialist isn't saying that only material exists: no materialist denies that interesting stuff (behaviors, properties) emerges from material. in fact, "material" is a bit dated, since "stuff-type material" is an emergent property of quantum fields.
why is experience not just the behavior of a neural computer which has certain capabilities (such as remembering its history/identity, some amount of introspection, and of course embodiment and perception)? non-computer-programming philosophers may think there's something hard there, but the only way they can express it boils down to "I think my experience is special".
Because consciousness itself cannot be explained except through experience, i.e. consciousness (first-person experience) itself, not through material phenomena
It’s like explaining music vs hearing music
We can explain music intellectually and physically and mathematically
But hearing it in our awareness is a categorically different activity, an experience that has no direct correlation to the physical correlates of its being
Up to a point I agree, but when someone deploys this vague language in what are presented as strong arguments for big claims, it is they who bear the burden of disambiguating, clarifying and justifying the terms they use.
I don't agree that the inherent nebulousness of the subject extends cover to the likes of Goff, Chalmers (on panpsychism), or Searle and Nagel (on the hard problem). It's a "both can be true" situation, and many practicing philosophers appreciate the nebulousness of the topic while strongly disagreeing with the collective attitudes embodied by those names.
If he were capable of describing subjective experience in words with the exactitude you're asking for, then his central argument would be false. The point is that objective measures, like writing, are external, and cannot describe internal subjective experience. It's one thing to probe the atoms; it's another thing to be the atoms themselves.
Basically his answer to the question "What is it like to be a bat?" is that it's impossible to know.
>This is the conclusion I come to whenever I try to grasp the works of Nagel, Chalmers, Goff, Searle et al. They're just linguistically chasing their own tails.
I do mostly agree with that and I think that they collectively give analytic philosophy a bad name. The worst I can say for Nagel in this particular case, though, is that the entire argument amounts to, at best, an evocative variation of a familiar idea presented as though it's a revelatory introduction of a novel concept. But I don't think he's hiding an untruth behind equivocations, at least not in this case.
But more generally, I would say I couldn't agree more when it comes to the names you listed. Analytic philosophy ended up being almost completely irrelevant to the necessary conceptual breakthroughs that brought us LLMs, a critical missed opportunity for philosophy to be the field that germinates new branches of science, and a sign that a non-trivial portion of its leading lights are just dithering.
Don't agree with this kind of linguistic dismissal. It doesn't change the fact that we have sensations of color, sound, etc. and there are animals that can see colors, hear sounds and detect phenomena we don't. It's also quite possible they experience the same frequencies we see or hear differently, due to their biological differences. This was noted by ancient skeptics when discussing the relativity of perception.
That is what is being discussed using the "what it's like" language.
I like the more specific versions of those terms: the feeling of a toothache and the taste of mint. There's no need to grasp anything, they're feelings. There's no feeling when a metal bar is bent by a press.
As a player I like to think of sharpness as a measure of the potential consequences of a miscalculation. In a main line dragon, the consequence is often getting checkmated in the near future, so maximally sharp. In a quiet positional struggle, the consequence might be something as minor as the opponent getting a strong knight, or ending up with a weak pawn.
Whereas complexity is a measure of how far ahead I can reasonably expect to calculate. This is something non-players often misunderstand, which is why they like to ask me how many moves ahead I can see. It depends on the position.
And I agree, these concepts are orthogonal. Positions can be sharp, complex, both, or neither. A pawn endgame is typically very sharp; the slightest mistake can lead to the opponent queening and checkmating. But it's relatively low in complexity, because you can calculate far ahead using ideas like counting and geometric patterns (square of the pawn, zone of the pawn, distant opposition, etc.) to abstract over long lines of play. On the opposite side, something like a main line closed Ruy Lopez is very complex (every piece still on the board), but not especially sharp (closed position, both kings are safe, it's more of a struggle for slight positional edges).
Something like a King's Indian or Benoni will be both sharp and complex, whereas an equal rook endgame is neither (it's quite hard to lose a rook endgame; there always seems to be a way to save a draw).
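For non-players wondering what calculating with a geometric pattern looks like, here's a minimal sketch of the square-of-the-pawn rule in Rust. The coordinate convention and function name are mine, and it assumes a bare king-versus-pawn race, ignoring edge cases like the attacking king shouldering the defender away.

```rust
// "Square of the pawn", as an arithmetic check: in a bare king-and-pawn
// race, the defending king catches a passed pawn iff its king-move
// (Chebyshev) distance to the promotion square is at most the number of
// moves the pawn needs. Files and ranks are 1-8; white pawn vs. black king.
fn black_king_catches(pawn_file: i32, pawn_rank: i32,
                      king_file: i32, king_rank: i32,
                      defender_to_move: bool) -> bool {
    // A pawn on its starting rank can jump two squares, so treat rank 2 as 3.
    let effective_rank = pawn_rank.max(3);
    let mut pawn_moves = 8 - effective_rank;
    if !defender_to_move {
        pawn_moves -= 1; // pawn moves first: shrink the square by one step
    }
    let king_dist = (king_file - pawn_file).abs().max(8 - king_rank);
    king_dist <= pawn_moves
}

fn main() {
    // White pawn on e4 (file 5, rank 4), black king on a4 (file 1, rank 4):
    // the king sits on the corner of the square, so it catches the pawn
    // only if it is to move.
    assert!(black_king_catches(5, 4, 1, 4, true));
    assert!(!black_king_catches(5, 4, 1, 4, false));
    println!("square-of-the-pawn checks pass");
}
```

One constant-time check like this stands in for many plies of concrete calculation, which is why a pawn endgame can be low in complexity while still being razor sharp.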
To add to this, Java and other GC languages in some sense have manual memory management too, no matter how much we like to pretend otherwise.
It's easy to fall into a trap where your Banana class becomes a GorillaHoldingTheBananaAndTheEntireJungle class (to borrow a phrase from Joe Armstrong), and nothing ever gets freed because everything is always referenced by something else.
Not to mention the dark arts of avoiding long GC pauses etc.
It's possible to do this in rust too, I suppose. The clearest difference is that in rust these things are explicit rather than implicit. To do this in rust you'd have to use 'static, etc. The other distinction is compile-time versus runtime, of course.
> The clearest difference is that in rust these things are explicit rather than implicit. To do this in rust you'd have to use 'static, etc.
You could use 'static, or you can move (partial) ownership of an object into itself with Rc/Arc and locking, causing the underlying counter to never return to 0. It's still very possible to accidentally hold on to a forest.
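A minimal sketch of that failure mode with a made-up Node type; swap Rc for Arc plus a lock and you get the multithreaded version of the same leak:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical type: each value can hold an optional strong reference
// back into the same graph it belongs to (here, to itself).
struct Node {
    name: String,
    keep_alive: RefCell<Option<Rc<Node>>>,
}

impl Drop for Node {
    fn drop(&mut self) {
        println!("dropping {}", self.name); // never prints for `leaked`
    }
}

fn main() {
    let leaked = Rc::new(Node {
        name: "gorilla".into(),
        keep_alive: RefCell::new(None),
    });
    // Move a clone of the Rc into the node itself: the strong count can
    // now never reach zero, so Drop never runs and the allocation lives
    // until the process exits.
    *leaked.keep_alive.borrow_mut() = Some(Rc::clone(&leaked));
    println!("strong count: {}", Rc::strong_count(&leaked)); // prints 2
} // `leaked` goes out of scope here; the count drops to 1, never 0
```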
> It's easy to fall into a trap where your Banana class becomes a GorillaHoldingTheBananaAndTheEntireJungle class (to borrow a phrase from Joe Armstrong), and nothing ever gets freed because everything is always referenced by something else.
Can you elaborate on this? I'm struggling to picture a situation in which I have a gorilla I'm currently using, but keeping the banana it's holding and the jungle it's in alive is a bad thing.
The joke is you're using the banana but you didn't actually want the gorilla, much less the whole jungle. E.g. you might have an object that represents the single database row you're doing something with, but it's keeping alive a big result set and a connection handle and a transaction. The same thing happening with just an in-memory datastructure (e.g. you computed some big tree structure to compute the result you need) is less bad, but it can still impact your memory usage quite a lot.
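To make the shape concrete, here's roughly what that looks like in Rust with made-up types; the usual fix is to copy out just the fields you need rather than keeping a handle to the source alive:

```rust
use std::sync::Arc;

// Made-up types, purely to show the ownership shape being described.
struct Connection;                                  // socket, buffers, ...
struct Transaction { _conn: Arc<Connection> }
struct ResultSet   { _raw: Vec<Vec<u8>>, _txn: Arc<Transaction> }

// The "banana": one row's worth of fields, which nonetheless drags the
// whole result set (and transaction, and connection) along for as long
// as it lives.
struct Row {
    fields: Vec<String>,
    _source: Arc<ResultSet>,
}

// The leaner alternative: copy out only what you need, so everything
// else can be dropped as soon as you're done with the query.
struct DetachedRow {
    fields: Vec<String>,
}

fn detach(row: &Row) -> DetachedRow {
    DetachedRow { fields: row.fields.clone() }
}

fn main() {
    let conn = Arc::new(Connection);
    let txn = Arc::new(Transaction { _conn: conn });
    let rs = Arc::new(ResultSet {
        _raw: vec![vec![0u8; 1 << 20]; 8], // ~8 MB of raw result data
        _txn: txn,
    });
    let row = Row { fields: vec!["42".into()], _source: Arc::clone(&rs) };
    drop(rs);                // our own handle to the result set is gone...
    let kept = detach(&row); // ...but `row` still pins all 8 MB
    drop(row);               // only now can the buffers, txn and conn be freed
    println!("{:?}", kept.fields);
}
```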
The reason it's common courtesy is respect for the reviewer/maintainer's time. You need to let 'em know to look for the kind of idiotic mistakes LLMs shit out on a routine basis. It's not a "distraction", it's extremely relevant information. At the maintainer's discretion, they may not want to waste their time reviewing it at all, and may politely or impolitely ask the contributor to do it again and use their own brain this time. It also tells them how seriously to take this contributor in the future, if the work doesn't hold water, or indeed, even if it does, since the next time the contributor runs the LLM lottery the result may be utter bullshit.
Whether it's prose or code, when I'm informed that something is entirely or partially AI generated, it completely changes the way I read it. I have to question every part of it now, no matter how intuitive or "no one could get this wrong"-ish it might seem. And when I do, I usually find a multitude of minor or major problems. It doesn't matter how "state of the art" the LLM that shat it out was. They're still there. The only thing that has ever changed in my experience is that the problems become trickier to spot. Because these things are bullshit generators. All they're getting better at is disguising the bullshit.
I'm sure I'll get lots of responses trying to nitpick my comment apart. "You're holding it wrong", bla bla bla. I really don't care anymore. Don't waste your time. I won't engage with any of it.
I used to think it was undeserved that we programmers called ourselves "engineers" and "architects" even before LLMs. At this point, it's completely farcical.
"Gee, why would I volunteer that my work came from a bullshit generator? How is that relevant to anything?" What a world.
But how much time does that 0.3 watt hour query take to run? They imply that an individual ChatGPT query takes 0.3-3 watt hours, but most queries come back in seconds, so we need to scale that over a whole hour of processing.
Edit: Scrolling down: "one second of H100-time per query, 1500 watts per H100, and a 70% factor for power utilization gets us 1050 watt-seconds of energy", which is how they get down to 0.3 = 1050/60/60.
OK, so if they run it for a full hour it's 1050*60*60 = 3.8 MW? That can't be right.
Edit Edit: Wait, no, it's just 1050 Watt Hours, right (though let's be honest, the 70% power utilization is a bit goofy - the power is still used)? So it's 3x the power to solve the same question?
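For what it's worth, here's the arithmetic spelled out, using only the figures quoted above (1 s of H100 time, 1500 W, 70% utilization); the variable names are mine:

```rust
// Back-of-envelope check of the per-query figure, using only the numbers
// quoted above (1 s of H100 time, 1500 W, 70% utilization factor).
fn main() {
    let gpu_power_w = 1500.0_f64; // H100 power draw, watts
    let utilization = 0.70;       // fraction attributed to the query
    let query_seconds = 1.0;      // GPU-seconds spent on one query

    let energy_ws = gpu_power_w * utilization * query_seconds; // 1050 watt-seconds (joules)
    let energy_wh = energy_ws / 3600.0;                        // ~0.29 watt-hours per query

    // Power vs. energy: a GPU held at 1050 W for a full hour would use
    // 1050 Wh, but one query only occupies about a second of that hour.
    println!("{energy_ws} W·s = {energy_wh:.2} Wh per query");
}
```

So 0.3 Wh is the energy per query, while 1050 W is the draw of a busy GPU; on the article's one-second-per-query assumption, an hour of that covers thousands of queries, not one.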
Amphetamine is actually a very effective weight loss drug. And that's sort of orthogonal to the fact that it's a stimulant. Stimulants in general can cause an acute reduction in appetite and temporary weight loss. This tends to stabilise with tolerance, however. As someone with obesity and ADHD, such was my experience with methylphenidate treatment. And until recently I thought the weight loss effects of amphetamine were analogous.
Amphetamine and methylphenidate (MPH) have very different ways of acting as stimulants. MPH is an inhibitor of the dopamine transporter (DAT) and the norepinephrine transporter (NET). These cross-membrane proteins essentially "suck up" dopamine or norepinephrine after neurotransmission, thus regulating the effect. MPH inhibits this process, increasing the effect. This is what's called a norepinephrine-dopamine reuptake inhibitor (NDRI). Cocaine also works like this, as does the antidepressant Wellbutrin (bupropion).
Amphetamine, on the other hand, is a bit more complicated. It interacts with DAT/NET as well, but as a substrate, actually passing through them into the neuron. Inside the neuron, it has a complex series of interactions with TAAR1, VMAT2, and ion concentrations, causing signaling cascades that lead to DAT reversal. Essentially, enzymes are activated that modify DAT in such a way that it pumps dopamine out of the neuron instead of sucking it up. How that happens is very complicated and beyond the scope of this comment, but amphetamine's activity at TAAR1 is an important contributor. As such, amphetamine is a norepinephrine-dopamine releasing agent (NDRA). Methamphetamine, MDMA, and cathinone (from khat) also work like this.
Anyway, I was recently reading about TAAR1 and learned something new: besides being an internal receptor in monoaminergic neurons, TAAR1 is also expressed in the pancreas, the duodenum, the stomach, and the intestines, and in those tissues TAAR1 activation increases release of GLP-1, PYY, and insulin, as well as slowing gastric emptying.
So in essence, there may be some pharmacological overlap between Ozempic and amphetamine (I'm still looking for data on how significantly amphetamine reaches TAAR1 in these tissues, so the relevance is unclear. But amphetamine is known to diffuse across cellular membranes, so it's likely there is some effect).
Also interesting: amphetamine (as lisdexamfetamine) was recently approved as a treatment for binge eating disorder. Not only because it causes weight loss, but because it improves functioning in the prefrontal cortex (crucial to its efficacy in ADHD), which is apparently implicated in the neuropsychological aspects of BED as well.
There is a mixed picture on this. I see a lot of reports of it causing bingeing in the evenings despite no prior issues.
The issue is that therapeutic doses are not the multi-day bender of a speed-freak that forgoes sleep to keep their blood-concentration permanently high. Instead it's a medicated window of 6-12 hours with a third or more of their waking hours remaining for rebound effects to unleash stimulation-seeking demons that run wilder than ever.
Stockfish being so strong is not merely a result of scaling computation with search and learning. Basic alpha-beta search doesn't really scale all that well with compute: the number of nodes visited grows exponentially with the number of plies you look ahead, and alpha-beta search is not embarrassingly parallel. The reason Stockfish is so strong is that it includes pretty much every heuristic improvement to alpha-beta that's been thought of in the history of computer chess, somehow combining all of them while avoiding bugs and performance regressions. Many of these heuristics are based on chess knowledge. Add to that a lot of very clever optimisation of data structures (transposition tables, bitboards) to facilitate parallel search and shave off every bit of overhead.
Stockfish is a culmination of a lot of computer science research, chess knowledge and clever, meticulous design.
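For anyone who hasn't seen it, the baseline being described is plain alpha-beta, written here in negamax form. A toy Rust sketch with an invented tree and made-up evaluation numbers; everything mentioned above (move-ordering heuristics, transposition tables, bitboards, parallel search) gets layered on top of this loop:

```rust
// Toy negamax with alpha-beta pruning. `Node` is a stand-in for a real
// position; the eval numbers below are invented for illustration.
struct Node {
    eval: i32,           // static evaluation, from the side to move's view
    children: Vec<Node>, // positions after each legal move; empty = leaf
}

fn alphabeta(node: &Node, depth: u32, mut alpha: i32, beta: i32) -> i32 {
    if depth == 0 || node.children.is_empty() {
        return node.eval;
    }
    let mut best = i32::MIN + 1;
    for child in &node.children {
        // Negamax: a child's score from our point of view is the
        // negation of its score from the opponent's point of view.
        let score = -alphabeta(child, depth - 1, -beta, -alpha);
        best = best.max(score);
        alpha = alpha.max(score);
        if alpha >= beta {
            break; // beta cutoff: the opponent will never allow this line
        }
    }
    best
}

fn main() {
    let leaf = |e| Node { eval: e, children: vec![] };
    let tree = Node {
        eval: 0,
        children: vec![
            Node { eval: 0, children: vec![leaf(3), leaf(5)] },
            Node { eval: 0, children: vec![leaf(-2), leaf(9)] }, // leaf(9) gets pruned
        ],
    };
    println!("best score: {}", alphabeta(&tree, 2, i32::MIN + 1, i32::MAX));
}
```

The `break` is where the savings come from: with good move ordering most of the tree is never visited, which is exactly where those heuristics earn their keep.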
While what you mention is true, I'm not sure how it undermines the bitter lesson. Optimizing the use of hardware (which is what NNUE essentially does) is one way of "increasing compute." Also, NNUE was not a chess specific technique, it was originally developed for Shogi.
Not sure why this is downvoted, it's just factually true.
Police and politicians talking about outlawing things that help criminals as though it will somehow affect the criminals, will never cease to amaze and amuse me. It's such an elementary error of logic.
The fact is that in a reasonably free society it's quite feasible to get away with lots of crime, if you're smart enough. There is no stopping this. Especially if it's a crime which doesn't leave a whole lot behind in terms of physical evidence. Downloading an OS is one such thing. Sure, if you seize my phone, you could prove it runs Graphene. But in a free society, you need probable cause for that, sorry. And if I am some major criminal, and Graphene stops my criminal enterprise from being proven, in a free society that's always preferable to getting busted, because the punishment for using Graphene is gonna be meaningless compared to the punishment I'm avoiding by using it. Because a free society includes a protection against disproportionate punishments for minor crimes. Sure, I'll pay your $500 fine to avoid 20 years in prison. Cost of doing business.
Once you realise this, you realise the only way to tackle crime is by legalising as many crimes as possible, as long as they're not actively and unambiguously violating people's rights. Murder and other violent acts obviously stay illegal. Drugs, prostitution, etc.? Legalise them. That's most of the crime right there, because these classes of crime actually provide something that's in wide popular demand. Demand + black market pricing + lack of taxes means lots of money, and money means the power to create strong criminal organisations that can do whatever they want with impunity, including influencing politics. With all that out the window, all you have left is a bunch of individuals going at it alone: murdering psychopaths, desperate poor people, the mentally ill, crimes of passion, sex crimes, etc. And you just freed up a ton of societal resources to channel into those vestiges, both via targeted, intelligent policing and broader societal reforms that target the sociological processes that cause these kinds of crime (like wealth inequality, to name one).
Instead, what we get is a never ending arms race towards a totalitarian society. Oh well, see you after the next revolution, I guess.
Legalizing drugs had a pretty bad outcome for Portland, which is why they re-criminalized some drugs.
Prostitution leads to trafficking, a word I absolutely despise, particularly when it is used as a past-tense verb: "trafficked." Ugh! What poorly educated government hacks do to our language should be criminal! Regardless, human trafficking is terrible, and if that part could be fixed, then maybe prostitution wouldn't be so horrible. But as it stands it is, because it is pretty much never a voluntary situation for the women, but always some kind of coercion.
I spent some time* working on the firmware side of developing custom electronics based on various AVR chips, the ATmega328 among them. Arduinos are not good for much more than babby's first microcontroller project. They're not even that great for prototyping. Besides the aforementioned hardware design issues, the "arduino" language (really just C++) and core library had several problems, both in terms of code quality and in abstracting over things that shouldn't be abstracted over when working with such a limited chip (8-bit, 2 KB of SRAM...), like significant memory allocations and interactions with SREG.
My EE partner in crime ended up designing a prototyping board himself, with various creature comforts built in that we had needed shields for with Arduino, and I ended up writing plain C with avr-libc instead of using any of the Arduino library/tooling, developing a set of core modules for the things we added to our boards, in a more flexible manner than the Arduino library. It took some time, but it saved us a lot of time and friction in our later prototyping efforts.
All that being said, there's nothing wrong with Arduino as a platform for learning and personal tinkering. I do think they could've done a better job bridging the gap between that and prototyping though.
* Ten years ago, so my memory of specifics is very fuzzy and only reflects the state of things back then.
> Arduinos are not good for much more than babby's first microcontroller project.
Baby’s first microcontroller project is exactly what they excel at and, by doing so, they made hobbyist microcontroller development vastly more accessible.
The Arduino's value comes from its ease of starting, and it made that a lot easier than the then-extant state of the art.
>So ... exactly for what the device is being sold as? Weird complaint: "I purchased an apple, and all I got was an apple that's only good as an apple."
Like I said:
>>All that being said, there's nothing wrong with Arduino as a platform for learning and personal tinkering.
I was just adding my 2 cents on Arduinos based on personal experience. That is all.
>Then you would know that ATmegas are in a lot of successful commercial products from the past.
Yes. What led you to believe I was suggesting otherwise? I made no criticism of the ATmega328, any other ATmega chip, or the AVR ISA for that matter. I could make some if I wanted to, but it doesn't seem relevant. The topic was Arduino boards, which typically contain an AVR chip but are not chips themselves; an Arduino is a dev board.