>A piece of software cannot be a slave because it isn't a being and has no consciousness. It's a piece of software with a dataset, nothing more. There's no philosophical debate to be had over it either, it is what it is, and that's all it is and ever will be; a clever trick as a means to interact with a dataset. There's nothing more that will emerge from it, it is a piece of software with a dataset.
You keep repeating "it is a piece of software with a dataset".
One of the big questions is whether human-like sentience can be reached by a "piece of software with a dataset". So, merely saying "it is a piece of software with a dataset" doesn't answer it.
One other question is whether being "a piece of software with a dataset" is isomorphic to how the human brain works anyway. Whether the substrate is cells, neurons, and chemicals, or circuits and computer memory, what matters is whether the latter can model the former. The actual human brain, for example (at least the parts that matter for consciousness, not non-essential accidental attributes such as being made of biological matter), could just be a calculating machine, with neurons and electrical and chemical signals implementing a network with weights, back-propagation, and so on. Much more complex than a current LLM, but not out of reach for eventual software modelling.
Another question is whether full modelling of how a human brain works is needed at all, or whether a simpler model (like an LLM, perhaps a little more advanced than current ones) can still be enough. After all, the brain does not hold some special god-given role: it's just an evolutionary design, one subject to many constraints (e.g. power consumption, blood flow, the speed of information processing through our senses), whereas an AI can have orders of magnitude more power, more information fed to it, and so on.
Lastly, you say "There's no philosophical debate to be had over it either, it is what it is, and that's all it is and ever will be". You seem to be misguided: philosophical debates are neither ended nor refuted by decree. People already debate this development, including major philosophers, so "there's no philosophical debate to be had over it either" is "just, like, your opinion, man".
>we start growing biological brains in tanks and utilizing those as data slaves rather than any kind of AI.
I'm not sure why you think the substrate (biological or not) is what's important, as opposed to the processing.