6 Comments

Thanks for your thoughts. But, just taking the functionalist view for a minute, it seems highly improbable, maybe impossible, that ChatGPT could be implementing processes structurally isomorphic to those that humans undergo, given the underlying physical differences. ChatGPT runs on computers, which do not have physical components corresponding to neurons, axons, dendrites, etc.; hence, it doesn't even seem physically possible for it to undergo processes structurally analogous to those in humans.

The errors that it makes are further evidence of this. Not because there *are* some errors (after all, everyone makes errors), but because of what those errors are like. They are completely different from the sort of errors that human beings make. That shows that the underlying processes used are different, even in the non-erroneous cases.

I'm not a functionalist to begin with, but most people in philosophy of mind are, and that's the viewpoint that initially seems most favorable to the strong AI view.

author

I don't think appealing to structural isomorphism is necessary to establish that ChatGPT might have some genuine understanding of the world. You said that the reason we think humans have genuine understanding is that it's the best inference available given these two facts:

1) Humans pass the Turing test, i.e., they're functionally capable.

2) We don't have an independently-verified account of how human brains work that allows us to conclude that humans don't have any genuine understanding.

Furthermore, you said that "it is perfectly plausible that a more sophisticated version of ChatGPT might pass the Turing Test in the not-too-distant future". If that happened, and assuming we still haven't figured out how transformers work by that time, then both of your conditions would be satisfied for the future version of ChatGPT. In other words, (1) ChatGPT-2.0 would have passed the Turing test, and (2) we would not have an independently-verified account of how ChatGPT-2.0 works that allows us to conclude that it doesn't have any genuine understanding.

Given your own arguments, it seems that we cannot rule out ChatGPT having some genuine understanding (at least in a limited form).

I also don't know what you mean when you talk about processes that are "structurally isomorphic" to each other. How would we show that two physical processes are structurally isomorphic?

A computationalist would probably say that two processes are structurally isomorphic if they're implementing the same algorithm. In that case, it could easily be true that some processes within ChatGPT are implementing something structurally isomorphic to what's implemented in the human brain, because algorithms are substrate-independent, i.e., they depend only on how things relate to each other within the system, not on what the system is ultimately "composed of".
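To make concrete what I mean by "same algorithm, different substrate", here's a toy sketch of my own (nothing to do with ChatGPT's actual internals): one algorithm run on two different "substrates", ordinary integers versus strings of tally marks. The intermediate states correspond one-to-one at every step, which is the kind of structure-preserving mapping I have in mind.

```python
# Toy illustration of substrate independence (my example, not from the post):
# the same algorithm -- iteratively summing the first n naturals -- run on two
# different "substrates". The sequence of intermediate states is step-for-step
# isomorphic; only the representation differs.

def running_sum_ints(n):
    states = []
    total = 0
    for i in range(1, n + 1):
        total += i
        states.append(total)          # substrate: machine integers
    return states

def running_sum_tallies(n):
    states = []
    total = ""
    for i in range(1, n + 1):
        total += "|" * i
        states.append(total)          # substrate: strings of tally marks
    return states

ints = running_sum_ints(4)            # [1, 3, 6, 10]
tallies = running_sum_tallies(4)      # ['|', '|||', '||||||', '||||||||||']

# The mapping len(tally) == int holds at every step: a structure-preserving
# correspondence between the two state sequences.
assert [len(t) for t in tallies] == ints
```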

Suppose we replaced each of your neurons slowly, one by one, with silicon-based neurons that performed the exact same functions. Suppose further that (at least as far as you can tell) you remain awake the whole time, and that from your perspective you feel no different after the procedure concludes. Would the new Huemer be structurally isomorphic to the old Huemer in your view? If not, then why not?


>"If that happened, and assuming we still haven't figured out how transformers work by that time, then both of your conditions would be satisfied for the future version of ChatGPT."

We don't have to know exactly how it works. It's enough that we know that it works in some way or other that doesn't require referring to mental states. (Unless you're suggesting that this future ChatGPT might *not* be explicable in purely physical terms?)

Now, the functionalists would object: they would say that it's sufficient for something to have mental states that it has the same functional states that we have. But ChatGPT wouldn't have the same functional states.

In your hypothetical at the end, the resulting person with the artificial neurons has the same functional states. But there is no reason to think that ChatGPT or its successors would have the functional states that we have, since it does not in fact have neurons (not even ones made of silicon).

author

> But there is no reason to think that ChatGPT or its successors would have the functional states that we have, since it does not in fact have neurons (not even ones made of silicon).

What counts as a neuron? By some definitions, ChatGPT does have neurons. In fact, it's an artificial neural network.
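Here's a minimal sketch of the kind of "neuron" an artificial neural network is built from (my own toy illustration, not ChatGPT's actual code; the weights and the sigmoid activation are just placeholders, and a real transformer unit differs in its details): a weighted sum of inputs passed through a nonlinearity, with no biology involved.

```python
import math

# A single artificial "neuron": weighted sum of inputs plus a bias,
# squashed through a sigmoid nonlinearity. Weights here are arbitrary.

def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # output in (0, 1)

print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```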

Are you proposing that unless we see biological neurons under a microscope when we look at ChatGPT's hardware, it can't be implementing any algorithms similar to the ones in our brains? That would be absurd, because again, computation is substrate-independent. Alan Turing famously showed that a universal computing machine can simulate any other computational system, no matter what that system is made of, and the hardware ChatGPT runs on is almost certainly such a universal system (see articles on Turing completeness for more details). In other words, you can run a simulation of a neuron without any actual physical neurons.
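For instance, here's a rough sketch (mine, with made-up parameters, and much simpler than a real biophysical model) of simulating a biological-style neuron entirely in software: a leaky integrate-and-fire model, with no physical neurons anywhere.

```python
# Leaky integrate-and-fire neuron simulated in pure software.
# All parameters and units are illustrative, not fitted to real data.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Return the membrane-voltage trace and spike times for a given input."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt                  # integrate the membrane equation
        if v >= v_threshold:          # threshold crossing counts as a "spike"
            spikes.append(step * dt)
            v = v_reset               # reset after firing
        trace.append(v)
    return trace, spikes

# 200 time steps of constant input drive the simulated neuron to spike.
trace, spikes = simulate_lif([2.0] * 200)
print(f"{len(spikes)} spikes, first few at times: {spikes[:5]}")
```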

Moreover, I find the whole discussion of whether ChatGPT has "neurons" to be a red herring. Neurons are simply brain cells; they're the units that neural algorithms are built on top of. An algorithm is more abstract: it's more about how the neurons relate to one another. Under the computationalist view, there is no way to deduce what algorithm a system is running just by knowing what it's made out of, such as the fact that it's made of neurons. Any Turing-complete system could be running any computer program!

Of course, if you reject functionalism and computationalism entirely, then you're not going to find any of this convincing. But at the least, I think you should concede that functionalists should have no problem thinking that ChatGPT might have some genuine understanding of the world.


Just jumping in here briefly--you could *hypothetically* perfectly simulate a neuron, but that's not what's happening with ChatGPT (or any other artificial neural network). Also, there is a pretty clear definition of a neuron, and virtually nothing in or about ChatGPT fulfils that definition. Physical similarity is a necessary condition for even pretty liberal multiple realisation.

Also, it's not about what specific software a "neuron machine" is running, but about the possible *kinds* of "software" that are computable by a system. Computer architecture matters--even mildly different physical configurations (and engineering standards, e.g. ARM vs. x86) prohibit some software from running on some processors. You may not be able to tell what algorithm something is running by virtue of its architecture, but you can certainly tell what it can and can't compute.

author

I realize that I wasn't sufficiently clear about what I meant above in my discussion of neurons. I view the entire point about whether ChatGPT has biological neurons as essentially irrelevant, with almost no bearing on whether it has any functional states that allow it to understand the world, at least in a limited capacity. That's what I intended to say.

I'll reiterate that Huemer wrote that the reason we think humans have genuine understanding is that it's the best inference from two facts: (1) humans are functionally capable, and (2) we don't have an independently-verified account of how humans work that allows us to determine that they don't have any genuine understanding. As far as I can tell, these points apply to ChatGPT too, even if it's far less functionally capable than a human being.
