Thanks for your thoughts. But, taking the functionalist view for a minute: it seems highly improbable, maybe impossible, that ChatGPT could be implementing processes structurally isomorphic to those that humans undergo, given the underlying physical differences. ChatGPT runs on computers that have no physical components corresponding to neurons, axons, dendrites, etc.; hence, it doesn't even seem physically possible for it to undergo processes structurally analogous to those in humans.

The errors it makes are further evidence of this. Not because there *are* errors (after all, everyone makes errors), but because of what those errors are like: they are completely different from the sort of errors that human beings make. That shows that the underlying processes are different, even in the non-erroneous cases.

I'm not a functionalist to begin with, but most people in philosophy of mind are, and that's the viewpoint that initially seems most favorable to the strong AI view.
