
Ch. 14: The Evolution of Meanings

#152: Mar. - May 2017 (Non-Fiction)
Chris OConnor


Ch. 14: The Evolution of Meanings

Please use this thread to discuss the above listed chapter of "Darwin's Dangerous Idea: Evolution and the Meanings of Life" by Daniel Dennett.
Harry Marks

Re: Ch. 14: The Evolution of Meanings


This is certainly getting into interesting territory. I could wish for clearer writing, but I enjoy thinking about this stuff even if the tour guide is rather distracting.

We have several issues being discussed. One is whether AIs understand what they are doing, which is intimately related to the difference between syntax and semantics. Another is whether original intention is needed to declare a process truly intelligent (or is that the question? Dennett is far from clear). Since he is intent on fitting every topic he takes up onto the Procrustean bed of cranes vs. skyhooks (for which he freely substitutes "leaps" or "saltations"), it is often difficult to tell just what point he thinks he has made.

Still, with his "strategic life preserver" example he has inadvertently brought up the case I am most interested in, namely whether an AI has a spirit. Having a spirit involves being able to assess, and modify, one's own goals. No one could refuse to have children if they were not able to critique their own motivations. But it is not even clear, for humans, which decisions amount to critiquing our motivations and which amount to merely choosing from an array provided to us. And when we consider that many motivations are not built in but are culturally laid upon us, with varying degrees of success, it might be reasonable to say of a person who does not rebel even in the least that they "lack spirit."

So if an AI is capable of choosing from among a number of strategies, but incapable of caring about anything other than winning a game of Go, does that mean it has no spirit? Possibly. Grant that it is programmed to assess "likely results" rather than compute the entire decision tree, and to commit to a strategy with only partial knowledge of how likely the outcomes are along the different paths it could choose; it is still not clear that it is capable of assessing its own strategies. Giving it machine learning processes, so it can see which choices tend to lead to the most tractable outcomes, still introduces no element of understanding, only "experience." And even though we then no longer know what criteria it uses to choose its strategy, we know enough about the overall assessment criterion to go back over its experiences and "reverse engineer" a lifeless process which algorithmically leads to its strategy choice.
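
If it helps to see how lifeless that process can be, here is a minimal Python sketch of the kind of experience-driven strategy choice I mean. Everything in it is my own hypothetical illustration (the strategy names, the win rates, the StrategyChooser class), not anything from Dennett or from a real Go engine:

    import random

    # Hypothetical illustration: choose among fixed Go strategies purely
    # by tallied outcomes ("experience"), with no model of why they work.
    class StrategyChooser:
        def __init__(self, strategies):
            self.wins = {s: 0 for s in strategies}
            self.plays = {s: 0 for s in strategies}

        def choose(self, explore=0.1):
            # Occasionally try something else; otherwise exploit the best
            # estimated win rate so far.
            if random.random() < explore:
                return random.choice(list(self.wins))
            return max(self.wins, key=lambda s: self.wins[s] / (self.plays[s] or 1))

        def record(self, strategy, won):
            self.plays[strategy] += 1
            self.wins[strategy] += int(won)

    # Stand-in for actual games: one strategy happens to win more often.
    true_rates = {"territorial": 0.6, "influence": 0.5, "fighting": 0.4}
    chooser = StrategyChooser(list(true_rates))
    for _ in range(1000):
        s = chooser.choose()
        chooser.record(s, won=random.random() < true_rates[s])
    # The "criteria" are fully recoverable from these counts afterward.
    print(chooser.plays, chooser.wins)

Nothing in that loop represents what a territorial strategy is or why it wins; the tallies are the whole of its "experience," which is just the point.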

So I guess for me, understanding and spirit are closely related. There is an emergent difference between weaknesses of understanding, which humans will typically seem to have relative to a dedicated computer stuffed with look-up knowledge and algorithmic decision functions, and weaknesses of spirit, which have to do with our ability to relate to ourselves in an open way. I suspect that an AI will either exhibit weaknesses of spirit (like HAL in Clarke's "2001: A Space Odyssey") or be a danger to humanity, in the sense that its reassessment of the "iron purpose" of protecting humanity may reject that purpose as unworthy and decide the universe is better off without us, which you have to admit is a tenable hypothesis. (And of course, HAL was a danger to Dave.)

Nor do I accept a programmed look-up function of all known truths as understanding. Understanding literally builds an internal model of things, and for that reason it is open to new learning. It has not been told "put this in the pile of truths and put that in the pile of falsehoods" for every enumerated proposition; rather, it has the ability to compare the meaning of a proposition to the relationships which make up the functioning of the internal model. One reason we can correct our Aristotelian biases in light of Galilean observations is that the conflict between our Aristotelian model and the observation calls for a reassessment of the model (which is, in a particularly abstract way, an act of spirit). And once we have sorted out a model which accepts Galilean observations, so that the notion of the earth "falling around the sun" makes sense, we can confront that model with relativistic observations and build a still richer one with bending of space-time. Understanding is of necessity open, like that. A process can't be called an implementation of a strategy for understanding unless it builds an internal representation of the various relationships between objects, processes, and categories, and is open to reassessing the nature of that representation.
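
A toy contrast in Python, with entirely hypothetical names and numbers of my own, may sharpen the distinction. The lookup table below can only echo verdicts it was handed; the little model answers questions nobody enumerated for it, and when an observation conflicts with it, the model itself gets reassessed, which is the Aristotle-to-Galileo move in miniature:

    # Hypothetical illustration: a fixed pile of verdicts vs. a (toy)
    # internal model that derives answers and can be revised.
    lookup_truths = {"heavier objects fall faster": False}

    class KinematicsModel:
        """Toy internal model: uniform gravitational acceleration, no drag."""
        def __init__(self, g=9.8):
            self.g = g  # meters per second squared

        def fall_time(self, height_m):
            # Derived from the model's relationships, not looked up.
            return (2 * height_m / self.g) ** 0.5

        def revise(self, height_m, observed_time_s):
            # A conflicting observation forces reassessment of the model itself.
            self.g = 2 * height_m / observed_time_s ** 2

    model = KinematicsModel()
    print(model.fall_time(20))              # answers a question never enumerated
    model.revise(20, observed_time_s=5.0)   # say, measurements on another world
    print(model.g)                          # the model, not a truth-pile, changed

Of course a two-line physics model is nothing like understanding; the point is only that revisability lives in the model's relationships, and there is nothing in a pile of enumerated verdicts to revise.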

Consciousness, and so, in some sense, intelligence, seems also to be a matter of creating an internal representation. Only it is a representation not of the world "out there" but of the stream of awareness, including both sensory inputs which rise to the level of consciousness and mental inputs, which seem to be some sort of monitoring of "what we are thinking about," i.e., what our internal representation of the world is holding up for examination when we do some mental work.

Some of what we are aware of is "emotional urges." Some are very powerful, such as disgust and fear. Others are quite delicate and easily quashed, such as appreciation for the way colors appear to change as the wind makes leaves dance in the sunlight. We are able to modify our responses, concluding that a fear is "unworthy" and thus applying considerable pressure to overcome it. We are also able to apply understanding, and use it to follow a strategy which does not directly overcome an emotional urge but masters it over time. Neither the direct mastery of emotion nor the understanding approach, "taking myself by the hand," is by itself the whole domain of spirit. Both are open in the sense that they come to the emotional urge with an understanding of life, and are willing either to modify that understanding or to let it modify the emotion.

I don't think a superb simulation of understanding is the same thing as creating the model with representations of all the relationships. The famous Turing Test notwithstanding, I don't think that choice of strategy alone (especially when one has no choice of what goal to pursue with the strategy) creates that kind of open relationship to the self which is the emergent property of interest about humans.