
Rebranding AI as ‘Asocial Intelligence’

May 28, 2024

Reframing expectations by remembering how we got here

In 1940, Kingsley Davis filed a report titled Extreme Social Isolation of a Child. He filed a sequel in 1947, Final Note on a Case of Extreme Isolation.

With these reports, Davis, one of the 20th century’s most acclaimed social scientists, introduced society to Anna and Isabelle, just as Anna and Isabelle were introduced to society:

Anna: A girl of more than five years was discovered tied to an old chair in a storage room on the second floor of a farm home seventeen miles from a small Pennsylvania city. …Anna’s social contacts with others [had been] at a minimum. What few she did have were of a perfunctory or openly antagonistic kind.

Isabelle: Born apparently one month later than Anna, Isabelle was discovered in November, 1938, nine months after the discovery of Anna. At the time she was found she was approximately six and a half years of age. Like Anna, she was an illegitimate child and had been kept in seclusion for that reason. Her mother was a deaf-mute, having become so at the age of two, and it appears that she and Isabelle had spent most of their time together in a dark room shut off from the rest of the mother’s family.

Davis provided a glimpse of the girls’ conditions upon discovery:

Anna: She had been completely apathetic, had lain in a limp, supine position, immobile, expressionless, indifferent to everything. Since Anna turned her head slowly toward a loud-ticking clock held near her, we concluded that she could hear. Other attempts to make her notice sounds, such as clapping hands or speaking to her, elicited no response; yet when the door was opened suddenly she tended to look in that direction. Though her eyes surveyed the room, especially the ceiling, it was difficult to tell if she was looking at anything in particular. She neither smiled nor cried in our presence, and the only sound she made — a slight sucking intake of breath with the lips — occurred rarely. She did frown or scowl occasionally in response to no observable stimulus; otherwise, she remained expressionless. Toward numerous toys given her by well-wishers all over the country she showed no reaction. They were simply objects to be handled in a distracted manner; there was no element of play. She liked having her hair combed. When physically restrained she exhibited considerable temper. She did not smile except when coaxed and did not cry.

Isabelle: When she communicated with her mother, it was by means of gestures… Her behavior toward strangers, especially men, was almost that of a wild animal, manifesting much fear and hostility. In lieu of speech she made only a strange croaking sound. In many ways she acted like an infant… At first it was even hard to tell whether or not she could hear, so unused were her senses. Many of her actions resembled those of deaf children.

From these disturbing first moments of their discovery by outsiders, the girls’ paths took contrasting directions.

Anna: She was initially placed for nine months in an institution primarily for the aged and infirm in the county where she lived; then in a foster home for more than a year; then in a private home for “retarded children.” By then, Anna could follow directions, string beads, identify a few colors, build with blocks, and differentiate between attractive and unattractive pictures. She had a good sense of rhythm and loved a doll. She talked mainly in phrases but would repeat words and try to carry on a conversation. She was clean about clothing. She habitually washed her hands and brushed her teeth. She would try to help other children. She walked well and could run fairly well, though clumsily. Although easily excited, she had a pleasant disposition. Then, at the age of 10, she died of hemorrhagic jaundice.

Isabelle: The individuals in charge of Isabelle launched a systematic and skillful program of training. The approach had to be through pantomime and dramatization, suitable to an infant. It required one week of intensive effort before she even made her first attempt at vocalization. Gradually she began to respond, however, and, after the first hurdles had at last been overcome, a curious thing happened. She went through the usual stages of learning characteristic of the years from one to six not only in proper succession but far more rapidly than normal… She covered in two years the stages of learning that ordinarily require six. She’d become a very bright, cheerful, energetic little girl. She spoke well, walked and ran without trouble, and sang with gusto and accuracy. At fourteen years old, she passed the sixth grade in a public school. Her teachers say she participates in all school activities as normally as other children.

It will never be clear whether the two girls started life with equivalent potentialities; Davis speculated that perhaps Anna was born with developmental deficiencies.

What does seem clear from their cases, though, is that without intimate, reciprocal, social engagement with other people, the physical lifeforms of Homo sapiens stand no chance of becoming persons:

Anna’s history, like others, seems to demonstrate the Cooley-Mead-Dewey-Faris theory of personality — namely, that human nature is determined by the child’s communicative social contacts as much as by his organic equipment and that the system of communicative symbols is a highly complex business acquired early in life as the result of long and intimate training. It is not enough that other persons be merely present; these others must have an intimate, primary-group relationship with the child. — Kingsley Davis

Rebranding AI

Anna and Isabelle’s cases — which are but two of many — are the primary reason why I propose that AI should be rebranded as “Asocial Intelligence.”

The history of computer science and engineering has witnessed many discussions about the prospects for AI to achieve AGI, or “strong AI,” or Artificial Super Intelligence, or Singularity — or whatever terms are intended to convey “synonymous with personhood.”

Much of the current discussion hinges on just how close LLMs are to achieving AGI (“we’re on the precipice”), how much longer it will take (“within a decade”), and what resources are needed to get there (“lots”).

There are many pathways being pursued. But what they all have in common is their almost universal neglect of the sociality requirement for becoming a person.

By sociality, I mean the intimate, reciprocal, social engagements people have with other people.

Davis noted that Anna’s experience allowed “no opportunity for Gemeinschaft to develop.” Gemeinschaft is the term that the early-20th-century German sociologist Ferdinand Tönnies coined to recognize a spontaneously arising, organic social relationship characterized by strong reciprocal bonds of sentiment and kinship within a common tradition. He contrasted it with Gesellschaft — a rationally developed, mechanistic type of social relationship characterized by impersonally contracted associations between persons, and he used the two concepts to account for different types of social interactions that can emerge in collections of people.

In almost every attempt to develop AI, the focus is squarely on pursuing a “rationally developed, mechanistic” type of intelligence wherein the individual machine/program/platform/computer/system/algorithm works toward greater/faster/expanding intelligence-like capabilities that will mirror/surpass/supersede human capabilities.

Some commentators expect, given their premise that the human brain and its nervous system are nothing but a computer, that at some future point in time, when enough such units with enough processing power and energy are networked, the capability to effectively become a person will emerge.

Various objections have been offered to these predictions. I find none of them convincing.

What both the predictions and objections have in common is an exclusive focus on the individual: the machine and the person.

Thus, they share in their failure to consider the effects of sociality — that is, what sociality does for and to people.

What Sociality Enables That Machines Cannot Have

How exactly the interaction between mature people and human infants, babies, and toddlers transforms the “organic equipment” into another mature person capable of efficiently and effectively dominating their environment is indeed a highly complex business about which we still know very little.

But it’s this transformation — which continues across the human lifecycle — that we always seem to forget when it comes to discussions about AI.

Three conjectures about how sociality affects this transformation strongly argue against the idea that machines will ever complete this highly complex business.

They are inspired by George Herbert Mead’s analysis of the social act — and the critical updates to Mead’s work by Lonnie Athens — and by Karl Popper’s and Lonnie Athens’ conjectures about the self (more on these here).

Conjecture #1: Machines cannot participate in complex social acts

George Herbert Mead’s analytic breakdown of the social act is one of the most insightful ever posited, though, unlike his views on the self, it has been largely ignored.

A social act is any activity that requires at least the efforts of more than one organism for its completion. — Lonnie Athens, summarizing Mead

Some AI capabilities, particularly generative AI (GenAI) built on LLMs, ostensibly check the requirement boxes that are inherent in social acts as Mead conceives of them.

The ways in which GenAI at least partially achieves these requirements, creating an illusory performance of a social act, are exactly what make GenAI so impressive.

Of course, at present their performances are limited, as hallucinations and “even more” prompts demonstrate, sometimes quite hilariously. But it is not hard to imagine a future in which more/better/faster feels ever closer to indistinguishable, and this is the promise on which so much interest rests.

Except that…

Lonnie Athens has pointed out that Mead’s conception of the social act downplayed a primary aspect of human existence, indeed of living existence, period: the principle of domination.

While he recognized that social acts can be both cooperative and conflictive, Mead’s conception of the social act primarily focused on the principle of cooperation. This is certainly the principle on which the work of any team of AI developers rests — the notion that in order to complete a computation/interaction/chat, both parties must cooperate, for a conflictive perspective would bring development to a screeching halt.

But living things do not survive by cooperation alone. First and foremost, they must dominate other living and nonliving things. They must consume energy, for example. For humans, Athens points out, domination involves “swaying consciously (or unconsciously) the construction of a complex social act in accordance with preferences,” and it is “required for the completion of any human social act that has any degree of complexity.”

In complex social acts, people (and indeed, other living organisms) don’t just play any roles; they assign roles in accordance with the superordinate and subordinate hierarchy they understand to be appropriate.

This isn’t just about who wants or needs to “win” or “survive.” It often comes down to a simpler issue — a division of labor. To complete complex human social acts, somebody has to figure out who is going to do what, and that somebody has to assume the role of a superordinate in the hierarchy, at least for as long as it’s necessary to get things going.

Someone must always take the initiative by starting a social act’s construction and pushing forward its completion by assigning roles and supervising their proper performance. — Lonnie Athens

For the social act to continue, the other participant(s) must agree, at least implicitly, to the roles, and assume the roles and attitudes of others.

Bad things happen when the roles are not agreed upon or attitudes are misconstrued. Athens’ celebrated work on human violence has laid this out in great detail. This is also the premise behind many works of science fiction — see HAL and the T-800/1000.

Machines, however, cannot perform the superordinate role, and they never will. It’s not just that machines are reliant on humans for their operation — they are imprisoned in the subordinate role.

Logically, we can’t even assign an AI the superordinate role — the mere act of assigning is the performance of the superordinate role.

As human inventions, they are forever subordinate to our superordinate and supervisory role.

Just like human babies are, at least at first. It takes time for us to begin to realize that there are roles. But the hierarchy is ever-present and inherent in everything we learn and do.

Even before the child acquires a mastery of language, the child learns … to be approved or disapproved of. — Karl Popper

And it is even longer before we start to experiment with the superordinate role. Anna and Isabelle never got to the point of trying out the superordinate role in their isolation. They rarely engaged in any social acts, let alone complex ones.

It could be argued that their imprisonment is what actually kept them from understanding the possibility of the superordinate role. This is a testable conjecture: whether it is the lack of human interaction, or confinement in a relatively simple environment where physical life-sustaining needs are met, that prevents the ability to engage in complex social acts. I sincerely hope this conjecture is never tested. We already know enough about this topic.

Nevertheless, most people do acquire abilities to perform the superordinate role as they mature and engage with a world that includes intimate, reciprocal, social engagement. And this, in turn, enables the emergence of our selves.

Conjecture #2: Machines cannot develop selves

There is no doubt that the makers of machines hold a certain affection for their creations. Some science fiction stories have speculated about the possibility of the development of social bonds extending from people to machines — the film Her is a shining example.

But consequential bonds do not exist between people and machines. We cherish other living things, not machines; and sometimes, they cherish us back. While we can certainly have significant social experiences that are mediated through machines, we do not have them with machines.

Most importantly, we share no reciprocal bonds of sentiment and kinship within a common tradition with machines. They do not cherish us back, nor do they cherish each other.

This absent feature of machine existence, in addition to their inability to engage in complex social acts, negates an ability to develop a self.

Learning one’s self into existence through other people is the primary distinguishing feature of human-ness:

Consciousness of self begins to develop through the medium of other persons…Just as we learn to see ourselves in a mirror, so the child becomes conscious of himself by sensing his reflection in the mirror of other people’s consciousness of himself…The child learns to know his environment; but persons are the most important objects within his environment; and through their interest in him — and through learning about his own body — he learns in time that he is a person himself. — Karl Popper

It is this engagement with others, and the accumulation of significant social experiences with them, that creates and sustains the self.

To be clear: this is not just “self-awareness.” That’s a hurdle many animals have leapt. I mean here the emergence of “our personalities” and “what kind of person we are.”

It may never be explained just how we as people got to this point. We can speculate that the evolution of an ability to assume others’ attitudes toward us afforded some evolutionary advantage. Such a feature enables honing and hastening our ability to kill bad theories about ourselves and the world instead of having to use trial and error in the face of every new problem.

But some features of our self-hood have been well accounted for. Athens has described the ongoing, lifelong process of developing and maintaining selves as an internal soliloquy with the specific people in our lives who have an interest in us and we in them:

People converse with themselves as if they were conversing with someone else, except that they converse with themselves elliptically…When soliloquizing we always converse with an interlocutor, even though it may deceivingly appear as if we are only speaking to ourselves… However the people in whose company we find ourselves undergoing a social experience are not our only interlocutors. We also converse with phantom others, who are not present, but whose impact upon us is no less than the people who are present during our social experiences. — Lonnie Athens

Popper, too, recognized the existence of self-talk, even going so far as to admonish those who denied its existence (another current fad in the blogosphere and a focus of toy-task research):

There is no doubt that we achieve full consciousness — or the highest state of consciousness — when we are thinking, especially when we try to formulate our thoughts in the form of statements and arguments. Whether we do this silently by speaking to ourselves — as we all do, sometimes, in spite of the fact that this has been denied... — Karl Popper

Athens advanced the theory of the self by recognizing the critical role that phantom others play in our self-talk. Our phantom communities emerge and evolve across our lifetimes as we engage in reciprocal, intimate relationships. They direct our attention to what matters, help us understand the significance of things in the world, and suggest how to dominate and when to permit ourselves to be dominated.

Without sustained, intimate, reciprocal relationships that develop communities — both present and not — people cannot develop selves.

A human child growing up in social isolation will fail to attain a full consciousness of self. — Karl Popper

Anna and Isabelle barely had corporeal communities, and those members barely engaged with them. They developed no phantom community to soliloquize with while on their own. They lacked a sense of what things in their environment mean for the people who inhabit it, and of how to engage with those things and people.

Importantly, it is the plurality of our phantom communities that gives rise to our individual uniqueness:

The phantom community usually springs from the biographies of individual corporal community members. No two corporal community members’ biographies are ever exactly alike because their biographies are etched from their own personal histories of participation in social acts. — Lonnie Athens

So fundamental to our unique human development is the self as soliloquy that we might consider it the key to distinguishing the achievement of personhood:

A better “Turing Test” will be one that determines if soliloquies with phantom communities are occurring inside machines — that is, not whether a computer or another human is conversing with us, but rather if the machines are in discussion with themselves. — Myself
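
To make the proposal a bit more concrete, here is a toy sketch of what such a test harness might look like. Everything in it is hypothetical: no real system exposes an idle_step() interface that emits output in the absence of any input (which is rather the point), and the regular expression standing in for the detection of phantom others is a placeholder for a far harder problem.

```python
import re

def soliloquy_test(model, idle_steps=100):
    """Toy probe for unprompted self-talk addressed to remembered others.

    Passes only if the system, left entirely alone, produces utterances
    that (a) appear spontaneously, with no prompt at all, and (b) address
    particular interlocutors from its own history (phantom others),
    rather than merely continuing text.
    """
    # Placeholder heuristic; recognizing a genuine phantom community
    # would of course require far more than a pattern match.
    phantom = re.compile(r"\b(as you said|you told me|remember when)\b", re.I)
    for _ in range(idle_steps):
        utterance = model.idle_step()  # hypothetical: output given no input
        if utterance and phantom.search(utterance):
            return True
    return False

class SilentModel:
    """Stand-in for today's systems: no prompt, no output."""
    def idle_step(self):
        return None

print(soliloquy_test(SilentModel()))  # False: nothing is in discussion with itself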

No amount of prompting, and no cleverness of iterative prompt engineering, will ever enable a machine to develop its phantom community and, through it, a knowledge of its self and the social order.

In other words, machines will never actually learn.

Conjecture #3: Machines cannot actually learn

For Popper, our socially-bound self/personhood/consciousness/mind is a special type of knowledge:

We obtain self-knowledge by developing theories about ourselves. — Karl Popper

The perspective that knowledge is the process of humans developing and testing theories about ourselves and, by extension, our world stands in contrast to a view of learning that Popper called the “bucket theory” of knowledge.

He characterized the bucket theory as the view that the mind is a container, initially more or less empty, into which material pours through the senses and accumulates.

One need not search far to appreciate how deeply embedded the bucket theory is in the pursuit of AGI. LeCun’s A Path Towards Autonomous Machine Intelligence is as good an example as any:

Human and non-human animals seem able to learn enormous amounts of background knowledge about how the world works through observation and through an incomprehensibly small amount of interactions in a task-independent, unsupervised way. It can be hypothesized that this accumulated knowledge may constitute the basis for what is often called common sense. Common sense can be seen as a collection of models of the world that can tell an agent what is likely, what is plausible, and what is impossible. Using such world models, animals can learn new skills with very few trials. They can predict the consequences of their actions, they can reason, plan, explore, and imagine new solutions to problems. Importantly, they can also avoid making dangerous mistakes when facing an unknown situation.

But as Popper assessed the bucket theory:

It is, essentially, a theory of …our largely passive acquisition of knowledge… But as a theory of the growth of knowledge it is utterly false. — Karl Popper

Popper’s antidote to the bucket theory was to conjecture that knowledge grows through active attempts to correct error, starting with the inborn errors:

All acquired knowledge, all learning, consists of the modification (possibly the rejection) of some form of knowledge, or disposition, which was there previously; and in the last instance, of inborn dispositions. — Karl Popper
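
The difference between the two epistemologies is easy to caricature in code. The sketch below is only that, a caricature under my own assumptions: the “bucket” learner passively stores whatever arrives, while the Popperian learner starts from an inborn disposition (here, a guessed threshold) and modifies it only when experience refutes it.

```python
def bucket_learner(observations):
    """Bucket theory: knowledge is whatever has poured in through the senses."""
    knowledge = []                  # the initially empty bucket
    for obs in observations:
        knowledge.append(obs)       # passive accumulation; nothing is ever revised
    return knowledge

def popperian_learner(observations, threshold=0.0):
    """Popper: learning modifies a disposition that was already there."""
    for x, label in observations:
        conjecture = x > threshold  # the current theory's prediction
        if conjecture != label:     # error: the theory is refuted...
            # ...so the disposition itself is modified, moving halfway
            # toward the refuting observation (a deliberately crude rule)
            threshold += 0.5 * (x - threshold)
    return threshold

data = [(0.5, False), (1.5, True), (2.0, True), (0.8, False)]
print(bucket_learner(data))     # a pile of stored observations
print(popperian_learner(data))  # a corrected disposition, not a pile
```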

It’s not just that the bucket theory is wrong. It also inflicts an amnesia about the social origins of personhood by reinforcing the notion of the individual living/non-living thing passively accumulating knowledge that comes to it.

For it is only through sustained, intimate, reciprocal relationships that we become active in our own problem solving.

Sure, some errors may get corrected passively. But active organisms are those that seize the superordinate role over their environment, assigning others and things to the subordinate role — or disposing of them altogether if necessary — as they seek solutions to problems.

And they find novel solutions to problems through the variety of complex social acts in which they’ve participated and the diversity of communities they’ve inhabited.

Most importantly, they grow into agency. Neither Anna nor Isabelle developed agency:

Anna: She had been completely apathetic

Isabelle: In many ways she acted like an infant.

Correcting errors ultimately enables survival — the most success will be realized by those who actively and agentically put forward bold trials, learning more about their environment as they eliminate the trials that don’t work.

But machines do not even ‘know’ what errors are until we tell them. They have no inborn errors to correct. They will not die if they do not correct them, so they have no imperative to seize the superordinate role, no need for agency.
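
A gradient-descent toy makes this point visible: the “error” a machine corrects exists only because a human wrote it down. The one-parameter model below, and the choices of loss function and target, are mine alone, invented for the illustration.

```python
def loss(w, x, y):
    # The human-authored definition of "error": squared distance from a
    # human-chosen target y. Swap this function and the machine's entire
    # notion of "wrong" swaps with it; the machine has no say in the matter.
    return (w * x - y) ** 2

def grad(w, x, y):
    return 2 * (w * x - y) * x    # derivative of the loss with respect to w

w = 0.0
for _ in range(50):
    w -= 0.1 * grad(w, 1.0, 3.0)  # the machine only minimizes what we defined
print(round(w, 3))                # ~3.0: dutiful convergence toward OUR target
```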

Asocial Intelligence

That the goals and methods of pursuing AGI match so well with the bucket theory of knowledge is not surprising: neither accurately explains how humans learn, and both ignore the sociality principle.

And it is perhaps not surprising that GenAI has been so successful in capturing the zeitgeist, for the bucket theory of knowledge (and its subtle variants) remains, unfortunately, the predominant epistemological view in most societies.

But Anna and Isabelle revealed what becoming human actually requires. Even in Anna’s unfortunate case, engaging in complex social acts brought about a few brief years of learning and the emergence of her self. Sadly, those years were spent in institutions known for their Gesellschaft-oriented operations.

Isabelle’s post-discovery life story offered a striking illustration of the process of becoming a human in a Gemeinschaft context. Indeed, considering the parallels between Isabelle’s account and the modern work of AI developers is quite enlightening:

The approach had to be through pantomime and dramatization, suitable to an infant (see machine learning). It required one week of intensive effort before she even made her first attempt at vocalization (see voice recognition). Gradually she began to respond, however, and, after the first hurdles had at last been overcome, a curious thing happened (see ChatGPT). She went through the usual stages of learning characteristic of the years from one to six not only in proper succession but far more rapidly than normal… She covered in two years the stages of learning that ordinarily require six (see fine-tuning). She spoke well (see text-to-audio), walked and ran without trouble (see Boston Dynamics), and sang with gusto and accuracy (see Suno AI). At fourteen years old, she passed the sixth grade in a public school (see Med-PaLM).

Isabelle surpassed the totality of all AGI efforts in just a couple of years. There are no AI parallels, however, with two other features Anna and Isabelle developed:

Anna: Although easily excited, she had a pleasant disposition.

Isabelle: She’d become a very bright, cheerful, energetic little girl.

It’s remarkable how little sociality we need to become a person. And yet, the complexity of the matter is so extremely high that we should not expect any other living thing on earth to traverse it anytime soon — especially given that we have so thoroughly assumed the superordinate role on the planet.

To fantasize that non-living things are “just about there” is to engage in science fiction.

What machines can do for us can, in my opinion, rightfully be called intelligent. We should do everything in our power to continue to improve their abilities to execute their subordinate roles. There is tremendous societal value in Gesellschaft.

AI is artificial in the trivial sense that it is man-made. But it is more accurate to describe what we’re creating as “asocial.” Only socially-engaged humans can achieve personhood.

***

Thanks to Elizabeth Brooks Hayden and Cara Menges for thoughtful reviews.

Sources

Athens, L. (1994). The self as a soliloquy. Sociological Quarterly, 35(3), 521–532.

Athens, L. (2007). Radical interactionism: going beyond Mead. Journal for the Theory of Social Behaviour, 37(2), 137–165.

Athens, L. H. (2013). Mead’s conception of the social act: A radical interactionist’s critique. In Radical Interactionism on the Rise (Vol. 41, pp. 25–51). Emerald Group Publishing Limited.

Davis, K. (1940). Extreme social isolation of a child. American Journal of Sociology, 45(4), 554–565.

Davis, K. (1947). Final note on a case of extreme isolation. American Journal of Sociology, 52(5), 432–437.

Moon, B. (2024). Fending Off Reification by Introducing an Invisible Feature to NDM Explanations. Proceedings of the 17th Conference on Naturalistic Decision Making.

Popper, K. R. (1979). Objective knowledge: An evolutionary approach (Vol. 49). Oxford: Clarendon Press.

Popper, K. R., & Eccles, J. C. (2012). The self and its brain. Springer Science & Business Media.


Written by Brian Moon

Cognitive/Social Scientist | My company, Perigean Technologies, builds solutions to improve the way people work and learn.
