A new type of mob: Group intelligence in the twenty-first century

For human beings, communication, as the pathway to understanding, forms the bedrock and vital infrastructure of our social nature.  Where it is prevalent and free, we are rewarded with a wealth of ideas and with intellectual as well as social progress.  Where it is stifled, confused, or has simply failed, we encounter the worst vices of mankind: exploitation, paranoia, aggression, and – inevitably – war.  The stakes are higher than ever in our modern-day culture of global economies and extreme specialization of professions.  As our technological prowess and standards of living steadily rise, legions of experts are called upon to study more and more about less and less.  The robotics engineer may know more than anyone else in the world about how to translate computer code into fluid, lifelike interfaces, but probably has no idea where his breakfast comes from in the morning.  We are inextricably bound to and dependent on one another for our survival, much like the different organs of a living organism, and yet we are only beginning to study the patterns of our connections and the possibilities for intelligent cooperation. 

As I will show later on, there is ample evidence supporting the notion that a diverse mob of people can exhibit intelligence similar to, or even greater than, that of the most intelligent person within it.  That a group of individuals can exhibit a collective intelligence is already enticing enough to warrant careful consideration; furthermore, once we have a stronger grasp of the underlying dynamics, perhaps it will be possible to apply what we learn to harness this mostly untapped resource of intelligence.  In order to motivate a structured and systematic overview of such group dynamics and cooperative problem-solving, it’s useful to consider analogies in a system which we naturally think of as possessing intelligence – the human brain.  I surveyed two models of the mind in order to obtain a wider range of perspectives on how intelligence can be modeled, and I will consider cooperative intelligence within the framework of what I take to be the most important characteristics exhibited by these models.

Induction, by John Holland et al., builds a rigorous and highly specified model of learning and of the conceptual structure of the mind.  Using the bucket brigade algorithm and the classifier system we studied in his book Hidden Order, he describes a process by which an initially simple system adapts to its environment, learns, and becomes more complex and better able to model and respond to the external world accurately.  Central to his construction of a mental model are what he calls “q-morphisms,” or quasi-morphisms.  He describes morphisms as mathematical structures relating how two spaces map onto each other, and uses this formalism to illustrate how our minds map parts of the world into our models of it.  The external world undergoes a transition function, called T, from one state to another over time, and likewise our model of it undergoes a transition T’.  The external world both before and after the transition is mapped onto our model using a categorization function P, which is assumed to be time-invariant.
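
To make this structure concrete, the consistency condition can be written in shorthand (my notation, not Holland’s exact formalism): if s is a state of the external world, the mapping is a faithful homomorphism when categorizing the world and then applying the model’s transition gives the same result as letting the world change and categorizing afterward, that is, when

    P(T(s)) = T’(P(s))   for every world state s.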

However, Holland makes the point that when our model fails to depict the environment accurately, we have to create layers of exceptions along with new transition and categorization functions.  It is this process of building up multiple layers of a mental model, each branching off of the others, that he calls a “q-morphism,” because the world no longer maps uniquely into our model.  With incomplete knowledge, we can make decisions based on the default transition layers, but as more information is acquired, we can move into more complicated and more specific levels. 

A vital supposition of this system is that we are able to depict our environment accurately in our model and then translate subsequent decisions into actions.  Holland doesn’t assume that these abilities are built in from the start, but rather that they are acquired through testing and reinforcement via the bucket brigade algorithm.  He describes two types of empirical rules, analogous to the “messages” that agents in his classifier system can post and match, each serving a different purpose in building up and adapting the model.  Synchronic rules are categorical and create hierarchies of definitions and associations between different concepts.  An example given is that a dog is a type of animal, and hence sits below “animal” in the hierarchy of definitions, but it is also associated with “cat,” even though the two are not directly related in a hierarchical way.  Diachronic rules make predictions about the external world, such as “If you stay up too late, you’ll be tired tomorrow,” but can also dictate appropriate behaviors in certain situations, such as “If you see a car driving straight toward you, get out of the way.”  Rules are created and compete with each other in the same manner as the messages posted to the bulletin board in Hidden Order, bidding against one another and gaining or losing strength according to feedback from the external world.  With increased experience, the model becomes more sophisticated and develops more specific rules for interpreting the world and formulating actions; essentially, the model learns.
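
To give a feel for how strength-based competition of this sort might work, here is a deliberately minimal sketch in Python.  It is not Holland’s actual bucket brigade algorithm – the situations, payoffs, and update step are all my own invented simplifications – but it shows rules matching a situation, bidding in proportion to their strength, and being reinforced or weakened by feedback:

    # A minimal, hypothetical sketch of rules competing by strength, loosely in the
    # spirit of Holland's classifier system.  This is NOT his actual bucket brigade
    # algorithm; the payoff scheme and update rule are invented simplifications.
    import random

    class Rule:
        def __init__(self, condition, action, strength=1.0):
            self.condition = condition    # predicate over the current situation
            self.action = action          # what the rule recommends doing
            self.strength = strength      # adjusted up or down by feedback

    def step(rules, situation, payoff_of):
        """Matching rules bid in proportion to strength; the winner is reinforced."""
        matching = [r for r in rules if r.condition(situation)]
        if not matching:
            return None
        # Stronger rules are more likely to win the bidding.
        winner = random.choices(matching, weights=[r.strength for r in matching])[0]
        reward = payoff_of(winner.action)          # feedback from the external world
        winner.strength = max(0.1, winner.strength + 0.2 * (reward - 0.5))
        return winner.action

    # Two competing diachronic rules about the same situation.
    rules = [
        Rule(lambda s: s == "late night", "go to sleep"),
        Rule(lambda s: s == "late night", "keep working"),
    ]
    payoff = lambda action: 1.0 if action == "go to sleep" else 0.0
    for _ in range(30):
        step(rules, "late night", payoff)
    print([(r.action, round(r.strength, 2)) for r in rules])

Run repeatedly, the rule that earns the better payoff tends to accumulate strength and win the bidding more often – the sense in which the little system “learns.”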

Marvin Minsky, in his book Society of Mind, presents a very different view of the mind, although the two models still exhibit striking similarities.  Overall, Minsky’s version is much more qualitative, subjective, and fragmented than Holland’s.  Instead of presenting an overall model, he addresses many small issues concerning human behavior, almost independently of each other.  Because of this, it was hard to piece together a cohesive theory of the mind from his account.  In addition, whereas Holland focused exclusively on learning and problem-solving, Minsky almost never discusses development or learning, choosing instead to examine aspects of the already formed mind, such as goals and language.  It almost seems as if the two were approaching the same model from opposite directions: the former building the mind from the ground up, focusing on how we learn; the latter breaking the mind down into its component parts, focusing on who we are.  Minsky views the mind as a very goal-driven collection of agents which can call on one another to perform subtasks.  For instance, in order to DRIVE, we must call on the subagents STEER and SPEED CONTROL.  These subsequently call other agents, and so on all the way down to mini-agents which control individual muscle movements.  Although very hierarchical, these hierarchies of agents are not static, and their relationships can shift so that an agent which was called by another may do the calling next time.  Interestingly, although he phrases it differently, Minsky also brings up the point that agents compete with each other in the mind.  If subagents are locked in tight conflict, their meta-agent is weakened, paving the way for another agent to take the reins.  In addition, he recognizes that in the absence of complete information, the mind often resorts to default assumptions based on past experience. 
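
As a toy illustration of this kind of delegation – the function names and structure below are my own invention, meant only to make the hierarchy concrete, not anything taken from Minsky’s book – a Python sketch might look like this:

    # A toy illustration of agents delegating to subagents.  The names and the
    # structure are invented, used only to make the hierarchy concrete.
    def make_agent(name, subagents=()):
        def run(depth=0):
            print("  " * depth + name)        # this agent does its own small job...
            for sub in subagents:             # ...and calls on its subagents
                sub(depth + 1)
        return run

    steer = make_agent("STEER")
    speed_control = make_agent("SPEED CONTROL")
    drive = make_agent("DRIVE", (steer, speed_control))
    drive()   # DRIVE calls STEER and SPEED CONTROL, which could call agents of their own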

In both models, there is a tightly knit system of adaptive agents which compete with each other and react to feedback from the environment with the goal of achieving something collectively.  This raises the question: what happens if we replace the agents with people?  Is group intelligence then possible?  In The Wisdom of Crowds, James Surowiecki argues that there are already numerous manifestations of group intelligence.  In a particularly striking example, he makes the point that the decision reached by a group is often eerily accurate, despite no one person in the group possessing enough information to have come to the same conclusion alone.  In 1968, a U.S. submarine which had disappeared in deep water was all but given up for lost when naval officer John Craven decided to pursue a unique search strategy as an experiment.  Instead of asking a few experts for their opinions, he asked a diverse group of people with a wide range of knowledge to bet on different scenarios.  From their responses, he constructed a likely location for the submarine, and it was ultimately found only 220 meters from that spot – a remarkable feat considering how little information the group had to go on.  Surowiecki’s main point is that aggregating diverse opinions which were reached independently can yield surprisingly brilliant results. 
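
A deliberately simplified sketch can illustrate why this kind of aggregation works at all.  Craven’s actual procedure was a far more sophisticated Bayesian weighting of scenarios, and the numbers below are invented, but averaging many independent, noisy guesses shows the basic intuition that individual errors tend to cancel out:

    # An invented toy example: many independent, noisy guesses about a location,
    # averaged together.  Craven's real method was a Bayesian weighting of
    # scenarios, but the cancellation of independent errors is the same intuition.
    import random

    true_location = 500.0                                  # hypothetical position
    guesses = [true_location + random.gauss(0, 80) for _ in range(100)]

    group_estimate = sum(guesses) / len(guesses)
    typical_individual_error = sum(abs(g - true_location) for g in guesses) / len(guesses)
    group_error = abs(group_estimate - true_location)

    print(f"typical individual error: {typical_individual_error:.1f}")
    print(f"error of the averaged estimate: {group_error:.1f}")

On almost any run, the averaged estimate lands far closer to the truth than a typical individual guess – provided the guesses really are independent and not all biased in the same direction.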

On the other hand, there are also numerous examples of groups making poor decisions.  An obvious instance of this is “mob mentality,” in which otherwise perfectly rational people exhibit extreme irrationality and volatility.  In a group where people are pressured to conform and heavily influence each other, the bulk of the decision-making often falls to the most vocal or nominally important person, who is not necessarily the person with the most information and the best judgment.  As an example, Surowiecki cites the Mission Management Team which was in charge of ensuring the safe return of the space shuttle Columbia in 2003.  Although several people on the team expressed concern about a large piece of foam which had broken off and damaged the shuttle during launch, the group as a whole chose to ignore this assessment because the leadership had unilaterally decided that the foam was of no import and never put the question before the whole group or opened any sort of debate.  Ultimately this decision proved to be a very poor one which tragically cost seven crew members their lives. 

Indeed, groups often leave meetings much more polarized than before, as a result of extremely vocal individuals swaying moderate opinion-holders to their side or of pressure on the group to come quickly to a consensus.  In light of this, perhaps the close and intimate social settings in which juries are forced to make decisions are not the most conducive environment for rational verdicts.  It seems as if groups are intellectually capable of collective intelligence, but the complex social interactions between people complicate and hinder this ability.  In general, we are all but barred from practical group decision-making by our myriad social interests and susceptibilities. 

However, I believe that we as a species are at a unique turning point in our development, with the advent of the internet and wireless communication.  Never before in history has it been possible to communicate instantaneously with anyone in the world, to join an online virtual community of everyone who shares your interests, or to share any idea you want with the entire World Wide Web.  Essentially, the internet is as close to a meritocracy of ideas as we can get.  Whereas in the past we were bound to our geographic communities and therefore tended to form dense networks in which everyone knew everyone else, our social networks are now much more dispersed and dependent on common interests.  Communication on the internet is both selective and nonselective: ideas propagate freely, but personal relationships are determined by choice and similar interests.  This creates a fertile environment for breeding new ideas within tight communities of shared interests and rapidly disseminating them to the masses.  I believe that this change in network topology also tends to minimize the social complications which hinder intelligent decision-making.  People are not forced to listen to or be influenced by an extreme minority simply because it is in the same vicinity, because the internet has no boundaries and it is all too easy to seek out a wide variety of opinions. 

The internet also tends to screen out status cues which might otherwise bias how people receive ideas.  In real life, people are awarded status in decision-making scenarios even when they are unqualified.  Even in academia, bias remains a major problem which prevents work from being judged solely on its merit.  Surowiecki remarks that “most scientific papers are read by almost no one, while a small number of papers are read by many people” (170).  This seems contrary to the scientific ideal of objectivity and reverence for the pursuit of truth, but unfortunately all scientists are still human and subject to the same social influences as anyone else.  On the internet, however, this seems to be less of a problem: even though reputation-based systems are common, the rapid dissemination of ideas ensures that most of the time these reputations are well deserved.  Despite the internet being an unimaginably vast place, even the most obscure entertaining or useful piece manages to find its way to virtual stardom, motivating those who already have solid reputations to work hard to maintain them. 

In fact, the concept of ideas proliferating on their own merit is by no means new.  It was introduced in 1976 by Richard Dawkins under the name “memes.”  He described them in the following way:

Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches.  Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation. (194)

Considering ideas to be in evolutionary competition with each other is very reminiscent of Holland’s classifier system, with the internet as the “bulletin board” and the ideas propagating from person to person as the messages.  The proliferation of reputation-based sites such as Google or Amazon might be analogous to his bucket brigade system, which assigns strength to ideas based on their performance.  If I’m searching for a particular web site, I can type in a few keywords and use the combined experience of all those who used Google before me to track it down almost instantly. 
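
To make the analogy concrete, here is a toy sketch of reputation-style feedback.  It is emphatically not how Google’s ranking actually works – the functions and site names are invented for illustration – but it shows results accumulating “strength” from the choices of earlier searchers, which later searchers then benefit from:

    # A toy sketch of reputation-style feedback; not how any real search engine works.
    # Pages gain "strength" for a query from earlier users' choices, and later
    # users see results ordered by that accumulated strength.
    from collections import defaultdict

    strength = defaultdict(lambda: 1.0)

    def record_click(query, page):
        strength[(query, page)] += 1.0          # an earlier searcher reinforces this page

    def rank(query, pages):
        return sorted(pages, key=lambda p: strength[(query, p)], reverse=True)

    pages = ["siteA.example", "siteB.example", "siteC.example"]
    for _ in range(5):
        record_click("classifier systems", "siteB.example")
    print(rank("classifier systems", pages))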

Of course, there are two parts to intelligence: the ability to accurately portray the world, and the ability to act on that portrayal.  Memes proliferating throughout cyberspace might correspond to the former, but in order for the latter to hold, there must be a form of rational, coordinated, and planned group cooperation.  I want to stress the “rational” and “planned” portions of the previous statement because, although there are many instances of coordinated human activity throughout history, very few of them display the rationality and self-awareness that are evident in true intelligence.  To illustrate what I mean, I’d like to turn to another book which focuses on our new culture of being “constantly connected.”  In Smart Mobs, Howard Rheingold writes about this new phenomenon of rapid wireless communication and the state-of-the-art inventions which should revolutionize how we connect.  Although the possible technologies he describes are indeed sexy, far more intriguing are his accounts of group cooperation made possible by this new culture.  When the impeachment trial of President Estrada of the Philippines suddenly halted on January 20, 2001 as a result of efforts by senators linked to Estrada, “[o]pposition leaders broadcast text messages, and within seventy-five minutes […], 20,000 people converged on Edsa [a popular and well-known meeting spot…].  Over four days, more than a million people showed up.  The military withdrew support from the regime; the Estrada government fell” (160).  Instead of a spontaneous mob born of passion or scattered self-interest, the group that gathered at Edsa was well informed and intended to accomplish a goal only possible through cooperative effort and coordination.  This would not have been possible without mobile wireless communication and the widespread use of SMS messaging throughout the country. 

It seems that effective communication is the key to harnessing the vast knowledge and considerable abilities of a diverse group of people, and only now are we in an opportune position to explore the ramifications of instantaneous, widespread sharing of ideas.  Far from being able to offer any serious predictions about the future, I’d at least like to make some observations and indulge in a bit of wild speculation.  The biggest limitation of the Holland classifier analogy to group intelligence, and also its most intriguing aspect, is that in a model of the mind the external world is very well defined.  When we extend the model to groups of people, it is no longer clear where the feedback comes from.  Because the largest problems facing humanity are in fact man-made, in order to engage in group problem-solving we will be forced to model ourselves.  When a system becomes capable of modeling itself, an argument can be made that it has gained self-awareness and moved into the realm of consciousness.  Is it completely preposterous to contemplate the idea of a future collective consciousness of which we, as mere simple agents, will not be aware?  Although I cannot say for sure that it’s out of the realm of possibility, I also cannot claim that the analogy even holds when we’re dealing with groups of already sentient and intelligent beings.  In this case it’s extremely difficult to define at what stage the modeling actually occurs: inside the mind of each human, or between them. 

In addition, there are many complicating factors which could prevent this idealistic model from ever becoming reality.  It is not clear whether simply spreading an idea rapidly will necessarily mobilize cooperative action.  Because of the wide range of interests represented on the internet, it’s possible that people will simply be unmotivated to act in concert with one another.  In addition, in order for novel ideas to be created, people must maintain their diversity.  It’s unclear whether the internet, despite being a haven for those who wish to indulge in their interests, will encourage others to develop interests in the first place.  If anything we can think of creating is already on the net somewhere, what is the point?  Can we manage to hold onto our very dear concepts of individuality in the face of an avalanche of other people’s thoughts and ideas?  The future could be a bleak and uneventful landscape of legions of people net-surfing all day, never contributing a thing. 

In spite of these objections and concerns, I find it exciting that the infrastructure is there nonetheless.  Mostly, I believe it’s important to acknowledge the significant impact that widespread mobile wireless communication will have on our ability to solve problems as a society.  At best, it will be a boon to our cultural and technological progress; at worst, it will create a culture of hyper-mass-consumerism.  Either way, it’s the beginning of a new era.


Works Cited:

    Dawkins, Richard. The Selfish Gene. New York: Oxford University Press, 1976.

    Holland, John H., Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard. Induction: Processes of Inference, Learning, and Discovery. Cambridge: MIT Press, 1989.

    Minsky, Marvin. Society of Mind. New York: Simon & Schuster, 1988.

    Rheingold, Howard. Smart Mobs. New York: Perseus Books Group, 2002.

    Surowiecki, James. The Wisdom of Crowds. New York: Random House, 2004.

(Written 2007)