Suffix and prefix preferences in language are tied to differences in human cognition


Linguistic researchers use an extensive body of research on English and other Western languages to make broad assumptions about trends in human language, including an apparent universal preference for suffixes (e.g., -less, -able, -ment) over prefixes (e.g., fore-, anti-, trans-).

Since psychological scientists recognize the powerful link between language and cognition, a tendency for suffixes to dominate human language may reflect a universal trait of how we think and process the world around us.

However, new research published in the journal Psychological Science reveals that even though many populations favor suffixes in the same way English speakers do, others do not, including speakers of the African Bantu language Kîîtharaka.

This unexpected discovery challenges the idea that Western languages are sufficient when studying language and its connection to psychological science.

“The original hypothesis that humans generally prefer suffixes makes a lot of intuitive sense, at least to us English speakers,” said Alexander Martin, a language researcher at the University of Edinburgh and lead author on the paper. “We were surprised, therefore, to see just how starkly the two populations [English speakers and Kîîtharaka speakers] differed in this regard.”

For their research, Martin and his colleagues studied specific word characteristics among the two populations—one whose language relies more frequently on suffixes (51 English speakers) and one whose language relies more on prefixes (72 Kîîtharaka speakers).

Participants were presented with a sequence of either shapes or syllables followed by two additional sequences. They were then asked to identify the sequence most similar to the original sequence. Based on their results, the researchers were able to identify which parts of sequences the speakers considered most important and therefore less likely to be modified.
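
As a rough sketch of that trial logic (the syllable inventory and the rule for building the two alternatives are illustrative assumptions, not the published protocol), each trial might be constructed along these lines:

```python
import random

SYLLABLES = ["ba", "ku", "ti", "mo", "re", "sa"]   # assumed stimulus inventory

def make_trial(length: int = 4):
    """Build one trial: an original sequence plus two altered variants."""
    original = random.sample(SYLLABLES, length)
    leftover = [s for s in SYLLABLES if s not in original]
    changed_start = [random.choice(leftover)] + original[1:]   # beginning altered
    changed_end = original[:-1] + [random.choice(leftover)]    # ending altered
    return original, changed_start, changed_end

original, option_a, option_b = make_trial()
print("original:", original)
print("option A (new beginning):", option_a)  # judging A "most similar" suggests endings matter more
print("option B (new ending):  ", option_b)   # judging B "most similar" suggests beginnings matter more
```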

English speakers treated the beginnings of words as more important, a pattern that reflects English’s reliance on suffixes, which leave word beginnings intact. Kîîtharaka speakers, however, were inclined to treat endings as more important, opting to select sequences that preserved the endings and altered the beginnings of words.

“This finding really challenged a previous claim about human language,” said Martin. “It showed that the abundance of suffixes across the world’s languages might not simply be a reflection of general human perception.”

A preference for prefixes over suffixes among some speakers has implications beyond the diversity of human cognition: it suggests that past language research has been far from exhaustive.

“The important take-home here is that if we want to understand how language is shaped by universal features of human cognition or perception, we need to look at a diverse sample of humans,” said Martin.

While speakers of English and other Western languages prefer using suffixes more than prefixes, a new study reveals that this preference is not as universal as once thought. These findings stress the need for more diverse populations in language research and may shed light on human cognition. Credit: APS

The WEIRD Preference for Suffixes

Prior research has established that English speakers favor the beginnings of words. This is reflected in the structure of English: when modifying a word to change its meaning, English tends to add suffixes. For example, common suffixes in the English language include “-wise,” which you might add onto the end of “clock” or “like,” and “-al,” which often tails behind “accident” or “fiction.”

These past language studies, however, have focused predominantly on Western, educated, industrialized, rich, and democratic (WEIRD) populations. Such studies have concluded that suffixes are generally preferred over prefixes.

Martin and his colleagues observed that such research excludes populations that do not fall into “WEIRD” categories, and conclusions drawn from them could therefore be unrepresentative of universal human cognition.

The Nexus of Language and Cognition

“How the human brain perceives and processes the world around it impacts language, but not every feature of language is a direct reflection of this,” said Martin. “For example, how we use language, like for communication, can also affect language patterns.”

The study’s conclusions further illuminate the relationship between human cognition and language systems and patterns. However, Martin cautioned against assuming that different languages must mean drastically different perceptions of the world.

“When we look at speakers of other languages, especially those who speak languages that haven’t been studied extensively, we are able to understand that we’ve been seeing the world through a biased lens. That’s something we think psychologists should care about,” he said.


The big psychological question in evolutionary theory remains as perplexing and as unanswered today as in Darwin’s lifetime: How can Homo sapiens be biologically so similar to other animal species and yet cognitively so different?

In the 21st century, there has been a flood of books and articles on this topic. Notably, several concrete hypotheses have been formulated about the “mindful ape” concerning the emergence of

(i) language,

(ii) tool-usage, and

(iii) social cooperation.

These are the behaviors where human cognition appears to be most exceptional and consequently which have received the most consideration by many generations of scholars (e.g., Pasternak, 2007). Through a combination of conceptual insight and experimental ingenuity, significant progress has been made in specifying what is truly unusual about the cognition underlying those skills – and indeed which aspects are common to other animal species.

Controversies are numerous, but one of the biggest obstacles in evaluating hypotheses concerning the human mind lies in the fact that human cognitive skills have blossomed into such complex behaviors that the “core” cognitive talents are far from obvious.

In the reductionist tradition of the natural sciences, the search for origins has consequently focused on simplified phenomena – in animals, in infants, and most importantly in the reduced dimensions of laboratory cognitive science.

Two research strategies have become dominant. The first deals with differences in currently existing cognition among human adults, human infants, and various animal species, notably Primates. Interspecies comparisons in particular are notoriously difficult, but potentially provide a means to evaluate human behavior from a non-anthropocentric viewpoint.

The second strategy is the study of the evolutionary record. As sparse and as inherently haphazard as the findings of paleoanthropology may be, fossils have the extreme merit of providing an unambiguous chronological sequence of the major events in the evolutionary history of our species (see Appendix: The Timeline of Human Evolution).

Both the experimental and the historical approaches have proven to be invaluable, but, whatever insights can be obtained, most researchers expect that the explanation of human cognition will be consistent with the known processes of biological evolution.

In that respect, it is of interest that there is agreement among three of the most incisive modern thinkers on the cognitive evolution of H. sapiens regarding the step from pre-modern to modern mentality.

That is, Donald (2001), Corballis (2011), and Tomasello (2014) have separately noted that, in accord with conventional evolutionary theory, the Primate brain could have undergone at most only one major “rewiring” in the transition from ape to human cognition over the relatively brief timespan that separates us from our pre-modern ancestors.

That revolutionary re-wiring may have been driven by innovative tool construction some two million years ago, the invention of language during an Ice Age survival crisis, or perhaps the emergence of social cooperation on the African savannah as our ancestors needed each other’s help to hunt together.

Alternatively, the evolution of human “mindfulness” might have its origins in a more complex type of associational process that was then exploited in the development of our various cognitive talents. Several plausible hypotheses of this kind have been forwarded – often with a focus on tool-making and tool-using skills (Klein and Edgar, 2002; Corballis, 2011; Stringer, 2012; Tattersall, 2012; Suddendorf, 2013), sometimes with a focus on language (Bickerton, 1990; Jackendoff, 2002; Berwick and Chomsky, 2016) or speech (Jaynes, 1976; Lieberman, 2007), and sometimes with an emphasis on social cooperation (Deacon, 1997; Tomasello, 1999, 2014; Whiten, 1999; Saxe et al., 2004; Wrangham, 2009; Dunbar, 2016).

Not surprisingly, linguists have emphasized the supreme importance of language in the emergence of all types of characteristically human behavior. Without at least rudimentary language, they ask, what kinds of tool creation and group behaviors can realistically be expected to have occurred among our ape-like ancestors?

In contrast, developmental psychologists and experts on animal behavior tend to see the inherently cooperative, social behavior of H. sapiens as the hallmark of our species. If, in times of crisis, our early ancestors came to empathize with one another and were inclined to find collective solutions to collective problems, then cooperative behavior may have preceded and motivated the subsequent development of tools and language.

And, while acknowledging the importance of both language and social cooperation, paleoanthropologists understandably emphasize the long history of tool-making and tool-usage – and the unambiguous chronology of material artifacts. Specifically, the historical record on tools extends back 2–3 million years, whereas tangible evidence of cooperative social activity and language is tenuous for all phenomena dating from more than 100,000 years ago.

Most scholars on human evolution would of course argue for the synergistic development of all three of these (and perhaps other) fundamental human skills (e.g., Deacon, 1997; Tattersall, 1998) – each contributing to the advancement of the others. But the sequence of evolutionary events and the precise nature of the “rewiring” of the human brain remain entirely speculative (cortical expansion? the addition of cross-modal sensory processing? the emergence of hemispheric specialization? the development of neuronal circuitry to sustain Boolean logic? etc.). Whether used first in tool-making, language, or social organization, once a new talent had become established, the novel capabilities of the newly wired human brain could then have been applied diversely to various modalities to enlarge the cognitive toolkit (Mithen, 1996, 2005) of H. sapiens.

The alternative hypothesis to the “once-only revolutionary rewiring” of the human brain is the rather unparsimonious possibility of successive mutations that separately facilitated language, tool use, social cooperation, symbolic thought, face recognition, throwing, cooking, dance, music, art, and so on – with no real linkage among these human talents.

Efforts have in fact been made to enumerate the “universals” of human cognition (e.g., Brown, 1991), but Tattersall (2007, p. 134) has noted that

“the problem with such lists is that they can never be complete; there’s always something else to add… And none of these features in itself specifies anything about the human condition; we simply can’t know which of them, if any, is the ‘key’ human attribute, the one that was targeted by past natural selection.”

In the essay that follows, I summarize the case for thinking that five of the “universals” of human cognition that others have previously identified, emphasized, and described explicitly as “triadic” do indeed have a cognitive triad at their core. No attempt is made to delimit our triadic talents to these five phenomena alone, but they are, by consensus, arguably the most distinct and, moreover, the talents that researchers interested in animal cognition have the most difficulty relating to the full-blown talents of H. sapiens.

In the context of the types of essays published in Frontiers in Psychology, the present essay is clearly an “Opinion” piece – in attempting to bring together five highly contentious subfields of human psychology within a novel triadic hypothesis. At the same time, however, it can be said that the evidence indicating the importance of cognitive triads has already been presented by others in explication of the unusualness of human cognition separately in each of these subfields.

In that respect, the present essay can be seen as a “Review” of current ideas in human cognition – with, to be sure, an emphasis on the supporting views of others that have focused on the perceptual/cognitive triads in language, tool-making, social cooperation, art and music.

While I am unaware of any academic work that has argued explicitly against the triadic hypothesis, the vast majority of theorizing on the evolution of human cognition does not focus on “triads” – and, in that regard, the present work represents a personal “Opinion” that may or may not withstand the test of time. In any case, it may inspire further debate on the topic of “What Makes Us Human.”

Here, I outline the view that the “once-only” revolution in human cognition was the emergence of triadic neuronal processing – or the ability to handle the relationships among three items of information at the same time (Cook, 2012), as distinct from dyadic associations, i.e., simple binary correlations.

By definition, triadic cognition includes both trimodal processing (where, for example, visual, somatosensory, and auditory information is used for task performance) and unimodal processing (where, for example, several distinct types of visual cue – occlusion, shadows, and perspective lines – each provide information for the understanding of visual depth). Stated as such, “triadic processing” is rather vague and in need of concrete explication.

Fortunately, polymodal (multisensory, cross-modal) sensory processing has become a robust field of empirical research (e.g., Calvert et al., 2004; Murray and Wallace, 2012; Plaisier and Kappers, 2016), and the relationships among relevant cues in simplified perceptual tasks can often be specified in laboratory experiments and conclusions drawn concerning the relevance of dyadic versus triadic processing.

It is crucial for a proper understanding of triadic cognition to distinguish between the simple numerosity of perceptual/cognitive cues, on the one hand, and the complexity of the relationships among those cues, on the other.

In earlier versions of the triadic hypothesis (e.g., Cook, 2012), I did not attempt a general definition of “threeness” under the assumption that the definition was self-evident. Prompted by reviewer comments, however, I now conclude that the “triad” in triadic cognition can and must be defined as the three relationships that are inherent to any set of three items.

The numerosity of the cues themselves is not the issue, but research on short-term memory (e.g., Jonides et al., 2008), “chunking” (e.g., Cowan, 2001), and their development over the first few years of life (e.g., Oakes and Bauer, 2007) clearly indicates that both the numerosity of items held in memory and the causal relationships among them are involved in cognitive development.

That having been said, an inevitable confusion in the discussion of cognitive operations that involve small numbers of items is the fact that – unlike all other set sizes – there are precisely three relationships among three items, whereas there is but one relationship between two items, already six relationships among four items, ten among five items, and so on.

In other words, no problems arise from conflating “items” and “relationships” in the case of three, but important differences do arise with any numerosity other than three. For the discussion that follows, the most convenient labels are those that indicate the numerosity of cues (dyadic vs. triadic, etc.), but the cognitive complexity arises from the number of distinguishable relationships among the cues.
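
As a compact restatement of those counts (standard combinatorics, added here for clarity rather than taken from the original text), the number of pairwise relationships among n items is:

```latex
\[
  \binom{n}{2} \;=\; \frac{n(n-1)}{2}
  \qquad\Longrightarrow\qquad
  \binom{2}{2}=1,\quad \binom{3}{2}=3,\quad \binom{4}{2}=6,\quad \binom{5}{2}=10,\;\ldots
\]
% Items and their pairwise relationships coincide only at three:
% solving n = n(n-1)/2 gives n = 3 (ignoring the trivial n = 0).
```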

In our own work in empirical musicology (Cook, 2002, 2009, 2017; Cook and Fujisawa, 2006; Cook et al., 2006; Cook and Hayashi, 2008; Fujisawa and Cook, 2011), and visual aesthetics (Cook et al., 2002, 2008a,b; Hayashi et al., 2007; Cook, 2012), we have manipulated the simplest of auditory and visual stimuli, and found that there is a dramatic leap in perceived complexity as one moves specifically from two to three sensory cues.

In contrast, there is a trend toward increased complexity in the transition from three to four cues, or from four to five cues (etc.), but it is rarely statistically significant. In a word, there is something special in the auditory or visual “depth” of harmonies or images containing three (well-placed) tones or objects in comparison with only two.

Recursively building on the perceptual triad by adding further auditory or visual cues is endlessly enriching (intriguing and aesthetically pleasing), but the leap from “sensation to art” appears to begin at the transition from the perception of 1 isolated correlation (inherent to 2 cues) to the perception of the 3 relationships (among 3 cues).

Having found empirical indications of the importance of specifically triadic processes in our own data, we returned to the literature (initially, on stimulus numerosity) in other fields where human “uniqueness” has been a traditional (if somewhat dubious) claim.

In tracking the major evolutionary events that led from the mentality of our chimpanzee-like ancestors some 7 million years ago to the human mind today, it became apparent that others have stumbled onto similar cognitive “leaps” – sometimes using the labels of “triadic” vs. “dyadic” associations, but, more commonly, simply noting the inherent explosion of “complexity” as sensory cues proliferate.

Hypotheses concerning the number of perceptual/cognitive processes that can be simultaneously held “in mind” are necessarily controversial, but they are attractive in their conceptual simplicity and consequent empirical testability. In effect, the hypothesis of triadic cognition is both “radical” (in claiming to identify the cognitive functions underlying the transition from pre-modern to modern H. sapiens) and also surprisingly “conservative” (in being constrained by well-established findings in perceptual and cognitive psychology).

While several lacunae remain unexplored, the basic hypothesis of triadic processing can be easily understood under the following five headings. There may indeed be other fundamental cognitive realms where human capabilities are qualitatively different (dance, cuisine, sports?), but the following are well documented in the literature on human evolution.

Language

The cognitive triad that lies at the heart of modern linguistic theory is the “phrase” – advocated since the 1950s by Noam Chomsky in the form of “transformational grammar” (Chomsky, 1965) [later called “head-driven phrase structure grammar” (Pollard and Sag, 1994) in recognition of the central role of the phrasal “head”]. Note that the latest incarnation of transformational grammar is now labeled the “minimalist program” (Boeckx, 2006), and is an attempt to reduce triadic phrase structures to multiple dyadic “merge” functions.

I agree with both Bickerton (2014) and Tomasello (2014) that the emphasis on dyadic “merging” is a possible alternative expression of phrase structure, but is perhaps an unnecessary confusion that detracts from more than 50 years of linguistic theory based on phrase structure.

Although coherent explanations of linguistic principles can follow from either the dyadic merge mechanism or the triadic phrase structure, the traditional emphasis on phrase structure greatly facilitates an explanation of the generality of triadic mechanisms in the “higher” cognition of H. sapiens.

In either case, a coherent theory of syntax has already been built upon the linguistic insight that every phrase (noun phrase, verb phrase, prepositional phrase, etc.) entails the “merging” of two words through a connecting “head” (Figure 5).

FIGURE 5
Phrases are cognitive triads consisting of pairs of spoken words (in red) joined through an unspoken “head.” On the left is shown the structure of a noun phrase that includes a specifier, a complement, and a noun (e.g., “a nice tune”). On the right is shown the recursive phrase structure of an entire sentence (e.g., “Klaus fed Nadia”) with optional specifiers “(S)” omitted. Arrows indicate possible phrase rotations.
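
As a concrete, purely illustrative sketch of the triad in the caption’s left panel, a noun phrase can be stored as a three-part structure and spelled out in English order; the class and field names below are my own, not drawn from the paper:

```python
from dataclasses import dataclass

@dataclass
class NounPhrase:
    """A phrase as a cognitive triad: three parts standing in three pairwise
    relationships (specifier-noun, complement-noun, specifier-complement)."""
    specifier: str    # e.g., the determiner "a"
    complement: str   # e.g., the modifier "nice"
    noun: str         # e.g., the head noun "tune"

    def spoken(self) -> str:
        # English ordering for this example: specifier, complement, noun
        return " ".join([self.specifier, self.complement, self.noun])

np = NounPhrase(specifier="a", complement="nice", noun="tune")
print(np.spoken())   # a nice tune
```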

The task that all language users repeatedly face when producing or hearing speech is to determine the unique meaning that corresponds to a specific combination of words organized into such discrete phrases. English-speakers pay attention primarily to the word-order within and between phrases.

In other languages, the prefixes and suffixes of words and their agreement among the parts of speech can be more important than word-order, but in all languages comparable rules of syntax must be followed to indicate the relationships among words organized into phrases with specific – normally unique – meanings.

As Bickerton (1990, p. 59), has noted, human beings “have a kind of template or model of what a phrase must be like. Not just a noun phrase: any kind of phrase. For the remarkable thing is that phrases of all kinds… are constructed in the same way. A phrase consists of three parts.”

What Bickerton calls the phrase “template” is the foundational cognitive triad on which all of language is built. Without triadic structures, we (and all animal species) have only an amorphous soup of associations with no possibility of coding or decoding precise causality.

Understanding the meaning of two nouns and one verb (e.g., Figure 5, right), we immediately know the kinds of events that might be conveyed through such language, but without a familiarity with the arbitrary rules of phrase-ordering, we do not know who did what to whom. Dyadic associations do not suffice for explicating causality.

In triadic phrases, words are necessarily connected two at a time in a temporal sequence (because of the linear ordering demanded by speech), but the human ability to understand the “chunking” of lexical units into phrases is still a deep mystery.

According to Chomsky (2000), language capabilities are hardwired – and as “instinctual” as seeing the depth in a flat picture or hearing the emotional ring of a simple melody.

Interestingly, the assignment of the order of the spoken words in each phrase is clearly not hard-wired, but learned – language-by-language, individual-by-individual, sentence-by-sentence (Evans, 2014). As most people know from the experience of studying foreign languages, the sequence of words in phrases is as arbitrary as the momentary linear order seen, for example, in a Calder mobile (Figure 6).

FIGURE 6
Depending on the arbitrary rules of different language communities, the same meaning can be translated into a foreign tongue by rotating phrases (NP1, NP2, VP1, VP2, etc.) around their heads, like a mobile twisting freely in space. Serial lexical replacements will normally not suffice for translation, but lexical replacements plus phrase rotations will often succeed. Here, an English sentence can be transformed into German by rotation of the VP2 phrase, and the German into Japanese by further rotation of the VP1 phrase.

In other words, while the ability for phrasal “chunking” may be inborn, syntax is certainly not instinctual at the level of word-order.

Indeed, in the world’s ∼6000 languages, every possible sequencing of subject (S), verb (V) and object (O) is used as the default structure. Most (90%) begin with subjects (SOV and SVO), but verb-initial languages (VSO and VOS) are not uncommon (Hawaiian and Celtic languages) and sentences beginning by default with direct objects are also known (Carnie and Guilfoyle, 2000).

For any given language, there are often uniquely correct sequences, but the “correct” sequence is generally different in, for example, German, English, and Japanese – and translated into one another by means of phrase rotation.

What remains constant across all languages is the presence of phrasal units that can be arranged recursively into larger-scale phrases and ultimately whole sentences.

With locally agreed-upon rules of sequencing, individual phrases have “correct” or “incorrect” temporal order to convey a specific meaning, but they can be rotated at will to agree with the sequencing rules of other languages to produce, once again, meaningful sentences with unambiguous semantics.

Moving an adjective from its position before a noun (as in English) to after it (as in Thai), or transplanting a verb from its early position in English to its end position in Japanese or Latin may seem “unnatural” to English speakers, but those are precisely the kinds of syntactic rules that every young child absorbs from a language community, and soon masters.

Because of such syntactic variability, successful translation therefore requires more than a one-to-one replacement of words with their lexical equivalents in a foreign tongue.

The more challenging syntactic task (for second language learners) is to rotate the branches in a linguistic tree so that the same meaning is conveyed in a different language – often using a radically different sequence of spoken words (Figure 6).
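
A minimal sketch of the rotation idea (my own illustration under assumed simplifications, not the author’s implementation): the same nested triadic structure is linearized under different language-specific ordering rules, so that one tree yields an English-like or a Japanese-like word order.

```python
# Each phrase is an unordered triad: (label, left part, right part).
sentence = ("S", "Klaus", ("VP", "fed", "Nadia"))

# Hypothetical per-language rules for which part of each phrase is spoken first.
ORDER = {
    "English":  {"S": "left-first", "VP": "left-first"},   # -> Klaus fed Nadia (SVO)
    "Japanese": {"S": "left-first", "VP": "right-first"},  # -> Klaus Nadia fed (SOV)
}

def linearize(node, lang):
    """Flatten a phrase tree into spoken word order for a given language."""
    if isinstance(node, str):                 # a bare word
        return [node]
    label, left, right = node                 # a phrase: rotate as the rules demand
    first, second = (left, right) if ORDER[lang][label] == "left-first" else (right, left)
    return linearize(first, lang) + linearize(second, lang)

print(" ".join(linearize(sentence, "English")))    # Klaus fed Nadia
print(" ".join(linearize(sentence, "Japanese")))   # Klaus Nadia fed
```

Translating between the two orderings then amounts to rotating phrases around their heads (plus lexical replacement), as Figure 6 describes.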

Where do other species stand in their understanding of language?

Remarkably, chimpanzees can learn the meaning of several hundred arbitrary symbols (Savage-Rumbaugh et al., 2001) and mynah birds are astoundingly capable phoneticians (Pepperberg, 1999).

But can these species learn syntax and, specifically, do they detect the semantic significance of phrase structure? The academic debate is far from resolved, but there is one issue concerning which the empirical data are clear.

Analysis of the “utterances” of chimpanzees in both manually signed languages and keyboard-token communications has indicated that non-repetitive, three-word sentences are a rarity (Terrace, 1979; Pinker, 1994). Dyadic associations? Yes.

Triadic patterns? No. Neither semantics nor phonetics is beyond the cognitive capacities of various species, but a cognitive barrier arises early in the realm of syntax, where the sequential ordering of three items plays an important role.

Unlike human children (who rapidly progress from isolated words to two-, three-, and multi-word sentences), animals proceed no further than dyadic associations without an intrinsic sequential order – and the repetition of such associations. Because they fail to grasp the triadic principles of phrase structure – through which causality, as distinct from simple correlation, can be conveyed – grammatically “complex” linguistic structures remain a challenge to all species except H. sapiens.

Appendix: The Timeline of Human Evolution

A plausible scenario for the sequence of events that led to the cognition of modern H. sapiens is shown in Figure A1 and can be summarized as “The Seven Steps to Modernity.” While precise dates are not known and many details are missing, these seven stages are fully consistent with the chronology of the empirical fossil record.

FIGURE A1


At least six subspecies of our early ancestors from Africa and the Middle East are well represented in the fossil record. Collectively, they show a remarkable increase in brain size from the chimpanzee (400 cc) to modern Homo sapiens (1400 cc) – over a period of 7 million years. During that period, no other African mammal showed comparable increases in brain size! Particularly difficult for paleontologists to explain is the era known as “the boring one million years.”

This was when human brain expansion was most vigorous and yet the changes in stone-tool structure seen in the transition from the Oldowan toolkit to the Acheulean toolkit were extremely modest. Complex hafted tools (consisting of two or more components) are not found until much later, but one noteworthy change in the emergence of Acheulean tools was the establishment of cerebral dominance.

That development is inferred from microscopic analysis of the ballistic strikes needed to produce tools, which indicates that our ancestors who crafted the Acheulean hand-axes were right-handed (McManus, 2002). Insofar as such tool-making entails prolonged motor training for the appropriate removal of flakes from the core, it is inconceivable that hand-ax makers would have alternated between left and right hands.

Training of one hand (and the motor and premotor cortex of the contralateral cerebral hemisphere) would have been a sufficient challenge without the additional chore of achieving ambidexterity. In this regard, “the boring one million years” is likely to have been a period of consolidation of the dominance of the left cerebral hemisphere – i.e., the organization of executive motor functions in one hemisphere, while freeing the other hemisphere to specialize in other tasks (Figure adapted from Oppenheimer, 2003, p. 17).

The Seven Steps to Modernity:

(Step 0) Climate change

During a series of ice ages that struck northern Europe some 7–8 million years ago, our Primate ancestors in central/east Africa experienced arid conditions that transformed bountiful jungles into less bountiful woodlands and savannahs. The paucity of fruit-bearing trees made their normal arboreal existence and vegetarian diet impossible, and led to:

(Step 1) Bipedal Locomotion

Because bipedal locomotion is inherently slower than quadrupedal locomotion, the bipedal hominid Australopithecus found itself at a disadvantage in relation to predatory carnivores (Tattersall, 2002, p. 15). Despite being somewhat slower, those hunter/gatherer ancestors thrived, probably as a consequence of the unprecedented advantages of:

(Step 2) Dexterous Hands

In contrast to the radical changes in the pelvis and spine that were required for bipedalism, the fossil record shows only small changes in the morphology of the hands of Homo habilis, as the hands themselves were employed, in effect, as tools (Oppenheimer, 2003). Limbs that had previously been used primarily for jungle agility could now be employed for new purposes: carrying and manipulating objects. The dexterity of hands with powerful opposable thumbs was eventually exploited in the invention of:

(Step 3) Simple Stone Tools

The earliest tools of the so-called Oldowan type exhibit little more than a sharp edge, but that was enough for the purposes of scavenging the meat and hides of megafauna (Stringer, 2012). Following upon the behavioral diversity implied by simple tool usage, the improved nutrition provided by meat-eating allowed for huge increases in brain volume (Wrangham, 2009; Herculano-Houzel, 2016). Having many more neurons in the central nervous system was certainly beneficial in allowing for greater cognitive complexity, but the true significance lay in:

(Step 4) The Expansion of the Neocortex

This period of brain enlargement occurred at a time when there were few changes in the morphology of tools – a period nicknamed by paleoanthropologists as “the boring one million years.” The prolonged era of behavioral stagnancy was first noted by Jelinek (1977), but has since been endorsed by Tattersall (2002, p. 104; 2012, p. 42), Coolidge and Wynn (2009, pp. 155–156), Stringer (2012, p. 244), and Suddendorf (2013, p. 253).

The enigma is that, given the already-established basic stone-tool technology of hammer and core, it is hard to understand why there were essentially no technological innovations during this millennium of millennia. Oppenheimer (2003, p. 23) has argued that the most significant mutation event ever to occur in the evolution of Homo sapiens took place at the advent of the Oldowan era – a developmental change that produced brain enlargement, in general, and expansion of the cerebral neocortex, in particular.

While paleoanthropologists have noted that surprisingly few new behaviors accompanied the increase in brain volume, there was nonetheless a remarkable change in brain morphology that has since influenced all subsequent human evolution (Zaidel and Iacoboni, 2003). That is, the transition from Oldowan to Acheulean tools was accompanied by the emergence of:

(Step 5) Lateralized Cerebral Dominance and Handedness

Note that an Oldowan tool can be created with a mere 1–6 ballistic strikes to a core stone, whereas the Acheulean hand-ax cannot be produced with fewer than 50 strikes (and probably many more) of similar strength, force, and orientation at appropriate sites on the core.

The qualitative conclusion drawn from such a simple quantitative finding is that the makers of Acheulean tools were necessarily “handed” – not ambidextrous, because of the need to train one hand.

The creation of hand-axes by alternating between the left and right hands would have demanded twice the time to train the motor cortex of both hemispheres, whereas the consistent use of one hand would have been more efficient – both today and 2 million years ago.

The “boring one million years” may therefore have been “boring” from a behavioral perspective, but it was nevertheless a time during which the specialization of one cerebral hemisphere for motor dominance, specifically in tool creation, was consolidated (Frost, 1980).

Alone, the species-level preference for using the right hand when striking a core stone to produce sharp-edged flakes might have had little significance for human evolution, but the prolonged era of the motor dominance of the right hand (left hemisphere) was followed by:

(Step 6) Lateralized Cerebral Specialization

Unilateral motor dominance was important for the training of the favored hand in the motor skills needed for producing stone tools, but particularly noteworthy was the liberation of the contralateral motor cortex from the training of motor skills. That freedom made possible the specialization of the frontal neocortex of the right hemisphere for other forms of cognition (Jaynes, 1976; Cook, 1986; Hugdahl and Westerhausen, 2010).

Early “non-dominant” cerebral hemisphere talents would have included understanding the visuospatial geometrical constraints of creating an Acheulean hand-ax and maintaining a visual image of the intended product “in mind” – talents reminiscent of modern-day right hemisphere skills. Having thus developed a dual-control neuronal mechanism for the construction of tools, Homo sapiens with functionally lateralized brains subsequently adapted the dual control architecture in the supreme motor behavior of our species:

(Step 7) Spoken Language

In the modern human brain, the single most unambiguous aspect of functional brain asymmetry is that found for speech. Although mixed dominance is often found for language perception, semantics, and prosody, the need for unilateral executive control over motor output (speech) remains uncompromising: fully 97% of right-handers and 80% of left-handers exhibit unilateral motor control over the organs of speech (Warrington and Pratt, 1973).

Conversely, the absence of unilateral functional dominance during speech is associated with stuttering (e.g., Watkins et al., 2008). The significance of hemispheric dominance and lateralized specialization for virtually every other aspect of human psychology remains controversial (Hugdahl and Westerhausen, 2010; Ocklenburg et al., 2016), but the asymmetrical activation of the cerebral hemispheres during language production is the rule rather than the exception in Homo sapiens.

There are few indications in the fossil record concerning precisely when language emerged, but it is thought unlikely to have predated the making of simple hafted tools. What that implies is that art, science, and technology have blossomed worldwide in the remarkably short period of two or three thousand years following the emergence of the precise sequentialization of unilateral motor commands for both speech and tool-making.

These seven steps leading to modern cognition can be succinctly stated as follows:

Step 1 freed the hands from the chores of locomotion.

Step 2 was the emergence of dexterous hands capable of manipulating the available raw materials of stone, wood, animal hides, and plant fiber.

Step 3 was the nutritional gain that primitive tools made possible through meat-eating.

Step 4 was the subsequent brain enlargement, producing relatively large regions of polymodal association cortex.

Step 5 was the beginning of the heavily repetitive manual activity of stone tool manufacture that required the training of a dominant hand (cerebral hemisphere) for executive motor functions.

Step 6 was the emergence of non-dominant (right) hemisphere specializations that were unrelated to motor skills, but were relevant to the cognitive processing of affective and visuospatial information.

And Step 7 was the development of the dual cognitive functions of spoken language in the left hemisphere and contextual processing in the right hemisphere (Geschwind, 1965). It is this combination of executive skills together with paralinguistic, affective and contextual functions that is today considered to be the essence of human “intelligence.”


More information: Alexander Martin et al, Revisiting the Suffixing Preference: Native-Language Affixation Patterns Influence Perception of Sequences, Psychological Science (2020). DOI: 10.1177/0956797620931108
