For much of history, reading was a fairly noisy activity. On clay tablets written in ancient Iraq and Syria some 4,000 years ago, the commonly used words for “to read” literally meant “to cry out” or “to listen”. “I am sending a very urgent message,” says one letter from this period. “Listen to this tablet. If it is appropriate, have the king listen to it.”

Only occasionally was a different technique mentioned: to "see" a tablet – to read it silently.

Today, silent reading is the norm. The majority of us bottle the words in our heads as if sitting in the hushed confines of a library. Reading out loud is largely reserved for bedtime stories and performances.

But a growing body of research suggests that we may be missing out by reading only with the voices inside our minds. The ancient art of reading aloud has a number of benefits for adults, from helping improve our memories and understand complex texts, to strengthening emotional bonds between people. And far from being a rare or bygone activity, it is still surprisingly common in modern life. Many of us intuitively use it as a convenient tool for making sense of the written word, and are just not aware of it.

Colin MacLeod, a psychologist at the University of Waterloo in Canada, has extensively researched the impact of reading aloud on memory. He and his collaborators have shown that people consistently remember words and texts better if they read them aloud than if they read them silently. This memory-boosting effect of reading aloud is particularly strong in children, but it works for older people, too. “It’s beneficial throughout the age range,” he says.

Reading aloud is often encouraged in school classrooms, but most adults tend to do most of their reading silently (Credit: Alamy)

MacLeod has named this phenomenon the “production effect”. It means that producing written words – that’s to say, reading them out loud – improves our memory of them.

The production effect has been replicated in numerous studies spanning more than a decade. In one study in Australia, a group of seven- to 10-year-olds were presented with a list of words and asked to read some silently, and others aloud. Afterwards, they correctly recognised 87% of the words they'd read aloud, but only 70% of the silent ones.

In another study, adults aged 67 to 88 were given the same task – reading words either silently or aloud – before then writing down all those they could remember. They were able to recall 27% of the words they had read aloud, but only 10% of those they’d read silently. When asked which ones they recognised, they were able to correctly identify 80% of the words they had read aloud, but only 60% of the silent ones. MacLeod and his team have found the effect can last up to a week after the reading task.

Even just silently mouthing the words makes them more memorable, though to a lesser extent. Researchers at Ariel University in Israel discovered that the memory-enhancing effect also works if the readers have speech difficulties, and cannot fully articulate the words they read aloud.

MacLeod says one reason why people remember the spoken words is that “they stand out, they’re distinctive, because they were done aloud, and this gives you an additional basis for memory”.

We are generally better at recalling distinct, unusual events, and events that require active involvement. For instance, generating a word in response to a question makes it more memorable, a phenomenon known as the generation effect. Similarly, if someone prompts you with the clue "a tiny infant, sleeps in a cradle, begins with b", and you answer "baby", you're going to remember it better than if you simply read it, MacLeod says.

Another way of making words stick is to enact them, for instance by bouncing a ball (or imagining bouncing a ball) while saying “bounce a ball”. This is called the enactment effect. Both of these effects are closely related to the production effect: they allow our memory to associate the word with a distinct event, and thereby make it easier to retrieve later.

The production effect is strongest if we read aloud ourselves. But listening to someone else read can benefit memory in other ways. In a study led by researchers at the University of Perugia in Italy, students read extracts from novels to a group of elderly people with dementia over a total of 60 sessions. The listeners performed better in memory tests after the sessions than before, possibly because the stories made them draw on their own memories and imagination, and helped them sort past experiences into sequences. “It seems that actively listening to a story leads to more intense and deeper information processing,” the researchers concluded. 

Many religious texts and prayers are recited out loud as a way of underlining their importance (Credit: Alamy)

Reading aloud can also make certain memory problems more obvious, and could be helpful in detecting such issues early on. In one study, people with early Alzheimer’s disease were found to be more likely than others to make certain errors when reading aloud.

There is some evidence that many of us are intuitively aware of the benefits of reading aloud, and use the technique more than we might realise.

Sam Duncan, an adult literacy researcher at University College London, conducted a two-year study of more than 500 people all over Britain during 2017-2019 to find out if, when and how they read aloud. Often, her participants would start out by saying they didn’t read aloud – but then realised that actually, they did.

“Adult reading aloud is widespread,” she says. “It’s not something we only do with children, or something that only happened in the past.”

Some said they read out funny emails or messages to entertain others. Others read aloud prayers and blessings for spiritual reasons. Writers and translators read drafts to themselves to hear the rhythm and flow. People also read aloud to make sense of recipes, contracts and densely written texts.

“Some find it helps them unpack complicated, difficult texts, whether it’s legal, academic, or Ikea-style instructions,” Duncan says. “Maybe it’s about slowing down, saying it and hearing it.”

For many respondents, reading aloud brought joy, comfort and a sense of belonging. Some read to friends who were sick or dying, as “a way of escaping together somewhere”, Duncan says. One woman recalled her mother reading poems to her, and talking to her, in Welsh. After her mother died, the woman began reading Welsh poetry aloud to recreate those shared moments. A Tamil speaker living in London said he read Christian texts in Tamil to his wife. On Shetland, a poet read aloud poetry in the local dialect to herself and others.

“There were participants who talked about how when someone is reading aloud to you, you feel a bit like you’re given a gift of their time, of their attention, of their voice,” Duncan recalls. “We see this in the reading to children, that sense of closeness and bonding, but I don’t think we talk about it as much with adults.”

If reading aloud delivers such benefits, why did humans ever switch to silent reading? One clue may lie in those clay tablets from the ancient Near East, written by professional scribes in a script called cuneiform.

Many of us read aloud far more often in our daily lives than we perhaps realise (Credit: Alamy)

Over time, the scribes developed an ever faster and more efficient way of writing this script. Such fast scribbling has a crucial advantage, according to Karenleigh Overmann, a cognitive archaeologist at the University of Bergen, Norway, who studies how writing affected human brains and behaviour in the past. "It keeps up with the speed of thought much better," she says.

Reading aloud, on the other hand, is relatively slow due to the extra step of producing a sound.

“The ability to read silently, while confined to highly proficient scribes, would have had distinct advantages, especially, speed,” says Overmann. “Reading aloud is a behaviour that would slow down your ability to read quickly.”

In his book on ancient literacy, Reading and Writing in Babylon, the French assyriologist Dominique Charpin quotes a letter by a scribe called Hulalum that hints at silent reading in a hurry. Apparently, Hulalum switched between "seeing" (ie, silent reading) and "saying/listening" (loud reading), depending on the situation. In his letter, he writes that he cracked open a clay envelope – Mesopotamian tablets came encased inside a thin casing of clay to prevent prying eyes from reading them – thinking it contained a tablet for the king.

“I saw that it was written to [someone else] and therefore did not have the king listen to it,” writes Hulalum.

Perhaps the ancient scribes, just like us today, enjoyed having two reading modes at their disposal: one fast, convenient, silent and personal; the other slower, noisier, and at times more memorable.

In a time when our interactions with others and the barrage of information we take in are all too transient, perhaps it is worth making a bit more time for reading out loud. Perhaps you even gave it a try with this article, and enjoyed hearing it in your own voice?

--

From The MIT Press Reader

When we have conscious thoughts, we can often hear a voice inside our heads – now new research is revealing why.

Why do we include the sounds of words in our thoughts when we think without speaking? Are they just an illusion induced by our memory of overt speech?

These questions have long pointed to a mystery – one relevant to our endeavour to identify impossible languages, those that cannot take root in the human brain. It is equally relevant from a methodological perspective, since addressing it requires radically changing our approach to the relationship between language and the brain: shifting from identifying (by means of neuroimaging techniques) where neurons are firing to identifying what neurons are firing when we engage in linguistic tasks.

Consider this simple question: what is language made of? Sure, language consists of words and rules of combination, but from the point of view of physics, it exists in two different physical spaces – outside our brain and inside it. When it lives outside our brain, it consists of mechanical, acoustic waves of compressed and rarefied molecules of air – ie sound. When it exists inside our brain, it consists of electric waves that are the channel of communication for neurons. Waves in either case. This is the concrete stuff of which language is physically made.

There is one obvious connection between sound waves and the brain. Sound is what allows the contents of one brain, as expressed in words, to enter another brain. There are, of course, other ways for two brains to exchange linguistic information – through the eyes, via sign language, or through tactile systems such as Braille or the Tadoma Method, for example.

Sound enters us through our ears, travelling across the tympanic membrane, the three tiniest bones in our body known as the ossicles, and the organ of Corti in the cochlea – the snail-shaped structure that plays a crucial role in this process. This complex system translates the acoustic signal's mechanical vibrations into electric impulses in a very sophisticated way, decomposing the complex sound waves into the basic frequencies that characterise them. The different frequencies are then mapped onto dedicated slots in the primary auditory cortex, at which point the sound waves are replaced by electric waves.
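
To make that decomposition step concrete, here is a minimal sketch in Python showing how a mixture of tones can be separated into its basic frequencies with a Fourier transform. The signal, frequencies and sample rate are invented for illustration only; the cochlea performs a functionally similar frequency analysis mechanically, not by running this algorithm.

```python
import numpy as np

# Build a toy "complex sound": a mixture of three pure tones.
# All frequencies, amplitudes and the sample rate are invented.
sample_rate = 8000                       # samples per second
t = np.arange(0, 1.0, 1 / sample_rate)  # one second of samples
signal = (1.00 * np.sin(2 * np.pi * 220 * t)
          + 0.50 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 880 * t))

# Decompose the mixture into its basic frequencies - the same kind
# of analysis the cochlea carries out before the result is mapped
# onto dedicated slots in the primary auditory cortex.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

# Report the three strongest components and their amplitudes.
for i in sorted(np.argsort(np.abs(spectrum))[-3:]):
    amplitude = 2 * np.abs(spectrum[i]) / len(signal)
    print(f"{freqs[i]:6.0f} Hz  amplitude {amplitude:.2f}")
```

Running this prints the three component frequencies (220, 440 and 880 Hz) with their original amplitudes, recovered from the mixed waveform alone.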

Not all linguistic communication uses soundwaves to communicate – braille relies upon the sense of touch (Credit: Alamy)

At least since the pioneering work of Nobel Prize-winning electrophysiologist Lord Edgar Adrian, we have known that no physical signal is ever completely lost when it reaches the brain. What we've more recently discovered is surprising: apparently electric waves preserve the shape of their corresponding sound waves in non-acoustic areas of the brain, such as Broca's area, the part of the brain responsible for speech production.

These findings shed important light on the relationship between sound waves and electric waves in the brain, but almost all of them rely on one aspect of the neuropsychological processes related to language: namely, the decoding of emitted sound. Yet we know that language can also be present in the absence of sound, when we read (just as most of you are probably experiencing at this very moment) or when we use words while thinking – in technical terms, when we engage in endophasic activity.

This simple fact immediately raises the following crucial question: what happens to the electric waves in our brain when we generate a linguistic expression without emitting any sound?

In 2014, my colleagues and I set out in search of answers. We compared the shape of the electric waves characterising the activity in Broca's area with the shape of the sound waves, not just when speakers were hearing sound, but also when they were reading linguistic expressions in absolute silence – that is, when the input was not acoustic at all.

Analysing inner speech is not a novel idea in neuropsychology, as we know from sources ranging from the Soviet psychologist Lev Vygotsky’s speculations on psychological development to analyses based on neuroimaging. But the technique we used to explore this phenomenon was unusual and illuminating, and the results were unexpected, to say the least.

In our experiment, data were collected by means of so-called awake surgery. This technique offers the possibility of stimulating and analysing the electrophysiological cortical activity of patients who have been awakened after a portion of their skullcap was removed. The invasive nature of this technique, the fragility of the organ involved, and the cooperation of patients in an extremely delicate emotional state make this research very difficult for obvious psychological, technical, and ethical reasons.

The surgeon who cuts the cerebral cortex to remove a tumour, for example, cannot know in advance (except in specific cases) whether cutting the cerebral tissue will interrupt a neuronal network and thus impair or destroy a cognitive, motor, or perceptual capacity that is supported or conveyed by that network. To minimise any potential damage from the surgery, then, once the patient has been anaesthetised and a portion of the skullcap has been removed to access the surgical site, the surgeon wakes the patient for a short transitional period of about 10 to 20 minutes and asks him or her to perform some simple tasks that should require the use of the exposed cortex.

As they perform them, the surgeon stimulates the patient’s cortex by means of small electrodes, which causes no pain since there are no pain receptors in the brain. If the electrical stimulation in a certain portion of the cortex interferes with the performance of a given task, the surgeon knows that cutting that fragment of cortex could permanently damage the patient and can evaluate whether an alternative surgical site is available.

Stimulating the brain during "awake surgery" has allowed surgeons to determine the function of different networks of neurons (Credit: Alamy)

The patient gains an invaluable advantage from these exercises, and one that is practically impossible to obtain through any other technique. At the same time, this technique provides us with a unique opportunity to investigate brain functioning and obtain extremely important data.

First, the surgeon can establish where a crucial node of a neuronal network associated with a specific task is located in any given patient, which neutralises one of the major problems related to neuroimaging techniques – the fact that subjects may vary considerably as to precisely where a certain function is carried out in the brain. The surgeon can also record neuronal electrical activity with progressively finer precision, down to the level of a single neuron – although this level is reached only in extremely rare cases with current technology.

This technique has increasingly been used for pathologies other than focal lesions – for example, cases of pharmacologically intractable epilepsy. In such cases, the surgeon can also implant temporary electrodes that, once the skullcap has been closed, provide continuous information for a lengthy period in an everyday environment, not limited to the scope of the operating room. This measuring method offers a further step forward in comprehending the neurophysiological processes taking place in the brain: it delivers a more precise level of spatial resolution than neuroimaging techniques can achieve, and specific measures of electrical activity that are not available through other, indirect means of measurement.

Let us now turn back to our experiment. Sixteen patients were asked to read linguistic expressions aloud, either isolated words or full sentences. We then compared the shape of the acoustic waves with the shape of the electric waves in Broca's area and observed a correlation (which was not unexpected).

The second step was crucial. We asked the patients to read the linguistic expressions again, this time without emitting any sound – they just read them in their mind. As before, we compared the shape of the acoustic wave with the shape of the electric wave in Broca's area. I should note that a signal was indeed entering the brain, but it was not a sound signal – instead, it was the light signal carried by electromagnetic waves or, to put it more simply, a signal conveyed by the alphabetical letters we use to represent words (ie writing), but definitely not an acoustic wave.

Remarkably, we found that the shape of the electric waves recorded in a non-acoustic area of the brain when linguistic expressions are being read silently preserves the same structure as that of the mechanical sound waves of air that would have been produced if those words had actually been uttered. The two families of waves where language lives physically are thus closely related – so closely, in fact, that the two overlap independently of the presence of sound.
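
As a rough illustration of what "overlap" between two waveforms can mean quantitatively, here is a minimal Python sketch that scores the similarity of two signals with a normalised correlation. Both signals are synthetic stand-ins invented for this example; this is not the analysis pipeline used in the study.

```python
import numpy as np

def normalized_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson-style similarity of two equal-length waveforms.

    Returns a value in [-1, 1]: 1 means the shapes match exactly
    (up to scale and offset), 0 means no linear relationship.
    """
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / len(a))

# Synthetic stand-ins, invented for this sketch: a "sound wave" and a
# scaled, noisy copy of it playing the role of the recorded electric wave.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
sound_wave = np.sin(2 * np.pi * 5 * t) * np.exp(-t)      # toy waveform
electric_wave = 0.3 * sound_wave + 0.05 * rng.standard_normal(t.size)

print(normalized_similarity(sound_wave, electric_wave))  # close to 1
print(normalized_similarity(sound_wave,                  # close to 0
                            rng.standard_normal(t.size)))
```

A score near 1 for the noisy, rescaled copy and near 0 for unrelated noise captures, in miniature, the sense in which two waveforms can share the same shape even when their amplitudes and physical media differ.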

The acoustic information is not implanted later, when a person needs to communicate with someone else; it is part of the code from the beginning, or at least before the production of sound takes place. This also rules out the possibility that the sensation of exploiting sound representations while reading or thinking in words is just an illusory artifact based on a memory of overt speech.

The discovery that these two independent families of waves of which language is physically made correlate closely with each other – even in non-acoustic areas, and whether the linguistic structures are actually uttered or remain within the mind of an individual – indicates that sound plays a much more central role in language processing than was previously thought.

It is as if this unexpected correlation provided us with the missing piece of a “Rosetta stone” in which two known codes – the sound waves and the electric waves generated by sound – could be exploited to decipher a third one, the electric code generated in the absence of sound, which in turn could hopefully lead to the discovery of the “fingerprint” of human language.

The brain waves associated with processing language bear more than a passing resemblance to sound waves (Credit: Getty Images)

Among the questions this discovery raises is what kind of electrical activity arises in a language network (one that includes Broca's area) in people who have never been able to hear any sound from birth. Can we exploit electro-cortical information to access the linguistic thinking of aphasic patients whose articulatory apparatus alone has been damaged, and hear them speak again, albeit through an artificial device? Can we get a better understanding of language used in dreaming, or in patients who are in a minimally conscious state? Can we consider severe stuttering a form of miscoordination between different sound representations in different networks, and hope to intervene and cure it? And could these discoveries lead to an unethical use of devices to extract linguistic thought from people who do not want to communicate it?

The very fact that the majority of human communication takes place via waves may be no coincidence – after all, waves constitute the purest system of communication, since they transfer information from one entity to the other without changing the structure or composition of either. They travel through us and leave us intact, yet they allow us to interpret the message borne by their momentary vibrations, provided that we have the key to decode it. It is not at all accidental that the term information is derived from the Latin root forma (shape) – to inform is to share a shape.

In his Philosophical Investigations, Ludwig Wittgenstein asked: “Is it conceivable that people should never speak an audible language, but should nevertheless talk to themselves inwardly, in the imagination?” The results of this experiment unexpectedly revive this prophetic question under a new light, and more importantly, they suggest new questions altogether.

* This article originally appeared in The MIT Press Reader, and is republished with permission. Andrea Moro is Professor of General Linguistics at the University School for Advanced Study (IUSS) in Pavia, Italy. He is the author of several books, including “The Boundaries of Babel”, “A Brief History of the Verb To Be,” and “Impossible Languages,” from which this article is adapted.
