Publications of Ágnes Melinda Kovács

Can 18- and 36-month-olds revise attributed mental states by episodic re-computation?

A current debate in psychology and cognitive science concerns the nature of young children’s ability to attribute and track others’ beliefs. Beliefs can be attributed in at least two different ways: prospectively, during the observation of belief-inducing situations, and in a retrospective manner, based on episodic retrieval of the details of the events that brought about the beliefs. We developed a task in which only retrospective attribution, but not prospective belief tracking, would allow children to correctly infer that someone had a false belief. Eighteen- and 36-month-old children observed a displacement event, which was witnessed by a person wearing sunglasses (Experiment 1). Having later discovered that the sunglasses were opaque, 36-month-olds correctly inferred that the person must have formed a false belief about the location of the objects and used this inference in resolving her referential expressions. They successfully performed retrospective revision in the opposite direction as well, correcting a mistakenly attributed false belief when this was necessary (Experiment 3). Thus, children can compute beliefs retrospectively, based on episodic memories, well before they pass explicit false-belief tasks. Eighteen-month-olds failed in such a task, suggesting that they cannot retrospectively attribute beliefs or revise their initial belief attributions. However, an additional experiment provided evidence for prospective tracking of false beliefs in 18-month-olds (Experiment 2). Beyond identifying two different modes for tracking and updating others’ mental states early in development, these results also provide clear evidence of episodic memory retrieval in young children.

Reading your mind while you are reading – Evidence for spontaneous visuospatial perspective-taking during a semantic categorization task.

Recent studies have demonstrated people’s propensity to adopt others’ visuospatial perspectives (VSPs) in a shared physical context. The present study investigated whether spontaneous VSP taking occurs in mental space where another person’s perspective matters for mental activities rather than physical actions. Participants sat at a 90° angle to a confederate and performed a semantic categorization task on written words. From the participants’ point of view, words were always displayed vertically, while for the confederate, these words appeared either the right way up or upside down, depending on the confederate’s sitting position. Participants took longer to categorize words that were upside down for the confederate, suggesting that they adopted the confederate’s VSP without being prompted to do so. Importantly, the effect disappeared if the other’s visual access was impeded by opaque goggles. This demonstrates that human adults show a spontaneous sensitivity to others’ VSP in the context of mental activities, such as joint reading.

Out of your sight, out of my mind: Knowledge about another person's visual access modulates spontaneous visuospatial perspective-taking.

Accumulating evidence suggests that humans spontaneously adopt each other’s visuospatial perspective (VSP), but many aspects about the underlying mechanisms remain unknown. The aim of this study was to investigate whether knowledge about another’s visual access systematically modulates spontaneous VSP-taking. In a spatial compatibility task, a participant and a confederate sat at a 90° angle to each other, with visual stimuli being aligned vertically for the participants and horizontally for the confederate. In this task, VSP-taking is reflected in a spatial compatibility effect in the participant, because stimulus–response compatibility occurs only if the participant takes the confederate’s perspective. We manipulated the visual access of the confederate during the task by means of glasses with adjustable shutters that allowed or prevented the confederate from seeing the visual stimuli. The results of 2 experiments showed that people only adopted their task partner’s VSP if that person had unhindered visual access to the stimuli. Provided that the confederate had visual access to the participant’s stimuli, VSP-taking occurred regardless of whether the confederate performed the same visual task as the participant (Experiment 1) or a different, auditory task (Experiment 2). The results suggest that knowledge about another’s visual access is pivotal for triggering spontaneous VSP-taking, whereas having the same task is not. We discuss the possibility that spontaneous VSP-taking can effectively facilitate spatial alignment processes in social interaction.

Nonverbal components of Theory of Mind in typical and atypical development

To successfully navigate the human social world one needs to realize that behavior is guided by mental states such as goals and beliefs. Humans are highly proficient in using mental states to explain and predict their conspecifics’ behavior, which enables adjusting one’s own behavior in online social interactions. Whereas according to recent studies even young infants seem to integrate others’ beliefs into their own behavior, it is unclear what processes contribute to such competencies and how they may develop. Here we analyze a set of possible nonverbal components of theory of mind that may be involved in taking into account others’ mental states, and discuss findings from typical and atypical development. To track an agent’s belief one needs to (i) pay attention to agents that might be potential belief holders, and identify their focus of attention and their potential belief contents; (ii) keep track of their different experiences and their consequent beliefs; and (iii) make behavioral predictions based on such beliefs. If an individual fails to predict an agent’s behavior depending on the agent’s beliefs, this may be due to a problem at any stage in the above processes. An analysis of the possible nonverbal processes contributing to belief tracking and their functioning in typical and atypical development aims to provide new insights into the possible mechanisms that make human social interactions uniquely rich.

Seeing behind the surface: communicative demonstration boosts category disambiguation in 12-month-olds

In their first years, infants acquire an incredible amount of information regarding the objects present in their environment. While often it is not clear what specific information should be prioritized in encoding from the many characteristics of an object, different types of object representations facilitate different types of generalizations. We tested the hypotheses that 1-year-old infants distinctively represent familiar objects as exemplars of their kind, and that ostensive communication plays a role in determining kind membership for ambiguous objects. In the training phase of our experiment, infants were exposed to movies displaying an agent sorting objects from two categories (cups and plates) into two locations (left or right). Afterwards, different groups of infants saw either an ostensive or a non-ostensive demonstration performed by the agent, revealing that a new object that looked like a plate can be transformed into a cup. A third group of infants experienced no demonstration regarding the new object. During test, infants were presented with the ambiguous object in the plate format, and we measured generalization by coding anticipatory looks to the plate or the cup side. While infants looked equally often towards the two sides when the demonstration was non-ostensive, and more often to the plate side when there was no demonstration, they performed more anticipatory eye movements to the cup side when the demonstration was ostensive. Thus, ostensive demonstration likely highlighted the hidden dispositional properties of the target object as kind-relevant, guiding infants’ categorization of the foldable cup as a cup, despite it looking like a plate. These results suggest that infants likely encode familiar objects as exemplars of their kind and that ostensive communication can play a crucial role in disambiguating what kind an object belongs to, even when this requires disregarding salient surface features.

Cognitive adaptations induced by a multi-language input in early development

Children around the world successfully adapt to the specific requirements of their physical and social environment, and they readily acquire any language they are exposed to. Still, learning two languages simultaneously has been a continuous concern of parents, educators and scientists. While the focus has shifted from the possible costs to the possible advantages of bilingualism, the worries still linger that early bilingualism may cause delays and confusion. Here we adopt a less dichotomous view, by asking what specific adaptations might result from simultaneously learning two languages. We will discuss findings that point to a surprising plasticity of the cognitive system, allowing young infants to cope with the bilingual input and reach linguistic milestones at the same time as monolinguals.

When do humans spontaneously adopt another’s visuospatial perspective?

Perspective-taking is a key component of social interactions. However, there is an ongoing controversy about whether, when and how instances of spontaneous visuospatial perspective-taking occur. The aim of this study was to investigate the underlying factors as well as boundary conditions that characterize the spontaneous adoption of another person's visuospatial perspective (VSP) during social interactions. We used a novel paradigm, in which a participant and a confederate performed a simple stimulus-response (SR) compatibility task sitting at a 90° angle to each other. In this set-up, participants would show a spatial compatibility effect only if they adopted the confederate's VSP. In a series of 5 experiments we found that participants reliably adopted the VSP of the confederate, as long as he was perceived as an intentionally acting agent. Our results therefore show that humans are able to spontaneously adopt the differing VSP of another agent and that there is a tight link between perspective-taking and performing actions together. The results suggest that spontaneous VSP-taking can effectively facilitate and speed up spatial alignment processes accruing from dynamic interactions in multiagent environments.

Neural signatures for sustaining object representations attributed to others in preverbal human infants

A major feat of social beings is to encode what their conspecifics see, know or believe. While various non-human animals show precursors of these abilities, humans perform uniquely sophisticated inferences about other people’s mental states. However, it is still unclear how these possibly human-specific capacities develop and whether preverbal infants, similarly to adults, form representations of other agents’ mental states, specifically metarepresentations. We explored the neurocognitive bases of eight-month-olds’ ability to encode the world from another person’s perspective, using gamma-band electroencephalographic activity over the temporal lobes, an established neural signature for sustained object representation after occlusion. We observed such gamma-band activity when an object was occluded from the infants’ perspective, as well as when it was occluded only from the other person (study 1), and also when subsequently the object disappeared, but the person falsely believed the object to be present (study 2). These findings suggest that the cognitive systems involved in representing the world from infants’ own perspective are also recruited for encoding others’ beliefs. Such results point to an early-developing, powerful apparatus suitable to deal with multiple concurrent representations, and suggest that infants can have a metarepresentational understanding of other minds even before the onset of language.

Pointing as epistemic request: 12-month-olds point to receive new information

Infants start pointing systematically to objects or events around their first birthday. It has been proposed that infants point to an event in order to share their appreciation of it with others. In the current study, we tested another hypothesis, according to which infants' pointing could also serve as an epistemic request directed to the adult. Thus, infants' motivation for pointing could include the expectation that adults would provide new information about the referent. In two experiments, an adult reacted to 12-month-olds’ pointing gestures by exhibiting 'informing' or 'sharing' behavior. In response, infants pointed more frequently across trials in the informing than in the sharing condition. This suggests that the feedback that contained new information matched infants' expectations more than mere attention sharing. Such a result is consistent with the idea that not just the comprehension but also the production of early communicative signals is tuned to assist infants' learning from others.

Hierarchical processing in 7-month-olds

Hierarchical structures are crucial to many aspects of cognitive processing and especially for language. However, there still is little experimental support for the ability of infants to learn such structures. Here, we show that, with structures simple enough to be processed by various animals, seven-month-old infants seem to learn hierarchical relations. Infants were presented with an artificial language composed of “sentences” made of three-syllable “words.” The syllables within words conformed to repetition patterns based on syllable tokens involving either adjacent repetitions (e.g., dubaba) or nonadjacent repetitions (e.g., dubadu). Importantly, the sequence of word structures in each sentence conformed to repetition patterns based on word types (e.g., aba-abb-abb). Infants learned this repetition pattern of repetition patterns and thus likely a hierarchical pattern based on repetitions, but only when the repeated word structure was based on adjacent repetitions. While our results leave open the question of which exact sentence-level pattern infants learned, they suggest that infants embedded the word-level patterns into a higher-level pattern and thus seemed to acquire a hierarchically embedded pattern.

Extracting regularities from noise: Do infants encode patterns based on same and different relations?

A fundamental task of the young learner is to extract adjacent and distant dependency relations from the linguistic signal. Previous research suggests that infants successfully learn regularities from mini-grammars that contain a single consistent pattern. However, outside the laboratory infants are exposed to a ‘noisy’ linguistic signal that contains multiple regularities and patterns that infants cannot yet interpret. In four experiments we explore how infants extract regularities from a 50% ‘noisy’ input. Using an eye-tracker methodology we investigate how infants integrate patterns of varying complexity (e.g., adjacent and nonadjacent identity relations, or identity- and diversity-based relations) into differential anticipatory eye-movements. In Experiment 1, 7- and 12-month-olds were simultaneously exposed to AA and AB patterns (where As and Bs stand for syllables, such as in vava, valu) and they showed successful generalization for AA, but not for AB tokens. In Experiment 2, 7-month-olds heard AAB and ABA patterns and generalized only the AAB patterns. However, in Experiment 3 they could learn the ABA patterns when these were paired with ABC structures. Experiment 4 asked whether identity-based generalizations are restricted to exact physical identity. Infants generalized the AhAlBh patterns (where h stands for high pitch and l for low pitch), but not the AhBlAh ones, although in these the A syllables were physically identical. The results show that preverbal infants possess powerful abilities to extract regularities from a noisy signal, suggesting a possible hierarchy in encoding. Adjacent repetitions were computationally preferred over nonadjacent ones, even when adjacent repetitions were based on phonological identity while nonadjacent ones were based on physical identity. While infants readily generalized the bi- or tri-syllabic structures based on identity relations, they did not do so with the non-identity relations.
Conceivably, encoding structures based on ‘same’ relations is easier for infants than encoding diversity, suggesting that generalizations based on ‘different’ relations pose a challenge for the developing cognitive system.

Are all beliefs equal? Implicit belief attributions recruiting core brain regions of Theory of Mind.

Humans possess efficient mechanisms to behave adaptively in social contexts. They ascribe goals and beliefs to others and use these for behavioural predictions. Researchers have argued for two separate mental attribution systems: an implicit and automatic one involved in online interactions, and an explicit one mainly used in offline deliberations. However, the underlying mechanisms of these systems and the types of beliefs represented in the implicit system are still unclear. Using neuroimaging methods, we show that the right temporo-parietal junction and the medial prefrontal cortex, brain regions consistently found to be involved in explicit mental state reasoning, are also recruited by spontaneous belief tracking. While the medial prefrontal cortex was more active when both the participant and another agent believed an object to be at a specific location, the right temporo-parietal junction was selectively activated while tracking the false beliefs of another agent about the presence, but not the absence, of objects. While humans can explicitly attribute to a conspecific any possible belief they themselves can entertain, implicit belief tracking seems to be restricted to beliefs with specific contents, a content selectivity that may reflect a crucial functional characteristic and signature property of implicit belief attribution.

Flexible learning of multiple speech structures in bilingual infants

Children acquire their native language according to a well-defined time frame. Surprisingly, although children raised in bilingual environments have to learn roughly twice as much about language as their monolingual peers, the speed of acquisition is comparable in monolinguals and bilinguals. Here, we show that preverbal 12-month-old bilingual infants have become more flexible at learning speech structures than monolinguals. When given the opportunity to simultaneously learn two different regularities, bilingual infants learned both, whereas monolinguals learned only one of them. Hence, bilinguals may acquire two languages in the time in which monolinguals acquire one because they quickly become more flexible learners.

Cognitive gains in 7-month-old bilingual infants

Children exposed to bilingual input typically learn 2 languages without obvious difficulties. However, it is unclear how preverbal infants cope with the inconsistent input and how bilingualism affects early development. In 3 eye-tracking studies we show that 7-month-old infants, raised with 2 languages from birth, display improved cognitive control abilities compared with matched monolinguals. Whereas both monolinguals and bilinguals learned to respond to a speech or visual cue to anticipate a reward on one side of a screen, only bilinguals succeeded in redirecting their anticipatory looks when the cue began signaling the reward on the opposite side. Bilingual infants rapidly suppressed their looks to the first location and learned the new response. These findings show that processing representations from 2 languages leads to a domain-general enhancement of the cognitive control system well before the onset of speech.

Early bilingualism enhances mechanisms of false-belief reasoning

In their first years, children's understanding of mental states seems to improve dramatically, but the mechanisms underlying these changes are still unclear. Such ‘theory of mind’ (ToM) abilities may arise during development, or may have an innate basis, with developmental changes reflecting limitations of other abilities involved in ToM tasks (e.g. inhibition). Special circumstances such as early bilingualism may enhance ToM development or other capacities required by ToM tasks. Here we compare 3-year-old bilinguals and monolinguals on a standard ToM task, a modified ToM task and a control task involving physical reasoning. The modified ToM task mimicked a language-switch situation that bilinguals often encounter and that could influence their ToM abilities. If such experience contributes to an early consolidation of ToM in bilinguals, they should be selectively enhanced in the modified task. In contrast, if bilinguals have an advantage due to better executive inhibitory abilities involved in ToM tasks, they should outperform monolinguals on both ToM tasks, the inhibitory demands being similar. Bilingual children showed an advantage on the two ToM tasks but not on the control task. The precocious success of bilinguals may be associated with their well-developed control functions, formed during monitoring and selecting languages.