
User:Amylynn0815/sandbox

From Wikipedia, the free encyclopedia

Artificial grammar learning (AGL) is a paradigm of study within cognitive psychology. Its goal is to investigate the processes that underlie human language learning by testing subjects' ability to learn a made-up language in a laboratory setting. The area of interest is typically the subjects' ability to detect patterns and statistical regularities during a training phase, and then to use their new knowledge of those patterns in a testing phase. The testing phase can either use the symbols or sounds of the training phase or transfer the patterns to another set of symbols or sounds with the same underlying structure. The paradigm was developed to evaluate the processes of human language learning, but has also been utilized to study implicit learning more generally. Many researchers propose that the rules of the artificial grammar are learned on an implicit level, since the rules of the grammar are never explicitly presented to the participants. The paradigm has also recently been utilized to study certain language learning disabilities and to evaluate the neurological structures that contribute to grammar learning.

Characteristics of AGL


Standard procedure: In a typical artificial grammar learning (AGL) experiment, participants are required to learn a set of symbols (usually strings of letters) assembled according to a specific grammar rule. Learning may be explicit or implicit, although the paradigm is most often used to test implicit learning (IL). The grammar rule is composed of a number of basic rules that generate a limited set of letter strings, usually 2–9 letters long. An example of such a rule is shown in figure 1.

Figure 1: Example of an artificial grammar rule. Ruleful strings: VXVS, TPTXVS. Unruleful strings: VXXXS, TPTPS.


In order to compose a grammatical ("ruleful") string of letters, one must follow the transitions of the grammar rule as represented in the model (figure 1). A string that violates the rule system is considered an "unruleful", randomly constructed string. In an AGL experiment, participants are required to memorize a list of letter strings composed from a specific grammar rule. In a standard AGL implicit learning task [1], participants are not aware that the strings were composed according to a specific grammar or logic; they are simply instructed to memorize the letter strings for a memory test during a later stage of the experiment. After they have finished the learning phase, they are told that the letter strings they memorized were composed according to a certain logic, a specific grammar rule, but the actual rule is hidden from them. The common way to analyze implicit learning AGL results is to have participants classify new letter strings, discriminating between positive (ruleful) and negative (unruleful) strings in the test phase. The dependent variable usually measured is the percentage of correct answers, that is, the percentage of new strings sorted correctly out of all strings presented. The ability to differentiate string types is demonstrated when the percentage of correct answers is significantly above chance level. Results of that kind indicate a learning process that goes beyond memorizing the presented letter strings [2].
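The procedure can be sketched in code. The finite-state grammar below is a hypothetical reconstruction: it is consistent with the example strings in figure 1 (it generates VXVS and TPTXVS but not VXXXS or TPTPS), though the actual rule diagram is not reproduced here.

```python
import random

# Hypothetical finite-state grammar consistent with the figure 1 examples.
# Each state maps to (letter, next_state) transitions; a string is
# grammatical ("ruleful") iff some walk from S0 to END spells it out.
GRAMMAR = {
    "S0": [("T", "S1"), ("V", "S2")],
    "S1": [("P", "S1"), ("T", "S2")],
    "S2": [("X", "S2"), ("V", "S3")],
    "S3": [("S", "END")],
}

def generate_string(max_len=9):
    """Produce one ruleful string by a random walk through the grammar."""
    state, letters = "S0", []
    while state != "END" and len(letters) < max_len:
        letter, state = random.choice(GRAMMAR[state])
        letters.append(letter)
    # Retry the rare walks that hit max_len before reaching END.
    return "".join(letters) if state == "END" else generate_string(max_len)

def is_grammatical(string):
    """Depth-first search: can the grammar produce this exact string?"""
    def walk(state, rest):
        if state == "END":
            return rest == ""
        return any(rest and rest[0] == letter and walk(nxt, rest[1:])
                   for letter, nxt in GRAMMAR.get(state, []))
    return walk("S0", string)

is_grammatical("TPTXVS")  # ruleful -> True
is_grammatical("VXXXS")   # unruleful -> False
```

In an implicit learning analysis, the dependent variable is simply the fraction of such grammatical/ungrammatical judgments a participant gets right; performance significantly above chance (50%) is taken as evidence of learning.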

Bayesian learning


The implicit learning that is hypothesized to occur while people engage in artificial grammar learning is statistical or, more specifically, Bayesian learning. Bayesian learning takes into account the biases or “prior probability distributions” individuals have that contribute to the outcome of implicit learning tasks. These biases can be thought of as a probability distribution containing the probability that each possible hypothesis is correct. Due to the structure of the Bayesian model, the inferences output by the model are in the form of a probability distribution rather than a single most probable event. This output distribution is a “posterior probability distribution”. The posterior probability of each hypothesis in the original distribution is computed by combining its prior probability with the likelihood of the observed data under that hypothesis (Bayes' rule) [3].
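A minimal numeric sketch of this updating, with made-up priors and likelihoods for two hypothetical grammar hypotheses:

```python
# Two hypothetical grammar hypotheses with a uniform prior, and the
# likelihood each assigns to an observed string (all numbers invented).
priors = {"grammar_A": 0.5, "grammar_B": 0.5}
likelihoods = {"grammar_A": 0.8, "grammar_B": 0.2}  # P(data | hypothesis)

# Bayes' rule: posterior is proportional to prior * likelihood,
# normalized so the distribution sums to 1.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}
# posterior == {"grammar_A": 0.8, "grammar_B": 0.2}
```

The output is itself a distribution over hypotheses, matching the description above: no single hypothesis is selected outright.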

This Bayesian model for learning is fundamental for understanding the process of implicit learning and, therefore, the mechanisms that underlie artificial grammar learning. It is hypothesized that the implicit learning of grammar involves predicting co-occurrences of certain words in a certain order. For example, “the dog chased the ball” is a sentence that can be learned as grammatically correct on an implicit level due to the high co-occurrence of “chased” among the words that follow “dog”. A sentence like “the dog cat the ball” is implicitly recognized as grammatically incorrect due to the lack of utterances that contain those words paired in that specific order. This process is important for teasing apart thematic roles and parts of speech in grammatical processing.

History


More than half a century ago, George A. Miller [4] established the paradigm of implicit artificial grammar learning (AGL). He investigated the influence of explicit grammar structures on human learning by designing a grammar model of letters arranged in different sequences. His research showed that it was easier to remember a structured grammar sequence than a random sequence of letters. His explanation was that learners could find the common characteristics between learned sequences and encode them into memory sets: a specific set of letters would show up together as a sequence repeatedly while other letters would not, and those memory sets served participants as a strategy later on during their memory tests. Reber [1] doubted Miller's explanation. He claimed that if participants could encode the grammar rules as productive memory sets, they should be able to verbalize their strategy thoroughly. He conducted a study that marked a significant advance for the AGL paradigm, using a synthetic grammar learning model (see characteristics of AGL) to test implicit learning. That model raised the popularity of AGL research and became the most used and tested model in the field. Participants were asked to memorize a list of letter strings (sequences) created from a structured artificial grammar rule model. It was only during the test phase that participants found out there was a set of rules behind the letter sequences they had memorized. They were then requested to judge new letter strings, which they had never seen before, based on the same set of rules, sorting them as "grammatical" (constructed from the grammar rule) versus "randomly constructed" sequences. Sorting new strings correctly above chance level indicates that a grammar rule has been detected and learned without any specific instruction to learn it.
Reber [1] found what he expected: participants sorted new strings above chance level. They reported using strategies during the sorting task, although they could not actually verbalize those strategies, and there was no real connection between the strategies they reported and the actual set of rules used for sorting. Learners responded to the general grammatical nature of the stimuli rather than learning to respond according to specific coding systems imposed upon the stimuli. This research was followed by many others [5] [6] [7] [8]. The basic conclusions of most of them were similar to Reber's hypothesis: the implicit learning process occurred with no intentional learning strategies. Moreover, it was presumed that the acquired stored knowledge had:

1. Abstract representation for the set of rules.

2. Unconscious strategies that can be tested with performance.

Knowlton, Ramus & Squire demonstrated that amnesic patients, who were impaired in their declarative abilities but had normal procedural functions, showed capabilities similar to those of healthy controls in AGL testing [9]. Other researchers showed that participants can transfer their knowledge to a new set of string letters based on the same regularity [10][11][5]. Gordon & Holyoak tested AGL a little differently, by asking participants to rate how pleasant the new strings of letters presented to them were [12]. Their results showed that "pleasantness" judgments fit the predetermined AGL set of rules above chance level. All of those studies strengthened earlier findings and promoted the research of implicit learning using AGL.

Explanatory Models


Traditional approaches to AGL claim that the stored knowledge is abstract [1]. Other approaches [6] [13] argue that the stored knowledge is concrete and consists of exemplars encountered during study, or parts of them [7][14]. In any case, it is assumed that the information stored in memory is retrieved in the test phase and used to aid decisions [15][16]. What kind of knowledge is being learned, and how exactly is it learned? Three main approaches help explain parts of the AGL phenomenon:

1. Abstract approach: according to this traditional approach, participants acquire an abstract representation of the artificial grammar rule in the learning stage. That abstract structure helps them decide whether a new string presented during the test phase is grammatical or randomly constructed [17].

2. Concrete knowledge approach: supporters of this approach claim that during the learning stage participants learn specific examples of strings, which are stored in memory. At the test stage they do not sort the new strings according to an abstract rule; instead, they sort them according to their similarity to the examples from the learning stage. Opinions are divided about how concrete the learned knowledge really is. Brooks & Vokey [6] [13] argue that all the stored knowledge is represented as concrete exemplars, including all the information and the full examples from the learning stage; sorting during the test is done according to full representations of the basic string examples. On the other hand, Perruchet & Pacteau [7] claimed that knowledge of the strings from the learning stage is stored as "memory chunks": sequences of two to three letters are learned together with their permitted or forbidden locations [7] [14].

3. Dual factor approach: this dual process learning model combines the approaches above. When people can rely on concrete knowledge, they do so; when they cannot base a decision on concrete knowledge (for example, in a transfer of learning task), they base it on abstract knowledge [18] [19] [20] [5].
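The chunking account in approach 2 can be illustrated with a toy sketch (not Perruchet & Pacteau's actual model): a test string's familiarity is the fraction of its two-letter chunks that appeared anywhere in the training strings, with no abstract rule consulted at all.

```python
def bigrams(s):
    """Split a string into its overlapping two-letter chunks."""
    return [s[i:i + 2] for i in range(len(s) - 1)]

def chunk_familiarity(test_string, training_strings):
    """Fraction of the test string's chunks seen during training.

    A toy stand-in for chunk-based knowledge: only memory for letter
    chunks from the learning stage is used, never an abstract rule.
    """
    known = {chunk for t in training_strings for chunk in bigrams(t)}
    chunks = bigrams(test_string)
    return sum(chunk in known for chunk in chunks) / len(chunks)

training = ["VXVS", "TPTXVS"]        # strings from the learning stage
chunk_familiarity("TXVS", training)   # every chunk seen -> 1.0
chunk_familiarity("TPTPS", training)  # "PS" never seen  -> 0.75
```

A dual-factor account would fall back on abstract rule knowledge precisely where such chunk familiarity fails, for example in transfer tasks that use a new letter set.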

Research cited in the “History” section above suggests that the dual factor approach may be the accurate model [11]. A series of experiments with amnesic patients supports the idea that AGL involves both abstract concepts and concrete exemplars. Amnesics were able to classify stimuli just as well as participants in the control group, but while able to complete the task successfully, they were not able to explicitly recall grammatical “chunks” of the letter sequences. When performing the task with a different sequence of letters than the one they were trained on, both amnesics and the control group were able to complete the task (although performance was better when the task used the same set of letters used for training). The results of the experiment supported the dual factor approach to artificial grammar learning: people use abstract information to learn rules for grammars and use concrete, exemplar-specific memory for chunks.

The AGL "automaticity" debate


AGL research has dealt with the "automaticity question": is AGL an automatic process? During knowledge retrieval, performance can be automatic in the sense of running without conscious monitoring, that is, without conscious guidance by the performer's intentions. In the case of AGL, it was claimed that implicit learning is an automatic process because it occurs with no intention to learn a specific grammar rule [1]. That claim fits the classic definition of an "automatic process": a fast, unconscious, effortless process that may start unintentionally and, once triggered, continues until it is over, without the ability to stop it or ignore its consequences [21] [22] [23]. This definition has been challenged many times, and alternative definitions of automatic processing have been proposed [24] [25] [26]. Reber's presumption of automaticity in AGL can be problematic, since it implies that an unintentional process is an automatic process in its essence. Focusing on AGL tests, a few dimensions need to be noted. The process is complex: it contains both learning and retrieval abilities, and each may or may not be automatic. The learning process in AGL can be said to be automatic when it concerns a dimension different from the one to which the participant is responding, that is, when what is learned is not necessarily beneficial to the task intentionally performed [27]. One must differentiate between implicitness as referring to the process of learning or knowledge acquisition and as referring to performance, that is, knowledge retrieval. Knowledge acquired during training may include many aspects of the presented stimuli (whole strings, relations among elements, etc.), and the contribution of the various components to performance depends on both the specific instructions in the acquisition phase and the requirements of the retrieval task [16]. Hence, the instructions in every phase matter: to decide whether a phase is an automatic process, each phase should be examined separately.

Another hypothesis that contradicts the automaticity of AGL is the “mere exposure effect”: increased affect towards a stimulus resulting from nonreinforced, repeated exposure to that stimulus [28]. Results from over 200 experiments on this effect indicate a positive relationship between mean “goodness” rating and frequency of stimulus exposure. Stimuli for these experiments included line drawings, polygons and nonsense words, and participants were exposed to each stimulus anywhere from 0 to 25 times. Following each exposure, participants were asked to rate the degree to which each stimulus suggested “good” vs. “bad” affect on a 7-point scale. In addition to the main pattern of results, several experiments found that participants rated previously exposed items more positively than novel items. Since implicit cognition should not make reference to previous study episodes, these effects on affect ratings should not have been observed if the processing of these stimuli were truly implicit.

AI and Artificial Grammar Learning


Since the advent of computers and artificial intelligence, computer programs have been developed that attempt to simulate the implicit learning process observed in the AGL paradigm. The first AI programs adapted to simulate grammar learning used the following basic structure:

Given: A set of grammatical sentences from some language.

Find: A procedure for recognizing and/or generating all grammatical sentences in that language.

An early AI model for grammar learning is Wolff's SNPR system. The program acquires a series of letters with no pauses or punctuation between words and sentences. It then examines the string in subsets, looks for common sequences of symbols, and defines “chunks” in terms of these sequences (these chunks are like the exemplar-specific chunks described for AGL). As the model acquires chunks through exposure, the chunks begin to replace the sequences of unbroken letters. When a chunk precedes or follows a common chunk, the model determines disjunctive classes in terms of the first set [29]. For example, when the model encounters “the-dog-chased” and “the-cat-chased”, it classifies “dog” and “cat” as members of the same class, since they both precede “chased”. While the model sorts chunks into classes, it does not explicitly label these groups (e.g., noun, verb). Early AI models of grammar learning such as these ignored the importance of negative instances for grammar acquisition and also lacked the ability to connect grammatical rules to pragmatic and semantic information. Newer models have attempted to factor these details in. The Unified Model [30] attempts to take both of these factors into account. The model breaks grammar down according to “cues”. Languages mark case roles using five possible cue types: word order, case marking, agreement, intonation, and verb-based expectation. The influence that each cue has over a language's grammar is determined by its “cue strength” and “cue validity”. Both values are determined using the same formula, except that cue strength is estimated from experimental results while cue validity is estimated from corpus counts in language databases. The formula for cue strength/validity is as follows:

Cue strength (or cue validity) = cue availability × cue reliability

Cue availability is the proportion of times that the cue is available over the times that it is needed. Cue reliability is the proportion of times that the cue is correct over the total occurrences of the cue. By incorporating cue reliability along with cue availability, The Unified Model is able to account for the effects of negative instances of grammar since it takes accuracy and not just frequency into account. As a result, this also accounts for the semantic and pragmatic information since cues that do not produce grammar in the appropriate context will have low cue strength and cue validity.
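With made-up corpus counts for a single hypothetical word-order cue, the formula works out as follows:

```python
# Hypothetical counts for one cue (all numbers invented for illustration).
times_needed = 100     # contexts where a case-role assignment was needed
times_available = 80   # contexts where the word-order cue was present
times_correct = 60     # occurrences where the cue indicated the right role

cue_availability = times_available / times_needed    # 0.8
cue_reliability = times_correct / times_available    # 0.75
cue_validity = cue_availability * cue_reliability    # 0.6
```

Because an unreliable cue is penalized even when frequent, a cue that often appears in misleading contexts ends up with low validity; this is how the model absorbs negative evidence rather than counting frequency alone.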

Other Uses: contemporary and ongoing research


Agrammatic aphasic patients have been tested with AGL tasks. The results show that the breakdown of language in agrammatic aphasia is associated with an impairment in artificial grammar learning, indicating damage to domain-general neural mechanisms subserving both language and sequential learning [31]. De Vries, Barth, Maiworm, Knecht, Zwitserlood & Flöel [32] found that electrical stimulation of Broca's area enhances implicit learning of an artificial grammar. Direct current stimulation may facilitate the acquisition of grammatical knowledge, a finding of potential interest for the rehabilitation of aphasia. Petersson, Folia & Hagoort [33] examined the neurobiological correlates of syntax, the processing of structured sequences, by comparing fMRI results on artificial and natural language syntax. Through AGL testing, they argue that the Chomsky hierarchy is not directly relevant for neurobiological systems.



References

  1. Reber, A.S. (1967). "Implicit learning of artificial grammars". Journal of Verbal Learning and Verbal Behavior. 5 (6): 855–863. doi:10.1016/S0022-5371(67)80149-X.
  2. Seger, C.A. (1994). "Implicit learning". Psychological Bulletin. 115 (2): 163–196. doi:10.1037/0033-2909.115.2.163. PMID 8165269.
  3. Kapatsinski, V. (2009). "The Architecture of Grammar in Artificial Grammar Learning: Formal Biases in the Acquisition of Morphophonology and the Nature of the Learning Task". Indiana University: 1–260.
  4. Miller, G.A. (1958). "Free recall of redundant strings of letters". Journal of Experimental Psychology. 56 (6): 485–491. doi:10.1037/h0044933. PMID 13611173.
  5. Mathews, R.C.; et al. (1989). "Role of implicit and explicit processes in learning from examples: A synergistic effect". Journal of Experimental Psychology: Learning, Memory, and Cognition. 15 (6): 1083–1100. doi:10.1037/0278-7393.15.6.1083.
  6. Brooks, L.R.; et al. (1991). "Abstract analogies and abstracted grammars: Comments on Reber (1989) and Mathews et al. (1989)". Journal of Experimental Psychology: General. 120 (3): 316–323. doi:10.1037/0096-3445.120.3.316.
  7. Perruchet, P.; et al. (1990). "Synthetic grammar learning: Implicit rule abstraction or explicit fragmentary knowledge". Journal of Experimental Psychology: General. 119 (3): 264–275. doi:10.1037/0096-3445.119.3.264.
  8. Altmann, G.M.T.; et al. (1995). "Modality Independence of Implicitly Learned Grammatical Knowledge". Journal of Experimental Psychology: Learning, Memory & Cognition. 21 (4): 899–912. doi:10.1037/0278-7393.21.4.899.
  9. Knowlton, B.J.; et al. (1992). "Intact Artificial Grammar Learning in Amnesia: Dissociation of Classification Learning and Explicit Memory for Specific Instances". Psychological Science. 3 (3): 172–179. doi:10.1111/j.1467-9280.1992.tb00021.x. S2CID 10862785.
  10. Gomez, R.L.; et al. (1994). "What is learned from artificial grammars? Transfer tests of simple associative knowledge". Journal of Experimental Psychology: Learning, Memory, and Cognition. 20 (2): 396–410. doi:10.1037/0278-7393.20.2.396.
  11. Knowlton, B.J.; et al. (1996). "Artificial Grammar Learning Depends on Implicit Acquisition of Both Abstract and Exemplar-Specific Information". Journal of Experimental Psychology: Learning, Memory, and Cognition. 22 (1): 169–181. doi:10.1037/0278-7393.22.1.169. PMID 8648284.
  12. Gordon, P.C.; et al. (1983). "Implicit learning and generalization of the "mere exposure" effect". Journal of Personality and Social Psychology. 45 (3): 492–500. doi:10.1037/0022-3514.45.3.492.
  13. Vokey, J.R.; et al. (1992). "Salience of item knowledge in learning artificial grammar". Journal of Experimental Psychology: Learning, Memory, and Cognition. 18 (2): 328–344. doi:10.1037/0278-7393.18.2.328.
  14. Servan-Schreiber, E.; et al. (1990). "Chunking as a mechanism of implicit learning". Journal of Experimental Psychology: Learning, Memory & Cognition. 16: 592–608. doi:10.1037/0278-7393.16.4.592.
  15. Pothos, E.M. (2007). "Theories of artificial grammar learning". Psychological Bulletin. 133 (2): 227–244. doi:10.1037/0033-2909.133.2.227. PMID 17338598.
  16. Poznanski, Y.; et al. (2010). "What is implicit in implicit artificial grammar learning?". Quarterly Journal of Experimental Psychology. 63 (8): 1495–2015. doi:10.1080/17470210903398121. PMID 20063258. S2CID 28756388.
  17. Reber, A.S. (1969). "Transfer of syntactic structure in synthetic languages". Journal of Experimental Psychology. 81: 115–119. doi:10.1037/h0027454.
  18. McAndrews, M.P.; et al. (1985). "Rule-based and exemplar-based classification in artificial grammar learning". Memory & Cognition. 13 (5): 469–475. doi:10.3758/BF03198460. PMID 4088057. S2CID 43888328.
  19. Reber, A.S. (1989). "Implicit Learning and Tacit Knowledge". Journal of Experimental Psychology: General. 118 (3): 219–235. doi:10.1037/0096-3445.118.3.219.
  20. Reber, A.S.; et al. (1978). "Analogic abstraction strategies in synthetic grammar learning: A functionalist interpretation". Cognition. 6: 189–221. doi:10.1016/0010-0277(78)90013-6. S2CID 53199118.
  21. Hasher, L.; et al. (1979). "Automatic and effortful processes in memory". Journal of Experimental Psychology: General. 108 (3): 356–388. doi:10.1037/0096-3445.108.3.356.
  22. Schneider, W.; et al. (1984). "Automatic and controlled processing and attention". In R. Parasuraman & D. Davies (Eds.), Varieties of Attention. New York: Academic Press: 1–17.
  23. Logan, G.D. (1988). "Automaticity, resources and memory: Theoretical controversies and practical implications". Human Factors. 30 (5): 583–598. doi:10.1177/001872088803000504. PMID 3065212. S2CID 43294231.
  24. Tzelgov, J. (1999). "Automaticity and processing without awareness". Psyche. 5.
  25. Logan, G.D. (1980). "Attention and automaticity in Stroop and priming tasks: Theory and data". Cognitive Psychology. 12 (4): 523–553. doi:10.1016/0010-0285(80)90019-5. PMID 7418368. S2CID 15830267.
  26. Logan, G.D. (1985). "Executive control of thought and action". Acta Psychologica. 60 (2–3): 193–210. doi:10.1016/0001-6918(85)90055-1.
  27. Perlman, A.; et al. (2006). "Interaction between encoding and retrieval in the domain of sequence learning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 32: 118–130. doi:10.1037/0278-7393.32.1.118. PMID 16478345.
  28. Manza, L.; et al. (1998). "Artificial grammar learning and the mere exposure effect: Emotional preference tasks and the implicit learning process". In Stadler, M.A. & Frensch, P.A. (Eds.), Handbook of Implicit Learning. Thousand Oaks, CA: Sage Publications: 201–222.
  29. MacWhinney, B. (1987). Mechanisms of language acquisition. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
  30. MacWhinney, B. (2008). "A Unified Model". In Robinson, P. & Ellis, N. (Eds.), Handbook of Cognitive Linguistics and Second Language Acquisition. Mahwah, NJ: Lawrence Erlbaum Associates.
  31. Christiansen, M.H.; et al. (2010). "Impaired artificial grammar learning in agrammatism". Cognition. 116 (3): 383–393. doi:10.1016/j.cognition.2010.05.015. PMID 20605017. S2CID 43834239.
  32. De Vries, M.H.; et al. (2010). "Electrical stimulation of Broca's area enhances implicit learning of artificial grammar". Journal of Cognitive Neuroscience. 22 (11): 2427–2436. doi:10.1162/jocn.2009.21385. PMID 19925194. S2CID 7010584.
  33. Petersson, K.M.; et al. (2010). "What artificial grammar learning reveals about the neurobiology of syntax". Brain & Language: 340–353.

Category:Grammar Category:Language acquisition Category:Computational linguistics