May 3, 2011
Abstract
We present tongue-palate contact (EPG) and acoustic data on English sibilant assimilation, with a particular focus on the asymmetry arising from the order of the sibilants. It is generally known that /s#ʃ/ sequences may display varying degrees of regressive assimilation in fluent speech, yet for /ʃ#s/ it is widely assumed that no assimilation takes place, although this assumption has rarely been investigated empirically, nor has a clear theoretical explanation been proposed. We systematically compare the two sibilant orders in word-boundary clusters. Our data show that /s#ʃ/ sequences assimilate frequently and that this assimilation is strictly regressive. The assimilated sequence may be indistinguishable from a homorganic control sequence by our measures, or it may be characterized by measurement values intermediate between those typical of /ʃ/ and /s/. /ʃ#s/ sequences may also show regressive assimilation, albeit less frequently and to a lesser degree. Assimilated /ʃ#s/ sequences are always distinguishable from /s#s/ sequences. In a few cases, we identify progressive assimilation for /ʃ#s/. We discuss how to account for the differences in degree of assimilation, and we propose that the order asymmetry may arise from the different articulatory control structures employed for the two sibilants, in conjunction with phonotactic probability effects.
Abstract
A series of speech production and categorization experiments demonstrates that naïve speakers and listeners reliably use correspondences between prosodic phrasing and syntactic constituent structure to resolve standing and temporary ambiguity. Materials obtained from a co-operative gameboard task show that prosodic phrasing effects (e.g., the location of the strongest break in an utterance) are independent of discourse factors that might be expected to influence the impact of syntactic ambiguity, including the availability of visual referents for the meanings of ambiguous utterances and the use of utterances as instructions versus confirmations of instructions. These effects hold across two dialects of English, spoken in the American Midwest and in New Zealand. Results from PP-attachment and verb transitivity ambiguities indicate clearly that the production of prosody-syntax correspondences is not conditional upon situational disambiguation of syntactic structure, but is rather more directly tied to grammatical constraints on the production of prosodic and syntactic form. Differences between our results and those reported elsewhere are best explained in terms of differences in task demands.
Abstract
Vowel harmony is a phonotactic principle that requires adjacent vowels to agree in certain vowel features. Phonological theory considers this principle to be represented in one's native grammar, but its abstractness and perceptual consequences remain a matter of debate. In this paper, we are interested in the brain's response to violations of harmony in Turkish. For this purpose, we test two acoustically close and two acoustically distant vowel pairs in Turkish, involving different kinds of harmony violations. Our measure is the Mismatch Negativity (MMN), an automatic change-detection response of the brain that has previously been applied to the study of native phoneme representations in a variety of languages. The results of our experiment support the view that vowel harmony is a phonological principle with a language-specific long-term memory representation. Asymmetries in MMN responses support a phonological analysis of the pattern of results, but do not provide evidence for a purely acoustic or a purely probabilistic approach. Phonological analyses are given within Optimality Theory (OT) and within an underspecification account.
Abstract
This study explores phonetic convergence during conversations between pairs of talkers with varying language distance. Specifically, we examined conversations between two native English talkers and between two native Korean talkers who had either the same or different regional dialects, and between native and nonnative talkers of English. To measure phonetic convergence, an independent group of listeners judged the similarity of utterance samples from each talker through an XAB perception test, in which X was a sample of one talker's speech and A and B were samples from the other talker at either early or late portions of the conversation. The results showed greater convergence for same-dialect pairs than for either the different-dialect pairs or the different-L1 pairs. These results generally support the hypothesis that there is a relationship between phonetic convergence and interlocutor language distance. We interpret this pattern as suggesting that phonetic convergence between talker pairs that vary in the degree of their initial language alignment may be dynamically mediated by two parallel mechanisms: the need for intelligibility and the extra demands of nonnative speech production and perception.
Abstract
Phonotactics – the permissibility of sound sequences within a word – correspond to lexical statistics, but controversy persists over which statistics are being tracked. In this study, lexical type and token counts were compared as contributors to phonotactic extraction from an artificial lexicon. Young-adult participants were familiarized with a set of CVCCVC nonwords contextualized as a lexicon of Martian animal names. The type and token frequencies of word-medial consonant sequences within those names were varied systematically. Participants then rated new nonwords, containing the same medial sequences, on a 7-point scale for similarity to the Martian animal names. Higher ratings followed only the high type-frequency familiarization conditions, suggesting that word types drove phonotactic extraction. Additionally, participants reversed the typical preference for high-frequency English sequences, likely because they rated nonwords according to their membership in an unknown language. This finding suggests cognitively separable tracking of artificial language statistics and preexisting representations.
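The type/token distinction at issue can be made concrete with a small illustrative computation over a hypothetical toy lexicon (the nonwords and counts below are invented for illustration, not the study's materials): type frequency counts how many distinct words contain a given word-medial consonant sequence, while token frequency additionally weights each word by how often it occurs.

```python
from collections import Counter

def medial_cluster(word):
    """Extract the word-medial CC sequence from a CVCCVC nonword."""
    return word[2:4]

# Hypothetical Martian-style lexicon: (nonword, token count during familiarization).
lexicon = [
    ("bagtol", 1), ("rigtun", 1), ("mogtas", 1),  # /gt/ appears in many word types
    ("dupkin", 9),                                # /pk/ appears in one frequent word
]

# Type frequency: each word type counts once per cluster.
type_freq = Counter(medial_cluster(w) for w, _ in lexicon)

# Token frequency: each cluster is weighted by word occurrence counts.
token_freq = Counter()
for w, n in lexicon:
    token_freq[medial_cluster(w)] += n

print(type_freq["gt"], token_freq["gt"])  # prints 3 3
print(type_freq["pk"], token_freq["pk"])  # prints 1 9
```

Here /gt/ is high in type frequency but matched in token frequency with /pk/, which is the kind of dissociation that lets the two statistics be teased apart experimentally.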
Abstract
Prosodic structure is known to influence utterance production in numerous ways, but the influence of repetition of metrical pattern on utterance production has not been thoroughly investigated. It was hypothesized that metrical regularity would speed utterance production and reduce the occurrence of speech errors. Productions of sequences of four trisyllabic nonwords were compared between two conditions: a metrically regular condition with a repeating strong-weak-weak pattern, and a metrically irregular condition that lacked a repeating prominence pattern. Utterance durations were longer in the irregular condition, more hesitations occurred, and more sequencing errors were made. These findings are significant in that they are not accommodated by serial models of speech production. It is argued that the effects of metrical regularity are due to interference between words in an utterance plan, and that this interference arises from constraints on the dynamics of word form representations in the planning of speech.
Abstract
In Exemplar Theory, the mental lexical representation of a word is a distribution over memories of past experiences with that word. These memories are rich with phonetic and indexical detail. At the very core of the theory, then, is the prediction that individual words should have a unique phonetic distribution shaped by the environments in which they were most often encountered. We pursue this hypothesis directly by exploring the prediction that a word should be more easily processed when it contains characteristics that most resemble the listener's accumulated past experience with that word. Twenty-five participants took part in an auditory lexical decision task in which they heard words usually said more often by older speakers, words usually said more often by younger speakers, and age-neutral words. These words were presented in both an older and a younger voice. Accuracy rates increased and response times decreased when voice age and word age matched. This robust processing advantage for matching voice and word age verifies a key prediction of exemplar models of the lexicon.
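The exemplar prediction tested here can be sketched computationally (a minimal illustration with invented words and ages, not the study's model or stimuli): if each word is stored as a cloud of remembered tokens tagged with speaker detail, and recognition strength is the summed similarity between a probe voice and those stored tokens, then a word is activated more strongly when the probe voice matches the ages at which it was typically heard.

```python
import math

# Hypothetical exemplar clouds: each word maps to the speaker ages
# (in years) at which remembered tokens of it were heard.
exemplars = {
    "pension": [70, 68, 75, 72],  # mostly experienced in older voices
    "selfie":  [19, 22, 20, 25],  # mostly experienced in younger voices
}

def activation(word, probe_age, scaling=0.1):
    """Summed similarity of a probe voice to a word's stored exemplars,
    using an exponential similarity function over age distance."""
    return sum(math.exp(-scaling * abs(probe_age - age))
               for age in exemplars[word])

old_voice, young_voice = 70, 20

# A matching voice yields higher activation, modeling faster, more
# accurate lexical decisions for age-congruent word/voice pairings.
print(activation("pension", old_voice) > activation("pension", young_voice))  # prints True
print(activation("selfie", young_voice) > activation("selfie", old_voice))    # prints True
```

The exponential-similarity form is a common choice in exemplar and generalized-context models; the specific scaling value here is arbitrary.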