Is there phonological feature priming?

Karthik Durvasula 1  and Alicia Parrish 2
  • 1 Michigan State University, Department of Linguistics & Germanic, Slavic, Asian and African Languages, East Lansing, MI, USA
  • 2 New York University, Department of Linguistics, New York, NY, USA
Corresponding author: Karthik Durvasula, Michigan State University, Department of Linguistics & Germanic, Slavic, Asian and African Languages, East Lansing, MI, USA.

Abstract

While there is robust evidence of segment priming, particularly in some real word contexts, there is little to no evidence bearing on the issue of priming of subsegmental features, particularly phonological features. In this article, we present two lexical decision task experiments to show that there are no consistent priming effects attributable to phonological place of articulation features. Given that there is clear evidence of segment priming, but no clear evidence of priming due to other phonological representations, we suggest that it is doubtful that priming is a good tool to study phonological representations, particularly those that are not consciously accessible.

1 Introduction

Models of linguistic processing typically include claims about the activation of lexical or sublexical representations during perception, and priming paradigms in particular have been extensively used in testing such theories (Schvaneveldt and Meyer 1973; Neely 1977; Tulving and Schacter 1990). In this article, we test whether there is any evidence of phonological feature priming, particularly for place of articulation (POA) features. We do not find any compelling evidence of such feature priming. Given the inconsistent phonological priming effects observed for other representations, we suggest that priming is not an appropriate experimental probe for phonological representations, especially for those that are not consciously available.

Many studies of priming have attempted to probe the activation of sublexical representations, but results have been mixed. Slowiaczek and Pisoni (1986) found no effect when the first 1, 2, or 3 word-initial segments overlapped between prime and target. However, they did find a facilitatory effect with whole-word repetition; e.g., black was recognized faster following black, but not bland/burnt when compared to a control. A follow-up experiment found that the expected facilitatory effect could be obtained when the stimulus was degraded (Slowiaczek et al. 1987). In contrast to the above results, Radeau et al.’s (1989) lexical decision task study in French found that overlap of 1 syllable (palais-parure) and 1 segment (poulet-parure) led to an inhibitory effect, and syllable overlap had a stronger inhibitory effect than segment overlap, indicating the possibility of competition among activated components. However, Schiller et al. (2002) found little to no evidence of syllable priming, but did find priming related to segmental overlap. Explanations for the mixed findings have focused on possible differences in word frequency, as high-frequency words are identified more accurately (Goldinger et al. 1989) and faster (Radeau et al. 1995) than low-frequency ones. A meta-analysis of phonological priming studies reported that while consistent facilitatory priming effects can be seen when the final sounds of a word match its prime, the effect of initial segment overlap is inconsistent (Dufour 2008). Dufour explained this difference as potentially tapping into different levels of processing: facilitatory effects reflect speech recognition occurring prior to lexical access, whereas inhibitory effects reflect competition from other lexical items that share phonological information with the target. Thus the nature of the effect measured is informative in determining what stage of processing is active for different types of phonological information. The methodology is particularly important, as Dufour also noted that there is often a strategic effect seen in lexical decision tasks (LDTs) as participants learn task-specific strategies that interfere with the analysis of automatic processes. High interstimulus intervals and the proportion of related prime/target pairs were also proposed as potential causes of response biases across many studies.

Segments and syllables, however, can be understood as having both a subconscious phonological representation, and also a moderately consciously-available index, possibly facilitated through literacy (Read et al. 1986; Luketala et al. 1995; Kazanina et al. 2017). So, with segment/syllable priming, the priming effects measured are possibly tapping into consciously-available representations (that are either explicitly learned or reinforced through cultural norms). Therefore, a remaining question from these studies is whether priming effects can be observed when probing an abstract level of representation that is not consciously available, particularly a subsegmental/featural representation.

Goldinger (1998) and Goldinger et al. (1992) addressed this question by probing for priming of what they called “phonetic features”, and they did find evidence of inhibitory feature priming. However, a careful look at the items suggests that there was no systematic manipulation of phonological features, i.e., the subsegmental representations argued for in the phonological literature based on the patterning of sounds in phonotactic patterns and phonological alternations. Instead, the stimuli involved “phonetic” or “perceptual” similarity as established by confusion matrices obtained through perceptual similarity experiments (e.g., target/related-prime/unrelated-prime triples: late-rate-miss, rod-wit-niece, mutt-notch-will). Furthermore, given that there were differences in the rhymes of the words used in the experiments, it is possible that the observed priming effects were driven by other uncontrolled aspects of the stimuli (such as segmental/rhymal differences), and not by phonetic similarity per se. Therefore, the evidence of feature priming, particularly phonological feature priming, is unclear or still lacking.

Building on these studies, we can ask, through systematically controlling the Related/Unrelated primes, whether abstract subsegmental phonological features also show a priming effect. If priming studies are appropriate probes for phonological representations, then a featurally related prime should result in facilitated identification of the target. Experiments 1 and 2 test this prediction.

2 Experiment 1

We probe the issue of phonological feature priming through a lexical decision task. In this study, other phonological/phonetic factors that affect word recognition, such as stress placement (Umeda 1977), were controlled for by choosing predominantly monosyllabic lexical items. Issues that have been possible confounds in similar studies, such as task learning (Goldinger 1998), are not a concern in the present study, as any facilitation from learning will affect the control condition in the same way as the experimental condition. Finally, because of the robustness of priming with final segment overlap, we decided to work within final-overlap environments, varying only the degree of featural similarity of the very first segment.

2.1 Methods

2.1.1 Participants

Ninety native speakers of American English were run in a between-subjects design (30 participants in each POA condition: Bilabial, Alveolar, and Velar). All had normal or corrected-to-normal vision and normal hearing. Participants were all students at a large Midwestern university and completed the study for course credit.

2.1.2 Materials

The experiment’s stimuli consisted of 84 pairs of monosyllabic words per participant, where the first member of the pair served as the prime and the second member served as the target. The stimuli included 28 pairs of test words, partitioned into two groups: (a) 14 pairs consisted of real-word rhymes whose initial consonants had the same POA but differed only in voicing (Related pairs), e.g., the target “bat” differs from the prime “pat” only in the voicing feature; (b) 14 pairs consisted of real-word rhymes whose initial consonants differed in both voicing and at least one other feature (Unrelated pairs), e.g., the target “bang” differs from the prime “tang” in both voicing and POA. The Unrelated pairs could differ in more than just POA, and we used the full range of available word-initial phonemes for English, so that Bilabial could be {/p/, /b/, /m/}, Alveolar could be {/t/, /d/, /s/, /z/, /n/}, and Velar could be {/k/, /g/}. Crucially, the difference between the Related and Unrelated pairs was that the latter also differed in POA. Therefore, any difference in reaction times (RT) between the Related/Unrelated targets allows us to isolate the priming effect of POA.

The stimuli also consisted of 56 filler-pair rhymes with the following distribution: (a) 14 pairs where only the prime was a nonce word; (b) 14 pairs where only the target was a nonce word; (c) 28 pairs where both prime and target were nonce words. The nonce words followed English phonotactic patterns. Furthermore, there were equal numbers (N = 42) of nonce words and real words among both the primes and the targets. Finally, as far as the stimuli were concerned, the only consistent difference between the three participant groups was the POA of the Related/Unrelated targets (Alveolar/Bilabial/Velar).

Some words in the test items were later deemed inappropriate because they had unintended slang or otherwise problematic interpretations. After removing those words, there were 13 pairs of Bilabial Related words and 12 pairs in each of the remaining conditions (see Supplementary Materials).

The stimuli were recorded by two speakers, one male and one female, with American Midwestern accents, such that the same speaker was used for both the prime and target word in each pair. Each speaker recorded half of the prime, target, and filler items. Each stimulus had ∼50 ms of silence at the beginning, and was normalized to 70 dB using Praat (Boersma and Weenink 2016).

Word frequencies were obtained from SubtlexUS (Brysbaert and New 2009). The mean loge frequencies for experimental items did not differ significantly between the Related and Unrelated conditions for any of the POA conditions for either primes or targets (see Supplementary Materials).

2.1.3 Procedure

We used an LDT to investigate the priming effect in the crucial test stimuli (Related vs. Unrelated targets). All words were presented auditorily and orthographically on computer screens with an interstimulus interval of 1 s in pseudorandom order, with the constraint that the relevant prime appeared immediately before the target.

Participants indicated whether each stimulus was a real word in English or not by pressing “a” (real-word) or “l” (nonce-word) on a standard QWERTY keyboard. The experiment began with 20 practice words. There was no option for participants to rehear a word or change an answer.

The experiment was conducted in a quiet room with a group of 3–8 participants per session. The stimuli were presented with a low-noise headset to each participant, and the experiment was presented via PsychoPy (Peirce 2007).

2.2 Results

We measured RTs to the target word following either a prime that matched in POA (Related target) or one that mismatched in POA (Unrelated target).1 A minimum accuracy threshold of 75% was set to ensure all participants were paying attention to the stimuli. After applying this accuracy threshold, there were 26 participants each in the Alveolar and Velar conditions, and 27 in the Bilabial condition. Furthermore, we removed RTs that were below 250 ms or above 2.5 s as outliers (29 measurements, or 1.6%, were removed), and removed all erroneous responses (49 measurements, or 2.6%, were removed). We analysed the RTs for correct responses only, and they were loge-transformed for the primary statistical analysis. This was done because raw RTs are typically not normally distributed and therefore violate the normality assumption of standard parametric statistical tests; a log-transform of the RTs makes them more normally distributed, and their use in parametric statistical tests therefore allows for better interpretability (see Supplementary Materials).
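
For concreteness, the trimming and transformation steps just described can be sketched in R along the following lines. This is a minimal sketch rather than the authors' actual script; the data frame and column names (raw_data, RT, Correct) are hypothetical.

```r
## Minimal preprocessing sketch (hypothetical names, not the authors' script).
## Assumes a data frame `raw_data` with one row per target response,
## an RT column in seconds, and a logical Correct column.
library(dplyr)

clean <- raw_data %>%
  filter(Correct,                      # keep only correct responses
         RT >= 0.250, RT <= 2.5) %>%   # drop outliers: RTs < 250 ms or > 2.5 s
  mutate(logRT = log(RT))              # natural-log (loge) transform of the RTs
```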

The raw and loge transformed RTs for target words in the Related and Unrelated conditions are shown below (Table 1).

Table 1: RTs for both target types for each POA (units: ms, loge-ms).

| POA | Unrelated raw RT (ms) | Related raw RT (ms) | Diff. (ms) | Unrelated loge RT | Related loge RT | Diff. (loge) |
|---|---|---|---|---|---|---|
| Alveolar | 732.2 | 736.9 | −4.7 | 6.59 | 6.59 | −0.001 |
| Bilabial | 688.8 | 651.0 | 37.8 | 6.52 | 6.47 | 0.05 |
| Velar | 706.5 | 668.3 | 38.2 | 6.55 | 6.49 | 0.06 |

Visual inspection of the mean differences in loge RTs (Unrelated-Related) suggested that there might be a facilitation effect of priming for the Related targets, but only for the Bilabial and Velar conditions (Figure 1).

Figure 1: Loge RTs for all three places of articulation.

In order to confirm the observations made by visual inspection of the results, we conducted statistical analyses in R (R Development Core Team 2014) and analysed subject responses using linear mixed-effects regression models. The models were fitted using the lmer function in the lme4 package (Bates et al. 2015), and p-values were obtained using the lmerTest package (Kuznetsova et al. 2017). The model with the best combination of fixed effects was identified through backwards elimination of non-significant terms, beginning with the interactions, using chi-squared tests of the log-likelihood ratios (see Supplementary Materials for a more in-depth description of model selection).

We also checked the Bayesian Information Criterion (BIC) values, as this measure takes into account both the fit of the models and their complexity; the lowest value indicates the model best supported by the data (Wagenmakers 2007). Finally, following Wagenmakers (2007), we used the BIC values to estimate the Bayes Factor for different model comparisons: a larger BIC difference implies a larger Bayes Factor, indicating more evidence in favor of one model over another (Raftery 1995; Wagenmakers 2007).
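
Concretely, the BIC-based approximation from Wagenmakers (2007) used here can be written as below, where M0 is the simpler model and M1 the more complex one; this is the standard form of the approximation, stated here for the reader's convenience rather than quoted from the article itself.

```latex
% Wagenmakers (2007): BIC approximation to the Bayes Factor in favor of M0 over M1
\[
\mathrm{BF}_{01} \;\approx\; \exp\!\left(\frac{\mathrm{BIC}(M_1) - \mathrm{BIC}(M_0)}{2}\right)
\]
```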

For the mixed-effects models, the dependent variable was the loge RT for target words. The independent variables were POA (Bilabial, Alveolar, Velar) and TargetType (Unrelated, Related). The baseline was the alveolar Related words. The most complex random-effects structure that converged was one with varying intercepts for subjects and items, and varying slopes of TargetType by subjects. One might argue that the random-effects structure justified by the data is not close to being “maximal”. However, if anything, a nonmaximal random-effects structure will lead to an inflation of Type I errors (Barr et al. 2013), i.e., given this model specification, it is slightly easier to obtain statistical significance than in a more fully-specified model. Therefore, any nonsignificant effects are even more surprising, and likely suggest a lack of an effect.
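
The model specification and the backwards-elimination procedure described above can be sketched in R roughly as follows. The data frame (clean) and column names (logRT, POA, TargetType, Subject, Item) are hypothetical, and the snippet illustrates the general procedure rather than reproducing the authors' exact code.

```r
## Sketch of the mixed-effects models and nested-model comparisons
## (hypothetical names; not the authors' exact script).
library(lme4)
library(lmerTest)

## Most complex model: POA, TargetType, and their interaction as fixed effects;
## varying intercepts for subjects and items, varying slope of TargetType by subjects.
m_full <- lmer(logRT ~ POA * TargetType +
                 (1 + TargetType | Subject) + (1 | Item),
               data = clean, REML = FALSE)

m_add  <- update(m_full, . ~ . - POA:TargetType)     # drop the interaction
m_null <- update(m_add,  . ~ . - POA - TargetType)   # intercept-only fixed effects

anova(m_null, m_add, m_full)   # chi-squared log-likelihood ratio tests of nested models
BIC(m_null, m_add, m_full)     # lower BIC indicates the better-supported model
```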

In Table 2, we present the pairwise comparisons between nested models, comparing the model on a given line (the one with the associated p-value) with the model on the previous line.2 The best model based on chi-squared tests of the log-likelihood ratios was the one with just an intercept in the fixed-effects structure, since there was no significant improvement when the various factors were added. Furthermore, the model with the lowest BIC value was also the one with just an intercept in the fixed-effects structure. This suggests that none of the differences between the Unrelated and Related words were significant, i.e., there are no observable priming effects.

Table 2: Comparison of linear mixed-effects models for Experiment 1.

| Fixed effects | BIC | χ2 | df | Pr(>χ2) |
|---|---|---|---|---|
| Intercept | −24 | | | |
| 1 + TargetType | −19.6 | 3.0 | 1 | 0.08 |
| 1 + POA | −14.8 | 2.7 | 1 | |
| 1 + POA + TargetType | −10.3 | 3.0 | 1 | 0.08 |
| 1 + POA + TargetType + POA*TargetType | 3.0 | 1.7 | 2 | 0.42 |

It is important to note that standard null-hypothesis testing only tests for discrepancies from the null hypothesis of “no difference”, so it is difficult to interpret a statistically nonsignificant result directly as support for the null hypothesis. The usual suggestion is to check the power of the experiment; however, this is not possible in our case. While there are many previous experiments probing phonological priming, there are no prior experiments specifically on phonological feature priming; therefore, there is no way to obtain an independent (or a priori) estimate of the effect size of phonological feature priming against which to compute the power of the experiment. It is sometimes suggested that one should look at “observed” or “post-hoc” power by using estimates derived from the experiment itself. This calculation of “observed” power has been strongly criticized in the literature, as it essentially tracks the p-values and is independent of a true power calculation (Hoenig and Heisey 2001; Wagenmakers 2007; Kirby and Sonderegger 2018).

To address the problem with standard null-hypothesis testing and the lack of a clear independent estimate for a power calculation, we follow Wagenmakers (2007) and use the BIC values to compare the different models against each other. As Wagenmakers notes, BIC values can be used to approximate the Bayes Factor for pairs of models, under the assumption that the models are equally likely a priori. We compared the model with the lowest BIC value (the model with just an intercept in the fixed-effects structure) with the next best model (the model with a separate fixed effect of TargetType). The Bayes Factor for this comparison is 9.6, in favor of the model with just an intercept in the fixed-effects structure. This constitutes “positive” evidence in favor of the simplest model (Raftery 1995). Furthermore, as per the BIC-Bayes Factor approximation, the larger the difference in BIC values, the larger the Bayes Factor in support of the model with the smaller BIC value. Therefore, as a logical extension, comparing the simplest model (with the lowest BIC) against any of the other models should result in larger Bayes Factors in support of the simplest model, suggesting that the data are most consistent with the simplest model.

Finally, we also ran the same analyses with the raw RTs as the dependent variable. Again, the best model based on chi-squared tests of the log-likelihood ratios and the BIC values was the one with just an intercept in the fixed-effects structure. This model had a Bayes Factor of 12.1 relative to the next best model, again suggesting that it was indeed the best model for the data.

2.3 Discussion

The results of Experiment 1 do not provide any clear evidence of feature priming, particularly in the statistical modelling. It is worth noting that visual inspection of the results initially suggested that the Bilabial and Velar conditions might show some facilitatory priming, whereas the Alveolar condition clearly did not. Although this pattern was not borne out by the statistical analysis, it invites a tempting connection to claims of coronal underspecification (Avery and Rice 1989; Eulitz and Lahiri 2004). On the coronal underspecification view, coronals (and consequently, alveolars) are not specified for POA features; therefore, alveolar segments might not be expected to prime other alveolars. Given that the pattern is observed only in the visual depiction of the results and not in the statistical analysis, it is difficult to say more. As discussed above, it is possible that this reflects a lack of sufficient statistical power; we presented the Bayes Factor analyses specifically to address this issue. Having said this, we believe the best way to probe the issue is ultimately through replication. However, we show in Experiment 2 that the pattern of results is not even in the expected direction. Therefore, connecting the results of Experiment 1 to coronal underspecification, though tempting, is ultimately unwarranted.

Experiment 1 does present some problems of interpretation, given that certain aspects of the stimuli were potentially uncontrolled. First, the stimuli in the different POA conditions were not fully balanced: while some prime/target pairs differed in manner of articulation, others differed only in voicing. It is therefore possible that the nonsignificant effects observed are due to this uncontrolled aspect of the stimulus pairs. Second, though the stimuli were controlled for frequency, they were not controlled for lexical neighborhood density, which has been shown to affect word recognition (Luce and Pisoni 1998) and production (Vitevitch and Sommers 2003; Taler et al. 2010); it is therefore conceivable that it might affect priming too. We address both of these concerns in Experiment 2.

3 Experiment 2

Experiment 2 was more constrained in several ways. To increase power, we designed a within-subjects study. To control for potential orthographic effects, we opted for a fully auditory presentation style. To control for lexical competition effects, we additionally controlled for neighborhood densities. To account for the potential confound that the POAs had different types of segments in the primes, we restricted onset consonants to single stop segments.

3.1 Methods

3.1.1 Participants

Thirty-one native speakers of American English participated in an LDT in a within-subjects design. All had normal or corrected-to-normal vision and normal hearing. Participants were all students at a large Midwestern university and completed the study for course credit, and none had participated in Experiment 1.

3.1.2 Materials

Stimuli were similar to those used in Experiment 1, but with the added constraints mentioned above. We used 14 pairs of Related-Unrelated stimuli from each of the three POA conditions, so each participant saw a total of 84 experimental items. All Related pairs differed in one feature: voicing of the initial consonant. All Unrelated pairs differed in both voicing and POA of the initial consonant, but shared manner of articulation. Again, the mean loge frequencies did not differ significantly between the Related and Unrelated conditions for any of the POAs, for either primes or targets (see Supplementary Materials).

Table 3: RTs for both target types for each POA (units: ms, loge-ms).

| POA | Unrelated raw RT (ms) | Related raw RT (ms) | Diff. (ms) | Unrelated loge RT | Related loge RT | Diff. (loge) |
|---|---|---|---|---|---|---|
| Alveolar | 816 | 792 | 24 | 6.67 | 6.64 | 0.03 |
| Bilabial | 809 | 827 | −18 | 6.66 | 6.67 | −0.01 |
| Velar | 813 | 837 | −24 | 6.67 | 6.68 | −0.01 |

The neighborhood densities for pairs of words were controlled for as much as possible based on the log10 Kucera-Francis (KF) frequency-weighted densities, obtained from the Irvine Phonotactic Online Dictionary v. 1.4 (Vaden et al. 2009) (see Supplementary Materials). It is worth pointing out that the differences in values for the Velar primes and targets were marginally significant (velar primes [t(22.9) = 2.05, p = 0.06], velar targets [t(22.9) = 2.03, p = 0.06]), suggesting a possible departure from the null-hypothesis expectation of “no difference” for each comparison. However, foreshadowing the results somewhat, this does not seem to have affected the pattern of results, as ultimately we did not find any effect of priming for any of the POA conditions.
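
The fractional degrees of freedom reported above (22.9) are consistent with Welch's unequal-variances t-test, which is what t.test() in R computes by default. The sketch below assumes hypothetical vectors of log10 KF-weighted densities for the velar Related and Unrelated primes; it is an illustration of the kind of comparison described, not the authors' actual code.

```r
## Welch two-sample t-test on neighborhood densities (hypothetical vectors);
## t.test() defaults to Welch's test, which yields fractional df like the 22.9 above.
t.test(velar_related_density, velar_unrelated_density)
```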

Some words in the test items were later deemed inappropriate for the same reasons noted in Experiment 1. After removing those words, there were 13 pairs of Bilabial Related words, 13 pairs of Velar Unrelated words, and 12 pairs in each of the remaining conditions (see Supplementary Materials).

The creation of the fillers and the ratio of nonce words to real words was identical to that of Experiment 1. All stimuli were recorded by a single female speaker with an American Midwestern accent. As in Experiment 1, each word began with ∼50 ms of silence and was normalized to 70 dB in Praat.

3.1.3 Procedure

Using an LDT, we again tested priming effects of a Related/Unrelated prime-word. Participants responded to every word by indicating whether it was a real English word or not. Experiment 2 used only an auditory presentation. All other details of the procedure were identical to those in Experiment 1.

3.2 Results

All 31 participants passed the minimum accuracy threshold of 75%. As with Experiment 1, we removed all erroneous responses [84 responses (3.8%)] and removed RTs that were below 250 ms or above 2.5 s as outliers [17 measurements (0.8%)]. The RTs analysed were only for correct responses, and they were loge-transformed. The mean raw and loge-transformed RTs for each condition are reported in Table 3.

Figure 2: Loge RTs for all three places of articulation.

A visual inspection of the mean differences in the raw and loge RTs (Unrelated-Related) suggested a possible facilitation effect of priming for the Related targets in the Alveolar condition, and a possible inhibitory priming effect for the Bilabial/Velar conditions (Figure 2).

We followed up on the visual inspection with linear mixed-effects modelling. The dependent variable was the loge RT for the target word. The independent variables were POA (Bilabial, Alveolar, Velar) and TargetType (Unrelated, Related). The baseline was the alveolar Related words. The most complex random-effects structure that converged was one with varying intercepts for subjects and items, varying slopes of POA and TargetType by subjects, and a varying slope of the POA*TargetType interaction by subjects.

The best model, based on a chi-squared test of the log-likelihood ratios, is one with just an intercept and no other fixed effects (Table 4). The BIC values also suggested that the null model with no fixed-effects except the intercept was the best model, suggesting that none of the differences between the Unrelated and Related words were significant. Therefore, as with Experiment 1, there are no clear priming effects.

Table 4: Comparison of linear mixed-effects models for Experiment 2.

| Fixed effects | BIC | χ2 | df | Pr(>χ2) |
|---|---|---|---|---|
| Intercept | 113.2 | | | |
| 1 + TargetType | 120.9 | 0.0002 | 1 | 0.99 |
| 1 + POA | 128.2 | 0.38 | 1 | |
| 1 + POA + TargetType | 135.9 | 0.002 | 1 | 0.96 |
| 1 + POA + TargetType + POA*TargetType | 150.2 | 1.07 | 2 | 0.59 |

As with Experiment 1, the BIC values were used to approximate the Bayes Factor for pairs of models, under the assumption that the models were equally likely a priori. We compared the model with the lowest BIC value (the model with just an intercept in the fixed-effects structure) with the next best model (the model with a separate fixed effect of TargetType). As per the approximation in Wagenmakers (2007), the Bayes Factor for this comparison is 47, in favor of the model with just an intercept in the fixed-effects structure. This constitutes strong evidence in favor of the simplest model (Raftery 1995). As a logical extension, comparing the simplest model (with the lowest BIC) against any of the other models should result in a larger Bayes Factor in support of the simplest model. Therefore, the data are most consistent with the simplest model.
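
As an illustration, the Bayes Factor just reported can be recovered from the rounded BIC values in Table 4 using the Wagenmakers (2007) approximation; the snippet below is a worked check of that arithmetic, not part of the original analysis.

```r
## Approximate Bayes Factor from the BIC values in Table 4 (Wagenmakers 2007):
## BF ~ exp(deltaBIC / 2), here in favor of the intercept-only model
## over the model with a fixed effect of TargetType.
bic_intercept  <- 113.2
bic_targettype <- 120.9
exp((bic_targettype - bic_intercept) / 2)   # ~47, matching the value reported above
```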

Finally, we also did the same analyses with the raw RTs as the dependent variable. Again, the best model based on a chi-squared test of the log-likelihood ratios and the lowest BIC value was one with just an intercept in the fixed-effects structure. This model had a Bayes Factor of 42.5 compared to the next best model, again suggesting strong evidence in favor of the simplest model (Raftery 1995).

3.3 Discussion

The results of Experiment 2 once again do not provide any clear support for featural priming. In the visual inspection, there appeared to be a possible slight facilitatory effect for the Alveolar condition; however, the statistical modelling did not bear this out. Experiment 1 revealed the same lack of priming. Furthermore, even on visual inspection of the data, the patterns of results in Experiment 1 and Experiment 2 are quite different. So, a reasonable interpretation of the results is that they are not identifying a real priming effect.

Finally, we think the use of a single speaker for both primes and targets should have enhanced any priming effect, had one been present. Therefore, the lack of priming strongly suggests that a feature-driven priming effect might not be present at all.

4 General discussion

Taken together, these two experiments do not present any consistent evidence of phonological feature priming. Rather, the small and unstable POA priming effects observed on visual inspection are most plausibly attributable to chance. While further studies that test other features, such as manner of articulation or voicing, and studies that directly test featural priming against known priming effects such as semantic priming, are needed to complete the story, the present findings support an account in which phonological features (at least, POA features) are not accessible to priming effects.

Summarizing, although there is evidence of facilitatory priming for segments in some lexical contexts (Dufour 2008), the evidence for syllable-priming is weak to nonexistent (Schiller et al. 2002). And with the current results, there appears to be no evidence of systematic priming for phonological features, particularly POA features. This raises the possibility that at least the segment priming effects observed in other studies might be driven by another factor. Following Kazanina et al. (2017), we would like to suggest that exposure to orthography might have an indirect role in priming tasks, as knowledge of an alphabetic writing system may make the relevant phonological representations more consciously accessible, at least as they relate to full segments. Similarly, syllables are consciously available to at least some American English speakers, as syllabic segmenting is explicitly taught in primary and middle schools. Phonological features, on the other hand, are not taught (except to linguistics students) and are not consciously available in a way similar to segments or syllables. Therefore, it is possible that priming in these cases is tracking some consciously-available representation, which may or may not be isomorphic with the (subconscious) representations needed for linguistic computation.

Considering that we are suggesting a null effect of priming, it would be reasonable to ask if we were unable to measure priming because of a ceiling effect. That is, because both the Related and Unrelated conditions had overlapping rimes, and previous research has found that rime overlap induces facilitatory priming effects, were we unable to measure a real priming effect because RTs were effectively already at ceiling? We consider this explanation of our findings unlikely; Slowiaczek et al. (1987) have already shown that rime overlap alone does not put priming effects at ceiling, as identity priming still results in a greater facilitation effect than rime overlap alone. Because there are other cases of rime overlap not leading to ceiling effects, it is not obvious that the lack of a clear priming effect in our experiments can be explained through ceiling effects. Essentially, there are facilitatory priming effects greater than rime overlap, so rime overlap does not put RTs at ceiling, and we interpret this to mean that there is still room for additional overlap (say, of featural information) to produce additional facilitation effects.

Given our claim, it is fair to ask how this finding relates to the observed priming of abstract representations beyond phonological ones, for example, priming of syntactic representations (Bock 1986, et seq.). Branigan and Pickering (2016) recognize a difference between structures inferred from acceptability judgments (and syntactic patterns) versus those that seem to be accessible in priming, and suggest that a “shallow” syntactic representation that is devoid of empty categories associated with the movement of constituents is what is typically primed. Crucially related to our findings, Branigan and Pickering (2016) point out that despite the evidence for priming of structural representations, “sublexical features” such as those signifying tense or aspect marking fail to show priming effects. This interesting parallel between the syntactic and phonological priming results supports our interpretation that abstract featural representations are not accessible to priming. What this means for the nature of (morphosyntactic) featural representations and for the specific processing level that “syntactic” priming targets remains a question for further research.

Before concluding, it is important to recognize that one could interpret the results in this article as arguing that phonological features are not psychologically real, or that phonological features are not accessed in speech perception. With respect to the former possibility, we share the view, along with most phonologists, that features play an important role in accounting for phonological patterns, particularly patterns of vowel/consonant harmony and assimilation (Kenstowicz 1994; Halle 2013). While some have suggested that phonological features are emergent from patterns in the language rather than innate (Mielke 2008), as far as POA features are concerned, nasal place assimilation patterns in English provide ample evidence that such features are needed in English phonology; the issue of whether features are innate or emergent is therefore somewhat tangential to the current article. With respect to the second issue of whether phonological features are accessible during speech perception, it is again our view that the evidence in the literature supports the claim that phonological features are recruited during speech perception. For example, Moreton (2002) shows that word-initial [dl] sequences are misperceived more often by English listeners than word-initial [bw] sequences, though both sequences are absent word-initially in English. He argues that his results stem from structural (or featural) cooccurrence constraints, and cannot be accounted for by segmental cooccurrence constraints. Furthermore, the use of abstract featural knowledge in speech perception has been used to explain asymmetric patterns of mismatch negativity (MMN) (Eulitz and Lahiri 2004; Hestvik and Durvasula 2016). Finally, in a recent review of the evidence for phonological knowledge in speech perception, Monahan (2018) argues that distinctive features (including POA features) “play a role in speech comprehension, serving as a basis for predictions of what we will hear next.” (See Blumstein (2016) for another recent review arguing for phonological features in speech perception).3 For the above reasons, we see the lack of priming for phonological (place) features as suggesting that priming is likely the wrong task to probe them. Therefore, the results ultimately inform us more about the probative value of priming as an experimental task than about phonological representations.

Acknowledgement

We would like to thank Shigeto Kawahara and two anonymous reviewers for helping improve this manuscript. We would also like to thank the audience at the LSA Annual Meeting 2018 and the Phonology/Phonetics group at Michigan State University for helpful discussions.

References

  • Avery, P. & K. Rice. 1989. Segment structure and coronal underspecification. Phonology 6(2). 179–200. http://www.jstor.org/stable/4419997.
  • Barr, D. J., R. Levy, C. Scheepers & H. J. Tily. 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language 68(3). 1–26. doi: 10.1016/j.jml.2012.11.001.
  • Bates, D., M. Mächler, B. Bolker & S. Walker. 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67(1). 1–48. doi: 10.18637/jss.v067.i01.
  • Blumstein, S. E. 2016. Phonetic categories and phonological features: Evidence from the cognitive neuroscience of language. In A. Lahiri & S. Kotzor (eds.), The speech processing lexicon: Neurocognitive and behavioural approaches, 4–20. Berlin, Boston: De Gruyter. doi: 10.1515/9783110422658.
  • Bock, J. 1986. Syntactic persistence in language production. Cognitive Psychology 18(3). 355–387. doi: 10.1016/0010-0285(86)90004-6.
  • Boersma, P. & D. Weenink. 2016. Praat: Doing phonetics by computer [Computer program]. Version 6.0.19, retrieved 13 June 2016 from http://www.praat.org/.
  • Branigan, H. & M. Pickering. 2016. An experimental approach to linguistic representation. Behavioral and Brain Sciences 40. 1–73. doi: 10.1017/S0140525X16002028.
  • Brysbaert, M. & B. New. 2009. Moving beyond Kucera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods 41(4). 977–990. http://subtlexus.lexique.org/moteur2/index.php (accessed October 2014).
  • Dufour, S. 2008. Phonological priming in auditory word recognition: When both controlled and automatic processes are responsible for the effects. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale 62(1). 33–41.
  • Eulitz, C. & A. Lahiri. 2004. Neurobiological evidence for abstract phonological representations in the mental lexicon during speech recognition. Journal of Cognitive Neuroscience 16(4). 577–583. doi: 10.1162/089892904323057308.
  • Goldinger, S. D. 1998. Signal detection comparisons of phonemic and phonetic priming: The flexible-bias problem. Perception and Psychophysics 60(6). 952–965.
  • Goldinger, S. D., P. A. Luce & D. B. Pisoni. 1989. Priming lexical neighbors of spoken words: Effects of competition and inhibition. Journal of Memory and Language 28(5). 501–518.
  • Goldinger, S. D., P. A. Luce, D. B. Pisoni & J. K. Marcario. 1992. Form-based priming in spoken word recognition: The roles of competition and bias. Journal of Experimental Psychology: Learning, Memory and Cognition 18(6). 1211–1238.
  • Halle, M. 2013. From memory to speech and back: Papers on phonetics and phonology 1954–2002. Berlin, Boston: De Gruyter Mouton.
  • Hestvik, A. & K. Durvasula. 2016. Neurobiological evidence for voicing underspecification in English. Brain and Language 152. 28–43. doi: 10.1016/j.bandl.2015.10.007.
  • Hoenig, J. M. & D. M. Heisey. 2001. The abuse of power. The American Statistician 55(1). 19–24. doi: 10.1198/000313001300339897.
  • Kazanina, N., J. S. Bowers & W. Idsardi. 2017. Phonemes: Lexical access and beyond. Psychonomic Bulletin and Review 25. 560–585. doi: 10.3758/s13423-017-1362-0.
  • Kenstowicz, M. 1994. Phonology in generative grammar. Cambridge, MA: Blackwell.
  • Kirby, J. & M. Sonderegger. 2018. Mixed-effects design analysis for experimental phonetics. Journal of Phonetics 70. 70–85. doi: 10.1016/j.wocn.2018.05.005.
  • Kuznetsova, A., P. B. Brockhoff & R. H. B. Christensen. 2017. lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software 82(13). 1–26. doi: 10.18637/jss.v082.i13.
  • Luce, P. A. & D. B. Pisoni. 1998. Recognizing spoken words: The neighborhood activation model. Ear and Hearing 19(1). 1–36.
  • Luketala, K., C. Carello, D. Schankweiler & I. Y. Liberman. 1995. Phonological awareness in illiterates: Observations from Serbo-Croatian. Applied Psycholinguistics 16. 463–487.
  • Mielke, J. 2008. The emergence of distinctive features. Oxford: Oxford University Press.
  • Monahan, P. J. 2018. Phonological knowledge and speech comprehension. Annual Review of Linguistics 4(1). 21–47. doi: 10.1146/annurev-linguistics-011817-045537.
  • Moreton, E. 2002. Structural constraints in the perception of English stop-sonorant clusters. Cognition 84. 55–71.
  • Neely, J. H. 1977. Semantic priming and retrieval from lexical memory: Roles of inhibitionless spreading activation and limited-capacity attention. Journal of Experimental Psychology: General 106(3). 226–254.
  • Okada, K., W. Matchin & G. Hickok. 2018. Phonological feature repetition suppression in the left inferior frontal gyrus. Journal of Cognitive Neuroscience 30(10). 1549–1557. doi: 10.1162/jocn_a_01287.
  • Peirce, J. 2007. PsychoPy – Psychophysics software in Python. Journal of Neuroscience Methods 162. 8–13.
  • R Development Core Team. 2014. R: A language and environment for statistical computing [Computer software manual]. Vienna, Austria. http://www.R-project.org.
  • Radeau, M., J. Morais & A. Dewier. 1989. Phonological priming in spoken word recognition: Task effects. Memory and Cognition 17(5). 525–535.
  • Radeau, M., J. Morais & J. Segui. 1995. Phonological priming between monosyllabic spoken words. Journal of Experimental Psychology: Human Perception and Performance 21(6). 1297–1311.
  • Raftery, A. E. 1995. Bayesian model selection in social research. Sociological Methodology 25. 111–163. http://www.jstor.org/stable/271063.
  • Read, C., Z. Yun-Fei, N. Hong-Yin & D. Bao-Qing. 1986. The ability to manipulate speech sounds depends on knowing alphabetic writing. Cognition 24. 31–44.
  • Schiller, N. O., A. Costa & A. Colomé. 2002. Phonological encoding of single words: In search of the lost syllable. In Papers in laboratory phonology VII. Berlin: Mouton de Gruyter.
  • Schvaneveldt, R. & D. E. Meyer. 1973. Retrieval and comparison processes in semantic memory. In Attention and performance IV. New York: Academic Press.
  • Slowiaczek, L. M. & D. B. Pisoni. 1986. Effects of phonological similarity on priming in auditory lexical decision. Memory and Cognition 14(3). 230–237.
  • Slowiaczek, L. M., H. C. Nusbaum & D. B. Pisoni. 1987. Phonological priming in auditory word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition 13(1). 64–75.
  • Taler, V., P. G. Aaron, L. G. Steinmetz & D. B. Pisoni. 2010. Lexical neighborhood density effects on spoken word recognition and production in healthy aging. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences 65B(5). 551–560. doi: 10.1093/geronb/gbq039.
  • Tulving, E. & D. L. Schacter. 1990. Priming and human memory systems. Science 247(4940). 301–306.
  • Umeda, N. 1977. Consonant duration in American English. The Journal of the Acoustical Society of America 61(3). 846–858.
  • Vaden, K., H. Halpin & G. Hickok. 2009. Irvine Phonotactic Online Dictionary, version 1.4 [data file]. Available from http://www.iphod.com.
  • Vitevitch, M. S. & M. S. Sommers. 2003. The facilitative influence of phonological similarity and neighborhood frequency in speech production in younger and older adults. Memory and Cognition 31(4). 491–504.
  • Wagenmakers, E.-J. 2007. A practical solution to the pervasive problems of p values. Psychonomic Bulletin and Review 14(5). 779–804. doi: 10.3758/BF03194105.

Supplementary Material

The online version of this article offers supplementary material (DOI: https://doi.org/10.1515/lingvan-2018-0041).

Footnotes

1. All data presented in this paper can be accessed at: https://gitlab.com/karthikdurvasula/is-there-phonological-feature-priming/.

2. There is no simple way to present all comparisons made; we present them in order of ascending degrees of freedom for the models. In Table 2, there is no p-value associated with the 1 + POA row; the pairwise comparison with the previous model (namely, fixed effects = 1 + TargetType) does not constitute a nested model comparison.

3. Quite recently, Okada et al. (2018) have argued, by looking at repetition suppression effects, that phonological features are part of the planning units of the motor speech system.
