Published by De Gruyter Mouton December 1, 2022

Diving deeper into subregular syntax

  • Thomas Graf
From the journal Theoretical Linguistics

Corresponding author: Thomas Graf, Department of Linguistics, Stony Brook University, Stony Brook, NY, USA, E-mail:

Award Identifier / Grant number: BCS-1845344

Research funding

The work reported in this paper was supported by the National Science Foundation under Grant No. BCS-1845344.

Appendix: TSL over trees without trees

This appendix illustrates that subregular classes like SL and TSL represent very abstract concepts that can be specified in numerous ways. In particular, it is even possible to have a tree-less view of TSL over trees. This addresses the remark of Brody (p. 199f) that “Graf talks here about daughter strings of a given node, and this way of talking contains the concept of ‘string’. But it crucially presupposes the tree structure: a node with a set of daughters.” It also illustrates that subregular linguistics does not put any restrictions on formalisms or metalanguages because it is the complexity of the underlying computation that matters, not how that computation is specified.

TSL over trees can be regarded as a system that associates every node in a dependency tree with one or more strings of other nodes in the tree. We visualize this process as the projection of tree tiers, and on each of those tiers we then check that every node has a licit string of daughters. But mathematically this is only a convenient metaphor for associating a node with a string of other nodes. Each tier corresponds to a specific association between nodes and strings of nodes (a node that is not projected onto the tier is associated with the empty string, whereas a node that is projected onto the tier but has no tier daughters is associated with a designated special string). Instead of tiers, we can use a table to represent these associations, as exemplified below.

Node-string associations visualized via tiers
Node-string associations as a table
Node             nom-string                    wh-string
did[C, wh+]      ε                             which[D, wh]
ε[T, nom+]       John[D, nom] might[T, nom+]   ε
ε[v]             ε                             ε
John[D, nom]     (special string)              ε
complain[V]      ε                             ε
that[C]          ε                             ε
might[T, nom+]   Mary[D, nom]                  ε
ε[v]             ε                             ε
Mary[D, nom]     (special string)              ε
buy[V]           ε                             ε
which[D, wh]     ε                             (special string)
car[N]           ε                             ε
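Such a table is just a finite map from nodes to per-tier daughter strings, and well-formedness can be checked over the map alone, without ever consulting a tree. The following minimal Python sketch illustrates this; the encoding is a hypothetical assumption (the empty string "" stands for a node not projected on a tier, and the placeholder "<leaf>" for the special string of a projected node without tier daughters), not the paper's notation.

```python
# Hypothetical encoding of (part of) the association table above: each node
# is mapped to one daughter string per tier. "" encodes the empty string
# (node not projected on that tier); "<leaf>" is a stand-in for the special
# string of a projected node that has no tier daughters.
associations = {
    "did[C,wh+]":    {"nom": "",                          "wh": "which[D,wh]"},
    "eps[T,nom+]":   {"nom": "John[D,nom] might[T,nom+]", "wh": ""},
    "John[D,nom]":   {"nom": "<leaf>",                    "wh": ""},
    "might[T,nom+]": {"nom": "Mary[D,nom]",               "wh": ""},
    "which[D,wh]":   {"nom": "",                          "wh": "<leaf>"},
}

def well_formed(assoc, licit):
    """A table is well-formed iff every node's daughter string on every
    tier belongs to that tier's set of licit daughter strings."""
    return all(
        daughters in licit[tier]
        for per_tier in assoc.values()
        for tier, daughters in per_tier.items()
    )
```

On this view the grammar is nothing but one set of licit daughter strings per tier; the checking step never mentions mothers, daughters, or dominance.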

Suppose then that we have a function parse that assigns to every string a dependency tree with tree tiers, and a function stringify that takes as its input a dependency tree with tiers and converts it to a table of the form above. Given a string s, the output of stringify(parse(s)) is a table that encodes associations between each lexical item and some other lexical items in the sentence, and these associations are then checked by the grammar in the same manner that we employed over tiers. The mathematical core of TSL over trees, then, is not the projection of tiers, but rather how tier projection allows us to associate nodes in a dependency tree, which are just lexical items, with strings of other nodes, i.e. strings of other lexical items.
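The two-step pipeline can be made concrete with toy stand-ins for parse and stringify. These are drastic, entirely hypothetical simplifications (the real parse computes an actual dependency tree with tier projections); their only purpose is to show the shape of the computation string → tree-with-tiers → association table.

```python
# Toy stand-ins for the two functions in the text, for illustration only.

def parse(sentence):
    """Pretend parser: every word becomes a node; no real tiers are built."""
    return {"nodes": sentence.split(), "tiers": {"nom": [], "wh": []}}

def stringify(tree):
    """Flatten a tree-with-tiers into a table mapping each node to one
    daughter string per tier (here trivially empty)."""
    return {node: {"nom": "", "wh": ""} for node in tree["nodes"]}

# The grammar only ever inspects the output of stringify(parse(s));
# the intermediate tree is invisible to it.
table = stringify(parse("which car did John buy"))
```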

This might seem like a pointless mathematical exercise because we are still invoking tree structure in order to get our function stringify to work. But given two functions f and g, it is always possible to compose those two functions into a single function f ∘ g that directly maps every x to the output of f(g(x)). Crucially, this need not involve the intermediary output produced by g and fed into f. For example, if f(x) = x − 1 and g(x) = x + 1, then we can simply define f ∘ g as mapping every x to x — there is no need to first increment x by 1 as is done by g only to decrement it by 1 immediately afterwards in line with f. The possibility of expressing TSL dependencies over strings rather than over trees by feeding the output of parse into stringify tells us that there is some function stringify ∘ parse that can take as its input a string and immediately, in one fell swoop, tell us what strings of lexical items each lexical item is associated with.
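The numeric example can be verified directly. In the sketch below, compose is a generic helper (not anything from the paper): the composed function takes the indirect route through g's intermediate output, while the direct definition skips that detour entirely, yet the two agree on every input.

```python
# The example from the text: f(x) = x - 1, g(x) = x + 1.
def f(x):
    return x - 1

def g(x):
    return x + 1

def compose(outer, inner):
    """Generic composition: (outer . inner)(x) = outer(inner(x))."""
    return lambda x: outer(inner(x))

# Indirect route: compute g's intermediate output, then apply f.
f_after_g = compose(f, g)

# Direct route: f . g is simply the identity; no intermediate value is built.
def f_after_g_direct(x):
    return x
```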

We linguists have not yet figured out what this mystery function is or how we could define it without invoking tree structure at some point, and as a result this tree-free perspective on TSL over trees is not very useful. It may also be completely different from how TSL computations are actually represented in the human mind, either because they are stated over trees after all, or because the human mind uses yet another specification that subregular researchers have not discovered yet. Be that as it may, this alternative view of TSL establishes that there are many different ways of thinking of the formal class TSL, some of which do not invoke trees at all.


References

Abels, Klaus & Ad Neeleman. 2009. Universal 20 without the LCA. In José M. Brucart, Anna Gavarró & Jaume Solà (eds.), Merging features: Computation, interpretation, and acquisition, 60–79. Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199553266.003.0004

Bondevik, Ingrid, Dave Kush & Terje Lohndal. 2021. Variation in adjunct islands: The case of Norwegian. Nordic Journal of Linguistics 44. 223–254.

Brody, Michael. 2019. Some biolinguistic remarks. Acta Linguistica Academica 66. 335–348.

Burness, Phillip & Kevin McMullin. 2019. Efficient learning of output tier-based strictly 2-local functions. MoL 16. 78–90. doi:10.18653/v1/W19-5707

Burness, Phillip, Kevin McMullin & Jane Chandlee. 2021. Long-distance phonological processes as tier-based strictly local functions. Glossa 6. 1–37.

Chandlee, Jane. 2017. Computational locality in morphological maps. Morphology 27. 599–641.

Chandlee, Jane & Jeffrey Heinz. 2018. Strict locality and phonological maps. Linguistic Inquiry 49. 23–60.

Chomsky, Noam. 1986. Knowledge of language: Its nature, origin, and use. New York: Praeger.

Chomsky, Noam. 1995. The Minimalist program. Cambridge, MA: MIT Press.

Chomsky, Noam. 1998. Minimalist inquiries: The framework, vol. 15 of MIT occasional papers in linguistics. Cambridge, MA: MIT Press.

Chomsky, Noam. 2005. Three factors in language design. Linguistic Inquiry 36. 1–22.

De Santo, Aniello & Thomas Graf. 2019. Structure sensitive tier projection: Applications and formal properties. In Raffaella Bernardi, Gregory Kobele & Sylvain Pogodalla (eds.), Formal grammar, 35–50. Heidelberg: Springer. doi:10.1007/978-3-662-59648-7_3

Durvasula, Karthik. 2020. O gradience, whence do you come? Slides of a talk given at AMP 2020, September 20, 2020. Department of Linguistics: UC Santa Cruz.

Georgi, Doreen. 2017. Patterns of movement reflexes as the result of the order of merge and agree. Linguistic Inquiry 48. 585–626.

Goodman, Joshua. 1999. Semiring parsing. Computational Linguistics 25. 573–605.

Gorman, Kyle. 2013. Generative phonotactics. University of Pennsylvania Doctoral Dissertation.

Graf, Thomas. 2017. A computational guide to the dichotomy of features and constraints. Glossa 2. 1–36.

Graf, Thomas. 2020. Curbing feature coding: Strictly local feature assignment. SCiL 3. 362–371.

Graf, Thomas. 2022. Typological implications of tier-based strictly local movement. SCiL 5. 184–193.

Graf, Thomas & Aniello De Santo. 2019. Sensing tree automata as a model of syntactic dependencies. MoL 16. 12–26.

Graf, Thomas & Kalina Kostyszyn. 2021. Multiple wh-movement is not special: The subregular complexity of persistent features in Minimalist grammars. SCiL 4. 275–285.

Graf, Thomas, Monette James & Chong Zhang. 2017. Relative clauses as a benchmark for Minimalist parsing. Journal of Language Modelling 5. 57–106.

Hao, Yiding & Samuel Andersson. 2019. Unbounded stress in subregular phonology. SIGMORPHON 16. 135–143. doi:10.18653/v1/W19-4216

Hao, Yiding & Dustin Bowers. 2019. Action-sensitive phonological dependencies. SIGMORPHON 16. 218–228. doi:10.18653/v1/W19-4225

Heinat, Fredrik. 2006. Probes, pronouns and binding in the Minimalist program. University of Lund Doctoral Dissertation.

Ji, Jing & Jeffrey Heinz. 2020. Input strictly local tree transducers. In Language and automata theory and applications: 14th International conference, LATA 2020, Milan, Italy, vol. 12038 of LNCS, 369–381. doi:10.1007/978-3-030-40608-0_26

Jurgec, Peter. 2011. Feature spreading 2.0: A unified theory of assimilation. University of Tromsø Doctoral Dissertation.

Kayne, Richard S. 1994. The antisymmetry of syntax. Cambridge, MA: MIT Press.

Kobele, Gregory M., Sabrina Gerth & John T. Hale. 2013. Memory resource allocation in top-down Minimalist parsing. In Glyn Morrill & Mark-Jan Nederhof (eds.), Formal grammar: 17th and 18th International conferences, FG 2012, Opole, Poland, August 2012, Revised Selected Papers, FG 2013, Düsseldorf, Germany, August 2013, 32–51. Heidelberg: Springer. doi:10.1007/978-3-642-39998-5_3

Lambert, Dakotah, Jonathan Rawski & Jeffrey Heinz. 2021. Typology emerges from simplicity in representations and learning. Journal of Language Modelling 9. 151–194.

Mayer, Connor. 2021. Capturing gradience in long-distance phonology using probabilistic tier-based strictly local grammars. SCiL 4. 39–50.

McCloskey, James. 2001. The morphosyntax of wh-extraction in Irish. Journal of Linguistics 37. 67–100.

Pasternak, Robert & Thomas Graf. 2021. Cyclic scope and processing difficulty in a Minimalist parser. Glossa 6. 1–34.

Pullum, Geoffrey K. & James Rogers. 2006. Animal pattern-learning experiments: Some mathematical background. Radcliffe Institute for Advanced Study, Harvard University Ms.

Pullum, Geoffrey K. & Barbara C. Scholz. 2005. Contrasting applications of logic in natural language syntactic description. In Petr Hájek, Luis Valdés-Villanueva & Dag Westerståhl (eds.), Logic, methodology and philosophy of science, 481–503. London: College Publications.

Richards, Norvin. 2016. Contiguity theory. Cambridge, MA: MIT Press. doi:10.7551/mitpress/9780262034425.001.0001

Rogers, James & Geoffrey K. Pullum. 2011. Aural pattern recognition experiments and the subregular hierarchy. Journal of Logic, Language and Information 20. 329–342.

Savitch, Walter J. 1993. Why it might pay to assume that languages are infinite. Annals of Mathematics and Artificial Intelligence 8. 17–25.

Stepanov, Arthur. 2001. Late adjunction and Minimalist phrase structure. Syntax 4. 94–125.

Stepanov, Arthur. 2007. The end of CED? Minimalism and extraction domains. Syntax 10. 80–126.

Torr, John. 2017. Autobank: A semi-automatic annotation tool for developing deep Minimalist grammar treebanks. EACL 15. 81–86. doi:10.18653/v1/E17-3021

Zentz, Jason. 2015. Bantu Wh-agreement and the case against probe impoverishment. ACAL 44. 290–301.

Published Online: 2022-12-01
Published in Print: 2022-10-26

© 2022 Walter de Gruyter GmbH, Berlin/Boston
