The work reported in this paper was supported by the National Science Foundation under Grant No. BCS-1845344.
Appendix: TSL over trees without trees
This appendix illustrates that subregular classes like SL and TSL represent very abstract concepts that can be specified in numerous ways. In particular, it is even possible to have a tree-less view of TSL over trees. This addresses the remark of Brody (p. 199f) that “Graf talks here about daughter strings of a given node, and this way of talking contains the concept of ‘string’. But it crucially presupposes the tree structure: a node with a set of daughters.” It also illustrates that subregular linguistics does not put any restrictions on formalisms or metalanguages because it is the complexity of the underlying computation that matters, not how that computation is specified.
TSL over trees can be regarded as a system that associates every node in a dependency tree with one or more strings of other nodes in the tree. We visualize this process as the projection of tree tiers, and on each of those tiers we then check that every node has a licit string of daughters. But mathematically this is only a convenient metaphor for associating a node with a string of other nodes. Each tier corresponds to a specific association between nodes and strings of nodes (a node that is not projected onto the tier is associated with the empty string, whereas a node that is projected onto the tier but has no tier daughters is associated with a special string marking the absence of tier daughters). Instead of tiers, we can use a table to represent these associations, which is exemplified below.
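As a rough illustration of this association, consider the following sketch, where the toy dependency tree and the set of projected nodes are purely hypothetical assumptions chosen for the example. A projected node's tier daughters are its closest projected descendants in the tree:

```python
# A minimal sketch of one tier's node-to-string association.
# The tree and the set of projected labels are illustrative assumptions.
tree = {
    "will": ["John", "leave"],   # node -> list of daughter nodes
    "John": [],
    "leave": ["quickly"],
    "quickly": [],
}

def tier_daughters(node, projected):
    """Return the closest projected descendants of `node`."""
    result = []
    for d in tree[node]:
        if d in projected:
            result.append(d)
        else:
            result.extend(tier_daughters(d, projected))
    return result

projected = {"will", "leave"}    # this tier projects only these nodes

# The table of associations: unprojected nodes are associated with the
# empty string; projected nodes with their string of tier daughters.
# (Both kinds of "empty" association are shown as empty lists here for
# simplicity, though the formal definition keeps them distinct.)
table = {n: tier_daughters(n, projected) if n in projected else []
         for n in tree}
# table["will"] == ["leave"], table["leave"] == []
```

The resulting `table` plays the role of the tier: the grammar can inspect each node's associated string of daughters without the tier ever being drawn as a tree.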
Suppose then that we have a function parse that assigns every string a dependency tree with tree tiers, and a function stringify that takes as its input a dependency tree with tiers and converts it to a table of the form above. Given a string s, the output of stringify(parse(s)) is a table that encodes associations between each lexical item and some other lexical items in the sentence, and these associations are then checked by the grammar in the same manner that we employed over tiers. The mathematical core of TSL over trees, then, is not the projection of tiers, but rather the way tier projection associates nodes in a dependency tree, which are just lexical items, with strings of other nodes, i.e. strings of other lexical items.
This might seem like a pointless mathematical exercise because we are still invoking tree structure in order to get our function stringify to work. But given two functions f and g, it is always possible to compose them into a single function f ∘ g that directly maps every x to the output of f(g(x)). Crucially, this need not involve the intermediate output produced by g and fed into f. For example, if f(x) = x − 1 and g(x) = x + 1, then we can simply define f ∘ g as mapping every x to x; there is no need to first increment x by 1, as g does, only to decrement it by 1 immediately afterwards in line with f. The possibility of expressing TSL dependencies over strings rather than over trees by feeding the output of parse into stringify tells us that there is some function that takes a string as its input and immediately, in one fell swoop, tells us which strings of lexical items each lexical item is associated with.
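The composition argument can be sketched directly. The following is a generic illustration with the toy increment and decrement functions from the text, not an implementation of the actual parse and stringify functions:

```python
# A minimal sketch of function composition: compose(f, g) maps x
# straight to f(g(x)); no intermediate value needs to be inspected.
def compose(f, g):
    return lambda x: f(g(x))

f = lambda x: x - 1   # decrement
g = lambda x: x + 1   # increment

h = compose(f, g)     # extensionally equal to the identity function
# h(5) == 5
```

In the same way, composing stringify with parse yields a single function from strings to tables, even though neither a tree nor a tiered tree ever has to be produced as an intermediate object.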
We linguists have not yet figured out what this mystery function is or how to define it without invoking tree structure at some point, so this tree-free perspective on TSL over trees is not very useful at present. It may also be completely different from how TSL computations are actually represented in the human mind, either because those computations are stated over trees after all, or because the mind uses yet another specification that subregular researchers have not discovered yet. Be that as it may, this alternative view of TSL establishes that there are many different ways of thinking about the formal class TSL, some of which do not invoke trees at all.
Abels, Klaus & Ad Neeleman. 2009. Universal 20 without the LCA. In José M. Brucart, Anna Gavarró & Jaume Solà (eds.), Merging features: Computation, interpretation, and acquisition, 60–79. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199553266.003.0004.
Bondevik, Ingrid, Dave Kush & Terje Lohndal. 2021. Variation in adjunct islands: The case of Norwegian. Nordic Journal of Linguistics 44. 223–254. https://doi.org/10.1017/s0332586520000207.
Burness, Phillip, Kevin McMullin & Jane Chandlee. 2021. Long-distance phonological processes as tier-based strictly local functions. Glossa 6. 1–37. https://doi.org/10.16995/glossa.5780.
Chomsky, Noam. 1986. Knowledge of language: Its nature, origin, and use. New York: Praeger.
Chomsky, Noam. 1995. The Minimalist program. Cambridge, MA: MIT Press.
Chomsky, Noam. 1998. Minimalist inquiries: The framework, vol. 15 of MIT occasional papers in linguistics. Cambridge, MA: MIT Press.
De Santo, Aniello & Thomas Graf. 2019. Structure sensitive tier projection: Applications and formal properties. In Raffaella Bernardi, Gregory Kobele & Sylvain Pogodalla (eds.), Formal grammar, 35–50. Heidelberg: Springer. https://doi.org/10.1007/978-3-662-59648-7_3.
Durvasula, Karthik. 2020. O gradience, whence do you come? Slides of a talk given at AMP 2020, UC Santa Cruz, September 20, 2020.
Goodman, Joshua. 1999. Semiring parsing. Computational Linguistics 25. 573–605.
Gorman, Kyle. 2013. Generative phonotactics. University of Pennsylvania Doctoral Dissertation.
Graf, Thomas. 2020. Curbing feature coding: Strictly local feature assignment. SCiL 3. 362–371.
Graf, Thomas. 2022. Typological implications of tier-based strictly local movement. SCiL 5. 184–193.
Graf, Thomas & Aniello De Santo. 2019. Sensing tree automata as a model of syntactic dependencies. MoL 16. 12–26. https://doi.org/10.18653/v1/W19-5702.
Graf, Thomas & Kalina Kostyszyn. 2021. Multiple wh-movement is not special: The subregular complexity of persistent features in Minimalist grammars. SCiL 4. 275–285.
Graf, Thomas, Monette James & Chong Zhang. 2017. Relative clauses as a benchmark for Minimalist parsing. Journal of Language Modelling 5. 57–106. https://doi.org/10.15398/jlm.v5i1.157.
Heinat, Fredrik. 2006. Probes, pronouns and binding in the Minimalist program. University of Lund Doctoral Dissertation.
Ji, Jing & Jeffrey Heinz. 2020. Input strictly local tree transducers. In Language and automata theory and applications: 14th International conference, LATA 2020, Milan, Italy, vol. 12038 of LNCS, 369–381. https://doi.org/10.1007/978-3-030-40608-0_26.
Jurgec, Peter. 2011. Feature spreading 2.0: A unified theory of assimilation. University of Tromsø Doctoral Dissertation.
Kayne, Richard S. 1994. The antisymmetry of syntax. Cambridge, MA: MIT Press.
Kobele, Gregory M., Sabrina Gerth & John T. Hale. 2013. Memory resource allocation in top-down Minimalist parsing. In Glyn Morrill & Mark-Jan Nederhof (eds.), Formal grammar: 17th and 18th International conferences, FG 2012, Opole, Poland, August 2012, Revised Selected Papers, FG 2013, Düsseldorf, Germany, August 2013, 32–51. Heidelberg: Springer. https://doi.org/10.1007/978-3-642-39998-5_3.
Lambert, Dakotah, Jonathan Rawski & Jeffrey Heinz. 2021. Typology emerges from simplicity in representations and learning. Journal of Language Modelling 9. 151–194. https://doi.org/10.15398/jlm.v9i1.262.
Mayer, Connor. 2021. Capturing gradience in long-distance phonology using probabilistic tier-based strictly local grammars. SCiL 4. 39–50.
Pullum, Geoffrey K. & James Rogers. 2006. Animal pattern-learning experiments: Some mathematical background. Radcliffe Institute for Advanced Study, Harvard University Ms.
Pullum, Geoffrey K. & Barbara C. Scholz. 2005. Contrasting applications of logic in natural language syntactic description. In Petr Hájek, Luis Valdés-Villanueva & Dag Westerståhl (eds.), Logic, methodology and philosophy of science, 481–503. London: College Publications.
Rogers, James & Geoffrey K. Pullum. 2011. Aural pattern recognition experiments and the subregular hierarchy. Journal of Logic, Language and Information 20. 329–342. https://doi.org/10.1007/s10849-011-9140-2.
Savitch, Walter J. 1993. Why it might pay to assume that languages are infinite. Annals of Mathematics and Artificial Intelligence 8. 17–25. https://doi.org/10.1007/bf02451546.
Zentz, Jason. 2015. Bantu Wh-agreement and the case against probe impoverishment. ACAL 44. 290–301.
© 2022 Walter de Gruyter GmbH, Berlin/Boston