Closure properties in the class of multiple context-free groups

Robert P. Kropholler 1  and Davide Spriano 2
  • 1 Tufts University, Medford, USA
  • 2 ETH Zurich, Zurich, Switzerland

Abstract

We show that the class of groups with k-multiple context-free word problem is closed under graphs of groups with finite edge groups.

1 Introduction

Multiple context-free languages (MCFLs) form a class of languages which contains the context-free languages and is contained in the context-sensitive languages. MCFLs were introduced to better model natural languages, after it was shown that context-free languages do not provide enough expressive power for this purpose [9]. MCFLs allow some cross-serial dependencies in natural languages such as Swiss German; for nice examples, see [10]. They share several properties with context-free languages. Indeed, they form a cone of languages, they are semilinear, they are not closed under intersection and they satisfy a form of pumping lemma [12]. MCFLs also have some useful decidability properties; for instance, one can decide membership in polynomial time [12].

Given a presentation for a group G, it is a natural question to ask whether two words represent the same element in G. Using the elementary fact that g = h if and only if gh⁻¹ = 1, this is equivalent to establishing whether a given product of generators represents the identity element. One of the most successful strategies for tackling this question is to consider the set of all words that represent the trivial element, the so-called word problem, and to study it with language-theoretic tools. A remarkable result of Muller and Schupp [8], which relies on results of Stallings and Dunwoody [13, 3], shows that the class of groups that have context-free word problem coincides with the class of virtually free groups.

With a complete classification of groups whose word problem is context-free, it is natural to look at larger classes. We will be interested in the class of multiple context-free languages (MCFLs); we will give a rigorous definition of this class in due course. The class was first studied in [12]. The class of MCFLs is strictly larger than the class of CF languages. For example, the language {a^n b^n c^n ∣ n ∈ ℕ} is MCF but not CF. It was not until [11] that it was known that the difference could be seen on the level of groups. Namely, [11] shows that the word problem of ℤ² is multiple context-free. However, since ℤ² is not a virtually free group, its word problem is not context-free. This result has been extended by Ho [5], where it is shown that all finitely generated free abelian groups have multiple context-free word problem.

It is then natural to ask what the closure properties of this class are. It is shown in [12] that the class is closed under finite extensions and under taking finitely generated subgroups. It is shown in [4] that the class is not closed under direct products.

In this paper, we prove the following result.

Theorem A.

Let G be the fundamental group of a finite graph of groups. Assume that all the vertex groups have multiple context-free word problem and all the edge groups are finite. Then G has multiple context-free word problem.

Since the class of groups with regular word problem coincides with the class of finite groups, one could rephrase this result as saying that the class of MCF groups is closed under amalgamation over groups with regular word problem. This result is no longer true if regular groups are replaced by CF groups. Indeed, F2×F2 = (ℤ² ∗_ℤ ℤ²) ∗_{F2} (ℤ² ∗_ℤ ℤ²) (note that ℤ² ∗_ℤ ℤ² ≅ F2×ℤ), and F2×F2 does not have multiple context-free word problem [4].

2 Background

We are interested in the study of formal languages. In this section, we will give an introduction to formal languages and MCFLs. For a more comprehensive treatment, we refer to [6].

Definition 2.1.

Given a finite set Σ, Σ* is the free monoid over Σ, i.e., the set of all finite words in Σ with the concatenation operation. We will denote by ε the trivial element of Σ*, namely, the empty word.

Definition 2.2.

Given a finite set Σ, we say that a set L ⊆ Σ* is a language over Σ.

Since the definition of a language is very broad, we will restrict our attention to languages that have a nice description. The reader should think of this as analogous to the meta-distinction between arbitrary continuous functions and those continuous functions that can be expressed in terms of elementary functions.

Hence we want to prescribe a general recipe that will allow us to produce languages.

Chomsky grammars and hierarchy

Definition 2.3.

A Chomsky grammar G is a tuple (Σ,N,δ,S), where Σ and N are (disjoint) finite sets, S ∈ N and δ is a finite subset of ((Σ∪N)* ∖ Σ*) × (Σ∪N)*. Namely, if (x,y) ∈ δ, then x contains at least one symbol of N. We call Σ the set of terminals of G, N the set of non-terminals, S the starting symbol and δ the production rules.

Notation.

We will often use the following conventions: the elements of Σ will be denoted by lowercase letters (e.g., {a,b,c}), the elements of N by uppercase letters (e.g., {A,B,S}), and an element τ = (aB, BccA) of δ will be written as τ : aB → BccA.

Given a grammar G = (Σ,N,S,δ), it is always possible to associate to it a (possibly empty) language L(G) ⊆ Σ*. We will describe the language L(G) inductively.

Definition 2.4.

Let G = (Σ,N,S,δ) be a grammar. We want to describe a subset D(G) ⊆ (Σ∪N)* of derivable words.

  1. S is derivable.
  2. For u,v,w ∈ (Σ∪N)*, if uvw is derivable and the rule v → x is an element of δ, then uxw is derivable. In particular, we say that uxw is derivable from uvw.

We say that a derivation (for wk) is a chain of words S = w1, …, wk such that wi+1 is derivable from wi. The language associated to the grammar G is the intersection L(G) = D(G) ∩ Σ*, namely, all the derivable words that consist only of terminal symbols.

Example 2.5.

Let G=({a,b,c},{A,B,S},S,δ) be a grammar, where δ consists of the following rules:

  1. τ1 : S → AB,
  2. τ2 : A → aAb,
  3. τ3 : B → ABc,
  4. τ4 : A → ε,
  5. τ5 : B → ε.

To generate the language L(G), we will try to understand the derivable words. We start with the symbol S. The only rule we can apply at the first step is τ1, yielding AB. Then we can substitute A with aAb, using rule τ2, getting aAbB. Applying τ2 a further k−1 times gives a^k A b^k B. Rule τ4 gives a^k b^k B. Now if we apply rule τ3, we will get a^k b^k ABc. We can repeat the process above and get words of the form a^{k1} b^{k1} ⋯ a^{kn} b^{kn} B c^m. After applying rule τ5, we would get a^{k1} b^{k1} ⋯ a^{kn} b^{kn} c^m, which is a string composed of terminal symbols only.
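
The derivation process is mechanical enough to simulate. The following minimal sketch (ours, not from the paper) enumerates the short terminal words of this grammar by a breadth-first search over derivable words; the pruning is justified by the fact that, for this particular grammar, the quantity len(w) − #non-terminals never decreases along a derivation. All identifiers are our own.

```python
from collections import deque

rules = [
    ("S", "AB"),   # tau_1
    ("A", "aAb"),  # tau_2
    ("B", "ABc"),  # tau_3
    ("A", ""),     # tau_4: A -> epsilon
    ("B", ""),     # tau_5: B -> epsilon
]

def nonterminals(w):
    return sum(1 for ch in w if ch.isupper())

def terminal_words(rules, start="S", max_len=6):
    """All words of L(G) of length at most max_len.

    For this grammar, len(w) - nonterminals(w) never decreases along a
    derivation, so words where it already exceeds max_len can be pruned.
    """
    seen = {start}
    queue = deque([start])
    found = set()
    while queue:
        w = queue.popleft()
        if nonterminals(w) == 0:       # only terminal symbols remain
            found.add(w)
            continue
        for lhs, rhs in rules:
            i = w.find(lhs)
            while i != -1:             # try every occurrence of lhs
                new = w[:i] + rhs + w[i + 1:]
                if new not in seen and len(new) - nonterminals(new) <= max_len:
                    seen.add(new)
                    queue.append(new)
                i = w.find(lhs, i + 1)
    return sorted(found, key=lambda u: (len(u), u))

print(terminal_words(rules))  # ['', 'c', 'ab', 'cc', 'abc', 'ccc', 'aabb', ...]
```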

We now give a classification of some grammars.

Definition 2.6.

A Chomsky grammar G=(Σ,N,δ,S) is called

  1. regular if all the elements of δ have the form X → wY, where X ∈ N, Y ∈ N ∪ {ε} and w ∈ Σ*;
  2. context-free if all the elements of δ have the form X → w, where X ∈ N and w ∈ (Σ∪N)*;
  3. unrestricted otherwise.

The language L(G) is regular (respectively, context-free or recursively enumerable) if G is regular (respectively, context-free or unrestricted).

The intuitive idea that one should have about the above definition is the following: a derivation in a regular grammar consists of substituting the last symbol of a word with a new string of letters. A derivation in a context-free grammar consists of substituting a single symbol (but not necessarily the last one) of a word with a new string of letters. The last case covers all other possibilities.

The gap between being context-free and being recursively enumerable seems (and in fact is) very big. The class of multiple context-free languages (MCFLs) that we are going to describe is one of the classes that lives properly in this gap; namely, it properly contains the class of context-free languages and is properly contained in the class of recursively enumerable languages [12].

As before, we are going to describe a grammar formalism that defines the class of MCFLs. It should be noted that this will not be a Chomsky grammar. We start with the definition of a linear rewriting function. The idea is very simple, but the definition may look a bit convoluted. Intuitively, a linear rewriting function is a function that “pastes words together”, possibly adding some strings of letters. For instance, if a,b are letters and v,w words, a linear rewriting function is (v,w) ↦ waabvb.

Definition 2.7.

Fix a finite alphabet Σ, and let X = {x1,…,xn} be a finite (possibly empty) set of variables. A rewriting on the variables {x1,…,xn} is a word w ∈ (X∪Σ)*. We say that a rewriting w is linear if each element of X occurs at most once.

Given a rewriting w, we can associate to it the function fw : (Σ*)^n → Σ* that associates to each tuple (u1,…,un) the word obtained by substituting in w each occurrence of xi with ui. If n = 0, then (Σ*)^0 = {ε} and fw is the constant function w. A rewriting function is linear if it comes from a linear rewriting.

We say that a function f : (Σ*)^n → (Σ*)^m is a (multiple) rewriting function if it is a rewriting function in each component. A (multiple) rewriting function coming from rewritings w1,…,wm is linear if the concatenation w1⋯wm is linear.

Note that being linear in each component is not enough for a multiple rewriting function to be linear. In fact, the whole word w1⋯wm must be linear; this implies that each variable xi appears in at most one of the wj. In order to simplify notation, from now on, we will call multiple rewriting functions simply rewriting functions.
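
For instance, the rewriting behind the map (v,w) ↦ waabvb above is the word x2aabx1b. Here is a minimal sketch (ours) of applying a rewriting to a tuple of words; the helper name apply_rewriting is our own.

```python
def apply_rewriting(rewriting, variables, args):
    """Substitute args[i] for each occurrence of variables[i]."""
    lookup = dict(zip(variables, args))
    return "".join(lookup.get(symbol, symbol) for symbol in rewriting)

rewriting = ["x2", "a", "a", "b", "x1", "b"]   # linear: x1, x2 occur once each
print(apply_rewriting(rewriting, ["x1", "x2"], ["v", "w"]))   # waabvb
```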

Definition 2.8.

A stratified set is a set N equipped with a function dim : N → ℕ ∖ {0}. The function dim is called a dimension.

Definition 2.9.

A multiple context-free grammar (MCFG) on an alphabet Σ is a tuple (Σ,N,S,F) satisfying the following:

  1. Σ is a finite set of terminals.
  2. N is a finite stratified set of non-terminals.
  3. S ∈ N is the starting symbol, and dim(S) = 1.
  4. F is a finite set of elements of the form (A,f,B1,…,Bs), where A,B1,…,Bs are elements of N and f : (Σ*)^{dim(B1)+⋯+dim(Bs)} → (Σ*)^{dim(A)} is a linear rewriting function.

Given an element τ = (A,f,B1,…,Bs) of F, we will denote it by τ = A → f(B1,…,Bs). We say that the grammar is k-MCF if dim(A) ≤ k for all A ∈ N.

As in the case of Chomsky grammars, given a MCFG H, we want to associate a language L(H) to it.

Definition 2.10.

Let H = (Σ,N,S,F) be a MCFG, and let A ∈ N. We inductively define DH(A) ⊆ (Σ*)^{dim(A)} as follows: for each τ ∈ F,

  1. if τ = A → f(ε) (that is, s = 0), then f(ε) ∈ DH(A);
  2. if τ = A → f(B1,…,Bs) and y1 ∈ DH(B1), …, ys ∈ DH(Bs), then f(y1,…,ys) ∈ DH(A).

Definition 2.11.

For a MCFG H = (Σ,N,S,F), we define the language associated to H as DH(S). We say that a language L is a multiple context-free language if there is a MCFG H such that L = DH(S).
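
To make the definition concrete, here is a minimal sketch (ours, not from the paper) of one possible 2-MCFG for the language {a^n b^n c^n ∣ n ∈ ℕ} from the introduction: a non-terminal A of dimension 2 derives the pairs (a^n, b^n c^n), and S of dimension 1 concatenates them. The rule encoding and the names mcfg_rules and derive are our own.

```python
from itertools import product

# Each rule is (head, function, body): the function is a linear rewriting,
# taking one tuple per body non-terminal and returning a tuple for the head.
mcfg_rules = [
    ("A", lambda: ("", ""), ()),                                 # A -> (eps, eps)
    ("A", lambda xy: ("a" + xy[0], "b" + xy[1] + "c"), ("A",)),  # wrap the pair
    ("S", lambda xy: (xy[0] + xy[1],), ("A",)),                  # concatenate
]

def derive(rules, max_len=9):
    """Compute D_H(X) for every non-terminal X, keeping only tuples of total
    length at most max_len (safe here: no rule shrinks its arguments)."""
    D = {}
    changed = True
    while changed:
        changed = False
        for head, f, body in rules:
            for args in product(*(D.get(B, set()) for B in body)):
                t = f(*args)
                if sum(len(c) for c in t) <= max_len and t not in D.setdefault(head, set()):
                    D[head].add(t)
                    changed = True
    return D

D = derive(mcfg_rules)
print(sorted((w for (w,) in D["S"]), key=len))  # ['', 'abc', 'aabbcc', 'aaabbbccc']
```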

3 Grammars and automata

The goal of this section is to explain the relation between grammars and automata. In what follows, an automaton should be thought of as a “computer with limitations”, namely as a machine that can do some operations, but does not possess the power (usually memory) of a Turing machine. As in the case of grammars, an automaton is naturally associated to a language. The intuitive explanation for this is the following: an automaton is associated to an algorithm that, given a word, either “accepts” or “rejects” it. The language associated to an automaton is the set of all “accepted” words.

In what follows, we fix a finite alphabet Σ, and all the definitions are understood to depend on Σ. Recall that a partial function f : A → B is a map of sets defined on a subset C ⊆ A, called the domain of f.

Definition 3.1.

A storage type is a tuple T = (C,P,F,CI) satisfying the following: C is a set, called the set of storage configurations; P is a subset of the power set 𝒫(C), and the elements of P are called predicates; F is a set of partial functions f : C → C, called instructions; and CI ⊆ C is a set of initial configurations.

Definition 3.2.

An automaton with storage is a tuple 𝒜 = (Q,T,I,δ), where Q is a finite set of states, T = (C,P,F,CI) is a storage type and I is a tuple I = (qI,cI,QF), where qI ∈ Q is the initial state, QF ⊆ Q is the set of final states and cI ∈ CI is the initial storage configuration. Finally, δ ⊆ Q × (Σ∪{ε}) × P × F × Q is a finite set of transitions.

Definition 3.3.

Given an automaton with storage 𝒜 = (Q,T,I,δ), we define the graph realisation of 𝒜, denoted by Γ(𝒜), as the following oriented labelled graph:

  1. The vertices of Γ(𝒜) are the elements of Q×C.
  2. To each τ = (q1,σ,p,f,q2) ∈ δ, we associate an oriented edge from (q1,c1) to (q2,c2) if c1 ∈ p and f(c1) = c2. In that case, the label of this edge is σ.

Note that f is a partial function, so with f(c1)=c2 we are also asking that c1 is in the domain of f.

Definition 3.4.

Let Σ be an alphabet, and let g : (Σ∪{ε})* → Σ* be the morphism of monoids that sends ε to the empty word and is the identity on all the other generators. Given a word w ∈ Σ*, we say that a word w′ ∈ (Σ∪{ε})* is an ε-expansion of w if g(w′) = w.

Definition 3.5.

Given an automaton with storage 𝒜, we define a language L(𝒜) ⊆ Σ* as follows. A word w is in L(𝒜) if and only if there is an oriented path γ in Γ(𝒜) starting from (qI,cI) and ending in a vertex (q,c) with q ∈ QF such that the word formed by the labels of γ is an ε-expansion of w.

In order to improve the readability of the above definitions, we will provide a fairy-tale example to clarify the role of the various entities above.

Imagine there is a group of children playing a treasure hunt in a town. The town is finite (as towns tend to be), and each block of the town is one of the states Q. The children possess an extremely bad memory, but luckily each of them is equipped with a book to write notes in. The set C consists of all possible books with all possible contents, opened to any page. The set P contains some descriptions of the state of the book, for example “the set of all books open on a blank page” or “all books open to the 12th page”.

Now suppose that there is a voice guiding the game in order to help the children find the treasure; in particular, every now and then it reads out loud some hint (a letter of the alphabet Σ). The sequence of hints read by the voice represents the word w in the alphabet. When a hint (letter) is read, the children will perform an action, and the possible actions are encoded in the set δ.

At the start of the game, the children will all be in the central block of the city (qI), with an empty book open on the first page (cI), and the treasures will be buried in some blocks (QF) of the city. The typical turn will work as follows: every child will check which block they are standing on (an element of Q), then listen to what the voice is saying (an element of Σ), and look at whether there is something written in the book (an element of P). Then each child decides which strategy to apply on that turn (i.e., picks an element of δ) compatible with the information from Q, Σ and P. Following such a strategy, they may change page or write something in the book (an element of F) and go to a new block (an element of Q) accordingly. If, at any time, a child cannot perform an action, then he or she is disqualified from the game. When the voice stops giving hints, each child will start digging exactly where they stand and see if a treasure is found.

If at least one child has found a treasure, then the instructions were correct (and hence the word w is accepted).

Let us start with some famous automata in order to familiarise ourselves with the above concepts.

Definition 3.6.

A trivial storage is a storage type T = (C,P,F,CI) with C = CI a single configuration, P = {C} and F = {𝚒𝚍}.

Definition 3.7.

A finite state automaton (FSA) is an automaton with storage with trivial storage.

It is a very easy exercise to see that an FSA is completely described by a finite oriented graph with edges labelled by elements of Σ (and not Σ ∪ {ε}).
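
Definitions 3.1–3.5 translate almost directly into a search over the graph realisation. The following minimal sketch (ours, not the paper's) simulates an automaton with storage by breadth-first search and instantiates it with trivial storage, i.e., as an FSA. Predicates are boolean functions, instructions are partial functions returning None outside their domain, and all identifiers are our own.

```python
from collections import deque

def accepts(word, transitions, q_init, c_init, finals):
    """Breadth-first search for an accepting path in the graph realisation.

    Transitions are tuples (q1, letter, predicate, instruction, q2);
    letter == "" plays the role of epsilon. Configurations must be hashable.
    """
    start = (q_init, c_init, 0)            # third entry: letters consumed
    seen, queue = {start}, deque([start])
    while queue:
        q, c, i = queue.popleft()
        if i == len(word) and q in finals:
            return True
        for q1, letter, pred, instr, q2 in transitions:
            if q1 != q or not pred(c):
                continue
            j = i + len(letter)
            if word[i:j] != letter:        # fails only for real letters
                continue
            c2 = instr(c)
            if c2 is None:                 # instruction undefined at c
                continue
            nxt = (q2, c2, j)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# With the trivial storage of Definitions 3.6-3.7 this is exactly an FSA;
# here: words over {a, b} containing an even number of a's.
always, ident = (lambda c: True), (lambda c: c)
fsa = [("even", "a", always, ident, "odd"),
       ("odd",  "a", always, ident, "even"),
       ("even", "b", always, ident, "even"),
       ("odd",  "b", always, ident, "odd")]
print(accepts("abab", fsa, "even", (), {"even"}))   # True: two a's
print(accepts("abaab", fsa, "even", (), {"even"}))  # False: three a's
```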

The following theorem forms a bridge between languages associated to grammars, and languages accepted by automata.

Theorem 3.8 ([6]).

For a language LΣ*, the following are equivalent:

  1. L is associated to a regular grammar;
  2. L is accepted by an FSA.

Definition 3.9.

A push-down storage over a finite alphabet Ω is a storage type T=(C,P,F,CI), where the following holds:

  1. C=Ω*.
  2. We define the set 𝚎𝚚𝚞𝚊𝚕𝚜(ω) as the set of words in Ω* that end with ω (note that 𝚎𝚚𝚞𝚊𝚕𝚜(ε) is the set {ε}). Then P = {𝚎𝚚𝚞𝚊𝚕𝚜(ω) ∣ ω ∈ Ω ∪ {ε}}.
  3. We define the function 𝚙𝚞𝚜𝚑(ω) : Ω* → Ω* that sends x to xω. Furthermore, we define a partial function 𝚙𝚘𝚙ω : 𝚎𝚚𝚞𝚊𝚕𝚜(ω) → Ω* that sends xω to x. Then F = {Id} ∪ {𝚙𝚘𝚙ω, 𝚙𝚞𝚜𝚑(ω) ∣ ω ∈ Ω}.
  4. CI={ε}.

The intuitive idea behind the push-down storage is to have a stack of papers that can grow arbitrarily large, but the automaton can read only what is written on the top-most paper. This corresponds to the predicate 𝚎𝚚𝚞𝚊𝚕𝚜(ω). Then one can put another paper with the letter ω written on it on top (𝚙𝚞𝚜𝚑(ω)) or remove the old one (𝚙𝚘𝚙ω). Note that the alphabet Ω is, in general, not the same as Σ.
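
To make the storage concrete, here is a minimal sketch (ours) of the push-down storage primitives of Definition 3.9: configurations are words over Ω represented as plain strings, and the partial function pop returns None outside its domain 𝚎𝚚𝚞𝚊𝚕𝚜(ω).

```python
def equals(omega):
    """Predicate: words ending in omega; equals('') is just {''}."""
    if omega == "":
        return lambda c: c == ""
    return lambda c: c.endswith(omega)

def push(omega):
    return lambda c: c + omega

def pop(omega):
    return lambda c: c[: -len(omega)] if c.endswith(omega) else None

# A stack of papers of which only the top-most one can be read:
c = ""                      # the initial configuration, C_I = {""}
c = push("x")(c)            # "x"
c = push("y")(c)            # "xy"
print(equals("y")(c))       # True: the top-most symbol is y
c = pop("y")(c)             # back to "x"
print(pop("y")(c))          # None: pop_y is undefined on "x"
```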

Definition 3.10.

A push-down automaton is an automaton with storage with push-down storage.

Theorem 3.11 ([1]).

For a language LΣ*, the following are equivalent:

  1. L is associated to a context-free grammar;
  2. L is accepted by a push-down automaton.

We now want to describe the last automaton we are interested in, namely, the tree-stack automaton.

Definition 3.12.

Let S be a set. If uv ∈ S*, we say that u is a prefix of uv. Given a set D ⊆ S*, we say that D is prefix-closed if, for each word w ∈ D, all the prefixes of w are in D. Similarly, we say that v is a suffix of uv.

Definition 3.13.

Given an alphabet Ω, an Ω-tree is a partial function T : ℕ* → Ω ∪ {∗} such that T⁻¹(∗) = {ε} and domain(T) ⊆ ℕ* is prefix-closed.

Note that this corresponds to a rooted tree in the usual graph-theoretic sense, where each edge is labelled by a natural number, the root is labelled by the symbol ∗ and every other vertex is labelled by an element of Ω.

Definition 3.14.

An Ω-tree with a pointer is a pair (T,p) such that T is an Ω-tree and p ∈ domain(T).

One should think of the pointer as a selected vertex of the tree. Figure 1 may provide some clarification.

Figure 1: Graphic representation of (T,21).

Notation.

Let F : C → X be a partial function, and let c ∉ domain(F). Then we define F[c→x] as the partial function defined on domain(F) ∪ {c} that agrees with F on domain(F) and sends c to x.

Definition 3.15.

A tree-stack storage over a finite alphabet Ω is a storage type T=(C,P,F,CI), where the following holds:

  1. C = {(T,p) ∣ (T,p) is an Ω-tree with pointer}.
  2. For ω ∈ Ω ∪ {∗}, we set 𝚎𝚚𝚞𝚊𝚕𝚜(ω) = {(T,p) ∈ C ∣ T(p) = ω} and 𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(ω) = {(T,p) ∈ C ∣ T(p) ≠ ω}. Then P = {𝚎𝚚𝚞𝚊𝚕𝚜(ω), 𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(ω) ∣ ω ∈ Ω ∪ {∗}} ∪ {C}.
  3. For n ∈ ℕ and γ ∈ Ω, we define the following partial functions:
    1. 𝚙𝚞𝚜𝚑n(γ) : {(T,p) ∣ pn ∉ domain(T)} → C as the map (T,p) ↦ (T[pn→γ], pn),
    2. 𝚞𝚙n : {(T,p) ∣ pn ∈ domain(T)} → C as the map (T,p) ↦ (T,pn),
    3. 𝚍𝚘𝚠𝚗 : C ∖ 𝚎𝚚𝚞𝚊𝚕𝚜(∗) → C as the map that sends (T,pm) ↦ (T,p), for m ∈ ℕ,
    4. 𝚜𝚎𝚝γ : C ∖ 𝚎𝚚𝚞𝚊𝚕𝚜(∗) → C as the map that sends (T,p) to (T′,p), where T′ is obtained from T by changing the value at p to γ.
    Then F = {Id, 𝚙𝚞𝚜𝚑n(γ), 𝚞𝚙n, 𝚍𝚘𝚠𝚗, 𝚜𝚎𝚝γ ∣ γ ∈ Ω, n ∈ ℕ}.
  4. CI = {(T0,ε)}, where T0 is the Ω-tree whose domain is {ε} (the one-vertex tree).

One should note that the command 𝚙𝚞𝚜𝚑n(γ) can only be used if there is no branch labelled n emanating from the vertex p.
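
The tree-stack operations can be realised directly. In the minimal sketch below (ours, not the paper's), a configuration is a pair (T, p), where T is a dictionary mapping addresses (tuples of naturals) to labels, with the root labelled "@" standing in for the symbol ∗; partial functions return None outside their domain.

```python
def initial():
    return ({(): "@"}, ())            # C_I: one-vertex tree, pointer at root

def push(n, gamma):
    def f(conf):
        T, p = conf
        if p + (n,) in T:             # push_n may only open a *new* branch
            return None
        T2 = dict(T)
        T2[p + (n,)] = gamma
        return (T2, p + (n,))
    return f

def up(n):
    def f(conf):
        T, p = conf
        return (T, p + (n,)) if p + (n,) in T else None
    return f

def down(conf):
    T, p = conf
    return (T, p[:-1]) if p != () else None   # undefined at the root

def set_label(gamma):
    def f(conf):
        T, p = conf
        if p == ():                   # the root label cannot be changed
            return None
        T2 = dict(T)
        T2[p] = gamma
        return (T2, p)
    return f

conf = push(1, "t")(initial())        # grow a branch labelled t
conf = push(1, "t")(conf)             # and a second t above it
conf = down(conf)                     # move the pointer back down one step
print(conf[1], conf[0][(1, 1)])       # (1,) t  -- the branch persists
print(push(1, "t")(conf))             # None: branch 1 already exists here
```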

Notation.

For a subset F of Ω, we will write 𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(F) to indicate the finite union of {𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(ω) ∣ ω ∈ F}. In particular, if we have the command (q,a,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(F),f,q′), this will indicate the finite set of rules {(q,a,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(ω),f,q′) ∣ ω ∈ F}.

Definition 3.16.

A tree-stack automaton is an automaton with storage with tree-stack storage.

Definition 3.17.

We say that a tree-stack automaton 𝒜 is k-restricted if, for any p ∈ ℕ*, any n ∈ ℕ and any path in the graph realisation Γ(𝒜) starting at (qI,cI), the following holds: there are at most k edges of the form (q1,(T1,p)) → (q2,(T2,pn)), where q1,q2 ∈ Q and T1,T2 are tree-stacks.

Intuitively, Definition 3.17 states that every vertex in the tree-stack can be accessed from below a uniformly finite number of times. We will see in Lemma 4.3 that this is equivalent to the fact that each vertex in the tree-stack is only accessed for a uniformly bounded amount of time.

Theorem 3.18 ([2]).

For a language LΣ*, the following are equivalent:

  1. L is associated to a k-MCFG;
  2. L is accepted by a k-restricted tree-stack automaton.

Definition 3.19.

A tree-stack automaton 𝒜 is cycle-free if, for every non-trivial loop in the graph realisation Γ(𝒜), there is at least one push, up or down command.

Lemma 3.20 ([2]).

Given a k-restricted tree-stack automaton M, there exists a k-restricted tree-stack automaton M′ such that L(M) = L(M′) and M′ is cycle-free.

It is true that a 1-restricted tree-stack automaton is equivalent to a push-down automaton. It is tempting to think that this equivalence can be realised by simply taking the stack of the push-down automaton as the tree-stack, using a push each time a push command is issued and a down each time a pop command is issued. However, this cannot be done with finitely many branches at each vertex.

We may try to subvert this issue by replacing subsequent push commands with up commands; however, the resulting automaton will quickly fail to be 1-restricted (it will in fact not be k-restricted for any k).

We include here a method for associating a tree-stack automaton to a push-down automaton. We mimic the pop command as follows. Observe that there are no up commands in the automaton we construct. This means that if we are at a vertex v and a down command is performed, the whole branch above v is no longer accessible, mimicking the fact that the symbols of that branch were removed from the stack. To be sure that this does not cause loss of information, all the down commands performed at a vertex labelled by a letter of the alphabet Ω correspond to pop commands of the original automaton.

If one collapses all the edges labelled with a 0, we arrive at the tree one would get if each push command added a new edge at a vertex and each pop command moved down in the tree.

This requires us to open a new branch of the tree each time a new push command is issued. This is the purpose of the symbols ω̄: they allow new branches to be opened with dummy symbols.

Example 3.21.

Let ℳ = (Q,T,I,δ) be a push-down automaton over a finite alphabet Ω. We want to define a 1-restricted tree-stack automaton 𝒩 such that L(ℳ) = L(𝒩).

We define 𝒩 = (Q′,T′,I′,δ′) to be the following tree-stack automaton.

  1. For each element ω ∈ Ω, let ω̄ be an extra symbol. Then Q′ = Q ∪ {(ω̄,q) ∣ ω ∈ Ω, q ∈ Q}.
  2. T′ is the tree-stack storage with respect to an Ω′-tree, where Ω′ = Ω ∪ {ω̄ ∣ ω ∈ Ω}.
  3. The initial and final states of I′ are the same as those of I (this makes sense because Q ⊆ Q′).
  4. δ′ will be the set containing the following instructions:
    1. (a) For each rule τ = (q1,σ,p,f,q2) ∈ δ, there is a corresponding rule τ′ = (q1,σ,p′,f′,q2′) ∈ δ′ as follows. If p = 𝚎𝚚𝚞𝚊𝚕𝚜(ω), then p′ = 𝚎𝚚𝚞𝚊𝚕𝚜({ω,ω̄}) (note that those predicates have the same names, but are subsets of different power sets). Similarly, if p represents the whole set of configurations of the push-down storage, then p′ will represent the whole set of configurations of the tree-stack storage. If f = 𝚙𝚞𝚜𝚑(ω), then f′ = 𝚙𝚞𝚜𝚑0(ω̄) and q2′ = (ω̄,q2). If f = 𝚙𝚘𝚙ω, then f′ = 𝚍𝚘𝚠𝚗 and q2′ = q2.
    2. (b) For each state (ω̄,q), we have the instruction ((ω̄,q),ε,C,𝚙𝚞𝚜𝚑1(ω),q).
    3. (c) For every q ∈ Q and ω ∈ Ω, we have the instruction (q,ε,𝚎𝚚𝚞𝚊𝚕𝚜(ω̄),𝚍𝚘𝚠𝚗,q).

We also include an application of this example to give a tree-stack automaton which recognises the word problem in ℤ = ⟨t⟩. This case is easier, and so we have slimmed down the commands used.

Example 3.22.

Define a tree-stack automaton as follows. The states of the automaton are Q = {S,qf,qt,qT}; Σ is the alphabet {t,T}. We define T to be the tree-stack storage with alphabet {t,T,#}, where # is a dummy symbol, and S is the start state, with the one-vertex tree as the start configuration. The final states are QF = {qf}. The set of commands δ consists of

(S,t,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(T),𝚙𝚞𝚜𝚑0(#),qt),
(qt,ε,C,𝚙𝚞𝚜𝚑1(t),S),
(S,T,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(t),𝚙𝚞𝚜𝚑0(#),qT),
(qT,ε,C,𝚙𝚞𝚜𝚑1(T),S),
(S,ε,𝚎𝚚𝚞𝚊𝚕𝚜(#),𝚍𝚘𝚠𝚗,S),
(S,t,𝚎𝚚𝚞𝚊𝚕𝚜(T),𝚍𝚘𝚠𝚗,S),
(S,T,𝚎𝚚𝚞𝚊𝚕𝚜(t),𝚍𝚘𝚠𝚗,S),
(S,ε,𝚎𝚚𝚞𝚊𝚕𝚜(∗),Id,qf).

This automaton accepts the words that contain an equal number of the letter t and the letter T, which coincides with the word problem in ℤ = ⟨t⟩ (the letter T plays the role of t⁻¹).
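
The following is a minimal sketch (ours, not from the paper) that simulates this automaton nondeterministically by breadth-first search over its graph realisation. Trees are stored as frozensets of (address, label) pairs so that configurations are hashable; "*" stands for the root symbol and "#" for the dummy symbol, and all identifiers are our own.

```python
from collections import deque

def push(tree, p, n, gamma):
    """push_n(gamma): defined only if the branch p+(n,) does not exist yet."""
    child = p + (n,)
    T = dict(tree)
    if child in T:
        return None
    T[child] = gamma
    return frozenset(T.items()), child

def accepts(word):
    start = ("S", frozenset({((), "*")}), ())        # (state, tree, pointer)
    seen, queue = {(start, 0)}, deque([(start, 0)])  # 0 = letters consumed
    while queue:
        (q, tree, p), i = queue.popleft()
        if q == "qf" and i == len(word):
            return True
        label = dict(tree)[p]
        a = word[i] if i < len(word) else None
        moves = []
        if q == "S":
            if a == "t" and label != "T":            # open a dummy branch
                moves.append(("qt", push(tree, p, 0, "#"), i + 1))
            if a == "T" and label != "t":
                moves.append(("qT", push(tree, p, 0, "#"), i + 1))
            if (a, label) in (("t", "T"), ("T", "t")):
                moves.append(("S", (tree, p[:-1]), i + 1))   # cancel a letter
            if label == "#":
                moves.append(("S", (tree, p[:-1]), i))       # silent descent
            if label == "*":
                moves.append(("qf", (tree, p), i))           # try to accept
        elif q == "qt":
            moves.append(("S", push(tree, p, 1, "t"), i))    # record the t
        elif q == "qT":
            moves.append(("S", push(tree, p, 1, "T"), i))    # record the T
        for q2, conf, j in moves:
            if conf is None:          # a push onto an existing branch fails
                continue
            nxt = ((q2,) + conf, j)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(accepts("tTtT"), accepts("TtTtt"))  # True False
```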

For some explicit examples of 2-restricted tree-stack automata, see [2, Examples 3.2 and 3.3].

4 Closure under free products

In this section, we prove that the class of groups whose word problem is multiple context-free is closed under free products. To do this, we will show that, given G1 and G2 with multiple context-free word problem, we can construct a tree-stack automaton which accepts the word problem for G1∗G2.

Lemma 4.1.

Let M be a tree-stack automaton accepting the language L. Then there exists a tree-stack automaton M′ such that L(M′) = L and M′ accepts a non-empty word only if the tree-stack storage is in a configuration (T,ε) for some Ω-tree T.

Proof.

We build a new automaton which accepts the same language as follows: add two extra states qf′, q̄f to our automaton. We add the following transitions to δ:

(q,ε,C,Id,qf′) for all q ∈ QF,
(qf′,ε,C,𝚍𝚘𝚠𝚗,qf′),
(qf′,ε,𝚎𝚚𝚞𝚊𝚕𝚜(∗),Id,q̄f).

We change the set of accept states to {q̄f}. The language accepted by this new automaton is the same language as before. It should be noted that the new automaton has a single accept state, and if M was cycle-free, then so is M′. ∎

It will also be useful to know that the amount of time spent at any vertex in the tree-stack is uniformly bounded.

Definition 4.2.

A run in a tree-stack automaton is a path in the graph realisation. This can be seen as a valid sequence of instructions. An accepted run is a run which ends in an accept state.

Lemma 4.3.

If M is a k-restricted cycle-free tree-stack automaton, then there is an n such that, for each p ∈ ℕ* and each path in the graph realisation of M starting at (qI,cI), there are at most n vertices in the path of the form (q,(T,p)), where q and T may vary.

Proof.

Consider the two possibilities for entering a vertex of the form (q,(T,p)), where p is fixed and q and T may vary. Either we have an edge (q1,(T1,pm)) → (q,(T,p)) or (q2,(T2,p̄)) → (q,(T,p)), where p̄l = p for some l ∈ ℕ. There are only k possibilities of the second instance since the automaton is k-restricted.

In the first instance, there must have been an edge of the form (q′,(T′,p)) → (q″,(T″,pm)) previously in the path. There are at most k such edges by k-restrictedness. Since δ is finite, there can only be a finite number of instructions that contain a push command. Therefore, there is a bounded number of choices for m.

We will not require the exact bound; however, it can be calculated. A good estimate is k× (number of push commands) × (length of the longest path in the automaton with no movement in the tree). ∎

Let G1,G2 be groups with multiple context-free word problem; we now create the automaton which will accept the word problem for G1∗G2. Ideally, one would like to take the “free product” of the automata. However, this would result in something infinite. The key idea is to perform this construction at the level of the tree-stack storage only.

Theorem 4.4.

If G1 and G2 are groups with multiple context-free word problem, then G1∗G2 has multiple context-free word problem.

Proof.

Let Wi be the word problem in Gi, and let W be the word problem in G1∗G2. Let ℳi = (Qi,Ti,Ii,δi), where Ti is a tree-stack storage over the alphabet Ωi and Ii = (qIi,cIi,QFi = {qfi}), be an automaton recognising the language Wi.

We will assume that these automata are k-restricted, cycle-free and accept a word only if the stack pointer is at the root. Let n be the maximum of the two bounds obtained from Lemma 4.3 applied to ℳ1 and ℳ2.

We now define the automaton 𝒜 that will recognise the language W. The automaton is depicted in Figure 2.

The states of 𝒜 are Q = Q1 ⊔ Q2 ⊔ {S,F}; the storage type T is the set of tree-stacks on the alphabet Ω = Ω1 ⊔ Ω2 ⊔ (Q×{1,2}). The initial state is S, with empty initial tree, and the final state is F. The transitions are δ = δ1′ ∪ δ2′ ∪ δ3, where each set will be described shortly. Intuitively, the set δ3 regulates the transitions between the two original automata, and we will obtain δi′ from δi by substituting each instruction in δi that mentions the root symbol ∗ with a finite set of instructions, one for each state of Q. More precisely, δi′ = (δi ∖ (𝒟= ∪ 𝒟≠)) ∪ 𝒮= ∪ 𝒮≠, where

  1. 𝒟= = {(q1,σ,𝚎𝚚𝚞𝚊𝚕𝚜(∗),f,q2) ∈ δi},
  2. 𝒟≠ = {(q1,σ,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(∗),f,q2) ∈ δi},
  3. 𝒮= = {(q1,σ,𝚎𝚚𝚞𝚊𝚕𝚜((q,i)),f,q2) ∣ (q1,σ,𝚎𝚚𝚞𝚊𝚕𝚜(∗),f,q2) ∈ 𝒟=, q ∈ Q},
  4. 𝒮≠ = {(q1,σ,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜((q,i)),f,q2) ∣ (q1,σ,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(∗),f,q2) ∈ 𝒟≠, q ∈ Q},

and

δ3 = {(S,ε,𝚎𝚚𝚞𝚊𝚕𝚜(∗),Id,F), (S,ε,C,𝚙𝚞𝚜𝚑1((S,1)),qI1), (S,ε,C,𝚙𝚞𝚜𝚑2((S,2)),qI2),
(qf1,ε,𝚎𝚚𝚞𝚊𝚕𝚜((S,1)),𝚍𝚘𝚠𝚗,S), (qf2,ε,𝚎𝚚𝚞𝚊𝚕𝚜((S,2)),𝚍𝚘𝚠𝚗,S)}
∪ {(q,ε,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(Q1×{1}),𝚙𝚞𝚜𝚑i((q,2)),qI2) ∣ q ∈ Q1, i ∈ {−1,…,−n}}
∪ {(q,ε,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(Q2×{2}),𝚙𝚞𝚜𝚑i((q,1)),qI1) ∣ q ∈ Q2, i ∈ {−1,…,−n}}
∪ {(qf1,ε,𝚎𝚚𝚞𝚊𝚕𝚜((q,1)),𝚍𝚘𝚠𝚗,q) ∣ q ∈ Q2}
∪ {(qf2,ε,𝚎𝚚𝚞𝚊𝚕𝚜((q,2)),𝚍𝚘𝚠𝚗,q) ∣ q ∈ Q1}.

Figure 2: A depiction of the automaton accepting the word problem of G1∗G2.

Figure 3: The process of opening a new tree and returning to an old tree once the word is accepted.

The reader should note that tree-stacks were defined with branches labelled by elements of ℕ, while negative labels have been used above. One should note that ℤ is countable, so the labels can be made positive. The automaton above is k-restricted since the commands in δ3 do not add any 𝚞𝚙n commands and all such commands come from the automata ℳi, which are k-restricted. We want to show that W = L(𝒜).

The way the automaton above works is as follows. We start with our word and move to one of the automata ℳ1 or ℳ2, say, ℳ1. We then read a word in Σ1 and move in this automaton as usual. When we come to a letter from Σ2, we move to the automaton ℳ2, recording the state q ∈ Q1 where we left ℳ1 and opening a new branch on the tree. Later, we will read a letter of Σ1; if we do this at the final state of ℳ2, then we move back to q; otherwise, we open a new branch and move to qI1 and continue this process. This process is depicted in Figure 3.

An accepted run Λ of the automaton will have the pointer start and end at the root of the tree-stack. Let Tf be the final tree-stack for the run. We can colour the non-root vertices of Tf red and blue as follows: colour a vertex red if its label is from Ω1 ∪ (Q2×{1}) and blue otherwise. Note that, after each instruction, there is a tree-stack which embeds, as a graph, into Tf. Since the only 𝚜𝚎𝚝 commands are to be found in δ1′ and δ2′, one could colour a vertex upon creation; the above embedding will then be colour-preserving. There is a subtree Tc ⊆ Tf of a single colour whose complement is connected.

For each instruction, there are two possible pointers (the one before and the one after); these can be viewed as vertices of Tf. Let Θ be the set of instructions in Λ such that both pointers are in Tc. We claim that all the elements of Θ are consecutive. This is because there are no up commands with negative labels, so, once we leave Tc, there is no way to return. Note that Θ starts at the initial state of one of the automata ℳi and ends at the corresponding final state. In particular, it can be viewed as an accepted run in ℳi, and the subword v of the run Λ associated to Θ is an element of Wi.

Using the above, Λ decomposes as Λ1 θ1 Θ θ2 Λ2, where θ1 ∈ δ3 is an instruction containing a push command and θ2 ∈ δ3 is an instruction containing a down command. Moreover, to leave the tree Tc, θ1 and θ2 must pair up, by which we mean that the state of the automaton and the pointer before θ1 and after θ2 are the same. Also, the tree-stacks outside Tc remain unchanged. Thus Λ1Λ2 is an accepted run of 𝒜. As a consequence, we have that the word w corresponding to the run Λ decomposes as w1 v w2, where v is an element of Wi and w1w2 is accepted by 𝒜. Since v represents the trivial element of G1∗G2, if w1w2 is an element of W, then so is w.

For the base case, note that if Tc = Tf, then w ∈ Wi. Thus, by induction on the number of maximal one-coloured subtrees, L(𝒜) is a subset of W.

For the other direction, we will use induction on the free product length of the word w ∈ W. The free product length of w is the smallest p such that w = w1w2⋯wp, where each wi lies in one of the Σj* and, if wi ∈ Σj*, then wi+1 ∉ Σj*. It is clear that the words of free product length 1 that lie in W are in the language L(𝒜).

If w = w1⋯wp has free product length p and is an element of W, then there is an i such that wi is an element of Wj (a product of alternating syllables none of which is trivial is non-trivial in a free product). We will assume that wi ∈ W1. The run the machine will take is as follows: make the run for the word w1⋯wi−1wi+1⋯wp, which exists by the induction hypothesis. At the point where the word wi is read, we will open a new tree and move to the automaton ℳ1, following a run for this word. This run will finish at the root of the new tree and then return to the automaton ℳ2 to continue the run where it left off.

To make sure that we can carry out this process, we have to be able to push a new edge at the correct moment. This may not be possible if we have already pushed n edges at this vertex. However, we assumed that the automata ℳi can only spend a uniformly bounded amount of time at any vertex, and we added more push commands than this bound. Thus there will always be a run for the word w1⋯wi−1wi+1⋯wp where we can make a push at the desired moment. ∎

In fact, in the proof, we have shown a slightly stronger result.

Corollary 4.5.

If G1 and G2 are groups whose word problem is k-MCF, then the word problem of G1∗G2 is k-MCF.

Proof.

It is clear from the proof of Theorem 4.4 that, if ℳi is ki-restricted, the automaton constructed is max{k1,k2}-restricted. Indeed, all the instructions that contain up commands are contained in δ1′ ∪ δ2′. Applying instructions contained in δi′ will not move the pointer to a vertex of a different colour (where the colouring is defined as in the proof of Theorem 4.4). Thus if a vertex is contained in the interior of a one-coloured subtree, say, the colour corresponding to W1, then that vertex will satisfy the k1-restriction condition. ∎

5 Amalgamated free products

In this section, we generalise the previous result to show that the class of groups with multiple context-free word problem is closed under amalgamation over finite subgroups.

The idea is similar to the previous proof; there are, however, more details. We feel that the interested reader should first understand the proof of Theorem 4.4, which encapsulates most of the details in an easier setting. The key idea is the following:

Proposition 5.1.

Let G be a group with multiple context-free word problem. Let H be a finite subset of G. Then {w ∈ Σ* ∣ w represents an element of H} is a multiple context-free language.

Proof.

For each h ∈ H, let vh be a word in Σ* representing h⁻¹. Let R = {vh ∣ h ∈ H}. Since H is a finite set, so is R. Let R′ be the set of (possibly empty) suffixes of words in R. Let ℳ = (Q,T,I,δ) be an automaton recognising the word problem in G with start state qI and a single final state qf, where T is the set of tree-stacks over the alphabet Ω. Assume that this automaton has been modified as in Lemma 4.1.

The idea is the following: let w be the input word. We will build an automaton that will “guess” an element of H, say h, and then proceed to process the word vhw in ℳ. The way this is done is by adding a “second variable” to the states. The second variable represents the new word that is inserted. If the second variable is empty, then the automaton acts exactly as before. Otherwise, if the automaton is in a state (q,v), where v = a1⋯an is a (non-trivial) word, the automaton acts as if it were in the state q and the first letter of v (that is, a1) had been read. The second variable then becomes a2⋯an.

More formally, we will build a new automaton ℳ′ = (Q′,T,I′,δ′) as follows. The set of states Q′ will be (Q×R′) ∪ {S}. The storage T will be tree-stacks over Ω. The set of transitions δ′ will consist of four types of transitions:

(S,ε,𝚎𝚚𝚞𝚊𝚕𝚜(∗),Id,(qI,v)) for all v ∈ R,
((q,v),ε,p,f,(q′,v)) for all v ∈ R′ and (q,ε,p,f,q′) ∈ δ,
((q,a1⋯an),ε,p,f,(q′,a2⋯an)) for all a1⋯an ∈ R′ and (q,a1,p,f,q′) ∈ δ,
((q,ε),σ,p,f,(q′,ε)) for all (q,σ,p,f,q′) ∈ δ.

The automaton ℳ′ will have start state S and final state (qf,ε). ∎

We stress once more that everything boils down to the fact that, given an automaton ℳ and a finite number of words wi ∈ Σ*, it is possible to insert a routine in the automaton that will mimic the behaviour of ℳ when a word wi is read, that is, to “insert” wi in the processed string of letters. The way this is done is by adding the various suffixes of the wi as a “second variable” to the states.
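
This transformation of the transition table is simple enough to write down. The sketch below (ours, not the paper's) builds the four groups of transitions of ℳ′ from δ, treating predicates and instructions as opaque data; the names insert_guesses, suffixes, eq_bottom and ident are our own, with eq_bottom and ident standing for the 𝚎𝚚𝚞𝚊𝚕𝚜(∗) predicate and the Id instruction.

```python
def suffixes(R):
    return {v[i:] for v in R for i in range(len(v) + 1)}   # includes ""

def insert_guesses(delta, q_init, R, eq_bottom, ident):
    """States become (q, v) with v the part of the guessed word still to read.

    Transitions are (q1, letter, predicate, instruction, q2) tuples, with ""
    playing the role of epsilon.
    """
    Rp = suffixes(R)
    # From the new start state S, guess the inserted word v in R.
    new = [("S", "", eq_bottom, ident, (q_init, v)) for v in R]
    for q1, a, p, f, q2 in delta:
        if a == "":
            # epsilon rules of the original automaton run in every mode
            new += [((q1, v), "", p, f, (q2, v)) for v in Rp]
        else:
            # consume a from the guessed word, reading no input letter...
            new += [((q1, v), "", p, f, (q2, v[1:])) for v in Rp if v[:1] == a]
            # ...and with nothing left to insert, behave as before
            new.append(((q1, ""), a, p, f, (q2, "")))
    return new

# Tiny illustration: one original rule, two candidate words to insert.
for rule in insert_guesses([("q0", "x", "p", "f", "q0")], "q0", {"x", "xx"},
                           "equals(*)", "Id"):
    print(rule)
```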

If H is a normal subgroup of G, then the word problem in G/H is exactly the set of words representing elements of H. Thus we immediately get the following corollary.

Corollary 5.2.

If G is a group with multiple context-free word problem and H is a finite normal subgroup of G, then G/H has a multiple context-free word problem.

We recall the following result from [7].

Theorem 5.3 ([7, p. 187, Theorem 2.6]).

Let G = G1 ∗_H G2 be an amalgamated product, and let c1,…,cn be a sequence of elements of G such that

  1. (i) n ≥ 2,
  2. (ii) each ci is in one of the factors G1 or G2,
  3. (iii) the elements ci, ci+1 come from different factors,
  4. (iv) no ci is in H.

Then the product c1⋯cn is non-trivial in G.

With Proposition 5.1, we can prove our main theorem; as previously stated, the idea is similar to Theorem 4.4 with a few extra details.

Theorem 5.4.

Let G1 and G2 be groups whose word problems are multiple context-free. Let Hi be a finite subgroup of Gi such that H1 ≅ H2 ≅ H. Then G = G1 ∗_H G2 has a multiple context-free word problem.

Proof.

The idea is the following: suppose that the word w = a1⋯am is read. If all ai are contained in only one of Σ1 or Σ2, the automaton will proceed as in Proposition 5.1, having guessed that it will read the trivial element. So suppose this does not happen. We can subdivide the word w into (maximal) subwords that contain only elements of Σ1 or Σ2. This will give a sequence c1,…,cn of elements of G. Theorem 5.3 gives that w =_G c1⋯cn represents the trivial element only if there is an i such that ci represents an element of H. Let u be the subword of w associated to ci. Without loss of generality, we may assume that u ∈ Σ1*. By non-determinism, the automaton will guess the correct i and the element ci ∈ H. Then, using the procedure detailed in Proposition 5.1, it will check whether u really represents ci, and if this is the case, the automaton will return to the point where it started reading u and proceed as if it had, instead, read the word v ∈ Σ2* representing ci in G2. Note that, for this last step, it is crucial that H is finite.

It is clear that the word w will be accepted if and only if the automaton accepts the word obtained from w by substituting u with v. By induction on the length of the sequence c1,…,cn, we get the result.

More formally, let Wi be the word problem in Gi. Let ℳi be an automaton accepting the language Wi. Let wih be a word in Σi* representing the element h ∈ H. Let Fi = {wih ∣ h ∈ H}, with a bijection ϕ : F1 → F2 such that ϕ(w1h) = w2h, and let ψ = ϕ⁻¹. Let Fi′ be the set of suffixes of words in Fi. Let ℳi′ be the automaton recognising words representing elements of Hi from Proposition 5.1, with states (Qi×Fi′) ∪ {Si}. Let W be the word problem in G. We build an automaton 𝒜 similar to that of Theorem 4.4 accepting the language W.

The states of 𝒜 are (Q1×F1′) ∪ (Q2×F2′) ∪ {S1,S2,S,F}. The storage will be tree-stacks over the alphabet Ω1 ⊔ Ω2 ⊔ (Q1×F1) ⊔ (Q2×F2) ⊔ {1,2}.

The transitions will consist of the following:

{(S,ε,C,𝚙𝚞𝚜𝚑1(1),(qI1,ε)), (S,ε,C,𝚙𝚞𝚜𝚑1(2),(qI2,ε))} ∪ {(S,ε,𝚎𝚚𝚞𝚊𝚕𝚜(∗),Id,F)},  (5.1)
{((qf1,ε),ε,𝚎𝚚𝚞𝚊𝚕𝚜(1),𝚍𝚘𝚠𝚗,S), ((qf2,ε),ε,𝚎𝚚𝚞𝚊𝚕𝚜(2),𝚍𝚘𝚠𝚗,S)},  (5.2)
{((q,ε),ε,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(Q2×F2),𝚙𝚞𝚜𝚑i((q,w)),(qI2,ϕ(w)⁻¹)) ∣ q ∈ Q1, w ∈ F1, i ∈ {−1,…,−n}},  (5.3)
{((q,ε),ε,𝚗𝚘𝚝𝚎𝚚𝚞𝚊𝚕𝚜(Q1×F1),𝚙𝚞𝚜𝚑i((q,w)),(qI1,ψ(w)⁻¹)) ∣ q ∈ Q2, w ∈ F2, i ∈ {−1,…,−n}},  (5.4)
{((qf1,ε),ε,𝚎𝚚𝚞𝚊𝚕𝚜((q,w)),𝚍𝚘𝚠𝚗,(q,w)) ∣ q ∈ Q2, w ∈ F2} ∪ {((qf2,ε),ε,𝚎𝚚𝚞𝚊𝚕𝚜((q,w)),𝚍𝚘𝚠𝚗,(q,w)) ∣ q ∈ Q1, w ∈ F1},  (5.5)
the transitions of ℳi′ except those of the form (Si,ε,𝚎𝚚𝚞𝚊𝚕𝚜(∗),Id,(qIi,v)).  (5.6)

Before explaining the rules in detail, there is one key and central observation. If the automaton is in a state (q,w) with w = a1⋯al ≠ ε, then the only possible rules are those from group (5.6). In particular, by the definition of ℳi′ (see the proof of Proposition 5.1), the only such rules are of the form

((q,a1⋯al),ε,p,f,(q′,a2⋯al)), where (q,a1,p,f,q′) was a rule of ℳi,

or of the form

((q,a1⋯al),ε,p,f,(q′,a1⋯al)), where (q,ε,p,f,q′) was a rule of ℳi.

That is, if there is a non-empty word w in the second variable, the only rules that can be applied are those mimicking the behaviour of one of the original automata as if the first letter of w had been read. In other words, the priority is always to deplete the second variable of the states.

The elements of group (5.1) consist of the very final instruction and the two instructions that start processing letters in one of the two alphabets Σi.

The elements of group (5.2) consist of the second-to-last move in a run; they are triggered when the complete word has been read and the pointer of the tree-stack is one step away from the root.

The elements of groups (5.3) and (5.4) consist of the same type of rules, with the roles of G1 and G2 interchanged. The rules describe the following instruction (say, for group (5.3)): “At any moment where the stack pointer is not pointing at an element of Q2×F2 and your state has an empty second variable, you can guess that a subword representing the same element as ϕ(w) is starting, for some w ∈ F1. Then you start a new branch and add ϕ(w)⁻¹ to the second variable.” If the guess was correct, then eventually the automaton will return to the root of the new branch with state (qf2,ε). Thus it has successfully processed a subword that represented ϕ(w). In this case, the rules of group (5.5) apply. Indeed, remember that, at the beginning of the process, we pushed (q,w) onto the stack to remember the state at which the automaton was (as in Theorem 4.4) and the word we were checking. Then we put w in the second variable. What happened is that we effectively substituted the subword representing ϕ(w) with w.

We will now give a precise proof of the theorem. This automaton works similarly to the automaton in Theorem 4.4. Let Λ be an accepted run for the automaton. Let Tf be the final tree-stack for this run. We colour the non-root vertices of Tf red and blue as in the proof of Theorem 4.4.

There is a subtree Tc ⊆ Tf of a single colour whose complement is connected. Assume that Tc is a tree with labels from Ω1. For each instruction, there are two possible pointers; these can be viewed as vertices of Tf. Let Θ be the subset of the instructions in Λ such that both pointers are in Tc. It can be seen, as in the proof of Theorem 4.4, that all these instructions are consecutive. Since Θ starts and ends at the root of Tc, the word read while performing the instructions in Θ represents the same element as some v ∈ F1.

The run Λ decomposes as a concatenation Λ1 θ1 Θ θ2 Ξ Λ2, where θ1 and θ2 correspond to entering and leaving the tree Tc and Ξ is the run from (q,ϕ(v)) to the first state of the form (q′,ε).

Since the tree Tc cannot be re-entered, we see that Λ is a valid run if and only if there is a valid run of the form Λ1 Θ′ Λ2, where Θ′ is the same run as Ξ but running through the states (q,ε) instead of (q,w); one could see this as a run in ℳ2 corresponding to Ξ.

The original decomposition Λ1 θ1 Θ θ2 Ξ Λ2 corresponds to a decomposition of w as u1 v u2 (denoting by v, with a slight abuse of notation, the subword read during Θ). The word corresponding to the run Λ1 Θ′ Λ2 is u1 ϕ(v) u2.

It should be noted that the final tree for the run Λ1 Θ′ Λ2 will have one fewer red subtree. For the base case, note that if Tc = Tf, then we have a word in W1 ∪ W2. Thus, by induction on the number of maximal one-coloured subtrees, L(𝒜) is a subset of W.

We must now prove that this automaton accepts all words in W. We will use the free product length of a word once again. Let w = w1⋯wk be a word of free product length k. If this word represents the trivial element, then, by Theorem 5.3, there is a subword wj which represents an element of H. Let u be the corresponding element of F1. We can assume that wj is in Σ1*. Let v be an element of F2 representing the same element as wj.

The automaton will leave the automaton ℳ2 from the state (q,ε) and move to the automaton ℳ1, starting at the state (qI1,u). When the word wj has been read, the automaton will return to ℳ2 at the state (q,v). The automaton will then make a run in ℳ2 for the word v. Thus w is in L(𝒜) if and only if w′ = w1⋯wj−1 v wj+1⋯wk is in L(𝒜). Since w′ has shorter free product length and it is clear that trivial words of free product length 1 are in L(𝒜), we are done by induction. ∎

6 HNN extensions and graphs of groups

The goal of this section is to prove the analogue of Theorem 5.4 for HNN extensions with finite associated subgroups. We recall the definition of an HNN extension.

Definition 6.1 (HNN extension).

Let G be a group. Let H1,H2 be two subgroups of G, and let ϕ : H1 → H2 be an isomorphism. The HNN extension is the group given by the presentation

G∗ϕ = ⟨G, t ∣ tgt⁻¹ = ϕ(g) for all g ∈ H1⟩.

Our goal is to prove the following result.

Theorem 6.2.

Let G be a finitely generated group whose word problem is multiple context-free. Let H1 and H2 be two finite subgroups of G, and let ϕ : H1 → H2 be an isomorphism. Then the HNN extension G∗ϕ has a multiple context-free word problem.

The proof of Theorem 6.2 almost coincides with the proof in the case of the amalgamated product, modulo the following lemma, which is a standard consequence of Britton's lemma (see [7]).

Lemma 6.3.

Consider a word w = g0 t^{ε1} g1 t^{ε2} ⋯ t^{εn} gn in an HNN extension, where gi ∈ G and εi = ±1. If w = 1, then

  1. either n = 0 and g0 = 1 in G,
  2. or n > 0 and, for some i ∈ {1,…,n−1}, one of the following holds:
    1. (a) εi = 1 and εi+1 = −1 and gi ∈ H1,
    2. (b) εi = −1 and εi+1 = 1 and gi ∈ H2.

Proof of Theorem 6.2.

The proof here is similar to the proof of Theorem 5.4. Instead of changing automaton when we change alphabet, we note that, each time we read a t or t⁻¹, the next word we read must represent an element g in H1 or H2, respectively. Since the Hi are finite groups, we can recognise such words. After doing this, we return to where we were and proceed with the instructions as if we had read ϕ(g) or ϕ⁻¹(g), respectively. ∎

We have now all the ingredients to prove Theorem A.

Theorem A.

Let G be the fundamental group of a finite graph of groups. Assume that all the vertex groups have multiple context-free word problem and all the edge groups are finite. Then G has multiple context-free word problem.

Proof.

Let 𝒯 be a spanning tree of the underlying graph of the graph of groups. Applying Theorem 5.4 inductively, we obtain that π1(𝒯), the fundamental group of the graph of groups restricted to 𝒯, has a multiple context-free word problem. Since adding an edge between two vertices of a graph of groups corresponds to an HNN extension, by iteratively applying Theorem 6.2, we obtain the result. ∎

Acknowledgements

We greatly thank Bob Gilman for introducing us to the subject and making this project possible. The second author would like to thank UC Berkeley for inviting him as a visiting scholar. The first author would like to thank Alessandro Sisto for inviting him to complete this work at the ETH. We thank the anonymous referee for helpful comments and suggestions, in particular, the addition of Section 6. Finally, we would like to thank Neil Fullarton for his invaluable work with a stapler.

References

[1] N. Chomsky, Context-free grammars and pushdown storage, Quart. Progress Rep. 65 (1962), 187–194.
[2] T. Denkinger, An automata characterisation for multiple context-free languages, Developments in Language Theory, Lecture Notes in Comput. Sci. 9840, Springer, Berlin (2016), 138–150.
[3] M. J. Dunwoody, The accessibility of finitely presented groups, Invent. Math. 81 (1985), no. 3, 449–457.
[4] R. H. Gilman, R. P. Kropholler and S. Schleimer, Groups whose word problems are not semilinear, Groups Complex. Cryptol. 10 (2018), no. 2, 53–62.
[5] M.-C. Ho, The word problem of ℤⁿ is a multiple context-free language, Groups Complex. Cryptol. 10 (2018), no. 1, 9–15.
[6] J. E. Hopcroft and J. D. Ullman, Formal Languages and Their Relation to Automata, Addison-Wesley, Reading, 1969.
[7] R. C. Lyndon and P. E. Schupp, Combinatorial Group Theory, Classics Math., Springer, Berlin, 2001.
[8] D. E. Muller and P. E. Schupp, Context-free languages, groups, the theory of ends, second-order logic, tiling problems, cellular automata, and vector addition systems, Bull. Amer. Math. Soc. (N.S.) 4 (1981), no. 3, 331–334.
[9] C. Pollard, Generalized phrase structure grammars, head grammars, and natural language, Ph.D. thesis, Stanford University, 1984.
[10] S. Salvati, Multiple context-free grammars. Course 1: Motivations and formal definition, 2011.
[11] S. Salvati, MIX is a 2-MCFL and the word problem in ℤ² is captured by the IO and the OI hierarchies, J. Comput. System Sci. 81 (2015), no. 7, 1252–1277.
[12] H. Seki, T. Matsumura, M. Fujii and T. Kasami, On multiple context-free grammars, Theoret. Comput. Sci. 88 (1991), no. 2, 191–229.
[13] J. Stallings, Group Theory and Three-dimensional Manifolds, Yale University Press, New Haven, 1971.
