Open Access. CC BY 4.0 license. Published by De Gruyter, February 15, 2023.

Plactic key agreement (insecure?)

  • Daniel R. L. Brown


Plactic key agreement is a new type of cryptographic key agreement that uses Knuth’s multiplication of semistandard tableaux from combinatorial algebra. The security of plactic key agreement relies on the difficulty of some computational problems, particularly the division of semistandard tableaux. Tableau division can be used to find the private key from its public key or to find the shared secret from the two exchanged public keys. Monico found a fast division algorithm, which could run in time polynomial in the length of the tableaux. Monico’s algorithm solved a challenge that had been previously estimated to cost 2^128 steps to break, which is an infeasibly large number for any foreseeable computing power on earth. Monico’s algorithm solves this challenge in only a few minutes. Therefore, Monico’s attack likely makes the plactic key agreement insecure. If it were not for Monico’s attack, plactic key agreement with 1,000-byte public keys might perhaps have provided 128-bit security, with a runtime of a millisecond. But Monico’s attack breaks public keys of this size in minutes.

MSC 2010: 94A60

1 Introduction

Knuth’s [1] multiplication of semistandard tableaux is described in Section 2. (Some history of semistandard tableaux is covered in Section D.1.)

Plactic key agreement uses the multiplication of semistandard tableaux. Alice and Charlie agree on a secret key as follows. All keys are semistandard tableaux. Alice generates her private key a . Charlie generates his private key c . Alice and Charlie both have the same base key b , which is a common, non-secret, and fixed system parameter. Alice computes her public key d = a b and delivers d to Charlie. Charlie computes and delivers his public key e = b c to Alice. Alice computes a secret key f = a e from her private key a and Charlie’s public key e . Charlie computes a secret key g = d c .

Agreement of the secret keys f and g happens because multiplication of semistandard tableaux is associative: f = a ( b c ) = ( a b ) c = g . Alice and Charlie now share a secret key, which they can use for encryption (or other symmetric-key cryptography algorithms).
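To make the flow of the protocol concrete, here is a minimal Python sketch of the generic associative key agreement. String concatenation stands in for Knuth multiplication; it is associative but utterly insecure (division is trivial), and the sample key values are hypothetical:

```python
# Sketch of an associative key agreement (Rabi-Sherman style).
# Concatenation is an INSECURE stand-in for Knuth multiplication;
# plactic key agreement replaces mul with tableau multiplication.

def mul(x, y):
    return x + y  # stand-in associative operation

b = "base"           # common, non-secret base key
a = "alice-secret"   # Alice's private key (hypothetical sample value)
c = "carol-secret"   # Charlie's private key (hypothetical sample value)

d = mul(a, b)        # Alice's public key d = ab
e = mul(b, c)        # Charlie's public key e = bc

f = mul(a, e)        # Alice's shared secret f = a(bc)
g = mul(d, c)        # Charlie's shared secret g = (ab)c

assert f == g        # agreement follows from associativity
```

The final assertion holds for any associative operation, which is the whole content of the agreement step.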

The well-known significance of key agreement is that Alice and Charlie have agreed on a (potential) secret, despite all communication between them being public. So, plactic key agreement is another instance of public-key cryptography, famously first published by Merkle [2] and by Diffie and Hellman [3].

More specifically, Rabi and Sherman’s [4] key agreement protocol based on associative (one-way) functions includes, as a particular instance, plactic key agreement. Take Knuth’s multiplication of semistandard tableaux as the associative binary operation. (Note, however, that tableau factoring is easy, so one of Rabi and Sherman’s requirements seems to be unnecessary.)

The security of the plactic key agreement, and more generally, Rabi and Sherman’s associative key agreement, relies on computational problems analogous to those of the Diffie–Hellman key agreement. (For a general treatment of the correspondence, see ref. [5].) Analogous to the discrete logarithm problem is the division problem. Right division means to compute, from d = a b and b , an element x such that a b = x b , which we write as x = d / b . Analogous to the (computational) Diffie–Hellman problem is the problem of computing the shared secret f = a b c from the three values a b , b , and b c . (In [5], the computational problem of finding a b c from [ a b , b , b c ] is called the wedge problem, and custom notation for a wedge operator is introduced: d ∧_b e .)

Similarly, the decision Diffie–Hellman problem has an analog, such as distinguishing the shared secret tableau f = a b c from a random tableau (or from a random byte string). But this problem for tableaux is easy to solve, because the content (vector of entry multiplicities) of f is easily computed from the contents of d , b , and e . (Some versions of Diffie–Hellman key agreement over finite fields also allow non-negligible distinguishing attacks, using the Legendre symbol.) Key derivation functions (typically based on a cryptographic hash function) should therefore be applied to the shared secret keys. The security then relies on an analog to a decision-hashed Diffie–Hellman problem.

Authentication of the public keys must be added to the plactic key agreement to avoid man-in-the-middle attacks. (An anonymous reviewer suggested this fundamental reminder.) In other words, the plactic key agreement is unauthenticated. Similarly, the Diffie–Hellman key agreement [3] is unauthenticated. Authentication is often achieved by using digital signatures. For example, in the Transport Layer Security (TLS) protocol, the server’s Diffie–Hellman public key is signed. The signing public key is also signed by a third-party certification authority, as part of a public-key infrastructure (PKI) that helps to establish trust. Authentication of key agreement is an important part of larger security protocols but can be treated separately from the more fundamental secrecy goals of the underlying unauthenticated key agreement.

Division of semistandard tableaux is discussed in Section 3. As noted earlier, division can be used as an attack against the plactic key agreement (as further discussed in Section 4.2). A shared secret key f can be computed from the keys d , b , and e using the formula f = ( d / b ) e . The value d / b can be considered to be effectively equivalent to Alice’s private key a , making this attack analogous to the attack on the Diffie–Hellman key agreement of solving a discrete logarithm problem to find a private key from the public key. Therefore, the security of plactic key agreement relies on (in other words, requires) the difficulty of dividing semistandard tableaux.

Another attack, reported by an anonymous Journal of Mathematical Cryptology (JMC) reviewer, also using division, solves the analog of the computational Diffie–Hellman problem. Again, it computes f = a b c from d , b , and e (but does not try to find any equivalents to the private keys a or c ). First, b is factored as b = b 1 b 2 , into factors b i approximately half the length of b . (Factoring is easy in the plactic monoid.) The attacker then tries to compute the secret key f as ( d / b 2 ) ( b 1 ∖ e ) , where ∖ means left division. This formula is not guaranteed to recover f , but the reviewer reports an 85% success rate for b of lengths up to 70. This attack seems to be faster than computing ( d / b ) e using a single full-length division. The speedup is similar to that of Pollard rho over exhaustive search. (The speedup seems to apply only to the generalization of the Diffie–Hellman problem, not to the generalization of the discrete logarithm problem.)

Division by erosion, as described in Appendix Section A, is a method to divide semistandard tableaux. It computes d / b by deleting each entry of b from d , in the reverse order that the multiplication algorithm inserted entries of b into d . A catch is that multiplication is non-cancellative. At each stage, there can be multiple choices of which entries of d to delete. Some deletion choices can turn out to be incompatible with the overall division task. Division by erosion uses backtracking to correct these incorrect deletion choices.

Other possible division algorithms include: division by trial multiplication (Section 3.3), which searches through the key space of a , and division by max-algebra matrices (Section 3.4), which uses Johnson–Kambites [6] tropical matrix representations of the plactic monoid. These other division algorithms are predicted to be slower than division by erosion, if a and b belong to suitably large key spaces.

Empirical testing suggests that division by erosion is slow. Initial testing suggested that 2^(0.3 L) was a reasonable lower bound for the number of steps to use erosion to divide by an L-entry tableau b (with entries in a set of at least 64 values). Extrapolating this empirical evidence to L = 500 gives a value of 2^150 steps, which should be infeasible. More thorough testing suggests a more precise pattern to the number of steps when dividing small tableaux: a sub-exponential cost of 2^(L^0.81), measured in the number of steps that swap an entry of a tableau with a value to be inserted or deleted. Extrapolating this more precise estimate to L = 500 gives a value of 2^153 steps.

If all of the claimed hypotheses above are true, then a secure plactic key agreement could be practical. But, to more clearly emphasize the hypothetical nature of these claims, we can formalize the claims as conjectures.

First, we can conjecture that erosion is the fastest division algorithm, for the relevant types of inputs.

Conjecture 1

Division d / b in the plactic monoid, for | d | = 2 L and | b | = L (with b dividing d ), is achieved at the lowest cost using the erosion algorithm.

Second, we can conjecture that erosion is slow.

Conjecture 2

Erosion to compute d / b , for | d | = 2 L and | b | = L , has an average cost of at least 2^(L^0.81) , averaged over random a and b (and setting d = a b ).

More precisely, the mean of the base-2 logarithm of the cost is approximately L^0.81 . Furthermore, these logarithms of the costs are approximately normally distributed, with a standard deviation of approximately 1/8 times the mean.

If Conjecture 1 is proven wrong, then Conjecture 2 would likely become moot. Thus, Conjecture 2 is secondary to Conjecture 1.

It would be possible to further conjecture that to compute a b c from [ a b , b , b c ] , meaning to break plactic key agreement, is best achieved by two divisions as outlined earlier. However, this last conjecture would amount, essentially, to the conjecture that the plactic key agreement is secure (for some particular minimum sizes of keys), which is arguably implicit in any proposal to use plactic key agreement.

The security analysis of plactic key agreement is in its very early stages (see Section 5.3). This article aims to build up some direct support for Conjectures 1 and 2 in the form of attempted cryptanalysis. Indirect support from previous work also exists, as discussed in Section 5.1. Very informal support for the potential resistance to quantum computer attacks is discussed in Section 5.2.

1.1 Update: Monico’s division algorithm

Subsequent to the acceptance of this article to the Journal of Mathematical Cryptology, C. Monico disclosed to me a plactic division algorithm that is much faster than erosion. This disproves Conjecture 1. This also renders Conjecture 2 moot.

Furthermore, Monico’s division algorithm solves a division challenge in minutes. The division challenge had been conjectured to be infeasible to solve by erosion, taking at least 2^128 bit operations.

Worst of all, Monico’s algorithm might have runtime polynomial in the lengths of the tableaux. It could be so fast that increasing the public key sizes to resist Monico’s attack would result in unusably large keys (such as multi-megabyte keys taking hours or days to generate).

2 Knuth multiplication of tableaux

This section describes the Knuth multiplication of semistandard tableaux [1].

2.1 Semistandard tableaux

A (semistandard) tableau is a two-dimensional, grid-aligned arrangement of entries from an ordered set that meets the following requirements:

  1. The sequence of rows is as follows:

    1. left-aligned and

    2. sorted by row length (shortest row at the top, longest row at the bottom, and repeated lengths allowed)

  2. The entries in each row are as follows:

    1. sorted (lowest at left, highest at right, and repeated entries allowed).

  3. The entries of each column are as follows:

    1. sorted (highest at the top and lowest at the bottom) and

    2. all distinct (no repeated entries allowed).

Two examples of semistandard tableaux with single-digit entries are

(1)
    a = 4        b = 2
        334          13
        1233

Note the following special considerations. An empty semistandard tableau is possible, with no entries and no rows. Many authors, including Knuth [1], reverse the order of rows. (Each column would have the lowest entries at the top.) It is sometimes convenient to consider there to be extra empty rows above the top row.

Some mnemonics: entries of rows may repeat but entries of columns cannot, and the entry orderings follow the Cartesian coordinate orderings as follows:

(2)
    4             4
    334           3 3 4
    1233          1 2 3 3
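The three requirements above can be checked mechanically. A minimal Python sketch, assuming tableaux are stored as lists of rows with the bottom row first (a storage convention of this sketch, not of the article):

```python
def is_semistandard(rows):
    """Check the tableau requirements; rows[0] is the bottom (longest) row."""
    for r, row in enumerate(rows):
        if not row:
            return False
        # Rows sorted by length, shortest at the top (repeated lengths allowed):
        if r + 1 < len(rows) and len(rows[r + 1]) > len(row):
            return False
        # Each row sorted, lowest at left (repeated entries allowed):
        if any(row[i] > row[i + 1] for i in range(len(row) - 1)):
            return False
        # Columns strictly increase upward (no repeated entries in a column):
        if r + 1 < len(rows):
            above = rows[r + 1]
            if any(above[j] <= row[j] for j in range(len(above))):
                return False
    return True

a = [[1, 2, 3, 3], [3, 3, 4], [4]]   # tableau a from (1), bottom row first
assert is_semistandard(a)
assert not is_semistandard([[1, 2], [2, 2]])   # repeated entry in a column
```

The column check compares each entry with the entry directly below it, which suffices because rows are already sorted.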

2.2 Over-tableaux and row reading

This section defines a few basic tableau operations, which will help define Knuth multiplication.

The size | t | of the tableau t is the number of entries in the tableau. For example, the tableaux a and b from (1) have sizes

(3) | a | = 8 , | b | = 3 .

The empty tableau has size 0.

The bottom row t ̲ of a tableau t is the last row. The bottom row contains the least entry of each column. The over-tableau t ¯ is the tableau that sits above the bottom row t ̲ . In other words, the over-tableau t ¯ is just t with bottom row t ̲ removed. For example, from tableaux a and b from (1), we have

(4)
    a ¯ = 4        b ¯ = 2
          334

(5) a ̲ = 1233 , b ̲ = 13 .

Some special cases: if t has only one row, then its over-tableau is the empty tableau; if t is empty, then both the over-tableau and the bottom row are empty.

The row reading ρ ( t ) of a tableau t is the concatenation of all rows of t , starting with the top row on the left and ending with the bottom row on the right. For example, from tableaux a and b from (1), the row readings are as follows:

(6) ρ ( a ) = 43341233 , ρ ( b ) = 213 .

When we need to refer to individual entries of a tableau, we can write t i for the i th entry of ρ ( t ) , starting with i = 1 at the left. So,

(7) ρ ( t ) = t 1 t 2 ⋯ t | t | .

Note that we sometimes write t i to indicate multiple different tableaux. If the context is not enough to distinguish whether t i means an entry of t or a distinct tableau, then write ρ ( t ) i for the i th entry of the row reading.

2.3 Multiplication

To multiply tableaux t and u , compute v = t u as follows:

  1. Initialize v = t .

  2. For k = 1 to k = | u | , insert entry x = u k into tableau v , as described subsequently.

  3. To insert entry x into tableau v , do one of the following two actions (whichever is required by the stated conditions):

    1. If appending entry x to bottom row v ̲ , written v ̲ x , results in a sorted row (lowest to highest), then

      1. replace bottom row v ̲ by v ̲ x

    2. Else (if appending x to v ̲ does not result in a sorted row),

      1. Let r be the least entry in bottom row v ̲ that is larger than x ,

      2. Replace the leftmost copy of r by x , modifying v ̲ .

      3. Insert r into over-tableau v ¯ (so, entry x “bumps” entry r up into the rows above).

Note that multiplication adds sizes ( | t u | = | t | + | u | ), because each inserted entry increases the size of v by 1. Insertion has been defined recursively, with the insertion procedure described using the insertion procedure with different inputs (specifically, recursion is applied to the replaced entry and the over-tableau). Insertion could instead have been defined iteratively, as a loop starting from the bottom row and bumping up to the next row until appending to the end of a row is possible. One can expect an iterative implementation to be simpler and faster than a recursive implementation.
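The insertion and multiplication procedures above can be sketched iteratively in Python. The sketch stores a tableau as a list of rows, bottom row first (a convention of this sketch):

```python
from bisect import bisect_right

def insert(rows, x):
    """Iterative insertion of entry x; rows[0] is the bottom row."""
    for row in rows:
        if not row or x >= row[-1]:
            row.append(x)          # appending keeps the row sorted
            return
        i = bisect_right(row, x)   # leftmost entry strictly larger than x
        row[i], x = x, row[i]      # replace it; bump it up to the next row
    rows.append([x])               # the bumped entry starts a new top row

def row_reading(rows):
    """Concatenate the rows, top row first, bottom row last."""
    return [x for row in reversed(rows) for x in row]

def multiply(t, u):
    """Knuth multiplication v = t u: insert the entries of u into a copy of t."""
    v = [row[:] for row in t]
    for x in row_reading(u):
        insert(v, x)
    return v

# Tableaux a and b from (1), stored bottom row first:
a = [[1, 2, 3, 3], [3, 3, 4], [4]]
b = [[1, 3], [2]]
assert multiply(a, b) == [[1, 1, 2, 3, 3], [2, 3, 3], [3, 4], [4]]  # matches (11)
assert len(row_reading(multiply(a, b))) == 8 + 3   # sizes add
```

The assertions reproduce the product a b computed step by step in Section 2.4.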

2.4 Example of Knuth multiplication

As an example, let us multiply t = a and u = b from (1). We let v = a , as the initialization. Since ρ ( b ) = 213 , we must first insert x = b 1 = 2 into v .

The bottom row v ̲ is now 1233. The concatenation v ̲ x = 12332 is not sorted from lowest to highest because the last two entries have 3 > 2 . So, we must apply the second option for insertion, which bumps up an entry in the bottom row. The least entry r of v ̲ that is larger than x = 2 is r = 3 . So, we replace the bottom row 1233 by 1223. The replaced symbol r = 3 is bumped up into the over-tableau v ¯ . To do the bumping, we first look at the over-tableau w = v ¯ , which is

(8)
    w = v ¯ = 4
              334

Because we are now trying to insert 3 into v ¯ , let us now write x = 3 . We see that x = 3 cannot be appended to the bottom row w ̲ = 334 of w , because 3343 is not sorted. Another entry must be bumped up. In this case, the smallest entry larger than x = 3 is r = 4 , so we must bump up 4. The bumped-up entry, 4, can be appended to the very top row, giving a new top row, 44. So, after the insertion of 2 into v , we obtain a new value for tableau v :

(9)
    v = 44
        333
        1223,

where the changed entries have been shown in bold.

The next entry of b to insert is b 2 = 1 , which must be inserted into this new v . From our experience of the previous insertion, we can see what happens a little more quickly: 1 bumps 2 from the first row (first from the bottom), which bumps 3 from the second row, which bumps 4 from the third row. This time, the last bumped entry 4 gets appended to the empty row above, creating a new non-empty row. The new v is

(10)
    v = 4
        34
        233
        1123,

where the changed entries of the new v have (again) been shown in bold. Note that the changed entry in the bottom row is the inserted entry b 2 , while the changed entries in the over-tableau were the bumped entries.

The final entry of b to insert is b 3 = 3 . This entry gets appended to the bottom row (because appending it gives a sorted row), which gives the new v and also the final multiplication of a and b ,

(11)
    a b = v = 4
              34
              233
              11233

with the single changed entry shown in bold.

2.5 The plactic monoid

Recall that a monoid is a set with an associative binary operation and an identity element. Often, the operation is written as multiplication, in which case, the associative law means that a ( b c ) = ( a b ) c for all a , b , and c . If clear from context, the identity element is written as 1, in which case, the identity element condition means that 1 a = a = a 1 for all a .

Knuth’s multiplication of semistandard tableaux is an associative binary operation. Associativity is not obvious: Knuth’s realization that semistandard tableaux could be multiplied associatively was a major insight.

The identity element for Knuth’s multiplication of semistandard tableaux is the empty tableau. For tableaux, the notation 1 for the identity element conflicts with the tableau of single entry of value 1, so we can instead write ε . Then ε t = t = t ε for any semistandard tableau t .

Therefore, the set of semistandard tableaux with Knuth multiplication meets the defining axioms of a monoid. This monoid is now called the plactic monoid [7].

The plactic monoid is non-commutative: there are tableaux a and b such that a b ≠ b a (and non-commuting is typical). For example, ( 1 ) ( 2 ) = 12 , a single-row tableau, while ( 2 ) ( 1 ) is the two-row tableau with row reading 21, so ( 1 ) ( 2 ) ≠ ( 2 ) ( 1 ) .

The plactic monoid is non-cancellative: there are distinct tableaux a , e , b with a b = e b (for any a , b , it is typical for there to exist such an e ). For example, ( 12 ) ( 1 ) and ( 21 ) ( 1 ) both equal the two-row tableau with row reading 211 (top row 2, bottom row 11), even though 12 ≠ 21 .

An alternative definition of the plactic monoid uses generators and relations to give a monoid presentation, which is summarized below. There is a monoid generator for each possible entry value; the entries can be drawn from any ordered set, but we traditionally take it to be a set of consecutive non-negative integers. The relations are the Knuth relations, which say that y x z = y z x if x < y ≤ z , and x z y = z x y if x ≤ y < z . Knuth [1] proved that each congruence class of words under these relations has a unique representative that is the row reading of a semistandard tableau.

This alternative definition allows us to think of each word as being some alternative representation of a semistandard tableau.

2.6 The Robinson–Schensted algorithm

Knuth multiplication is almost equivalent to the Robinson–Schensted algorithm [8,9].

The Robinson–Schensted algorithm takes any word u = u 1 u 2 ⋯ u n (not necessarily a row reading of a tableau) and then inserts the entries u i (in order) into a tableau, starting from an empty tableau, using the same insertion method as in Knuth multiplication. The result is always a semistandard tableau, which is traditionally written P ( u ) .

In the monoid presentation of the plactic monoid, a word w represents a tableau t if and only if P ( w ) = t .

Knuth multiplication has an alternative definition in terms of the Robinson–Schensted algorithm as:

(12) a b = P ( ρ ( a ) ρ ( b ) ) .

Consequently, for any semistandard tableau t , we have t = P ( ρ ( t ) ) .

The column reading γ ( t ) of tableau t is a word formed by reading the columns from left to right while reading each individual column from top to bottom. In other words, separate the columns, topple each column to the left and concatenate the fallen columns. Perhaps surprisingly, P ( γ ( t ) ) = t .

The Robinson–Schensted algorithm can also be interpreted as the Knuth multiplication of n single-entry tableaux.
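A short Python sketch of P ( u ) , the row reading, and the column reading, checking t = P ( ρ ( t ) ) and P ( γ ( t ) ) = t on tableau a from (1) (rows stored bottom row first, an assumption of this sketch):

```python
from bisect import bisect_right

def insert(rows, x):
    """Schensted insertion; rows[0] is the bottom row."""
    for row in rows:
        if not row or x >= row[-1]:
            row.append(x)
            return
        i = bisect_right(row, x)
        row[i], x = x, row[i]
    rows.append([x])

def P(word):
    """Robinson-Schensted: insert the entries of word into an empty tableau."""
    rows = []
    for x in word:
        insert(rows, x)
    return rows

def row_reading(rows):
    return [x for row in reversed(rows) for x in row]

def column_reading(rows):
    """Columns left to right, each column read top to bottom."""
    width = len(rows[0]) if rows else 0
    word = []
    for j in range(width):
        for row in reversed(rows):   # top row first within a column
            if j < len(row):
                word.append(row[j])
    return word

t = P([4, 3, 3, 4, 1, 2, 3, 3])      # the word ρ(a) for tableau a from (1)
assert t == [[1, 2, 3, 3], [3, 3, 4], [4]]
assert P(row_reading(t)) == t        # t = P(ρ(t))
assert P(column_reading(t)) == t     # P(γ(t)) = t
```

Both words, the row reading and the column reading, represent the same tableau in the plactic monoid.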

3 Division of semistandard tableaux

A binary operator / is a (right) division operator if

(13) ( ( a b ) / b ) b = a b

for all a , b . Call / a divider (for brevity).

Note that other definitions for a divider are possible. In other areas of algebra, division is often required to cancel multiplication, meaning that ( a b ) / b = a for all a and b . A canceling divider is not possible for Knuth multiplication, because the plactic monoid is non-cancellative. The less strict requirement (13) for division is relevant to the security of plactic key agreement, so that is the definition that we will use.

3.1 Left division reducible to right division

The plactic monoid is anti-automorphic, meaning that it is isomorphic to its mirror image, where one swaps the left and right tableaux in the definition of multiplication. (The anti-automorphism swaps the values of high and low entries and also reverses the order of the row readings.)

The anti-automorphism means that left division, an operator ∖ such that b ( b ∖ ( b a ) ) = b a , can be reduced to right division.
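A Python sketch of the anti-automorphism on words, as reverse-and-complement over a fixed alphabet (the alphabet bounds lo and hi are assumptions of this sketch): applying the mirror map φ to a product reverses the order of the factors. Given any right divider / , one explicit form of the reduction is then b ∖ e = φ ( φ ( e ) / φ ( b ) ) .

```python
from bisect import bisect_right

def insert(rows, x):
    """Schensted insertion; rows[0] is the bottom row."""
    for row in rows:
        if not row or x >= row[-1]:
            row.append(x)
            return
        i = bisect_right(row, x)
        row[i], x = x, row[i]
    rows.append([x])

def P(word):
    rows = []
    for x in word:
        insert(rows, x)
    return rows

def row_reading(rows):
    return [x for row in reversed(rows) for x in row]

def mirror(word, lo=1, hi=4):
    """Reverse the word and complement each entry within [lo, hi]."""
    return [lo + hi - x for x in reversed(word)]

# The mirror map reverses products: P(mirror(uv)) = P(mirror(v)) P(mirror(u)).
u, v = [2, 1, 3], [4, 3, 3, 4, 1, 2, 3, 3]
left = P(mirror(u + v))
right = P(row_reading(P(mirror(v))) + row_reading(P(mirror(u))))
assert left == right
```

The mirror map is an involution on words ( mirror(mirror(w)) = w ), so applying it before and after a right division yields a left division.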

3.2 Division by erosion

This section explains division by erosion, an algorithm to divide semistandard tableaux that is set out in this article (see also Appendix A).

3.2.1 An overview

Eroding b from d = a b means trying to delete the entries of b from d , one-at-a-time, in the opposite order from which they were inserted. Each deletion bumps entries down the tableau, starting from a peak in the tableau (as defined in Section 3.2.2). Erosion tries each peak of d to see if it deletes the last entry x of b that needs to be deleted. On a match, it recursively uses erosion to delete more entries of b from d . Whenever a match fails, erosion backtracks the attempted deletion and tries to delete and recursively erode from a new peak of d .

Note the following metaphoric terminology. The term plactic is related to plate tectonics, and multiplication of tableaux is a little like two mountains colliding and merging into one larger mountain. By contrast, erosion amounts to the opposite process, i.e., the larger mountain eroding down into a smaller mountain. More specifically, the erosion algorithm tries to remove random small pieces of the larger mountain by pushing down from various points at the upper surface of the mountain. The deletion down the rows can be compared to a stream eroding the mountain.

3.2.2 Peaks of a tableau

A peak p of a tableau is an entry p with no entry above it and no entry to the right. Equivalently, a peak is an entry that is both the top of a column and the end of a row. In other words, a peak is an upper-right corner of a tableau. For example, tableau d = a b from (11) has four peaks: the 4 in the top row, the 4 at the end of the second row, the 3 at the end of the third row, and the final 3 in the bottom row, as shown below:

(14)
    4
    34
    233
    11233

When inserting a new entry x into a tableau v to obtain a larger tableau w , the last entry of w to be changed will be a peak of w . This new peak is the only entry of w in a new position that was not occupied in v . In other words, insertion always ends at a (new) peak. This view suggests that peaks should be the starting point for the process opposite to insertion (deletion, defined in Section 3.2.3).

Note that entries at different peaks can take the same value, so peak includes the position of an entry as well as its value.

3.2.3 Deletion

Deletion, defined below, starts at a given peak p of a given tableau t , ends by producing a smaller tableau s (a modification of t ), and results in an entry x being deleted from the bottom row of t . In other words, the input is tableau t and the peak p . The tableau s and the deleted entry x are the output.

Knuth [1] defined deletion. Deletion is essentially the opposite of insertion, in the sense that inserting entry x into s results in t . It can be described as follows:

  1. Initialize s to be the tableau t .

  2. Initialize x to be value of entry p .

  3. Let r be the positive integer such that p is in row r of s (the bottom row is row 1, and then we count up).

  4. Remove p from (the end of) row r of s (so now | s | = | t | − 1 ).

  5. While r > 1 , repeat the following:

    1. Reduce r by 1.

    2. In row r of s , let y be the largest entry with y < x .

    3. In row r of s , replace the rightmost copy of y by x .

    4. Update x to value y .

  6. Output the tableau s and the final entry value x .

Note that deletion and insertion also figure in various bijective combinatorial correspondences, from pairs of various kinds of tableaux to permutations, to sequences, and to certain classes of symmetric matrices.

3.2.4 Examples of deletion

Each of the four peaks in (14) results in its own deletion; the four deletions are illustrated as follows:

(15)
              4          4          4
    44        3          34         34
    333       234        23         233
    12233     11333      11333      1123
    ------------------------------------
    1         2          2          3

The new smaller tableau s is shown above the line, and the deleted entry x is shown below the line. In each deletion, one entry per row changes along the path from the peak down to the bottom row, and the former position of the peak is vacated (the peak is not an entry of s ).

Note that sliding the changed entries up a row shows the process of reinsertion of the deleted entry. Referring to the erosion metaphor, the changed entries indicate the path of a stream in the erosion of the mountain.
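A Python sketch of the deletion procedure, which reproduces the four deletions of (15). Rows are stored bottom row first (a convention of this sketch), and the caller passes the row index r of a peak:

```python
from bisect import bisect_left

def delete(rows, r):
    """Delete starting from the peak at the end of row r (rows[0] is the
    bottom row; r must index a peak). Returns the deleted bottom-row entry."""
    x = rows[r].pop()                # remove the peak entry
    if not rows[r]:
        rows.pop()                   # the peak's row was the top row; drop it
    for row in reversed(rows[:r]):   # bump downward, one row at a time
        i = bisect_left(row, x) - 1  # rightmost entry strictly less than x
        row[i], x = x, row[i]
    return x

d = [[1, 1, 2, 3, 3], [2, 3, 3], [3, 4], [4]]   # tableau ab from (11)

# The four deletions of (15), from the highest peak down to the lowest:
results = []
for r in (3, 2, 1, 0):
    s = [row[:] for row in d]
    results.append((delete(s, r), s))

assert results[0] == (1, [[1, 2, 2, 3, 3], [3, 3, 3], [4, 4]])
assert results[1] == (2, [[1, 1, 3, 3, 3], [2, 3, 4], [3], [4]])
assert results[2] == (2, [[1, 1, 3, 3, 3], [2, 3], [3, 4], [4]])
assert results[3] == (3, [[1, 1, 2, 3], [2, 3, 3], [3, 4], [4]])
```

Each result pairs the deleted entry with the smaller tableau, matching the columns of (15) left to right.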

3.2.5 Erosion of a tableau by a word

Erosion takes as input a tableau v and a word b = b 1 ⋯ b m . It either fails or returns a smaller tableau t as an output.

Erosion is a recursive procedure. The erosion of v by b will require the erosion of smaller tableaux by shorter words. The number of sub-erosions needed can be large, and the exact number seems hard to predict. It is easier to describe erosion as a recursive procedure rather than an iterative procedure.

To erode tableau v by word b 1 b m , do the following:

  1. If m = 0 , then stop and output t = v (successfully).

  2. If m > 0 , then for each peak p of v , do the following:

    1. Apply deletion to v , starting from p , getting a smaller tableau s and a deleted entry x .

    2. If x b m , then continue (to the next iteration of the loop over the peaks of v , if any peaks are left).

    3. If x = b m , then (recursively) run erosion on tableau s by word c = b 1 ⋯ b m − 1 , and do the following depending on the result:

      1. If erosion of s by c succeeds, with result of tableau q , then let t = q and stop (so that t = q is the output of eroding v by b ).

      2. If erosion of s by c fails, then continue (to the next iteration of the loop over the peaks of v , if any peaks are left).

  3. At this point, the loop over all the peaks p of v has finished, with none of the peaks successfully leading to an output tableau. In this case, stop and indicate the failure of the erosion of v by b .
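The erosion procedure above can be sketched as a backtracking recursion in Python (rows stored bottom row first; this sketch tries the lowest peaks first, which is one of the ordering variants discussed in Section 3.2.8):

```python
from bisect import bisect_left, bisect_right

def insert(rows, x):
    for row in rows:
        if not row or x >= row[-1]:
            row.append(x)
            return
        i = bisect_right(row, x)
        row[i], x = x, row[i]
    rows.append([x])

def row_reading(rows):
    return [x for row in reversed(rows) for x in row]

def multiply(t, u):
    v = [row[:] for row in t]
    for x in row_reading(u):
        insert(v, x)
    return v

def delete(rows, r):
    x = rows[r].pop()
    if not rows[r]:
        rows.pop()
    for row in reversed(rows[:r]):
        i = bisect_left(row, x) - 1
        row[i], x = x, row[i]
    return x

def peaks(rows):
    """Rows whose last entry is a peak (end of its row and top of its column)."""
    return [r for r in range(len(rows))
            if r + 1 == len(rows) or len(rows[r + 1]) < len(rows[r])]

def erode(rows, word):
    """Backtracking erosion of a tableau by a word; returns None on failure."""
    if not word:
        return rows
    for r in peaks(rows):
        s = [row[:] for row in rows]
        if delete(s, r) == word[-1]:
            q = erode(s, word[:-1])
            if q is not None:
                return q
    return None

def divide(d, b):
    """Division by erosion: some x with x b = d, or None if none exists."""
    return erode(d, row_reading(b))

a = [[1, 2, 3, 3], [3, 3, 4], [4]]
b = [[1, 3], [2]]
d = multiply(a, b)
x = divide(d, b)
assert multiply(x, b) == d                   # (13): (d / b) b = d
assert x == [[1, 2, 3, 3], [3, 3], [4, 4]]   # the answer (18), not a itself
```

With this peak ordering, the sketch returns the alternative quotient (18) rather than a , illustrating that a divider need not cancel multiplication.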

3.2.6 Division by erosion

Division by erosion means to divide tableau d by tableau b by applying erosion of tableau d by word ρ ( b ) (the row reading of b ).

More generally, one can use erosion of d by any word w representing b , meaning that P ( w ) = b .

3.2.7 Example of erosion

We now divide tableau d = a b from equation (11) by b from equation (1). Since ρ ( b ) = 213 , we see that we need to delete m = 3 entries. The first entry to delete is b 3 = 3 . The four peaks of d were listed already in (14), and the results of deletions from these four peaks were listed already in (15). Only one of the four peaks results in the deletion of b 3 = 3 , which happens to be the lowest peak (the rightmost peak).

The deletion of 3 gives us a smaller tableau s that also has four peaks. The deletions of the four peaks of s are as follows:

(16)
              4          4          4
    44        3          34         34
    333       234        23         233
    1223      1133       1133       112
    -----------------------------------
    1         2          2          3

The next entry to delete is b 2 = 1 , but the only peak of s that results in deleting 1 is the highest peak (the leftmost peak). So, now we continue the erosion process from the leftmost tableau of the four choices above.

The new smaller tableau s has three peaks that get deleted as follows:

(17)
    4         44        44
    334       33        333
    1233      1233      122
    ---------------------------
    2         2         3

The next entry of b to delete is b 1 = 2 . We see that the deletion of two of the peaks results in the deletion of 2. Erosion will output one of these results, depending on the ordering of the peaks in the iteration loop over the peaks.

Deleting from the highest peak of s gives back the original tableau a from (1). But suppose that we somehow used a different ordering of the peaks, so that we end up deleting the second-highest peak from s . Then division by erosion would give a final answer of

(18)
    d / b = 44
            33
            1233

3.2.8 Variants of erosion

Variants of erosion are possible by choosing different methods of looping through the peaks. Highest-to-lowest and lowest-to-highest are two simple methods. Choosing the peaks in a random order might also help in some cases (perhaps a and b have the property that the fixed orderings of peaks are slower than random peak orderings).

Shortcuts are also possible. The highest peak deletes the lowest possible value of the deleted entry. A peak also deletes an entry of the bottom row in the same column or in a column further to the right. These observations provide a shortcut to reduce the number of tests on non-matching peaks. If there are many peaks, then perhaps a binary search would be a faster way to find the matching peaks.

Shortcuts to delete only matching peaks could improve the speed of erosion by the inverse of the proportion of matching peaks to non-matching peaks. This is easily upper bounded by approximately | d | , the maximum number of peaks.

Instead of pushing down from the peaks, one can try to pull down from the bottom row. Implementations of this worked but were much slower than erosion. The problem may have been that there were multiple choices of which entry to pull down from the row above. Thus, deleting a single entry requires exploring many pull downs and backtracking from incompatible pull downs.

Note that the process of deletion is actually part of an enhanced version of the Robinson–Schensted algorithm. On input of a word w , the enhanced Robinson–Schensted algorithm outputs a pair [ P ( w ) , Q ( w ) ] of tableaux. The extra tableau Q ( w ) has the same shape as P ( w ) (meaning each row r of Q has the same length as row r of P ). The set of entries of Q is exactly the set { 1 , … , | Q ( w ) | } . Such a tableau is called a standard tableau. Basically, Q records the positions of the peaks when generating P ( w ) from w . Because the peaks are recorded in Q , deletion can be applied to recover w exactly from [ P ( w ) , Q ( w ) ] . The enhanced Robinson–Schensted algorithm is a bijection between words and pairs of tableaux of the same shape, with the first tableau semistandard and the second tableau standard.

Therefore, erosion of tableau t by word b can be interpreted as a simplification of the following algorithm. Loop over standard tableaux s of the same shape as t . Apply the enhanced Robinson–Schensted algorithm to obtain a word w from [ t , s ] . If w = e b for some word e , then output P ( e ) as the value of d / b . Erosion simplifies this procedure by not computing all of w , but only the portion of the standard tableau s needed to check that w will have b as a suffix.

This interpretation of the erosion algorithm suggests that the number of deletion attempts in the erosion algorithm can be estimated from the number of standard tableaux s with the same shape as t , by somehow reducing this number to accurately account for savings from the shortcuts mentioned above. The number of standard tableaux s with the same shape as t is easily calculated using the hook-length formula and is exponential in the size of t . The significant open question is whether the savings arising during erosion (from not using all of each standard tableau s ) are enough to reduce the number of steps to a number that is sub-exponential (or polynomial) in the size of t .

3.3 Division by trial multiplication

Division by trial multiplication means computing d / b as follows. Use a quick method to generate a long search list [a_1, …, a_L] of tableaux. (Here a_i denotes a whole tableau in the list, not an entry in the row reading of a tableau a.) Loop from i = 1 to i = L, possibly ending early. At each iteration, compute d_i = a_i b. If d_i = d, then stop and output a_i as the value of d / b. If no iteration has d_i = d, then report failure to divide.

More precisely, a division algorithm uses trial multiplication if the vast majority of its run-time computation is spent on tableau multiplications d i = a i b . (So, a division algorithm that spends more time generating a i than testing them is not to be considered trial multiplication. In particular, division by erosion, which spends a long time to generate a one-tableau list [ a 1 ] , should not be considered as a trial multiplication.)

Trial multiplication is strict if the search list [a_1, …, a_L] does not depend on d. More precisely, strict trial multiplication can generate [a_1, …, a_L] from a priori information about the method used to generate a, such as the random variable Alice uses to generate a, but it cannot depend on any information specific to the target instance of the generated a, which includes d = a b.

For small-sized a, or similarly, for a generated from a small set of large tableaux, trial multiplication can be faster than erosion, costing only a few tableau multiplications (one for each possible a).

The rest of this section discusses some details of trial multiplication, such as how to generate the search list [ a 1 , a 2 , , a L ] .

3.3.1 Guaranteed success

Division by trial multiplication is guaranteed to succeed if it can be guaranteed that the search list contains the a used to generate the input d = a b defining the instance of the division problem.

Given a list [a_1, …, a_L] for which this is not guaranteed, it is usually straightforward to expand the list so that it covers all a that could have been used to generate the problem instance. We assume below that the search list is guaranteed to contain a.

3.3.2 Generating the list as words instead of tableaux

It might be faster to generate the tableaux a_i from a list of words [w_1, …, w_L], with a_i = P(w_i), where P is the Robinson–Schensted algorithm that converts words to semistandard tableaux.

3.3.3 Content of a tableau

The (multiplicity) content μ(t) of a tableau t is an array [μ(t)_x] of integers such that t contains μ(t)_x entries with value x. Assuming that all entries of t are positive integers, we write μ(t) = [μ(t)_1, μ(t)_2, …].

For example, in the case of tableaux a and b from (1), the contents are:

(19) μ(a) = [1, 2, 4, 2, 0, 0, …], μ(b) = [1, 2, 3, 0, 0, 0, …].

Note that a tableau's size is easily determined from its content by summing: |t| = Σ_x μ(t)_x. In addition, like size, content is additive over tableau multiplication: μ(t u) = μ(t) + μ(u) (using vector addition). Content is additive because tableau multiplication computes t u by rearranging the entries of t and u into a larger tableau, and rearrangement does not affect the total multiplicity of each entry.

3.3.4 Shuffling

Given d and b, we can deduce the content of tableau a as μ(a) = μ(d) − μ(b). This leads to the following shuffling strategy to assemble a search list for candidate values of d / b.

Write μ = μ(a) for short, and pick the first word w_1 of the search list to be the weakly increasing word

(20) 1^{μ_1} 2^{μ_2} 3^{μ_3} ⋯,

where an exponent indicates how many times an entry is repeated. This word has content μ = μ(a). The remaining words in the word search list [w_1, …, w_L] can be obtained by permuting the entries of w_1. Methods to generate all distinct permutations of a given word are well known and fast. The number of distinct permutations of w_1, and thus the length L of the search list, is then

(21) L = (μ_1 + μ_2 + μ_3 + ⋯)! / (μ_1! μ_2! μ_3! ⋯).

This shuffling saves us from wasting time on trials w_i whose content is inconsistent with d and b.

3.3.5 Shortening the list

The shuffling strategy will produce multiple words per tableau. This means that the list [ a 1 , , a L ] has repeated entries, even if the list of words [ w 1 , , w L ] has no repeats.

Given enough memory, some of the repeated computations d i = a i b and d j = a j b for a i = a j can be avoided. Using Robinson–Schensted to obtain a i from w i is actually the first part of computing d i = a i b . A hash of a i can be stored in a list. When a j is computed, it can be hashed, and compared to the list of hashes. If a j = a i is detected via the hash list, then the rest of the computation d j = a j b can be skipped.

There also exist various combinatorial methods to generate tableaux directly (such as [10] and [11]; see [12] for others). These generation methods might take longer to generate the search list a_1, …, a_L and arguably fall outside the category of trial multiplication. More importantly, it is unclear by how much they would speed up division.

3.3.6 Non-cancellation increases the success rate

The length L of the search list [ a 1 , , a L ] is an upper bound on the number of trial multiplications needed.

Knuth multiplication is non-cancellative, so there will typically be many different a_i such that a_i b = d. If C is the average number of such a_i, then L / C is a better estimate for the number of trial multiplications actually needed.

As a speculative heuristic, an a priori guess is that C ≈ √L might measure the amount of non-cancellation, so that only √L trial multiplications would actually be needed. This heuristic has not been tested empirically.

3.4 Division by max-algebra matrices

Johnson and Kambites [6] found a way to represent a semistandard tableau as a matrix, such that Knuth multiplication converts to matrix multiplication over a max-algebra. A max-algebra (also known as a tropical algebra) consists of numbers but with different operations. The usual addition operation is replaced by maximization. (The usual multiplication operation is either kept the same, in which case only nonnegative numbers are used, or, in some versions, replaced by addition, in which case an extra element, −∞, is included in the algebra.)

This suggests that dividing tableaux can be achieved by dividing max-algebra matrices.

The Johnson–Kambites representation takes a tableau with entries in the set {1, …, n} and represents it as a square matrix with 2^n rows and 2^n columns. For large n, these matrices are quite large. Max-algebraic division of 2^n by 2^n matrices should, in general, be possible with 8^n arithmetic operations. The matrices in the Johnson–Kambites representation are sparse and triangular, so a more specialized division of such sparse triangular matrices might cost less, perhaps only 2^n arithmetic operations (with 2^n being a lower bound, the cost of just reading all the entries).

These considerations suggest a tentative estimate that, for large size tableaux, division by erosion will be faster than using the Johnson–Kambites representation and generic max-algebra matrix division algorithms.

That said, as in classical representation theory, the Johnson–Kambites representation can be decomposed into smaller representations. If the smaller representations are faithful (isomorphic to the plactic monoid), then a division of smaller sized matrices might lead to division in the plactic monoid.

A reasonable question is therefore whether these smaller representations within the Johnson–Kambites representation are worth investigating more thoroughly.

3.5 Division by introspection

Suppose that a_i < b_j for all i, j. When computing d = a b, no entry b_j of b will bump any entry a_i of a. Therefore, the semistandard tableau d contains, on the lower left, an exact copy of the semistandard tableau a. In other words, we merely need to look inside d to find a, ignoring all entries from b. Let us call this division algorithm for this special case introspection (for looking inside).

In this special case (all a i < b j ), introspection is much faster than erosion. Empirically, erosion seems no faster than usual when applied to this special case.

For a and b generated from long enough random words, the probability that all a_i < b_j is negligible, so introspection should not work.

Nonetheless, the fact that introspection can sometimes be faster than erosion (in the special case above) is a significant concern. A natural question is therefore whether introspection can be generalized beyond this very special case. Perhaps, for example, the idea behind introspection can be blended into the erosion algorithm to pull out smaller entries of a more quickly.

4 Key agreement

Alice and Charlie agree on a secret key as follows. All keys are semistandard tableaux. Alice generates her private key a ; Charlie generates his private key c . Alice and Charlie both have the same base key b (but b is not secret). Alice computes her public key d = a b and delivers d to Charlie. Charlie computes and delivers his public key e = b c to Alice. Alice computes a secret key f = a e . Charlie computes a secret key g = d c . Alice and Charlie’s secret keys f and g agree, because f = a b c = g and Knuth multiplication is associative, a ( b c ) = ( a b ) c .

In practice, Alice and Charlie will apply a key derivation function H to the agreed secret key f to derive a shared key k = H(f) = H(g) suitable for an encryption algorithm or an authentication algorithm (or both). Without key derivation, the original tableau f = g might be the wrong length to use as an encryption key or as an authentication key. Worse, without key derivation, the tableau f would be too easily distinguishable from a uniformly random byte string (or even a random tableau of the same size). (For example, the tableau can be distinguished by its multiplicity content, since μ(a b c) = μ(a b) − μ(b) + μ(b c) = μ(d) − μ(b) + μ(e).) Key derivation functions seem to correct these functional and security deficiencies. Key derivation functions provide variable output lengths, allowing the output to be the correct length for a given encryption or authentication algorithm. Key derivation functions aim to be pseudorandom functions in the sense that input of sufficient secret entropy should usually produce output strings that are computationally indistinguishable from uniformly random strings.

(More precisely, for common key derivation functions, it is generally believed to be computationally infeasible to find a random variable with sufficient entropy that, on input to the key derivation, yields output distinguishable from random. Such random variables may exist, just as colliding inputs to a compressing hash function may exist, but finding the inputs is deemed infeasible.)

4.1 Main security aims

The main security aim of plactic key agreement is to resist an attacker who tries to compute the agreed secret key f = g by knowing the base key b , and the delivered keys d and e .

Note that several other types of attackers can be defined. For example, some attackers might not observe d, but can observe the use of the key f. Some attackers might not know b, perhaps because it is derived from a password, and will try to guess b. Some attackers might somehow get to see a set of possible agreed secret keys {f_1, f_2, …} that contains f, and need to find i such that f_i = f. (See [5] for more discussion.) Resisting these other attackers is only a secondary aim of plactic key agreement. Resisting these other attackers would be moot if the main security aim could not be met.

4.2 Attacking key agreement using division

The attacker can compute the agreed secret key f by the computation

(22) f = ( d / b ) e .

The proof that this works is an elementary calculation:

f = a b c = (a b) c = (((a b) / b) b) c = ((d / b) b) c = (d / b)(b c) = (d / b) e.

Note that when the attacker computes d / b , it is not necessarily the case that a = d / b , because the plactic monoid is non-cancellative. The value d / b can be considered, however, to be effectively equivalent to Alice’s private key, because both private keys a and d / b generate the same public key d . In other words, ( d / b ) b = a b . In this sense, using division to attack plactic key agreement is similar to solving the discrete logarithm problem to attack elliptic curve Diffie–Hellman (ECDH) key agreement, because both attacks find the effective private key of one of the users.

As noted earlier in this article, an anonymous reviewer found another attack on plactic key agreement. This other attack uses two divisions instead of the one used above, but it appears faster than the attack above, because the sizes of the tableaux being divided are smaller. The attack works like this. First, factor the base b as b = b 1 b 2 . Then, try to compute f as

(d / b_2)(b_1 \ e),

where \ denotes left division. Unlike the previous attack, this attack is not guaranteed to work. Nevertheless, the JMC reviewer reports that it succeeds with probability about 85% when b has 70 entries and each b_i has 35 entries. (This new attack is analogous to solving the Diffie–Hellman problem, whereas the previous attack was analogous to solving the discrete logarithm problem.)

This attack seems considerably faster than the single-division attack because the two divisions have shorter divisors b_i. If division has cost exponential in the number of entries of the divisor, halving the number of entries has the effect of taking the square root of the number of steps.

This suggests that, in order to resist the second attack as well as the first attack, the length of the divisor b should be doubled.

5 Discussion

5.1 Provenance

The combinatorics underlying plactic key agreement has prominent provenance, originating from very reputable authors, such as Knuth and Schützenberger. If a fast division algorithm had been obvious to these authors, they might simply have had no reason to mention it.

5.2 Quantum computer attack vulnerability

The combinatorial nature of the plactic key agreement involves many branching steps and nonreversible operations. Quantum computers tend to have an advantage over classical computers when many non-branching and reversible operations can be computed in quantum superposition. This intuition suggests that plactic key agreement might have some innate resistance to quantum computer attacks. (On the other hand, perhaps the Johnson–Kambites matrix representation runs faster on a quantum computer.)

5.3 Nascency

Plactic key agreement is quite new. In particular, the thought put directly into its security analysis, or indirectly into the computational problems related to the plactic monoid (or to Rabi and Sherman's associative key agreement), is at the moment less than the future thought that will be put into its cryptanalysis (assuming that plactic key agreement becomes a worthy target of cryptanalysis).

As with any new cryptography, plactic key agreement should be considered risky. Indeed, the generic model from the study of Brown [13] provides one way to quantify this risk. If we estimate a fame of 4 (meaning 16 times as much thought will be put into further cryptanalysis) and take the recommended low optimism level of 0.05, we obtain an attack probability of approximately 1 − 2^{−69}, which is extremely close to one. Despite this pessimistic estimate, arguably an overestimate, even the securest cryptography must start off being new, and even the most mature cryptography has nonzero risk, so research into new cryptography seems worthwhile.

5.4 Diversification

Plactic key agreement seems independent of the more established cryptography schemes, such as ECDH, supersingular isogeny key exchange (SIKE), McEliece public-key encryption, and NTRU public-key encryption. (But SIKE was recently broken.)

Intuition suggests that it is unlikely that an attack on one of these established schemes would extend to plactic key agreement. One might conjecture that the security of plactic key agreement is independent of these established key encapsulation methods (KEMs). In other words, plactic key agreement could contribute to cryptographic diversity. For very high-risk applications with large budgets, it might be worth layering (see [13]) plactic key agreement with the more established KEMs. Even without layering, cryptographic diversity helps enable cryptographic agility, meaning an ability to react quickly and smoothly to a catastrophic attack on a deployed cryptographic system; without cryptographic diversity, there would be no viable alternative to switch to.

5.5 Usability comparison

Assuming the current security analysis advances no further, a preliminary, tentative usability comparison to other key establishment schemes can be useful.

At the 128-bit security level, meaning an attacker requires at least 2^128 bit operations to break the security, under the current estimates, parameters for plactic key agreement would require on the order of 1,000 bytes to be sent in each direction, and the run-time on a 2021 personal computer might be on the order of 1 ms.

Plactic key agreement has a small implementation footprint, because, as an algorithm, it mostly consists of a doubly nested loop with byte comparisons and swaps. It can be implemented with a short C program and perhaps with fairly simple hardware.

Plactic key agreement might be inherently vulnerable to side-channel attacks because, as an algorithm, it uses many comparisons, which could lead to secret-dependent timing or power usage.

5.5.1 ECDH key agreement

The most common key agreement scheme used today is ECDH. A risk with ECDH is that a sufficiently large quantum computer could break it in polynomial time, likely rendering practical key sizes insecure. Nonetheless, ECDH is still widely used and therefore serves as a good comparison for usability.

Typically, at the 128-bit security level, an ECDH public key has a byte length from 32 to 64 (depending on whether compression is used). On a 2022 personal computer, the runtime is about 0.1 ms. Compared to plactic key agreement, ECDH is on the order of ten times faster, with about 30 times less data usage.

5.5.2 Post-quantum KEMs

The National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) Project recently announced that it will standardize a KEM called Kyber.

For speed, Kyber is approximately as fast as ECDH, so Kyber is about ten times faster than plactic key agreement. For data size, Kyber sends about 2,000 bytes, about twice as much data as plactic key agreement. Since plactic key agreement is so new and, therefore, unstable, this slim advantage of plactic over Kyber should likely be disregarded.

Four other KEM algorithms are being considered in Round 4 of the NIST PQC project: SIKE, bit flipping key encapsulation (BIKE), Classic McEliece, and Hamming quasi-cyclic (HQC):

  1. SIKE was broken within a month of the Round 4 announcement.

  2. Classic McEliece has public keys on the order of a megabyte, about 1,000 times larger than plactic key agreement.

  3. BIKE and HQC are more comparable in usability to Kyber, so might be more usable than plactic key agreement.

Overall, because plactic key agreement is so new, for security reasons, it should not replace ECDH or any of the PQC finalists, regardless of the usability comparisons.

Instead, plactic key agreement should only be used for cryptographic diversity. It could be implemented as a fallback ready to be deployed in the event of a disaster that breaks the currently used system, or it could be used as an extra redundant layer of security as part of a hybrid key agreement.

These two possible benefits should be compared to the usability cost of plactic key agreement, as part of a careful cost–benefit analysis.


A. Menezes, A. Živković, three anonymous Crypto 2021 reviewers, and three anonymous JMC reviewers provided helpful suggestions to improve the clarity of this report. Critically, one of the JMC reviewers found a new attack that seems to require doubling the size of the tableau b used in key agreement, and C. Monico found a new algorithm for division that is potentially polynomial time, and likely to make plactic key agreement insecure. T. Suzuki supported and encouraged the proposal of plactic key agreement.

Conflict of interest: Author states no conflict of interest.

Appendix A Empirical estimates for the cost of erosion

This section reports some empirical estimates for the cost of erosion. Tables A1 and A2 show the main statistical results, measuring costs in the number of bytes swapped during erosion. A rigorous cost analysis has not yet been found (by me), so the empirical results are the best available guess for the cost of erosion.

Table A1

Statistical results for costs (number of swaps)

|a| = |b|              10        20        30        40        50        60
Sample trials          41,889    4,630     629       200       154       149
Minimum                3.1 × 10^1  1.8 × 10^2  7.8 × 10^2  7.9 × 10^3  3.7 × 10^4  9.0 × 10^4
Mean                   2.2 × 10^2  5.3 × 10^3  1.3 × 10^5  2.3 × 10^6  7.7 × 10^7  2.4 × 10^9
Standard deviation     1.7 × 10^2  8.3 × 10^3  2.8 × 10^5  3.9 × 10^6  2.1 × 10^8  1.4 × 10^10
Relative deviation     0.788     1.575     2.178     1.705     2.772     5.796
Skewness coefficient   2.905     5.418     5.945     4.144     8.459     11.290
Kurtosis coefficient   14.718    46.045    46.359    23.146    86.222    130.846
log2(minimum)/|b|      0.495     0.373     0.320     0.324     0.303     0.274
log2(mean)/|b|         0.777     0.618     0.565     0.528     0.524     0.519
Table A2

Statistical results for log-costs

|a| = |b|              10        20        30        40        50        60
Sample trials          41,889    4,630     629       200       154       149
Minimum                4.954     7.451     9.613     12.953    15.156    16.458
Mean                   7.457     11.437    15.412    19.643    23.848    27.500
Standard deviation     0.898     1.595     2.136     2.305     2.985     3.457
Relative deviation     0.120     0.139     0.139     0.117     0.125     0.126
Skewness coefficient   0.531     0.247     0.042     0.334     0.336     0.175
Kurtosis coefficient   0.050     0.325     0.203     0.215     0.213     0.068
Minimum/|b|            0.495     0.373     0.320     0.324     0.303     0.274
Mean/|b|               0.746     0.572     0.514     0.491     0.477     0.458
log_|b|(Mean)          0.873     0.813     0.804     0.807     0.811     0.809

Previous versions of this article used less extensive statistical tests, running only a few trials per tableau length. Because the costs have large deviations (as seen in Table A1), a simplistic lower bound on the cost was estimated, aiming cautiously below the lowest observed costs. Specifically, this lower bound estimate is exponential in the length, leading to a cost estimate of 2^{0.3L}, where |a| = |b| = L.

At the suggestion of a JMC reviewer, the current, more thorough tests were run. The current tests use far larger sample sizes, with many more divisions per tableau length. The standard deviation of the cost is still higher than the mean, but the larger sample sizes helped to inspect the cumulative probability function. Graphing the sorted costs suggested that the costs have a distribution close to a log-normal distribution.

Furthermore, the mean of the log-cost seems to grow like L^{0.81}, suggesting sub-exponential cost growth quite close to 2^{L^{0.81}}. This would suggest that L ≈ 400 starts to provide 128-bit security, at least on average.

A user should also worry about non-negligible probabilities of much easier divisions. To that end, the tables also list the minimum log-cost. The minimum cost grows approximately as 2^{L^{0.7}} when the sample trial sizes are at least 100.

More extensive analysis of the lower tails of the cost distribution is needed. Future statistical analysis may address this by trying erosions for larger L but stopping any trial whose cost exceeds the lower bound in question.

B Identity-based key agreement

This section describes identity-based plactic key agreement, a variant of plactic key agreement, which aims to achieve a type of identity-based key agreement.

Identity-based key agreement is a three-party protocol, in which Alice and Charlie each have a pre-existing secret communication channel with a trusted third party, Bob (whom we may call the benefactor as a mnemonic). Given the existence of these two private channels and each other's identities, Alice and Charlie can establish a third secret channel (secret to everybody except the three parties Alice, Bob, and Charlie).

To do this, identity-based plactic key agreement reverses the secrecy of the five variables a , b , c , d , and e . So, now a and c are public, while b , d , and e are secret. The public value a can be the hash of Alice’s identity. Similarly, c can be the hash of Charlie’s identity. The value b is Bob’s private key and is revealed to nobody but Bob. Bob computes d = a b and delivers d to Alice through the secret channel between Alice and Bob. Bob computes e = b c and delivers e to Charlie through the secret channel between Bob and Charlie.

Alice can compute g = d c , because Alice was given the secret d , and the value c is public, by way of Charlie’s identity. Charlie computes f = a e , using public a and secret e . As always, f = g , because f = a e = a ( b c ) = ( a b ) c = d c = g .

Note that, by contrast, in plactic key agreement, Alice computes f = a e , not g = d c .

Identity-based plactic key agreement is non-interactive in the sense that Alice and Charlie can compute their shared secret f = g without sending any data first. The communication necessary to achieve this happened earlier in their individual communications with Bob. So, Alice, for example, can start encrypting messages with f immediately, knowing that only she, Bob, or Charlie should be able to decrypt the ciphertexts.

To repeat a warning, Bob can also compute f = a b c = g , because Bob knows a , b , and c . In other words, identity-based plactic key agreement suffers from the very serious key escrow problem. If Bob were corrupt, then Bob could compute f and then undetectably decrypt any message encrypted with f .

Note that, by contrast, when conventional plactic key agreement is used with a conventional PKI, Alice and Charlie might only need to trust a third party for the authentication of their delivered public keys d and e, which might be digitally signed using a certificate issued by a trusted third party. If the third party Bob were corrupt, then Bob could impersonate one of Alice or Charlie to the other, thereby tricking Alice or Charlie into encrypting messages to Bob. But the cost to a corrupt Bob is greater here than in the identity-based key agreement, because he must participate in the key agreement session rather than listen in silently.

To mitigate the escrow problem, Alice and Charlie can run identity-based key agreement and regular key agreement simultaneously. For example, Alice and Charlie can first do the identity-based plactic key agreement above to agree on a secret f. Second, they can apply conventional plactic key agreement, using new variables a′, b′, c′, d′, and e′, to agree on a second secret f′. They can bind the results of the two key agreement schemes, perhaps by putting b′ = f or by deriving the final key as k = H(f, f′). Of course, Alice and Charlie must still trust Bob not to actively impersonate them. In this case, because the combined key agreement is interactive, Alice and Charlie must send each other a tableau (d′ and e′) before they derive a shared key for encryption.

In this case, authentication is also implicit, because there are no digital signatures involved. So, a fourth-party attacker, say Dennis (other than Alice, Bob, or Charlie), could pretend to be Charlie, causing Alice to think that she is encrypting data to Charlie when she is really encrypting with a key that nobody else knows. Because Alice uses the identity-based agreed key f, which only she, Bob, and Charlie should be able to compute, Dennis will not be able to decrypt Alice's ciphertext, even if Dennis knows f′. So, it seems that Dennis can, at worst, mount a denial-of-service attack, tricking Alice (or Charlie) into wasting time encrypting to the other. To resist this attack, Alice and Charlie might request an authenticated acknowledgement from each other during the interactive stage of the key agreement.

C Sample code

C.1 A C program for Knuth multiplication

Table A3 is a C program, written in a terse code-golf style, that can run Knuth multiplication of semistandard tableaux (and the Robinson–Schensted algorithm).

Table A3

A C program for Knuth multiplication

The simplicity of Knuth multiplication can be measured by the brevity of the C program in Table A3.

Note that the C program uses the Knuth relations to insert entries into the row reading representation of tableaux.

C.2 A C program for division by erosion

Table A4 shows a C program that runs one possible version of division by erosion.

Table A4

A C program for division by erosion

D Previous work

This section outlines the history of the technical elements making up the plactic key agreement.

D.1 Semistandard tableaux and the plactic monoid

Kostka [14] introduced semistandard tableaux to provide combinatorial explanations of the integer coefficients of symmetric polynomials. The coefficient of a monomial symmetric polynomial in a Schur symmetric polynomial is the number of semistandard tableaux of a given shape and content (these numbers are now called Kostka numbers). Jacobi [15] defined Schur symmetric polynomials as ratios of alternating polynomials. Schur later used these polynomials in matrix representations of permutations [16] (and now these polynomials are named after Schur). Young [17] used standard tableaux (semistandard tableaux with all entries distinct) to study matrix representations of permutations (and now these tableaux are often called Young tableaux).

Robinson [8] used semistandard tableaux in 1938 to find the longest non-decreasing subsequence of a given sequence. Schensted [9] extended this method in 1961 to find the longest subsequences that are unions of a given number of non-decreasing subsequences. Their independently discovered algorithm is now called the Robinson–Schensted algorithm. It is the part of Knuth multiplication obtained from multiplying an empty tableau a by some other tableau b, except that one starts from the row reading of b. The shape of the tableau b provides information about various subsequences of b.

Knuth [1] defined a monoid, which he called “tableau algebra.” Its elements can be represented by semistandard tableaux. To prove the associativity of multiplication of tableaux, Knuth showed that the Robinson–Schensted algorithm that maps strings to semistandard tableaux actually defines a congruence on the monoid of strings under concatenation (the free monoid). The resulting congruence monoid is the plactic monoid.

Lascoux and Schützenberger [7] studied Knuth's "tableau algebra" in 1981 and renamed it the monoïde plaxique (perhaps inspired by plate tectonics), which has been translated as plactic monoid, now the generally accepted term.

For a sample of some recent research about the plactic monoid, see [18], [19], and [6].

D.2 Key agreement

Diffie and Hellman [3] introduced in 1976 a key agreement scheme. (Diffie and Hellman refer to their key agreement scheme as a “public-key distribution system.” This report uses the term key agreement to avoid clashes with the more general meanings of key distribution and key exchange.) Merkle [2] independently introduced another (less efficient) type of key agreement. Diffie–Hellman key agreement uses exponentiation modulo a large prime number. Changing their notation slightly, Alice sends d = b a to Charlie, Charlie sends e = b c to Alice, Alice computes the agreed key f = e a , and Charlie computes the agreed key g = d c .

Rabi and Sherman [4] described in 1993 a key agreement scheme that uses an associative one-way function. Plactic key agreement can be considered an instance of Rabi–Sherman key agreement, with Knuth’s multiplication of semistandard tableaux as the associative binary operation.
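Associativity alone is what makes the two parties agree; a toy Python sketch of the Rabi–Sherman pattern (my own, with string concatenation as a stand-in operation, which is associative but of course not one-way):

```python
def star(x, y):
    # Stand-in associative operation. Rabi-Sherman additionally requires an
    # associative ONE-WAY function; concatenation is trivially invertible,
    # so this only illustrates the algebra of the protocol.
    return x + y

b = "base"               # public base key
a = "alice-private"      # Alice's private key
c = "charlie-private"    # Charlie's private key

d = star(a, b)           # Alice's public key   d = a * b
e = star(b, c)           # Charlie's public key e = b * c

f = star(a, e)           # Alice:   f = a * (b * c)
g = star(d, c)           # Charlie: g = (a * b) * c
assert f == g            # equal by associativity
```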

Berenstein and Chernyak [20] described a key agreement scheme in 2004 that uses semigroup multiplications instead of modular exponentiation. (Recall that a semigroup is a set with an associative binary operation. By default, we assume a multiplicative notation for the binary operation. A monoid is a semigroup that has an identity element. In particular, the plactic monoid is a semigroup.)

D.3 Encryption

Soro et al. [21] proposed using semistandard tableaux and the Robinson–Schensted algorithm for cryptography (an anonymous JMC reviewer alerted me to this work). To encrypt a message m , the Robinson–Schensted algorithm is applied, and the resulting tableau P ( m ) serves as the ciphertext. (No key is used during encryption, it seems, which is rather unusual.) The decryption key seems to be the standard tableau Q ( m ) , which allows recovery of m . Perhaps the decryption key is meant to be delivered securely by another layer of encryption, such as Rivest–Shamir–Adleman encryption. The security seems (to me) to rely on that other layer of encryption. The use of tableaux does not seem (to me) to help with efficiency either.
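To see why the recording tableau Q ( m ) is needed for decryption (a sketch of my own, not the authors' implementation): distinct words can share the same insertion tableau P , and only the pair ( P , Q ) determines the word:

```python
def rsk(word):
    """Robinson-Schensted correspondence: word -> (P, Q).

    P is built by row insertion; Q records, in the cell created at step t,
    the step number t, so the insertion history can be reversed.
    """
    P, Q = [], []
    for t, x in enumerate(word, start=1):
        r = 0
        while r < len(P):
            row = P[r]
            j = next((i for i, y in enumerate(row) if y > x), None)
            if j is None:
                row.append(x)
                Q[r].append(t)     # record which step grew this row
                break
            row[j], x = x, row[j]  # bump into the next row
            r += 1
        else:
            P.append([x])          # new row at the bottom
            Q.append([t])
    return P, Q

P1, Q1 = rsk([1, 3, 2])
P2, Q2 = rsk([3, 1, 2])
assert P1 == P2 == [[1, 2], [3]]   # same "ciphertext" P(m)
assert Q1 != Q2                    # but different "decryption keys" Q(m)
```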


[1] Knuth DE. Permutations, matrices, and generalized Young tableaux. Pacific J Math. 1970;34(3):709–27. 10.2140/pjm.1970.34.709

[2] Merkle RC. Secure communications over insecure channels. Commun ACM. 1978 Apr;21(4):294–9. 10.1145/359460.359473

[3] Diffie W, Hellman ME. New directions in cryptography. IEEE Trans Inform Theory. 1976 Nov;22(6):644–54. 10.1145/3549993.3550007

[4] Rabi M, Sherman AT. Associative one-way functions: a new paradigm for secret-key agreement and digital signatures. University of Maryland; 1993. CS-TR-3183/UMIACS-TR-93-124.

[5] Brown DRL. Key agreement: security/division; 2021. Cryptology ePrint Archive, Paper 2021/1112.

[6] Johnson M, Kambites M. Tropical representations and identities of plactic monoids. Trans Amer Math Soc. 2021;374:4423–47. 10.1090/tran/8355

[7] Lascoux A, Schützenberger MP. Le monoïde plaxique. In: Proc. Colloqu. Naples, Noncommutative structures in algebra and geometric combinatorics (Naples, 1978). vol. 109 of Quad. Ricerca Sci. CNR. 1981. p. 129–56.

[8] Robinson G de B. On the representations of the symmetric group. Amer J Math. 1938;60:745–60. 10.2307/2371609

[9] Schensted C. Longest increasing and decreasing subsequences. Canad J Math. 1961;13:179–91. 10.1007/978-0-8176-4842-8_21

[10] Greene C, Nijenhuis A, Wilf HS. A probabilistic proof of a formula for the number of Young tableaux of a given shape. Adv Math. 1979;31:104–9. 10.1016/B978-0-12-428780-8.50005-2

[11] Novelli JC, Pak IM, Stoyanovskii AV. A direct bijective proof of the hook-length formula. Discrete Math Theoret Comput Sci. 1997;1:53–67. 10.46298/dmtcs.239

[12] Sagan BE. The symmetric group: representations, combinatorial algorithms, and symmetric functions. 2nd ed. No. 203 in Graduate Texts in Mathematics. Springer; 2001. 10.1007/978-1-4757-6804-6

[13] Brown DRL. Layering diverse cryptography to lower risks of future and secret attacks: post-quantum estimates; 2021. Cryptology ePrint Archive, Paper 2021/608.

[14] Kostka C. Über den Zusammenhang zwischen einigen Formen von symmetrischen Funktionen. Crelle's J. 1882;93:89–123. 10.1515/crll.1882.93.89

[15] Jacobi CG. De functionibus alternantibus. Crelle's J. 1841;22:360–71.

[16] Schur I. Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen. Doctoral Dissertation. Universität Berlin; 1901.

[17] Young A. On quantitative substitutional analysis. Proc London Math Soc Ser 1. 1900;33(1):97–145. 10.1112/plms/s1-33.1.97

[18] Cain AJ, Gray RD, Malheiro A. Finite Gröbner–Shirshov bases for Plactic algebras and biautomatic structures for Plactic monoids. J Algebra. 2014;423:37–53.

[19] Cain AJ, Malheiro A. Identities in plactic, hypoplactic, sylvester, baxter, and related monoids. Electronic J Combinatorics. 2018 Aug;25(3). 10.37236/6873

[20] Berenstein A, Chernyak L. Geometric key establishment. In: Canadian Mathematical Society Conference. 2004. p. 1–19. 10.1090/conm/418/07945

[21] Soro KF, Akeke ED, Kouakou KM. An application of Young tableaux to cryptography. Palestine J Math. 2020;9(2):639–57.

Received: 2022-03-29
Accepted: 2022-07-23
Published Online: 2023-02-15

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.