Open Access (CC BY 4.0 license). Published online by De Gruyter, September 21, 2021.

The number of response categories in ordered response models

Maria Iannario ORCID logo, Anna Clara Monti and Pietro Scalera

Abstract

The choice of the number m of response categories is a crucial issue in the categorization of a continuous response. The paper exploits the property of Proportional Odds Models which allows ordinal responses with different numbers of categories to be generated from the same underlying variable. It investigates the asymptotic efficiency of the estimators of the regression coefficients and the accuracy of the derived inferential procedures as m varies. The analysis is based on models with closed-form information matrices, so that the asymptotic efficiency can be evaluated analytically without the need for simulations. The paper proves that a finer categorization augments the information content of the data and consequently shows that the asymptotic efficiency and the power of the tests on the regression coefficients increase with m. The impact on the efficiency of the estimators of the loss of information produced by merging categories is also considered, highlighting its risks especially when merging is performed in its extreme form of dichotomization. Furthermore, the appropriate value of m for various sample sizes is explored, pointing out that a large number of categories can offset the limited amount of information of a small sample through a better quality of the data. Finally, two case studies, on the quality of life of chemotherapy patients and on the perception of pain, both based on discretized continuous scales, illustrate the main findings of the paper.

1 Introduction

A critical point in surveys with rating questions is the choice of the number m of response categories to use in the discretization of a measurement obtained on a continuous scale (on which the only marks are those corresponding to the minimum and the maximum level). Although the categorization of continuous measurements implies a loss of information, it is a widespread practice in various fields, such as medicine and epidemiology, where researchers often split a continuous scale into ordered categories to make the interpretation of the results easier. Examples can be found in [1], [2], [3], [4], [5], among others, where categorization is performed without prearranged meaningful categories. Furthermore, [6] underline the benefits of transforming continuous responses into ordinal categories when the measurement variable is skewed, because ordinal response models handle floor and ceiling effects better than linear models.

Within the statistical literature the choice of m is discussed by [7] who studies the beneficial impact of an increasing number of categories on standard errors. [8] instead points out that a large m allows a more powerful detection of associations between variables; a result confirmed by [9] with reference to tests on differential item functioning. Furthermore [10] show that a larger value of m reduces the impact of response errors on the local robustness properties of the estimators in the modeling framework for ordinal data denoted as CUB models [11].

The current paper investigates the impact that the choice of m has on the efficiency of the estimators when a continuous response variable is discretized and the data are analyzed (in Section 4) through a proportional odds model (POM) [12, 13]. The latter naturally arises when the rating is assumed to be driven by an underlying continuous variable: each rating corresponds to an interval on the support of this variable, and choosing m is equivalent to deciding into how many classes the support is to be partitioned. With respect to alternative modeling frameworks, the POM is extremely parsimonious and is, by far, the most widely applied model in the biomedical context (see [14], [15], [16], among others).

Closely related to the choice of the number of categories, combining values/scores by collapsing adjacent categories of an ordinal response is another relevant issue and a widespread practice which arises when processing sample information after data collection. It may be pursued to overcome sparseness problems, to simplify interpretation, or to deal with extreme response styles [17]. For a given dataset, changing the number of categories can affect the inferential results obtained from the data [18, 19]. Other studies focus on merging categories to reduce the size of contingency tables, based on the homogeneity of the corresponding rows (or columns) or on the structure criterion (see [20] and references therein). Section 6 of the current paper shows that collapsing categories – in the case of a univariate ordinal response – reduces the information content of the sample, generating a loss of efficiency which becomes extremely high in the case of dichotomization.

Another critical point in data analysis concerns the appropriate number of categories with respect to a given sample size n [21]. Section 7 shows that increasing the number of categories enhances the efficiency of the estimators even if n is small. The relationship between m and n is a crucial point also discussed in the closely related field of association among categorical variables (see [20, 22, 23], among others).

In summary, the paper addresses three topics regarding the data analysis of discretized continuous variables and ordinal variables: the choice of the number of response categories, the consequences of collapsing categories, and the relationship between m and the sample size n. It is organized as follows. The next Section provides a brief overview of the POM, whereas Section 3 describes the models used for the analysis. The information matrices of these models can be derived analytically, so that the asymptotic efficiency of the estimators of the regression coefficients can be evaluated without the need for simulations. The impact of the choice of the number of response categories on efficiency and on hypothesis testing is investigated in Sections 4 and 5. The effect of various forms of merging categories is examined in Section 6, and the relationship between the number of categories and the sample size is analyzed in Section 7. In Section 8 two case studies from the medical context illustrate the main findings of the paper: the first concerns the perceived health-related quality of life of chemotherapy patients, whereas the second deals with the pain perceived by women during labor. Final remarks end the paper.

2 Theoretical background

In the POM framework, the ordinal response Y depends on an underlying continuous variable Y* through the relationship

(2.1)  Y = j  ⟺  α_{j−1} < Y* ≤ α_j,   j = 1, 2, …, m,

where −∞ = α0 < α1 < ⋯ < αm = + ∞ are the thresholds of the support of Y*. The choice of m determines in how many intervals the support of Y* is divided.

The variable Y*, in turn, depends on p ≥ 1 covariates, so that for the ith statistical unit we have the regression model

(2.2)  Y_i* = X_{i1}β_1 + X_{i2}β_2 + ⋯ + X_{ip}β_p + ϵ_i = X_i′β + ϵ_i,   i = 1, 2, …, n,

where X_i = (X_{i1}, X_{i2}, …, X_{ip})′, β = (β_1, β_2, …, β_p)′ and ϵ_i is a random variable whose distribution function is denoted by G(ϵ).

Formula (2.1) implies that from the same underlying variable Y* a countable set of ordinal variables Y^(m) can be generated by allowing m to vary in {3, 4, …}. These variables differ from each other in the number of categories. Nevertheless, all of them refer to the same regression model (2.2), and therefore the estimators of the regression coefficients obtained for different values of m always estimate the same β. This property – known as invariance to the choice of response categories ([8]; p. 56) or collapsibility ([17]; p. 255) – of the POM is exploited to analyze how the choice of m affects the efficiency of the estimators and the accuracy of the derived inferential procedures.
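The generation mechanism (2.1) and the collapsibility property can be sketched numerically. The following sketch (in Python, not the R/MASS setup used later in the paper; thresholds and coefficient values are arbitrary illustrative choices) discretizes one simulated latent variable with two nested sets of thresholds and checks that the coarser response is exactly a merging of the finer one, so both responses carry the same underlying β.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent regression Y* = X*beta + eps with a logistic error (logit link);
# beta and the thresholds below are illustrative values, not the paper's.
n, beta = 1000, 1.5
x = rng.normal(size=n)
y_star = x * beta + rng.logistic(size=n)

alpha_fine = np.array([-3., -2., -1., 0., 1., 2., 3.])  # m = 8 categories
alpha_coarse = np.array([-2., 0., 2.])                  # m' = 4, nested in the above

# Y = j  iff  alpha_{j-1} < Y* <= alpha_j (categories coded 0, ..., m-1):
# the category index equals the number of thresholds lying below Y*.
y_fine = np.searchsorted(alpha_fine, y_star)
y_coarse = np.searchsorted(alpha_coarse, y_star)

# Map each fine category to the coarse category containing its interval,
# via the upper endpoint of the interval (+inf for the last category).
upper = np.append(alpha_fine, np.inf)
fine_to_coarse = np.searchsorted(alpha_coarse, upper)

# Because the coarse thresholds are nested in the fine ones, the coarse
# response is exactly a collapsing of the fine one.
assert np.array_equal(fine_to_coarse[y_fine], y_coarse)
```

Both responses are deterministic functions of the same Y*, which is why varying m leaves the target parameter β unchanged.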

Let θ = (α′, β′)′ be the parameter vector, where α = (α_1, …, α_{m−1})′ is the vector of the thresholds. Given an observed random sample (y_i, x_i), for i = 1, 2, …, n, the log-likelihood function is ∑_{i=1}^n ℓ(θ; y_i, x_i) with individual term

ℓ(θ; y_i, x_i) = ∑_{j=1}^m I[y_i = j] log P(Y_i = j | x_i),

where I[ω] is an indicator function which takes value 1 when ω holds and 0 otherwise, and P(Y_i = j | x_i) = P(α_{j−1} < Y_i* ≤ α_j | x_i) = G(α_j − x_i′β) − G(α_{j−1} − x_i′β), for j = 1, 2, …, m. The score function is S_n(θ) = ∑_{i=1}^n S(θ; y_i, x_i), where

S(θ; y_i, x_i) = ∂ℓ(θ; y_i, x_i)/∂θ = ∑_{j=1}^m I[y_i = j] [1/P(Y_i = j | x_i)] ∂P(Y_i = j | x_i)/∂θ,

and ∂P(Y_i = j | x_i)/∂θ = (∂P(Y_i = j | x_i)/∂α′, ∂P(Y_i = j | x_i)/∂β′)′ (see [24] for the analytic expression of S(θ; y_i, x_i)). The maximum likelihood estimator (MLE) is the solution θ̂ = (α̂′, β̂′)′ of S_n(θ̂) = 0.
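As an illustration of this estimation scheme, the sketch below (a hypothetical Python implementation; the paper itself relies on the R package MASS) fits a cumulative logit model by directly minimizing the negative log-likelihood built from G(α_j − x_i′β) − G(α_{j−1} − x_i′β). The simulated data, starting values and sample size are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic cdf, the G of the logit link

rng = np.random.default_rng(1)

# Simulated data from Model 1 with the logit link (illustrative values).
n, beta_true = 2000, 1.5
m = 5
alpha_true = np.array([-1.5, -0.5, 0.5, 1.5])        # m - 1 thresholds
x = rng.normal(size=n)
y = np.searchsorted(alpha_true, x * beta_true + rng.logistic(size=n))  # 0..m-1

def neg_loglik(theta):
    # theta = (alpha_1, ..., alpha_{m-1}, beta)
    alpha = np.concatenate(([-np.inf], theta[:m - 1], [np.inf]))
    beta = theta[m - 1]
    # P(Y_i = j | x_i) = G(alpha_j - x_i beta) - G(alpha_{j-1} - x_i beta)
    p = expit(alpha[y + 1] - x * beta) - expit(alpha[y] - x * beta)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))  # clip guards the search

theta0 = np.concatenate((np.linspace(-1.0, 1.0, m - 1), [0.0]))
fit = minimize(neg_loglik, theta0, method="BFGS")
alpha_hat, beta_hat = fit.x[:m - 1], fit.x[m - 1]
```

With a sample of this size the estimates should land close to the true thresholds and slope, with sampling variability governed by the inverse information matrix discussed next.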

The generic term of the information matrix I(θ,X) for a single statistical unit, conditionally on X = x, is given by

I_rs(θ, x) = E_Y[∂ℓ(θ; Y, X)/∂θ_r · ∂ℓ(θ; Y, X)/∂θ_s | X = x] = ∑_{y=1}^m S_r(θ; y, x) S_s(θ; y, x) P(Y = y | x),   r, s = 1, 2, …, m + p − 1,

where S_r(θ; y, x) is the element of the score function related to the r-th element θ_r of θ. The elements of the unconditional information matrix I(θ) are given by

(2.3)  I_rs(θ) = E_X[I_rs(θ, X)],   r, s = 1, 2, …, m + p − 1.

The asymptotic variance–covariance matrix of the MLEs is I(θ)^{−1}.

Asymptotically we have √n(θ̂ − θ) → N(0, I(θ)^{−1}) in distribution. In particular, for the estimator β̂_k of the single regression coefficient β_k we have

(2.4)  √n(β̂_k − β_k) → N(0, I(θ)^{β_k β_k}) in distribution,

where I(θ)^{β_k β_k} is the element on the diagonal of I(θ)^{−1} corresponding to β_k.

3 The models

To investigate the asymptotic efficiency of the estimators of the regression coefficients when m varies, we focus on models whose (unconditional) information matrix can be analytically derived through (2.3). In particular we consider the following three underlying regression models.

  1. Model 1 (with a continuous covariate). The variable Y* depends on a continuous covariate: Y* = Xβ + ϵ, where X ∼ N(0, 1) and β = 1.5. The information matrix is given by

    I(θ) = ∫_ℝ I(θ; x) ϕ(x) dx,

    where ϕ(⋅) is the standard normal density function.

  2. Model 2 (with dichotomous covariates). The variable Y* depends on two dichotomous covariates: Y* = X_1β_1 + X_2β_2 + ϵ, where X_1 ∼ Ber(0.5), X_2 ∼ Ber(0.25) and X_1 and X_2 are mutually independent. The regression coefficients are β_1 = 1.5 and β_2 = 0.7. Denote the conditional information matrix, given X_1 = x_1 and X_2 = x_2, by I(θ; x_1, x_2); the information matrix is given by

    I(θ) = ∑_{x_1=0}^{1} ∑_{x_2=0}^{1} I(θ; x_1, x_2) P(X_1 = x_1) P(X_2 = x_2).
  3. Model 3 (with mixed covariates). The variable Y* depends on a continuous covariate, a dichotomous one and their interaction. The regression model is Y* = X_1β_1 + X_2β_2 + X_1X_2β_3 + ϵ, where X_1 ∼ N(0, 1), X_2 ∼ Ber(0.5) and X_1 and X_2 are mutually independent. The regression coefficients are β_1 = 2.7, β_2 = 1.5 and β_3 = 0.7. In obvious notation, the information matrix is given by

    I(θ) = (1/2) ∫_ℝ I(θ; x_1, 0) ϕ(x_1) dx_1 + (1/2) ∫_ℝ I(θ; x_1, 1) ϕ(x_1) dx_1.

    Notice that Model 3 is generally used in the analysis of differential item functioning in grading scales (see [9] and references therein).

We consider the probit, the logit and the complementary log-log link functions, which assume the Gaussian, the logistic and the extreme value distribution for ϵ_i in (2.2), respectively.

Furthermore, in the analysis of the asymptotic efficiency, four different sets of thresholds are considered. Following the literature [7, 25, 26], a first set is given by equidistant thresholds α_j^ED, which satisfy the constraint α_j^ED − α_{j−1}^ED = h. Alternative thresholds α_j^ID are obtained by considering smaller classes around the median of Y*, and by progressively increasing the length of the classes by a factor of (1 + 1/m) when moving towards the tails. A further set of thresholds α_j^EP splits the support of Y* into classes of equal probability, so that P(Y = j) = 1/m for j = 1, …, m. A final set of thresholds α_j^DP corresponds to classes with larger probability at the center and probability decreasing, by a factor of (1 − 1/m), when moving from the center towards the extremes. Details on the construction of the four sets of thresholds are given in the supplementary material. Each set of thresholds generates its specific distribution of the ordinal response from the same underlying variable. The distributions produced by the four sets of thresholds, in Model 1 with the logit link and m = 7, are displayed in Figure 1 and appear markedly different.
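For instance, equal-probability thresholds can be approximated by Monte Carlo quantiles of the marginal distribution of Y*. The sketch below does this for Model 1 with the logit link; the paper constructs the thresholds analytically in the supplementary material, so this is only an illustrative shortcut, and the Monte Carlo size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Marginal draws of Y* = X*beta + eps for Model 1
# (X ~ N(0, 1), beta = 1.5, eps logistic).
y_star = 1.5 * rng.normal(size=200_000) + rng.logistic(size=200_000)

m = 7
alpha_EP = np.quantile(y_star, np.arange(1, m) / m)  # m - 1 thresholds

# Discretizing Y* with these thresholds gives P(Y = j) ~ 1/m for every j.
y = np.searchsorted(alpha_EP, y_star)
freq = np.bincount(y, minlength=m) / y.size
```

The resulting category frequencies are all close to 1/m, which is the defining property of the α^EP set.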

Figure 1: Distribution of the ordinal variable for the α^ED thresholds (upper left panel), for the α^ID thresholds (upper right panel), for the α^EP thresholds (lower left panel), and for the α^DP thresholds (lower right panel) in Model 1 with the logit link and m = 7.


Since the analytical expression of the (unconditional) information matrix is available for the three models, most of the analyses carried out in the following sections, and in particular the evaluation of the asymptotic efficiency of the estimators, the approximation of the power of the test and the assessment of the loss of efficiency produced by collapsing categories and dichotomization, do not require simulations. Numerical experiments are carried out only for Figure 3 and more extensively in Section 7 where the appropriate choice of m for a given sample size n is investigated. The purpose is to take into account numerical issues which may arise for specific combinations of m and n especially when n is small. Estimation is implemented in the R package MASS [27] and the simulation is always performed on 10 000 samples. Both analytical results and simulation experiments consider a number of parameters increasing with m, since estimation involves m − 1 thresholds in addition to the p regression coefficients.

4 The asymptotic efficiency of the estimators

When m increases, each category of Y corresponds to a smaller class on the support of Y*, and the resulting finer categorization yields more information on the underlying variable. In this context it is useful to recall that, given an event E, its information content is given by −log P(E): the smaller P(E), the larger the information ([28]; p. 4), since the occurrence of rare events is more informative than the occurrence of likely ones. Now suppose we have a finer discretization of the support of Y* into m classes C_1, …, C_m, and a coarser categorization of Y* into m′ classes C̃_1, …, C̃_{m′}, with m′ < m. Let Y and Ỹ be the responses obtained from the discretization into m and m′ classes. When Y = j, Ỹ takes a value k such that C_j ⊂ C̃_k. Since Ỹ is based on a coarser categorization, it is reasonable to assume that P(Y = j) = P(Y* ∈ C_j) ≤ P(Ỹ = k) = P(Y* ∈ C̃_k). Under this condition, the following result holds.

Proposition 1

Let Y* ∼ F. Given two alternative discretizations of its support into the classes (C_1, …, C_m) and (C̃_1, …, C̃_{m′}), with m′ < m, let π_j = P(Y = j) = ∫_{C_j} dF(y), for j = 1, …, m, and π̃_k = P(Ỹ = k) = ∫_{C̃_k} dF(y), for k = 1, …, m′. We have

−log P(Y) = −∑_{j=1}^m I[Y = j] log(π_j),   −log P(Ỹ) = −∑_{k=1}^{m′} I[Ỹ = k] log(π̃_k).

If π_j < π̃_k for all k such that C_j ⊂ C̃_k, for j = 1, …, m, then

(4.1)  −log P(Y) > −log P(Ỹ).

Equation (4.1) implies that the information content of Y is larger than that of Ỹ. Hence a finer categorization increases the information content of the data, which – in turn – yields more efficient estimators.
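The effect described by Proposition 1 can be checked numerically: for a fixed discretization scheme, the expected information content −E[log P(Y)] (the Shannon entropy of Y) grows as the partition becomes finer. The sketch below uses a standard logistic Y* and equidistant thresholds on an arbitrary interval; both choices are for illustration only and are not the paper's exact settings.

```python
import numpy as np
from scipy.stats import logistic

def entropy_of_categorization(m, lo=-3.0, hi=3.0):
    """Entropy of Y when a standard logistic Y* is split into m classes by
    m - 1 equidistant thresholds on [lo, hi] (lo and hi are arbitrary)."""
    thresholds = np.linspace(lo, hi, m - 1)
    cdf = np.concatenate(([0.0], logistic.cdf(thresholds), [1.0]))
    p = np.diff(cdf)                      # pi_j = P(Y = j)
    return -np.sum(p * np.log(p))         # -E[log P(Y)]

# A finer categorization carries more expected information.
h = [entropy_of_categorization(m) for m in (3, 5, 7, 10, 15)]
```

The computed entropies increase monotonically with m, mirroring the efficiency gains reported in Tables 1–4.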

Tables 1–4 illustrate the asymptotic efficiency of the estimators of the regression coefficients in Models 1, 2 and 3 for the four sets of thresholds. In Model 1 there is a single regression coefficient, so that the efficiency is measured by its asymptotic variance I(θ)^{ββ}. In the case of Models 2 and 3, where there is a vector of regression coefficients, the efficiency is measured by the trace of the portion of the asymptotic variance–covariance matrix related to the regression coefficients, i.e. by the sum of the asymptotic variances of the elements of β̂, ∑_{k=1}^p I(θ)^{β_k β_k}.

Table 1: Asymptotic efficiency of the estimators with the α^ED thresholds when m varies.

        Model 1                      Model 2                      Model 3
  m   Probit  Logit  C.log-log    Probit  Logit  C.log-log    Probit  Logit  C.log-log
  3    5.07   8.00     6.08       16.00   48.11    22.25      65.97   94.20    67.65
  4    3.79   6.50     4.78       13.49   37.58    18.50      48.93   72.75    51.40
  5    3.18   5.82     4.10       12.51   35.46    17.30      38.40   60.63    41.46
  6    2.86   5.46     3.69       11.97   33.76    15.63      31.89   53.94    35.36
  7    2.66   5.24     3.42       11.65   32.86    14.50      27.71   49.87    31.33
  8    2.54   5.10     3.24       11.44   32.24    13.89      24.93   47.22    28.52
  9    2.45   5.00     3.11       11.30   31.82    13.49      23.01   45.40    26.48
 10    2.39   4.93     3.01       11.20   31.52    13.18      21.64   44.10    24.95
 12    2.31   4.84     2.87       11.07   31.13    12.75      20.95   42.40    22.83
 15    2.25   4.77     2.76       10.96   30.81    12.40      19.12   41.01    20.96
 20    2.20   4.71     2.66       10.87   30.56    12.11      16.95   39.92    20.30
Table 2: Asymptotic efficiency of the estimators with the α^ID thresholds when m varies.

        Model 1                      Model 2                      Model 3
  m   Probit  Logit  C.log-log    Probit  Logit  C.log-log    Probit  Logit  C.log-log
  3    4.54   7.35     5.48       14.73   41.06    19.41      64.01   92.66    67.04
  4    3.57   6.24     4.49       13.09   36.30    17.53      45.82   69.71    48.83
  5    3.04   5.64     3.86       12.22   33.92    14.95      36.01   58.98    39.62
  6    2.77   5.34     3.51       11.79   32.89    14.21      30.19   52.80    33.98
  7    2.60   5.15     3.28       11.51   32.14    13.77      26.46   49.09    30.24
  8    2.49   5.03     3.13       11.34   31.72    13.33      24.00   46.62    27.65
  9    2.42   4.94     3.01       11.22   31.39    13.04      22.30   44.95    25.77
 10    2.36   4.89     2.93       11.13   31.18    12.80      21.07   43.73    24.35
 12    2.29   4.81     2.81       11.02   30.89    12.48      19.48   42.15    22.40
 15    2.24   4.75     2.72       10.93   30.65    12.21      18.17   40.85    20.67
 20    2.19   4.70     2.63       10.86   30.47    11.99      17.14   39.83    19.22
Table 3: Asymptotic efficiency of the estimators with the α^EP thresholds when m varies.

        Model 1                      Model 2                      Model 3
  m   Probit  Logit  C.log-log    Probit  Logit  C.log-log    Probit  Logit  C.log-log
  3    4.28   7.28     5.01       14.28   35.57    18.15      75.52  110.85    82.48
  4    3.41   6.23     4.06       12.81   33.07    15.94      47.85   79.43    53.51
  5    3.02   5.74     3.60       12.14   32.01    14.84      36.96   66.28    41.76
  6    2.80   5.46     3.33       11.77   31.45    14.17      31.31   59.21    35.48
  7    2.67   5.29     3.17       11.54   31.12    13.71      27.91   54.84    31.61
  8    2.57   5.16     3.05       11.39   30.91    13.38      25.66   51.90    29.00
  9    2.50   5.08     2.97       11.28   30.77    13.14      24.08   49.79    27.14
 10    2.45   5.01     2.90       11.20   30.66    12.96      22.90   48.20    25.74
 12    2.38   4.92     2.81       11.09   30.53    12.69      21.29   46.01    23.80
 15    2.32   4.84     2.73       10.99   30.42    12.44      19.85   44.01    22.03
 20    2.26   4.77     2.66       10.91   30.34    12.21      18.56   42.21    20.43
Table 4: Asymptotic efficiency of the estimators with the α^DP thresholds when m varies.

        Model 1                      Model 2                      Model 3
  m   Probit  Logit  C.log-log    Probit  Logit  C.log-log    Probit  Logit  C.log-log
  3    4.29   7.10     5.07       14.37   36.15    17.88      67.22   98.59    71.88
  4    3.35   6.08     4.04       12.73   33.18    15.80      44.98   73.61    49.76
  5    2.96   5.60     3.57       12.07   32.12    14.61      34.79   61.71    39.13
  6    2.74   5.35     3.30       11.69   31.50    13.93      29.64   55.82    33.62
  7    2.61   5.18     3.13       11.47   31.16    13.47      26.44   51.94    30.04
  8    2.52   5.08     3.01       11.32   30.94    13.17      24.40   49.49    27.69
  9    2.45   5.00     2.93       11.22   30.79    12.94      22.93   47.65    25.96
 10    2.41   4.94     2.87       11.15   30.68    12.77      21.88   46.35    24.70
 12    2.34   4.86     2.78       11.04   30.54    12.53      20.43   44.50    22.92
 15    2.28   4.79     2.71       10.96   30.43    12.31      19.15   42.83    21.30
 20    2.23   4.74     2.64       10.88   30.35    12.11      18.03   41.36    19.88

The outcomes clearly point out that, for all the sets of thresholds, the efficiency of β̂ increases with m, in accordance with the larger amount of information associated with a finer categorization. The decrease of the asymptotic variances is especially marked for low values of m, while it tapers off as m increases; this behavior is shared by the three models and the three links. Increasing m up to 7 usually yields considerable gains in efficiency, while only marginal benefits are obtained by increasing m beyond 10.

As concerns the impact of the thresholds on the asymptotic efficiency, Tables 1–4 show that no set of thresholds outperforms the others, and the optimal set depends on the model as well as on the value of m and on the link (see also Figures S.1, S.2, and S.3 in the supplementary material). In Model 1, with the probit and the logit link, the α_j^DP thresholds produce more efficient estimators for small m, while when m ≥ 6 the α_j^ED and α_j^ID thresholds are to be preferred. In the same model, the best results for the complementary log-log link are generally obtained with the α_j^DP thresholds. In Model 2, with the probit and the complementary log-log link, larger efficiency is usually achieved by using the α_j^DP thresholds for small m and the α_j^ID thresholds for larger m, while with the logit link the best thresholds are the α_j^EP. In Model 3 with the probit link the α_j^DP thresholds appear preferable for small m and the α_j^ID and α_j^ED ones for m > 7, while the best thresholds for the logit link and the complementary log-log link are the α_j^ID and the α_j^DP.

Although no set of thresholds dominates the others, appreciable differences in efficiency are observed only for small m, while the efficiency of the estimators obtained with the four sets of thresholds gets progressively closer as m increases. Hence a sufficiently large m can reduce the loss of efficiency produced by an inappropriate choice of the thresholds. In the following sections of the paper, the α_j^ED thresholds will be considered. This choice is motivated by their simplicity and frequency of use in applications (whenever no other indications arise from the concrete problem at hand), without implying any preference in this direction. Analogous analyses for the other sets of thresholds are reported in the supplementary material (Section S.1).

The efficiency of β̂ also affects the efficiency of the derived measures used to investigate the impact of the covariates and, in particular, the efficiency of the estimators of the odds ratio (OR). As an example consider Model 1 with the logit link. We have OR(X_1) = exp(−β), which is estimated by OR̂(X_1) = exp(−β̂). By the delta method, the standard error of the estimator is SE(OR̂(X_1)) = exp(−β) SE(β̂), whose asymptotic version is exp(−β)[I(θ)^{ββ}]^{1/2}. Figure 2 shows the asymptotic standard error of OR̂(X_1) when m varies. It can be appreciated how the efficiency increases with m. Consistently with the results for β̂, the gain in efficiency is considerably large for small values of m and becomes smaller for m > 10.

Figure 2: Asymptotic standard error of OR̂(X_1) in Model 1 with the logit link and the α^ED thresholds when m varies.


5 Hypothesis testing

The choice of m also affects the power of the test. Consider the hypotheses on a single regression coefficient H_0: β_k = β_k^0 vs H_1: β_k ≠ β_k^0. They can be tested through a t-type statistic t_k = (β̂_k − β_k^0)/SE(β̂_k), where the standard error of β̂_k is given by SE(β̂_k) = [I(θ̂)^{β_k β_k}/n]^{1/2}. By (2.4), under H_0, this statistic is asymptotically N(0, 1) distributed. The null hypothesis is rejected when |t_k| > z_{1−α/2}, where z_{1−α/2} = Φ^{−1}(1 − α/2) and Φ(⋅) is the standard normal distribution function. Hence the power of the test can be approximated by

(5.1)  γ(β) = Φ(z_{α/2} − (β_k − β_k^0)/[I(θ)^{β_k β_k}/n]^{1/2}) + 1 − Φ(z_{1−α/2} − (β_k − β_k^0)/[I(θ)^{β_k β_k}/n]^{1/2}).

To investigate the impact of the choice of m on γ(β) we consider the null hypothesis H_0: β_3 = 0 in Model 3. It implies that the interaction between X_1 and X_2 is omitted from the latent model, which becomes Y* = X_1β_1 + X_2β_2 + ϵ. Table 5 shows the power of the test, computed analytically through (5.1), at the 5% significance level, for the sample sizes n = 250, 500 and the three links (see also the analogous Tables S.1.1, S.1.2, and S.1.3 in the supplementary material). The power clearly increases with m. Intuitively, the gain in the efficiency of β̂ obtained when m increases induces a decrease of SE(β̂_k), so that high absolute values of the t_k statistic are more likely. A large m is especially recommended when ϵ has a large variance. This is the case of the cumulative logit model: the power of the test can be very low for small m, and consequently large values of m are required to offset the variability of the error term.
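The approximation (5.1) is straightforward to compute. The helper below is a hypothetical Python sketch: `avar` stands for the asymptotic variance I(θ)^{β_k β_k}, which the paper obtains analytically and which is only a placeholder argument here.

```python
import numpy as np
from scipy.stats import norm

def approx_power(beta_k, beta_k0, avar, n, alpha=0.05):
    """Power approximation (5.1) for the two-sided test of H0: beta_k = beta_k0.
    `avar` is the asymptotic variance I(theta)^{beta_k beta_k}."""
    shift = (beta_k - beta_k0) / np.sqrt(avar / n)
    return (norm.cdf(norm.ppf(alpha / 2) - shift)
            + 1.0 - norm.cdf(norm.ppf(1 - alpha / 2) - shift))

# Under H0 the approximation reduces to the significance level, and the
# power increases both with n and with the efficiency gains obtained by
# increasing m (i.e. with a smaller avar).
```

Plugging in the asymptotic variances of Tables 1–4 for increasing m reproduces the pattern of Table 5: smaller variances translate directly into larger power.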

Table 5: Power of the test on β_3 = 0 in Model 3 with the α^ED thresholds, at the 5% significance level, when m varies and n = 250, 500.

        n = 250                      n = 500
  m   Probit  Logit  C.log-log    Probit  Logit  C.log-log
  3    0.72   0.49     0.70        0.95   0.78     0.94
  4    0.84   0.58     0.81        0.99   0.87     0.98
  5    0.92   0.67     0.89        1.00   0.92     0.99
  6    0.95   0.72     0.93        1.00   0.95     1.00
  7    0.97   0.75     0.96        1.00   0.96     1.00
  8    0.98   0.78     0.97        1.00   0.97     1.00
  9    0.99   0.79     0.98        1.00   0.98     1.00
 10    0.99   0.81     0.98        1.00   0.98     1.00
 12    0.99   0.82     0.99        1.00   0.98     1.00
 15    1.00   0.83     0.99        1.00   0.99     1.00
 20    1.00   0.84     1.00        1.00   0.99     1.00

The results of Table 5 are based on the asymptotic efficiency evaluated analytically. To take into account also the numerical issues which may arise when the estimation is actually implemented, Figure 3 shows the power of the test assessed through a simulation with the logit link, for sample sizes between 100 and 500. The magnitude of γ(β) is mainly determined by the sample size. Nevertheless, the gain in power achieved by increasing the number of categories can be appreciated, especially when the initial m is small, say m ≤ 6, though the marginal benefits decrease with m.

Figure 3: Simulated power of the test on the null hypothesis β3 = 0 in Model 3 with the logit link, at the 5% significance level, when m varies and n = 100 (continuous line with circles), n = 200 (short-dashed line with squares), n = 300 (dotted line with asterisks), n = 400 (dot-dashed line with triangles) and n = 500 (long-dashed line with diamonds).


6 Merging categories

A widespread practice in data analysis is collapsing adjacent categories into one larger category (see [8, 18, 19], among others). This is typically done with extreme categories when there is a concern about their frequencies being very low (for instance in the case of extreme response styles, which cause the observations to be concentrated on only one side of the scale). An alternative reason for merging arises when a limited sample size yields unobserved categories or categories with very low frequencies. Finally, the reduction of the number of categories may be aimed at simplifying the interpretation, and in the extreme case it reaches its limit when the response is dichotomized.

In the main literature on categorical data, merging categories is a common practice for reducing the dimension of contingency tables (see [29], among others) and for avoiding sparseness or small cell entries, especially at the edges of the classification scale. However, an easier interpretation of the model is also a recurrent motivation (see [20], among others), and the guiding criteria for merging are homogeneity and structure (see [30], [31], [32], for further details).

In this paper merging categories is considered for a single variable, i.e. the ordinal response Y. Because of the collapsibility property of the POM, the regression parameters remain unchanged when the categories are merged. Nevertheless, it is important to point out that collapsing categories reduces the information content of the sample outcome, as shown by the following proposition.

Proposition 2

Let Y be a response with m categories and YM be a response obtained by merging two or more categories of Y, then

log P(Y_M) − log P(Y) > 0.

See the Appendix for the proof.

Clearly the loss of information produced by collapsing is likely to turn into a loss of efficiency.
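A tiny numerical check of Proposition 2, with an arbitrary illustrative 5-category distribution: merging categories can only raise the probability of the observed outcome, and therefore lowers its information content −log P.

```python
import numpy as np

# Arbitrary illustrative distribution of a 5-category response Y.
p = np.array([0.05, 0.15, 0.40, 0.25, 0.15])
info = -np.log(p)                                  # -log P(Y = j)

# Merge the first two categories into one (Y_M has 4 categories).
p_merged = np.concatenate(([p[0] + p[1]], p[2:]))
info_merged = -np.log(p_merged)

# For outcomes falling in either merged category,
# log P(Y_M) - log P(Y) > 0: the merged outcome is less informative.
```

Every observation landing in a merged category now carries strictly less information, which is the mechanism behind the efficiency losses documented in Tables 6–9.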

Here we investigate the impact of various forms of merging categories. For the case in which extreme categories are involved, we consider merging performed symmetrically on both sides of the scale, and merging implemented only on one side. A selection of cases is illustrated in the current section, while a more extensive investigation is carried out in the supplementary material (Section S.2). Furthermore, the impact of halving the number of categories is analyzed, and finally the effect of dichotomizing the response is examined (see also Section S.1.4 of the supplementary material for similar analyses with the α^ID, α^EP and α^DP thresholds).

For the first form of merging, Model 1 with the logit link is considered. In this model the distribution of the underlying variable is symmetric, so that (to avoid low extreme frequencies) it is reasonable to join both the first two categories and the last two. Consequently the first and the last thresholds α_1 and α_{m−1} are neglected, and the categories are based on the remaining thresholds α_2, …, α_{m−2}. This procedure reduces the number of categories from m to m − 2. Table 6 shows the asymptotic efficiency ratio between the “before merging” estimator β̂_m and the “after merging” estimator β̂_{m−2}. The efficiency loss is considerable when m is small (m = 5, 6, 7). When the number of categories is reduced from 5 to 3, the loss of efficiency can be as high as almost 22%. The loss of efficiency is restrained – it does not exceed 5% – when the number of categories is m > 7 and the probability of the extreme categories (which disappear) is below 0.025. Similar results for the value of m (which should be fairly large, say m ≥ 10) and for the probability of the vanishing categories (which should be sufficiently low, say around 0.025) are observed also for the other models and for the probit link (see Tables S.2.1–S.2.5 of the supplementary material).

Table 6: Efficiency loss produced by merging the extreme categories on both sides in Model 1 with the logit link.

      Probability of the categories
  m   j = 1   j = 2   j = m−1   j = m     Var(β̂_m)   Var(β̂_{m−2})   Efficiency ratio
  5   0.051   0.236   0.236     0.051      5.823       7.100           1.219
  6   0.035   0.140   0.140     0.035      5.457       6.009           1.101
  7   0.027   0.090   0.090     0.027      5.238       5.526           1.055
  8   0.022   0.062   0.062     0.022      5.096       5.266           1.033
  9   0.019   0.045   0.045     0.019      4.999       5.109           1.022
 10   0.017   0.034   0.034     0.017      4.930       5.005           1.015
 12   0.014   0.022   0.022     0.014      4.840       4.882           1.008
 15   0.011   0.013   0.013     0.011      4.767       4.788           1.004
 20   0.009   0.007   0.007     0.009      4.710       4.720           1.002

To investigate the consequences of merging when it occurs only on one side, reference is still made to Model 1, but the link is now the complementary log-log one, so that the underlying variable has a skewed distribution with low probability on the first categories. When the first two categories are merged, the number of scale points is reduced from m to m − 1, since the threshold α_1 is neglected. Consequently the “before merging” estimator is β̂_m and the “after merging” estimator is β̂_{m−1}. Table 7 shows the efficiency ratio between β̂_m and β̂_{m−1}. When merging occurs only on one side, the loss of efficiency is less dramatic than when both tails are involved. The loss of efficiency is below 5% when m ≥ 6 and the probability of the first category (which disappears) is around 0.025 or below. Similar results for the probability of the vanishing category hold also for the other models (see Tables S.2.6 and S.2.7 of the supplementary material), although m should increase with the complexity of the model (for instance, in Model 3 we find again m ≥ 10).

Table 7: Efficiency loss produced by merging the lowest categories in Model 1 with the complementary log-log link.

      Probability of the categories
  m   j = 1   j = 2     Var(β̂_m)   Var(β̂_{m−1})   Efficiency ratio
  4   0.058   0.310      4.784       5.590           1.169
  5   0.037   0.159      4.099       4.353           1.062
  6   0.027   0.092      3.689       3.792           1.028
  7   0.021   0.058      3.423       3.473           1.014
 10   0.014   0.023      3.009       3.020           1.004
 15   0.010   0.009      2.757       2.760           1.001
 20   0.008   0.005      2.660       2.661           1.000

As anticipated, another merging option consists in halving the number of categories by joining adjacent ones. Table 8 shows the efficiency ratio between the estimators obtained from m and m/2 categories, computed as ∑_{k=1}^p Var(β̂_{m/2,k}) / ∑_{k=1}^p Var(β̂_{m,k}), where Var(β̂_{m,k}) is the asymptotic variance of the k-th element of the estimator β̂_m obtained from m categories. Halving the number of scale points can have a remarkably high price in terms of efficiency, especially when the number of covariates increases or the link is the probit or the complementary log-log one (the neglected information turns out to be especially valuable in these cases). Furthermore, consistently with previous results, the negative effect of merging is larger when the initial value of m is small, since collapsing produces a much coarser categorization.

Table 8: Efficiency ratio between β̂_{m/2} and β̂_m produced by halving the number of categories.

        Model 1                      Model 2                      Model 3
  m   Probit  Logit  C.log-log    Probit  Logit  C.log-log    Probit  Logit  C.log-log
  4    1.91   1.65     1.71        1.43   1.18     3.20        5.16   3.55     4.97
  6    1.78   1.47     1.65        1.34   1.42     1.42        2.07   1.75     1.91
  8    1.50   1.28     1.48        1.18   1.17     1.33        1.96   1.54     1.80
 10    1.33   1.18     1.36        1.12   1.12     1.31        1.77   1.38     1.66
 12    1.24   1.13     1.28        1.08   1.08     1.23        1.52   1.27     1.55
 14    1.17   1.09     1.23        1.06   1.06     1.16        1.41   1.21     1.46
 20    1.09   1.05     1.13        1.03   1.03     1.09        1.28   1.10     1.23

Finally, a common practice in applications is to reduce an ordinal response to a dichotomous one to ease interpretation (see [33, 34], among others).

Table 9 shows the cost in terms of efficiency to be paid for dichotomization, measured by ∑_{k=1}^{p} Var(β̂_{2,k}) / ∑_{k=1}^{p} Var(β̂_{m,k}). The loss of efficiency due to dichotomization can be extremely severe (see also [35] and [36] for similar results). If a response with 4 categories is dichotomized, the efficiency ratio varies between roughly 1.2 and more than 5 (see Model 3). The loss of efficiency steadily increases with m. In the worst case, Model 3 with the probit or the complementary log-log link, if a 10-point response is dichotomized the efficiency ratio largely exceeds 10, and it gets even worse for larger m.

Table 9:

Efficiency ratio between β̂2 and β̂m produced by dichotomization.

       Model 1                        Model 2                        Model 3
m      Probit   Logit   C.log-log    Probit   Logit   C.log-log    Probit   Logit   C.log-log
4      1.91     1.65    1.71         1.43     1.18    3.20         5.16     3.55    4.97
6      2.53     1.97    2.21         1.61     1.31    3.79         7.91     4.79    7.23
8      2.85     2.11    2.52         1.68     1.38    4.27         10.12    5.47    8.96
10     3.02     2.18    2.71         1.72     1.41    4.50         11.66    5.86    10.24
12     3.13     2.22    2.84         1.74     1.43    4.65         12.05    6.09    11.19
16     3.24     2.26    2.99         1.76     1.44    4.81         14.25    6.34    12.45
20     3.29     2.28    3.07         1.77     1.45    4.89         14.88    6.47    12.59

Table 9 also shows a different pattern for the three link functions: although dichotomization has a considerable impact for all of them, the estimators obtained with the logit link appear to exploit the reduced amount of information better, limiting the loss of efficiency.

These outcomes call for a recommendation against the use of dichotomization. Similar suggestions can also be found in [8, 12, 37, 38]. In particular, [36] describe dichotomization as an arbitrary choice of the researchers and show that the loss of efficiency can be exacerbated by the selection of an inappropriate cut-point.

Overall, the above results point out that reducing the number of categories, in any of the forms considered here, decreases the amount of information and can produce a remarkable loss of efficiency, especially when the merging involves the central categories (which carry the higher frequencies) or reaches the limiting case of dichotomization.
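All the merging schemes considered above rest on the collapsibility property exploited throughout the paper: under a POM, deleting a threshold merges the two adjacent categories while leaving every other category probability unchanged. A minimal sketch with the logit link and hypothetical thresholds (not those of the paper's models):

```python
import math

def plogis(x):
    """Logistic CDF."""
    return 1.0 / (1.0 + math.exp(-x))

def pom_probs(alphas, eta):
    """Category probabilities of a proportional odds model:
    P(Y <= j) = F(alpha_j - eta), with a logit link F."""
    cum = [plogis(a - eta) for a in alphas] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# hypothetical thresholds for m = 5 categories, arbitrary linear predictor
alphas = [-2.0, -0.5, 0.5, 2.0]
eta = 0.3
p5 = pom_probs(alphas, eta)      # m = 5 category probabilities

# merging the first two categories amounts to dropping threshold alpha_1
p4 = pom_probs(alphas[1:], eta)  # m = 4 category probabilities
assert abs(p4[0] - (p5[0] + p5[1])) < 1e-12           # merged mass adds up
assert all(abs(a - b) < 1e-12 for a, b in zip(p4[1:], p5[2:]))
```

The same mechanism, applied in reverse, is what allows responses with different m to be generated from the same underlying variable.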

7 Choice of m for given n

A question which frequently arises when setting the number of response options concerns the appropriate number of categories for a given sample size n. On the one hand, the positive relationship between efficiency and m suggests a large number of categories. On the other hand, if the sample size n is small, when m increases one or more categories may yield no observations.

In this regard, it should be pointed out that, although in other statistical contexts categories with null frequencies give rise to the well known problems of sparse data, in the POM context a missing category in the sample produces no computational problems. Indeed, when one or more categories are unobserved, the estimation of the model can still be carried out by considering only the sampled categories.

The relationship between m and n is investigated through a simulation (with the details given in Section 3) to take into account the numerical issues which may arise for specific combinations of n and m, especially when the sample size is small. The analysis is carried out in the context of Model 3 with the logit link, though similar results are obtained for the other models and the other links as reported in Section S.3 of the supplementary material (see also Section S.1.5 for thresholds different from the αED).

Let m_obs be the observed number of categories. Table 10 shows the percentage of samples such that m_obs < m. This percentage is extremely large for n = 50 or n = 100, though it rapidly shrinks as n increases: it is below 1.5% for n = 300 and becomes negligible for n = 400. These results indicate that, in order to avoid unobserved categories, the sample size should be n ≥ 300 (see also Tables S.3.1–S.3.8 of the supplementary material for analogous results on the other models and links).

Table 10:

Percentage of simulated samples with a number of categories smaller than m in Model 3 with the logit link, when n and m vary.

n      m = 3   m = 4   m = 5   m = 6   m = 7   m = 8   m = 9   m = 10  m = 11
50     0.02    1.80    11.43   25.10   39.42   52.62   64.41   74.51   82.48
100    0.00    0.04    0.79    3.48    8.71    15.91   23.69   32.41   40.92
200    0.00    0.00    0.00    0.03    0.37    1.35    2.94    5.10    7.71
300    0.00    0.00    0.00    0.00    0.03    0.17    0.43    0.82    1.44
400    0.00    0.00    0.00    0.00    0.00    0.01    0.05    0.12    0.35
500    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.06
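The qualitative pattern of Table 10 can be reproduced in outline with a short simulation. The sketch below is a simplification of the paper's setting: it draws a standard logistic latent variable without covariates and uses equally spaced quantiles as thresholds (a stand-in for the α_ED thresholds), then records how often fewer than m categories appear in a sample:

```python
import math
import random

random.seed(1)

def simulate_unobserved(n, m, n_rep=2000):
    """Fraction of simulated samples in which fewer than m categories
    are observed. Latent variable: standard logistic, no covariates;
    thresholds at the j/m logistic quantiles, so each category has
    probability 1/m (an illustrative simplification)."""
    alphas = [math.log((j / m) / (1.0 - j / m)) for j in range(1, m)]
    count = 0
    for _ in range(n_rep):
        seen = set()
        for _ in range(n):
            u = random.random()
            y_star = math.log(u / (1.0 - u))       # logistic draw
            seen.add(sum(y_star > a for a in alphas))  # category 0..m-1
        count += len(seen) < m
    return count / n_rep

# unobserved categories are common for small n and rare for large n
assert simulate_unobserved(50, 11) > simulate_unobserved(300, 11)
```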

Table 11 shows the sum of the mean square errors of β̂ computed on the same samples as Table 10. The estimation is performed on the observed number of categories, whether m_obs = m or m_obs < m. Consequently the mean square errors are always computed on 10 000 samples, although these may have a different number of observed scale points. Notice that the efficiency can be very poor for extremely small sample sizes, say n = 50 or n = 100, although it quickly improves with n. A sample size n ≥ 300 seems adequate also with respect to efficiency (consistent results are obtained for the other models and links, as shown in Tables S.3.9–S.3.16 of the supplementary material).

Table 11:

Simulated efficiency 100 × ∑_k MSE(β̂_k) in Model 3 with the logit link, when n and m vary.

n      m = 3     m = 4    m = 5    m = 6    m = 7    m = 8    m = 9    m = 10   m = 11
50     1835.07   408.86   280.22   215.09   189.58   175.70   165.87   161.66   155.05
100    142.38    104.39   82.35    72.30    66.27    62.23    59.89    57.46    56.72
200    56.84     43.46    35.15    31.07    28.62    27.38    26.12    25.26    24.89
300    34.97     26.74    22.26    19.63    17.92    17.12    16.48    16.12    15.76
400    25.15     19.54    16.03    14.10    13.20    12.44    11.89    11.59    11.37
500    20.16     15.73    12.87    11.38    10.52    10.01    9.57     9.33     9.09

It should be pointed out that, regardless of whether the observed number of categories corresponds to m or not, the efficiency of β̂ increases with m for any sample size, in agreement with the results of Section 4. Although [39] notice that when n is small and m is large maximum likelihood can yield biased estimators of the regression coefficients, the larger amount of information provided by a finer categorization produces a reduction in variance large enough to offset the bias, yielding decreasing mean square errors. Hence the circumstance that some categories may be missing in the sample does not alter the positive relationship between the efficiency of β̂ and m, which holds for any sample size.

Notice that comparable efficiency is obtained by the couples (n = 50, m = 11) and (n = 100, m = 3); (n = 100, m = 11) and (n = 200, m = 3); (n = 200, m = 11) and (n = 400, m = 3); (n = 300, m = 11) and (n = 500, m = 4); etc. On the basis of Table 11, Figure 4 sketches couples (n, m) which yield the same efficiency. These results indicate that a small n requires a larger number of categories, compensating for the limited availability of data with more information on the underlying variable, i.e. with better quality data. The choice of m becomes less crucial when n increases, because the waste of information produced by a coarser categorization is balanced by a larger amount of data. Hence m needs to be large especially when n is small.

Figure 4: Simulated efficiency ∑_k MSE(β̂_k) in Model 3 with the logit link, with increasing n (horizontal axis) and m (vertical axis). Levels indicate the efficiency of the couple (n, m).

8 Case studies

Two different case studies concerning the Linear Analogue Self-Assessment (LASA) scale and the visual analog pain scale (VAPS) are considered. The aim of the analysis is to show the impact of the choice of m in the discretization of scales which are endpoint-anchored lines. Researchers are interested in the self-assessment of the quality of life (first example) and in the perceived pain (second example), originally measured on an interval scale. In both examples an increasing number of categories reduces the standard errors of the estimates and improves their significance. The second case study also involves a small sample size, illustrating the usefulness of a relatively large m when the number of interviewees is limited.

8.1 Quality of life measured on linear analogue self-assessment scale

Data stem from the ANZ0001 trial conducted by the ANZ Breast Cancer Trials Group with the aim of assessing health-related quality of life of patients with advanced breast cancer [40]. Our analysis focuses on the overall quality of life, recorded on an LASA scale, normalized to (0, 100) where 0 represents ‘as bad as it can be’ and 100 ‘as good as it can be’. The treatments intermittent capecitabine (IC) and continuous capecitabine (CC) are compared with the standard combination treatment (CMF), each with its own protocol.

The chemotherapy cycle number (cycle num.) and the body surface area in m² (body surface) are recorded for each assessment of the quality of life, in addition to the treatment (Treatment). The dataset, which contains 2473 observations, is available in the R package ordinalCont [41], see also [42]. The regression model corresponding to (2.2) is

(8.1)  Y*_i = cycle num_i β1 + body surface_i β2 + Z_i^IC β3 + Z_i^CC β4 + ε_i,  i = 1, …, n,

where Z_i^IC and Z_i^CC are dichotomous variables which identify the modalities IC and CC of the nominal variable Treatment, while CMF is the reference category.

The LASA scale has been discretized into equal-length intervals with m varying between 3 and 15. The fitted models with the logit link are shown in Table 12.
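The equal-length discretization applied to the LASA scale can be sketched as follows. The handling of boundary points (right-open intervals, with the maximum assigned to the top category) is our assumption, not a convention stated in the paper:

```python
def discretize(x, m, lo=0.0, hi=100.0):
    """Map a continuous score in [lo, hi] to one of m equal-length
    ordered categories, labelled 1..m. Intervals are right-open,
    except the last one which includes hi (assumed convention)."""
    width = (hi - lo) / m
    j = int((x - lo) // width) + 1
    return min(max(j, 1), m)  # clamp the endpoints into 1..m

# a (0, 100) LASA score split into m = 5 equal-length categories
scores = [0.0, 12.5, 49.9, 50.0, 99.9, 100.0]
assert [discretize(s, 5) for s in scores] == [1, 1, 3, 3, 5, 5]
```

The same function with m between 3 and 15 yields the categorizations compared in Table 12.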

Table 12:

Fitted model (8.1).

               Estimate   St. error   t-statistic
(a) m = 3
Cycle num.     −0.048     0.006       −7.808
Body surface    0.372     0.332        1.120
IC             −0.071     0.113       −0.628
CC             −0.093     0.114       −0.824
(b) m = 5
Cycle num.     −0.039     0.005       −8.420
Body surface    0.615     0.296        2.081
IC             −0.094     0.102       −0.918
CC             −0.014     0.102       −0.134
(c) m = 7
Cycle num.     −0.037     0.004       −8.848
Body surface    0.629     0.283        2.223
IC             −0.138     0.099       −1.401
CC             −0.011     0.098       −0.113
(d) m = 10
Cycle num.     −0.035     0.004       −8.624
Body surface    0.757     0.275        2.759
IC             −0.128     0.097       −1.314
CC              0.010     0.097        0.107
(e) m = 15
Cycle num.     −0.035     0.004       −8.965
Body surface    0.725     0.270        2.681
IC             −0.153     0.096       −1.590
CC              0.015     0.096        0.161

Consistently with the analytical results of the previous sections, the standard errors of the estimators decrease with m. Consequently the estimated coefficient of the variable Z^IC, which is not significant for m = 3 and m = 5, becomes significant for m ≥ 7, pointing out that this type of Treatment can negatively affect the patients’ quality of life.

Different effects of the two treatments IC and CC can be tested by considering the null hypothesis β4 − β3 = 0. The t-statistic, for varying m, is reported in Table 13. As m increases, it becomes evident that the CC treatment leads to a better quality of life with respect to the IC treatment. There is instead no significant difference between CC and CMF.

Table 13:

Test on the hypothesis β4 − β3 = 0 when m varies.

m             3        5       7       10      15
t-statistic   −0.225   0.929   1.538   1.710   2.106
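The statistics in Table 13 are Wald t-statistics for the linear contrast β4 − β3, computed from the estimated covariance matrix of β̂. A sketch with hypothetical variance and covariance inputs (the paper does not report the off-diagonal terms of the covariance matrix):

```python
import math

def contrast_t(b3, b4, v33, v44, v34):
    """Wald t-statistic for H0: beta4 - beta3 = 0, where
    Var(b4 - b3) = v44 + v33 - 2 * v34."""
    return (b4 - b3) / math.sqrt(v33 + v44 - 2.0 * v34)

# hypothetical inputs: point estimates in the spirit of Table 12 (m = 15),
# with an assumed covariance value v34
t = contrast_t(b3=-0.153, b4=0.015,
               v33=0.096 ** 2, v44=0.096 ** 2, v34=0.004)
assert t > 0  # CC estimated better than IC for these inputs
```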

These outcomes show that an increasing number of categories may enhance model specification.

8.2 Pain measured on visual analog pain scale

Data are about a small sample of 56 women aged between 23 and 44 years, interviewed on the pain they perceived during labor until childbirth. These women delivered at the Città di Roma Hospital (Rome) or at the Saint Raffaele Hospital (Milan), and most of them attended the hospitals’ childbirth preparation classes. Details on these data are in [43]. The perceived pain has been collected by means of a VAPS. It consists of a slide rule with the patient’s side unmarked and the observer’s side marked from 0 to 100 mm, where 0 represents ‘no pain’ and 100 represents ‘worst pain ever’. We consider discretizations of the VAPS with a number of intervals from m = 3 to m = 11, the maximum rating considered for the analysis of pain (for comparison with the Numerical Rating Pain Scale, see [44]).

The position of the unborn child (head), participation in the pre-birth course (course) and the occurrence of previous events in which the women perceived pain (previous pain) are the three covariates. The regression model corresponding to (2.2) is

(8.2)  Y*_i = head_i β1 + course_i β2 + previous pain_i β3 + ε_i,  i = 1, …, n.

The fitted models with the complementary log-log link are in Table 14.

Table 14:

Fitted model (8.2).

                Estimate   St. error   t-statistic
(a) m = 3
Head             0.030     0.365        0.083
Course          −0.273     0.540       −0.505
Previous pain   −1.785     0.561       −3.184
(b) m = 5
Head             0.730     0.279        2.612
Course          −0.722     0.459       −1.574
Previous pain   −1.485     0.415       −3.577
(c) m = 11
Head             0.541     0.226        2.394
Course          −0.678     0.386       −1.757
Previous pain   −1.111     0.348       −3.197

In accordance with previous results the standard errors decrease with m. The estimated coefficients of Head and Course, which are not significant for m = 3, become significant for m ≥ 5. The hypothesis β1 = β2 = 0, which implies that Head and Course do not affect the perceived pain, can be tested through the likelihood ratio (LR) test. The corresponding statistic and the related p-value are reported in Table 15. A large m is required to detect the relevance of these two covariates as explanatory factors: despite the small sample size, a large number of categories is necessary to convey power to the tests.

Table 15:

Likelihood ratio test – null hypothesis no Head and Course covariates.

m              3       5       11
LR-statistic   0.260   8.161   7.765
p-value        0.878   0.017   0.021
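The p-values in Table 15 follow from comparing the LR statistic with a chi-square distribution with 2 degrees of freedom (one per restriction in β1 = β2 = 0). For df = 2 the chi-square survival function has the closed form exp(−x/2), so the table entries can be checked directly:

```python
import math

def lr_pvalue_df2(lr):
    """p-value of a likelihood ratio statistic under a chi-square
    distribution with 2 degrees of freedom; for df = 2 the survival
    function is exp(-x / 2)."""
    return math.exp(-lr / 2.0)

# the statistics reported in Table 15
assert round(lr_pvalue_df2(0.260), 3) == 0.878
assert round(lr_pvalue_df2(8.161), 3) == 0.017
assert round(lr_pvalue_df2(7.765), 3) == 0.021
```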

9 Final remarks

The paper exploits the collapsibility property of cumulative models under the proportional odds assumption, which makes it possible to generate ordinal responses with a different number of categories from the same underlying variable, and investigates the impact of the choice of m on the reliability of inferential analyses. It proves that increasing m augments the information content of the data, yielding more efficient estimators and more powerful tests. However, the benefits of increasing m are considerable when the initial number of categories is small and become progressively smaller as m grows. The analyses carried out in the paper suggest values of m between 7 and 10. This range of values also limits the impact of inappropriate thresholds used in the categorization of continuous measurements.

Since the variance of the estimators decreases with m, the opportunity of merging categories should be carefully evaluated. Combining extreme categories should be considered only when m ≥ 10 and the probability of the vanishing category is sufficiently small (say around 0.025). Halving the number of categories appears to be an inefficient procedure. Dichotomization is the most critical practice because it produces an extremely severe loss of information and, consequently, of efficiency, which can be only partially restrained by choosing the logit link instead of alternative link functions.

Finally, numerical simulations show that increasing m enhances the efficiency even if the sample size is small. A high number of scale points is recommended to gather all the information contained in the sample, especially if it is of limited size. These experiments also indicate that a sample size n ≥ 300 avoids unobserved categories to a large extent and produces sufficiently efficient estimators.

These findings are illustrated through two case studies based on the discretization of continuous scales. In both cases an increasing number of categories reveals the relevance of explanatory variables which may remain undetected if the categorization is too coarse. Hence a large m enhances model specification.


Corresponding author: Maria Iannario, Department of Political Sciences, University of Naples Federico II, Napoli, Italy, E-mail:

Acknowledgments

We would like to thank Alan Agresti for helpful discussions and constructive comments and Alessio Farcomeni for graciously allowing us to utilize the Pain data in the Case studies Section.

  1. Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: None declared.

  3. Conflict of interest statement: The authors have declared no conflict of interest.

Appendix

Proof of Proposition 2

Let Y = (Y1, …, Ym) be a random vector with a multinomial distribution with parameter π = (π1, π2, …, πm), where πj ≥ 0 for j = 1, …, m, and ∑_{j=1}^{m} πj = 1. Its probability mass function is

P(Y) = π1^{Y1} π2^{Y2} ⋯ π_{m−1}^{Y_{m−1}} (1 − π1 − ⋯ − π_{m−1})^{1 − ∑_{j=1}^{m−1} Yj}.

Suppose, without loss of generality, that the first two categories are merged. Since Y1 = 1 and Y2 = 1 are incompatible events, the event (Y1 = 1) ∪ (Y2 = 1) is observed with probability π1 + π2. The probability of the merged variable Y^M = (Y1 + Y2, Y3, …, Ym) is

P_M(Y^M) = (π1 + π2)^{Y1 + Y2} π3^{Y3} ⋯ π_{m−1}^{Y_{m−1}} (1 − π1 − ⋯ − π_{m−1})^{1 − ∑_{j=1}^{m−1} Yj}.

Let ℓ(Y) = log{P(Y)} and ℓ_M(Y^M) = log{P_M(Y^M)}. Their difference is strictly negative whenever one of the merged categories is observed:

(A.1)  ℓ(Y) − ℓ_M(Y^M) = Y1 log(π1) + Y2 log(π2) − (Y1 + Y2) log(π1 + π2) = Y1 log[π1/(π1 + π2)] + Y2 log[π2/(π1 + π2)] < 0.

Inequality (A.1) shows that there is more information in the original distribution, with a larger number of categories, than in the distribution derived from merging, i.e. collapsing categories reduces the amount of sample information. □
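A numerical check of inequality (A.1), using the final expression in the proof; the probabilities below are arbitrary illustrative values:

```python
import math

def loglik_drop(y1, y2, p1, p2):
    """ell(Y) - ell_M(Y^M): the change in the log-likelihood contribution
    of one observation when the first two multinomial categories are
    merged (final expression of inequality (A.1))."""
    out = 0.0
    if y1:
        out += math.log(p1 / (p1 + p2))
    if y2:
        out += math.log(p2 / (p1 + p2))
    return out

# the drop is negative whenever one of the merged categories is observed
assert loglik_drop(1, 0, 0.2, 0.3) < 0
assert loglik_drop(0, 1, 0.2, 0.3) < 0
# and zero when neither is observed (no information is lost for that unit)
assert loglik_drop(0, 0, 0.2, 0.3) == 0.0
```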

References

1. Hosmer, DW, Lemeshow, S. Applied logistic regression. New York: John Wiley & Sons; 2000.

2. Gurland, J, Lee, I, Dahm, PA. Polychotomous quantal response in biological assay. Biometrics 1960;16:382–98. https://doi.org/10.2307/2527689.

3. Snapinn, S, Small, R. Tests of significance using regression models for ordered categorical data. Biometrics 1986;42:583–92. https://doi.org/10.2307/2531208.

4. Peracchi, F, Perotti, V. Subjective survival probabilities and life tables: an empirical analysis of cohort effects. Genus 2009;LXV:23–57.

5. O’Brien, SM. Cutpoint selection for categorizing a continuous predictor. Biometrics 2004;60:504–9.

6. Winship, C, Mare, RD. Regression models with ordinal variables. Am Socio Rev 1984;1:512–25. https://doi.org/10.2307/2095465.

7. Ramsay, JO. The effect of number of categories in rating scales on precision of estimation of scale values. Psychometrika 1973;38:513–32. https://doi.org/10.1007/bf02291492.

8. Agresti, A. Analysis of ordinal categorical data, 2nd ed. Hoboken: John Wiley & Sons; 2010.

9. Allahyari, E, Jafari, P, Bagheri, Z. A simulation study to assess the effect of the number of response categories on the power of ordinal logistic regression for differential item functioning analysis in rating scales. Comput Math Methods Med 2016:1–8. https://doi.org/10.1155/2016/5080826.

10. Iannario, M, Monti, AC, Piccolo, D. Robustness issues for CUB models. Test 2016;25:731–50. https://doi.org/10.1007/s11749-016-0493-3.

11. Piccolo, D. On the moments of a mixture of uniform and shifted binomial random variables. Quad Stat 2003;5:85–104.

12. McCullagh, P. Regression models for ordinal data. J Roy Stat Soc B 1980;42:109–42. https://doi.org/10.1111/j.2517-6161.1980.tb01109.x.

13. McCullagh, P, Nelder, JA. Generalized linear models, 2nd ed. London: Chapman & Hall; 1989.

14. Van Meter, EM, Garrett-Mayer, E, Bandyopadhyay, D. Proportional odds model for dose finding clinical trial designs with ordinal toxicity grading. Stat Med 2011;30:2070–80. https://doi.org/10.1002/sim.4069.

15. Everitt, BS, Palmer, CR, Horton, R. Encyclopaedic companion to medical statistics, 2nd ed. New York: Wiley; 2011.

16. Kotz, S, Read, CB, Balakrishnan, N, Vidakovic, B, Johnson, NL, Liu, I, et al. Proportional odds model. In: Kotz, S, Read, CB, Balakrishnan, N, Vidakovic, B, Johnson, NL, editors. Encyclopedia of statistical sciences. New York: Wiley; 2014.

17. Tutz, G. Regression for categorical data. Cambridge: Cambridge University Press; 2012.

18. Strömberg, S. Collapsing ordered outcome categories: a note of concern. Am J Epidemiol 1996;144:421–4. https://doi.org/10.1093/oxfordjournals.aje.a008944.

19. Johnson, VE, Albert, JH. Ordinal data modeling. New York: Springer-Verlag; 1999.

20. Kateri, M. Contingency table analysis. Methods and implementation using R. Birkhäuser, Basel: Springer; 2014.

21. Whitehead, J. Sample size calculations for ordered categorical data. Stat Med 1993;12:2257–71. https://doi.org/10.1002/sim.4780122404.

22. Oyeyemi, GM, Adewara, AA, Adebola, FB, Salau, SI. On the estimation of power and sample size in test of independence. Asian J Math Stat 2010;3:139–46. https://doi.org/10.3923/ajms.2010.139.146.

23. Iannario, M, Lang, JB. Testing conditional independence in sets of I × J tables by means of moment and correlation score tests with application to HPV vaccine. Stat Med 2016;35:4573–87. https://doi.org/10.1002/sim.7006.

24. Iannario, M, Monti, AC, Piccolo, D, Ronchetti, E. Robust inference for ordinal response models. Electron J Stat 2017;11:3407–45. https://doi.org/10.1214/17-ejs1314.

25. Rattray, J, Jones, MC. Essential elements of questionnaire design and development. J Clin Nurs 2007;16:234–43. https://doi.org/10.1111/j.1365-2702.2006.01573.x.

26. Christensen, RHB. ordinal - regression models for ordinal data. R package version 2019.4-25; 2019. Available from: http://www.cran.r-project.org/package=ordinal/.

27. Venables, WN, Ripley, BD. Modern applied statistics with S, 4th ed. New York: Springer; 2002.

28. McMahon, DM. Computing explained. Hoboken, NJ: Wiley-Interscience; 2008.

29. Bishop, YMM. Effects of collapsing multidimensional contingency tables. Biometrics 1971;27:545–62. https://doi.org/10.2307/2528596.

30. Goodman, LA. Association models and canonical correlation in the analysis of cross-classifications having ordered categories. J Am Stat Assoc 1981;76:320–34. https://doi.org/10.1080/01621459.1981.10477651.

31. Gilula, Z. Grouping and association in contingency tables: an exploratory canonical correlation approach. J Am Stat Assoc 1986;81:773–9. https://doi.org/10.1080/01621459.1986.10478334.

32. Kateri, M, Iliopoulos, G. On collapsing categories in two-way contingency tables. Statistics: J Theor Appl Stat 2003;37:443–55. https://doi.org/10.1080/0233188031000123780.

33. Manor, O, Matthews, S, Power, C. Dichotomous or categorical response? Analysing self-rated health and lifetime social class. Int J Epidemiol 2000;29:149–57. https://doi.org/10.1093/ije/29.1.149.

34. Purgato, M, Barbui, C. Dichotomizing rating scale scores in psychiatry: a bad idea? Epidemiol Psychiatr Sci 2013;22:17–9. https://doi.org/10.1017/s2045796012000613.

35. Cohen, J. The cost of dichotomization. Appl Psychol Meas 1983;7:249–53. https://doi.org/10.1177/014662168300700314.

36. Armstrong, BG, Sloan, M. Ordinal regression models for epidemiologic data. Am J Epidemiol 1989;129:191–204. https://doi.org/10.1093/oxfordjournals.aje.a115109.

37. Archer, KJ, Williams, AAA. L1 penalized continuation ratio models for ordinal response prediction using high-dimensional datasets. Stat Med 2012;31:1464–74. https://doi.org/10.1002/sim.4484.

38. Ananth, CV, Kleinbaum, DG. Regression models for ordinal responses: a review of methods and application. Int J Epidemiol 1997;26:1323–33. https://doi.org/10.1093/ije/26.6.1323.

39. Lipsitz, SR, Fitzmaurice, GM, Regenbogen, SE, Sinha, D, Ibrahim, JG, Gawande, AA. Bias correction for the proportional odds logistic regression model with application to a study of surgical complications. J Roy Stat Soc C 2012;62:233–50. https://doi.org/10.1111/j.1467-9876.2012.01057.x.

40. Stockler, M, Sourjina, T, Grimison, P, Gebski, V, Byrne, M, Harvey, V, et al. A randomized trial of capecitabine (C) given intermittently (IC) rather than continuously (CC) compared to classical CMF as first-line chemotherapy for advanced breast cancer (ABC). J Clin Oncol 2007;25:1031. https://doi.org/10.1200/jco.2007.25.18_suppl.1031.

41. Manuguerra, M, Heller, G. ordinalCont - ordinal regression analysis for continuous scales. R package version 2019.2.0.1; 2019. Available from: http://www.cran.r-project.org/package=ordinalCont/.

42. Manuguerra, M, Heller, G. Ordinal regression models for continuous scales. Int J Biostat 2010;6:14. https://doi.org/10.2202/1557-4679.1230.

43. Capogna, G, Camorcia, M, Stirparo, S, Valentini, G, Garassino, A, Farcomeni, A. Multidimensional evaluation of pain during early and late labor: a comparison of nulliparous and multiparous women. Int J Obstet Anesth 2010;19:167–70. https://doi.org/10.1016/j.ijoa.2009.05.013.

44. Hjermstad, MJ, Fayers, PM, Haugen, DF, Caraceni, A, Hanks, GW, Loge, JH, et al. Studies comparing numerical rating scales, verbal rating scales, and visual analogue scales for assessment of pain intensity in adults: a systematic literature review. J Pain Symptom Manag 2011;41:1073–93. https://doi.org/10.1016/j.jpainsymman.2010.08.016.

Supplementary Material

The online version of this article offers supplementary material (https://doi.org/10.1515/ijb-2021-0013).

Received: 2020-08-05
Accepted: 2021-08-23
Published Online: 2021-09-21

© 2021 Maria Iannario et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.