Although we believe that the proposed loss function *l*_{0}(*f*, *p*) in (6) is reasonable for the given application, there are many loss functions in the literature and various criteria underlying the choice of a loss function (Jafari Jozani and Tabrizi 2013). We now investigate Bayes estimators based on three alternative loss functions, retaining the same Beta(*a*, *b*) prior for the unknown parameter *p*. Although each of these loss functions is individually appealing, we will observe that the choice of loss function can have a considerable impact on the resultant betting fraction.

Our first alternative is absolute error loss, a common loss function, given by

$${l}_{1}(f,p)=\mid f-k\left(p\right)\mid $$

where *k*(*p*) is the Kelly fraction (3).

With the absolute error loss function, it is well known (Berger 1985) that the posterior median of *k*(*p*) is the Bayes estimator. Some distribution theory gives the following distribution function for *k*(*p*):

$$F\left(k\right)=\text{Prob}(k\left(p\right)\le k)=\begin{cases}{\int}_{0}^{1/\theta}\pi (p\mid x)\,dp & k=0\\[4pt] {\int}_{0}^{\frac{k\left(\theta -1\right)+1}{\theta}}\pi (p\mid x)\,dp & 0<k\le 1\end{cases}$$

from which *F*(*f*_{1}) = 1/2 provides the Bayes estimator based on absolute error loss

$${f}_{1}=\begin{cases}\frac{\tilde{p}\theta -1}{\theta -1} & {\int}_{0}^{1/\theta}\pi (p\mid x)\,dp<1/2\\[4pt] 0 & {\int}_{0}^{1/\theta}\pi (p\mid x)\,dp\ge 1/2\end{cases}=\begin{cases}\frac{\tilde{p}\theta -1}{\theta -1} & \tilde{p}>1/\theta\\[4pt] 0 & \tilde{p}\le 1/\theta\end{cases}$$(16)

where $\tilde{p}$ is the posterior median of *p* corresponding to (11). We again note the similarity of (16) with the Kelly fraction (3) and the Bayes estimator *f*_{0} in (15). In applications where the posterior distribution of *p* is nearly symmetric, there is little difference between the two Bayes estimators *f*_{0} and *f*_{1}. Near symmetry occurs when the posterior Beta parameters *x* + *a* and *n* − *x* + *b* are large and comparable in magnitude. Given specified values of *a*, *b*, *x*, *n* and *θ*, we can easily obtain *f*_{1} in (16) numerically.
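As a minimal sketch of this computation (in Python rather than the R used in the Appendix), the following obtains *f*_{1} from the posterior median. The values of *a*, *b*, *x*, *n* and *θ* are illustrative assumptions, not taken from the text:

```python
from scipy.stats import beta

# Illustrative (assumed) values: Beta(a, b) prior, x successes in n
# trials, and decimal odds theta > 1.
a, b = 1.0, 1.0
x, n = 60, 100
theta = 2.0

# The posterior of p is Beta(x + a, n - x + b); p_tilde is its median.
p_tilde = beta.ppf(0.5, x + a, n - x + b)

# Bayes estimator under absolute error loss, equation (16):
# bet (p_tilde*theta - 1)/(theta - 1) only when p_tilde > 1/theta.
f1 = (p_tilde * theta - 1) / (theta - 1) if p_tilde > 1 / theta else 0.0
print(f1)
```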

Our second alternative is squared error loss, also a common loss function, given by

$${l}_{2}(f,p)={\left(f-k\left(p\right)\right)}^{2}$$

where *k*(*p*) is the Kelly fraction (3).

With the squared error loss function, it is well known (Berger 1985) that the posterior mean of *k*(*p*) is the Bayes estimator. Therefore, the Bayes estimator based on squared error loss is

$${f}_{2}={\int}_{0}^{1}k\left(p\right)\pi (p\mid x)\,dp={\int}_{1/\theta}^{1}\left(\frac{p\theta -1}{\theta -1}\right)\pi (p\mid x)\,dp.$$(17)

Given specified values of *a*, *b*, *x*, *n* and *θ*, we note that *f*_{2} in (17) can be easily obtained numerically.
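A short sketch of this numerical computation (again in Python, with the same assumed illustrative values as before) simply integrates (17) against the posterior density:

```python
from scipy.stats import beta
from scipy.integrate import quad

# Illustrative (assumed) values, as before.
a, b = 1.0, 1.0
x, n = 60, 100
theta = 2.0

# Posterior of p is Beta(x + a, n - x + b).
post = beta(x + a, n - x + b)

# Bayes estimator under squared error loss, equation (17):
# integrate (p*theta - 1)/(theta - 1) times the posterior density
# over (1/theta, 1).
f2, _ = quad(lambda p: (p * theta - 1) / (theta - 1) * post.pdf(p),
             1 / theta, 1)
print(f2)
```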

Even though squared error loss is a common loss function, we see from (3), (15), (16) and (17) that *f*_{2} (the Bayes estimator based on squared error loss) provides a fundamentally different betting fraction than *k*(*p*), *f*_{0} and *f*_{1}. With the other three fractions, there are always scenarios in which one will not bet (i.e. the betting fraction is zero). This is never the case with *f*_{2}; since the posterior density always places some mass on (1/*θ*, 1), the integrand in (17) guarantees *f*_{2} > 0.

Our third alternative loss function provides a compromise between absolute error loss and squared error loss (i.e. we consider an exponent 1 < *k* < 2). The loss function is also motivated by common complaints involving the Kelly criterion. As mentioned, many gamblers claim that the Kelly fraction is too large. We therefore consider a loss function that introduces differing penalties for overestimation and underestimation of the true Kelly fraction via the parameters *c*_{1} > 0 and *c*_{2} > 0. We define this general loss function

$${l}_{3}(f,p)=\left({c}_{1}{I}_{f>k\left(p\right)}+{c}_{2}\right){\mid f-k\left(p\right)\mid}^{k}$$(18)

where *I* is the indicator function. With appropriate selections of *c*_{1}, *c*_{2} and *k*, we observe that *l*_{1}(*f*, *p*) and *l*_{2}(*f*, *p*) are special cases of *l*_{3}(*f*, *p*). For illustration of a different sort of loss function, we consider

$${c}_{1}=1,\quad {c}_{2}=1\quad \text{and}\quad k=1.5$$(19)

such that *k* lies halfway between absolute error loss and squared error loss, and the penalty of overestimation is double the penalty of underestimation. The estimation penalties in (19) may be considered extreme. Therefore, we also consider the settings

$${c}_{1}=1,\quad {c}_{2}=2\quad \text{and}\quad k=1.5$$(20)

such that the penalty of overestimation is 1.5 times the penalty of underestimation.

In the case of the loss function *l*_{3}(*f*, *p*), the Bayes estimator is obtained by minimizing the expected posterior loss

$$G\left(f\right)={\int}_{0}^{1}{l}_{3}(f,p)\pi (p\mid x)dp$$(21)

with respect to *f* = *f*(*x*).

An analytic expression for the minimum of *G*(*f*) in (21) does not appear to be attainable. Fortunately, simple quadrature rules such as Simpson's rule can approximate the integral (21). With quadrature rules, one does need to be careful of the discontinuity in (18) as a function of *p*. Moreover, since in practice we only need the optimal fraction to roughly three decimal places, the minimization can be treated as a discrete optimization problem. Therefore, a brute force procedure can be used where *f* is incremented from 0.000 to 1.000 in steps of size 0.001, and *G*(*f*) is calculated for each incremental value. We then obtain the Bayes estimator *f*_{3a} = *f*_{3a}(*x*) which minimizes *G*(*f*) for the observed *x* based on the loss function settings in (19), and similarly the Bayes estimator *f*_{3b} = *f*_{3b}(*x*) based on the settings in (20). R code which carries out the optimization is provided in the Appendix.
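The brute force procedure can be sketched as follows (a Python translation of the approach, not the Appendix's R code; the prior, data, and odds values are assumed for illustration, and the loss settings are those in (19)):

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import simpson

# Illustrative (assumed) values, plus the loss settings in (19).
a, b = 1.0, 1.0
x, n = 60, 100
theta = 2.0
c1, c2, k_exp = 1.0, 1.0, 1.5

# Posterior of p is Beta(x + a, n - x + b).
post = beta(x + a, n - x + b)

# Kelly fraction k(p) from (3): bet only when p > 1/theta.
def kelly(p):
    return np.where(p > 1 / theta, (p * theta - 1) / (theta - 1), 0.0)

# A fine grid over p for Simpson's rule; a fine grid also limits the
# error from the discontinuity of (18) in p.
p_grid = np.linspace(0.0, 1.0, 2001)
w = post.pdf(p_grid)
kp = kelly(p_grid)

def G(f):
    """Expected posterior loss (21) under l3 in (18), via Simpson's rule."""
    loss = (c1 * (f > kp) + c2) * np.abs(f - kp) ** k_exp
    return simpson(loss * w, x=p_grid)

# Brute force: increment f from 0.000 to 1.000 in steps of 0.001 and
# keep the minimizer, giving the Bayes estimator f_{3a}.
f_grid = np.arange(0.0, 1.0005, 0.001)
f3a = f_grid[np.argmin([G(f) for f in f_grid])]
print(f3a)
```

Because the penalty on overestimation exceeds that on underestimation, the resulting fraction sits below the symmetric-loss estimators; replacing the settings with those in (20) gives *f*_{3b} in the same way.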
