A Hybrid of Deep CNN and Bidirectional LSTM for Automatic Speech Recognition

Abstract Deep neural networks (DNNs) have been playing a significant role in acoustic modeling. Convolutional neural networks (CNNs) are an advanced variant of DNNs that achieve a 4–12% relative gain in word error rate (WER) over DNNs. The presence of spectral variations and local correlations in the speech signal makes CNNs especially capable for speech recognition. Recently, it has been demonstrated that bidirectional long short-term memory (BLSTM) yields higher recognition rates in acoustic modeling because it is well suited to building higher-level representations of acoustic data. Since both the spatial and the temporal properties of the speech signal are essential for a high recognition rate, combining the two networks is a natural step. In this paper, a hybrid CNN-BLSTM architecture is proposed to exploit both sets of properties and to improve continuous speech recognition. Further, we explore design choices such as weight sharing, the appropriate number of hidden units, and the ideal pooling strategy for the CNN, with particular attention to how many BLSTM layers are effective. The paper also addresses another shortcoming of CNNs: speaker-adapted features cannot be modeled in them directly. Finally, various non-linearities, with and without dropout, are analyzed for speech tasks. Experiments indicate that the proposed hybrid architecture with speaker-adapted features and maxout non-linearity with dropout shows a 5.8% and 10% relative decrease in WER over the CNN and DNN systems, respectively.


Introduction
Deep neural network (DNN)-based acoustic models have almost displaced Gaussian mixture models (GMMs) from automatic speech recognition (ASR) systems [18]. Owing to the fully connected nature of a DNN, structural locality in the feature space is not captured, which is a fundamental drawback. Second, DNNs face the vanishing gradient problem during stochastic gradient descent (SGD) training [11]. Convolutional neural networks (CNNs), in contrast, successfully model the structural locality of the feature space [24]. They also reduce translational variance and tolerate disturbances and small shifts in the feature space, because they pool over local frequency regions, and they can exploit long-time dependencies among speech frames by using prior knowledge of the speech signal. However, Soltau et al. [35] reported that CNNs do not cope well with semi-clean data in ASR systems, so performance deteriorates on such data. On the other side, recurrent neural networks (RNNs) offer higher recognition accuracy by capturing long contexts, especially for noise-robust tasks [27]. However, the vanishing and exploding gradient problems bound the capability of RNNs to learn time dependencies [3]. To tackle these problems, long short-term memory (LSTM) was introduced, which controls the flow of information through a special unit called the memory block [31]. A unidirectional LSTM-RNN sees no right context, so target delays with respect to the features arise, whereas low latency between inputs and corresponding outputs is preferred for acoustic modeling. Therefore, a special architecture that processes the input sequence in both directions before making decisions came into existence: the bidirectional LSTM (BLSTM) [33].
In this paper, we aim to design a deep hybrid structure for acoustic modeling that resolves the problem of vanishing gradients and makes use of prior knowledge. After analyzing the input feature set, various non-linearities, and CNN architectures, a hybrid CNN-BLSTM structure is proposed as a solution in which the CNN controls the translational variance and the BLSTM resolves the vanishing gradient problem. The information from speech frames is captured by the CNN, and the BLSTM layers process this input in both directions. Abdel-Hamid et al. [1] used a single limited weight sharing (LWS) convolutional layer for speech. The benefit of LWS is that it allows each local weight to focus on the most confusable parts of the signal. Sainath et al. [29] investigated multiple full weight sharing (FWS) convolutional layers and argued that these layers are more beneficial than a single LWS convolutional layer. In this paper, multiple layers of both LWS and FWS are analyzed. Pooling strategies like Lp pooling [34] and stochastic pooling [44] offer better generalization in vision tasks than max pooling; hence, the effectiveness of these pooling methods is explored for continuous speech recognition. Locality in frequency and time must be present in the features for a CNN. Feature space maximum likelihood linear regression (fMLLR) features [10] improve the performance of DNNs. Sainath et al. [29] attempted to apply the fMLLR transformation directly on log-mel features: the log-mel features were transformed into an uncorrelated space, the fMLLR transformation was applied, and the new features were then transformed back into the correlated space. Unfortunately, no improvement in recognition rate was observed. Therefore, a new methodology is introduced in which fMLLR features are fed directly to a fully connected (FC) layer. In the next FC layer, the CNN-BLSTM features are joined with the fMLLR features as in [34].
This method is found more fruitful than creating fMLLR-transformed log-mel features. Hessian-free (HF) sequence training [21] offers a 10-15% relative gain over a cross-entropy (CE)-trained DNN. Therefore, various non-linearities and dropout are investigated under HF sequence training.
The performance of the hybrid architecture is explored for a continuous speech recognition task on a Hindi dataset. The experiments show that LWS and FWS perform almost the same with multiple layers for continuous speech recognition, and that there is little difference among the various pooling strategies, although max pooling is found best. fMLLR features improve the input feature set and thereby reduce the word error rate (WER) by 1.1% relative. Dropout combined with HF training also improves the recognition rate. Overall, a relative improvement of 5.8% and 10% is obtained over the best-performing CNN and DNN, respectively.
The subsequent sections of the paper are organized as follows. In Section 2, the basic CNN, LSTM-RNN, and BLSTM acoustic models are given. Section 3 provides details about the proposed architecture and its experimental setup. In Section 4, an exploration of various pooling strategies is presented. Analysis of input features is discussed in Section 5. In Section 6, the various non-linear functions and dropout are explained. In Section 7, experimental results are compared with other existing and competitive techniques. Finally, Section 8 presents the conclusion of the paper.

Baseline CNN and LSTM Framework
This section introduces the CNN, LSTM-RNN, and BLSTM acoustic models that are employed as baseline systems.

CNN Acoustic Model
The main aim of a CNN, the advanced version of the DNN, is to discover local structure in input data. A CNN successfully reduces spectral variations and adequately models the spectral correlations in acoustic features. Earlier, convolution was applied only along the frequency axis, which made the ASR system more robust against variations generated by different speaking styles and speakers [1,9,29]. Later, Toth [37] and Waibel et al. [41] applied convolution along the time axis. In many computer vision tasks, researchers apply convolution in both frequency and time [22,34]. The CNN introduces three new concepts over the DNN: local filters, weight sharing, and pooling. Local filters are imposed on a subset region of the previous layer to capture a specific kind of local pattern known as structural locality. The stride of the local filters is a necessary element of the CNN that reduces the temporal resolution. Weight sharing is used with local filters to reduce translational variance.
To utilize the characteristics of different spectral bands, two weight-sharing schemes, LWS and FWS, are applied in convolutional layers [2]. They combine the information received from the neurons and assign the collected information to the various spectral bands. Abdel-Hamid et al. [1] favor LWS, arguing that different spectral regions exhibit different spectral phenomena. On the other hand, Sainath et al. [28] favor FWS, showing that it matches the performance of LWS while being much simpler, since the same weights are applied across all spectral regions. A pooling layer regularly follows the convolutional layer. Pooling provides additional translational and rotational invariance, and also reduces the dimensionality of the feature maps and the number of parameters [19]. CNN-based acoustic models have outperformed state-of-the-art DNN systems in ASR by selecting more adequate features for different classes like speaker, gender, and phone [1,2,28,29,38]. Toth [39] argues that the CNN improves the mean per-speaker recognition rate and reduces its variance by ≈5.7% when compared to the DNN.

LSTM-RNN Acoustic Model
The LSTM-RNN introduces the idea of gate functions, which determine whether a neuron's activation is transformed or just passed through [42]. This gating forces the layer's inputs and outputs to be of the same size. The LSTM-RNN successfully models long-time dependencies, but it models the input sequence unidirectionally, hence cannot retain structural locality, and is more prone to overfitting. It has a forward hidden state h→ that processes the input from left to right, using only the left context of the current input. Deep LSTM-RNNs are made by stacking LSTM-RNN layers. Their benefit over conventional LSTM-RNNs is that the parameters are used optimally by distributing them over space through multiple layers. Deep LSTM-RNNs have given good results in large-vocabulary speech recognition tasks [15,31].

Bidirectional LSTM
The LSTM is unidirectional; hence, it accesses only a small amount of right context, which deteriorates the recognition rate. The way to improve the recognition rate without increasing latency is to process the input bidirectionally. The BLSTM processes the input in both directions, forward as well as backward, using separate layers. It has a forward hidden sequence h→ for the left context and a backward hidden sequence h← for the right context, and every next layer must be fed from both the forward and the backward layer. Graves et al. used deep BLSTM to perform framewise phoneme classification [13] and hybrid speech recognition [14] on the TIMIT database.
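The bidirectional wiring described above can be sketched in a few lines of numpy. This is a minimal illustration only: a plain tanh recurrence stands in for the LSTM cell (no gates or memory block), and all names and sizes are toy assumptions; the point shown is that each frame's output concatenates a left-to-right state and a right-to-left state.

```python
import numpy as np

def simple_rnn_pass(x, W, U, b, reverse=False):
    """One directional pass; a tanh cell stands in for the LSTM cell."""
    T = x.shape[0]
    H = b.shape[0]
    h = np.zeros(H)
    outputs = np.zeros((T, H))
    steps = range(T - 1, -1, -1) if reverse else range(T)
    for t in steps:
        h = np.tanh(x[t] @ W + h @ U + b)
        outputs[t] = h
    return outputs

def blstm_layer(x, params_fw, params_bw):
    """Concatenate forward (left-context) and backward (right-context) states."""
    h_fw = simple_rnn_pass(x, *params_fw)                 # left to right
    h_bw = simple_rnn_pass(x, *params_bw, reverse=True)   # right to left
    return np.concatenate([h_fw, h_bw], axis=1)           # (T, 2H) per frame

rng = np.random.default_rng(0)
T, D, H = 5, 8, 4   # frames, feature dim, hidden units per direction (toy sizes)
make = lambda: (rng.normal(0, 0.1, (D, H)), rng.normal(0, 0.1, (H, H)), np.zeros(H))
y = blstm_layer(rng.normal(size=(T, D)), make(), make())
print(y.shape)  # (5, 8): every frame sees both left and right context
```

In a real BLSTM each direction is an LSTM with input, forget, and output gates; only the forward/backward concatenation pattern carries over unchanged.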

Hybrid CNN-BLSTM Architecture
In this section, hybrid CNN-BLSTM architecture and its experimental setup are explained.

Proposed Architecture
The proposed architecture incorporates three different modules, CNN, BLSTM, and fully connected (FC) layers, as shown in Figure 1. First, a few convolutional layers are placed to reduce the frequency variance present in the input signal: two convolutional layers with 256 feature maps each are used, because the feature dimension for speech is small (i.e. 40) and the high- and low-frequency regions show quite different behaviors. After processing by the two convolutional layers, the feature map is reduced to a much smaller size, near 16, so there is no further need to model locality or remove invariance. Sainath et al. [30] argue that a 9 × 9 frequency-time filter for the first convolutional layer and a 4 × 3 frequency-time filter for the second cover the entire frequency-time space. Therefore, the first convolutional layer uses a 9 × 9 frequency-time filter, and the second layer a 4 × 3 filter. Initially, max pooling is used in our model, and pooling is applied only in the frequency domain. For both layers, the pooling size is 2 and the stride is also 2.
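The shrinking of the 40-dimensional frequency axis through these two layers can be traced with simple size arithmetic. This sketch assumes "valid" (unpadded) convolution and uses the frequency extents of the filters quoted above (9 and 4) with pooling size 2 / stride 2 in frequency; the intermediate size of 16 after the first pooling matches the "near 16" figure in the text.

```python
def conv_out(n, k):
    """Output length of a valid (unpadded) convolution along one axis."""
    return n - k + 1

def pool_out(n, size, stride):
    """Output length after pooling along one axis."""
    return (n - size) // stride + 1

freq = 40                          # log-mel feature dimension
freq = conv_out(freq, 9)           # after 9x9 conv (frequency extent 9) -> 32
freq = pool_out(freq, 2, 2)        # after pooling -> 16 ("near 16")
after_pool1 = freq
freq = conv_out(freq, 4)           # after 4x3 conv (frequency extent 4) -> 13
freq = pool_out(freq, 2, 2)        # after pooling -> 6
print(after_pool1, freq)           # 16 6
```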
In a CNN, the dimension of a layer is the product of the number of feature maps × time × frequency, which yields a large input dimension for the next layer. Thus, there is a need to reduce the feature dimension. After the CNN layers, a linear layer is applied to reduce the dimension without any loss in accuracy, as shown in Figure 1; 256 outputs from the linear layer are found appropriate. This stage is called frequency modeling.
After frequency modeling, the CNN's output is passed to the BLSTM layers, which are well suited to modeling the signal in time. Here, two BLSTM and three FC layers are preferred, but depending on the experiment the number of layers can vary. Each BLSTM layer contains 832 cells and a 512-unit projection layer (256 units per direction) for dimensionality reduction. The BLSTM is trained for 20 time steps with truncated backpropagation through time.
After frequency and temporal modeling, the output of the BLSTM layers is passed to the FC layers. These layers are suitable for producing a higher-order feature representation that is easily discriminable into the different classes. All fully connected layers contain 1024 hidden units. Different speakers have different accents, loudness, etc., which creates variability in speech. This variability can be reduced by a popular speaker adaptation technique, fMLLR [10]. A CNN cannot directly model fMLLR features. Earlier, the fMLLR transformation was applied to log-mel features [29]; unfortunately, no improvement was obtained. In our work, a new methodology is introduced to integrate fMLLR features with log-mel features effectively: the CNN layer is fed log-mel filter bank (FB) + ∆ + ∆∆ features, and an FC layer is fed fMLLR features. The outputs of the fMLLR FC layer and the log-mel FB CNN-BLSTM layers are combined in the next FC layer in the same way as in [34]. Most CNN work uses FC layers to perform class discrimination based on the local information learned. Our framework is a flexible joint architecture in which the CNN-BLSTM module performs frequency and temporal modeling, the DNN module joins the fMLLR and CNN-BLSTM features, and class discrimination is done by the topmost layer, i.e. the softmax layer. The complete model is trained jointly.

Experimental Setup and Hardware Configuration
The various experiments on the CNN-BLSTM architecture are carried out on a medium-size corpus containing 200 K Hindi spoken utterances (around 150 h). All the models are fed 40-dimensional log-mel FB features, computed every 10 ms. Asynchronous stochastic gradient descent (ASGD) [8] is chosen to optimize the network; it works with a constant learning rate of 10^-5 to pre-train all the neural networks with the CE criterion. Distributed ASGD [17] is used to perform the HF sequence-training experiments. The Glorot-Bengio strategy [11] is used to initialize the weights. All the layers in the BLSTM are initialized as Gaussian, with a variance of 1/(number of inputs). Moreover, the learning rate is selected separately for each network as the largest value that still gives stable training, and learning rates are exponentially decayed.
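An exponentially decayed schedule of the kind used here can be written in one line. The initial rate (10^-5) is the pre-training value quoted above; the per-step decay factor of 0.9 is purely an illustrative assumption, not a value from the paper.

```python
def exp_decay_lr(initial_lr, decay, step):
    """Exponentially decayed learning rate: lr_t = lr_0 * decay^t."""
    return initial_lr * (decay ** step)

# Illustrative schedule: constant pre-training rate 1e-5, assumed decay 0.9.
lrs = [exp_decay_lr(1e-5, 0.9, s) for s in range(5)]
print(lrs[0], lrs[-1])  # rate decreases monotonically toward zero
```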
The proposed model is tested on a supercomputer named PARAM Shavak. It consists of two multicore CPUs with 18 cores each, along with two accelerator cards (i.e. GPGPUs). The machine has 64 GB RAM, 8 TB storage, Nvidia Pascal-based co-processing technology, and a deep learning GPU software environment, and runs the Ubuntu 17.04 operating system. The Kaldi toolkit, together with Python, is used for the implementation of the proposed model.

Limited vs. Full Weight Sharing
Weight sharing has clear advantages: it reduces model complexity and makes the network easier to train. In computer vision, LWS gives better results than FWS [23]. The properties of the speech signal change across frequency bands, so separate sets of weights for different frequency bands may be more fruitful, as they allow the detection of distinct feature patterns in different filter bands along the frequency axis. Therefore, the LWS scheme looks more appropriate for CNN acoustic models. In ASR, most LWS work has been performed on a single layer using eight different band-limited LWS filters [1]. However, our architecture uses multiple convolutional layers, because multiple convolutional layers have been shown to enhance system performance [29]. Therefore, multiple LWS convolutional layers with two LWS filters are used; the local activities of an LWS filter preserve the local information (i.e. locality), and each LWS filter is fed by the LWS filter below it. Abdel-Hamid et al. [1] reported that increasing the number of LWS filters in a single layer deteriorates the performance further; therefore, the use of multiple FWS convolutional layers is also logical, implemented in the same way as in image recognition. Sainath et al. [30] likewise showed that adding convolutional layers is beneficial, and claimed that the differences between low- and high-frequency components are expertly captured by using FWS in the convolutional layers. Table 1 shows the results for LWS and FWS under different setups: two LWS filters are used in the experiment and compared with FWS in terms of performance, number of parameters, and relative improvement in WER.

Number of Hidden Units
The local behavior of speech features in high-frequency regions differs from that in low-frequency regions [1]. This issue is addressed by applying weight sharing within frequency components; in simple words, different weights are used for different frequency components. Alternatively, weight sharing can be applied across all the time and frequency components (FWS), in which case the differences between frequency regions can still be captured by using a large number of hidden units. Table 2 shows the effect of different numbers of hidden units on WER for two FWS convolutional layers. The results confirm that as the number of hidden units increases, the WER steadily decreases. The reason for this gain is that increasing the number of hidden units with FWS deals well with the frequency variations in the input signal.

Type of Pooling (Max, Stochastic, Lp)
Pooling is an important concept that transforms the joint feature representation into valuable information by keeping the useful information and eliminating the irrelevant. Small frequency shifts, which are common in speech signals, are efficiently handled using pooling, and pooling also helps reduce the spectral variance present in the input speech. It maps the input from a group of adjacent units into a single output by applying a special function. After the element-wise non-linearities, the features are passed through the pooling layer, which down-samples the feature maps coming from the previous layer and produces new feature maps with a condensed resolution, drastically reducing the spatial dimension of the input. It serves two main purposes. First, it reduces the number of parameters or weights by up to 65%, thus lessening the computational cost. Second, it controls overfitting of the training data, i.e. the situation where a model is so closely tuned to the training examples that it fails to generalize.
In this section, various pooling strategies that have been successfully applied in computer vision tasks are investigated for speech recognition tasks. A local region of the previous convolutional layer feeds the inputs to the pooling layer that down-samples the inputs to fetch a single output from that region. Max pooling is the most popular strategy for CNNs [1]. It selects the maximum value from the pooling region. The function for max pooling is given in Eq. (1).
s_j = max_{i ∈ R_j} a_i,  (1)

where R_j is a pooling region and {a_1, ..., a_|R_j|} is the set of activations within it. Zeiler and Fergus [44] have shown in their experiments that overfitting of the training data is a major problem with max pooling. Lp pooling [34] and stochastic pooling [44] are alternative pooling strategies that address this problem. Bruna et al. [4] claim that the generalization offered by Lp pooling is better than that of max pooling. In Lp pooling, a weighted average of the activations in the pooling region is taken. Equation (2) shows the operation for Lp pooling:

s_j = ( (1/|R_j|) Σ_{i ∈ R_j} a_i^p )^(1/p).  (2)
If p = 1, it works as average pooling, while p = ∞ leads to max pooling. In average pooling, areas of high activation are down-weighted by areas of low activation, because all elements in the pooling region are examined and their average is taken; this is the major problem with average pooling. The issues of max and average pooling are addressed by another pooling strategy known as stochastic pooling. Stochastic pooling first computes a probability p_i for each activation in region R_j by normalizing the activations within the region, as given in Eq. (3):

p_i = a_i / Σ_{k ∈ R_j} a_k.  (3)
These probabilities define a multinomial distribution that is used to select a location l within the region and the corresponding pooled activation a_l. In simple words, an activation is selected randomly according to this multinomial distribution. The stochastic component prevents overfitting, while the advantages of max pooling are retained. For Lp pooling, p > 1 is examined as a trade-off between average and max pooling. Both Lp and stochastic pooling are compared to max pooling on the Hindi dataset. For both convolutional layers, a pooling size of 2 and a stride of 2 are used for all three pooling strategies (with p = 2 for Lp pooling). The results are shown in Table 3.
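The three pooling operators being compared can be written as small functions, each mapping one pooling region R_j (a short vector of activations) to a single output, following Eqs. (1)-(3). This is a toy numpy sketch for illustration, not the paper's implementation.

```python
import numpy as np

def max_pool(region):
    """Eq. (1): select the maximum activation in the region."""
    return np.max(region)

def lp_pool(region, p):
    """Eq. (2): p = 1 reduces to average pooling; p -> inf approaches max."""
    return np.mean(np.abs(region) ** p) ** (1.0 / p)

def stochastic_pool(region, rng):
    """Eq. (3): normalize activations, then sample a location from them."""
    probs = region / region.sum()
    idx = rng.choice(len(region), p=probs)   # multinomial selection
    return region[idx]

region = np.array([1.0, 3.0])                # one pooling region of size 2
rng = np.random.default_rng(0)
print(max_pool(region))                      # 3.0
print(lp_pool(region, p=1))                  # 2.0, i.e. the average
print(lp_pool(region, p=100))                # close to 3.0, i.e. near max
print(stochastic_pool(region, rng))          # 1.0 or 3.0, with probs 0.25/0.75
```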
Max pooling shows improvements over stochastic and Lp pooling, although the gains are not significant for speech tasks. The experiment concludes that pooling strategies like Lp and stochastic pooling do not offer any improvement over max pooling for speech tasks.

Pooling in Both Frequency and Time
In speech recognition, CNNs with pooling in frequency are generally investigated [1,9,29], although Toth [37] and Waibel et al. [41] explored CNNs with pooling in time only; in computer vision, pooling is generally applied in both frequency and time [22,34]. We also analyze pooling in both frequency and time for our architecture. Table 4 shows the results for the various pooling strategies. Pooling in both frequency and time slightly improves stochastic pooling, but the gains are limited. From the analysis, it is clear that pooling in time keeps the same performance when the pooling windows overlap; without overlap, it amounts to subsampling the signal in time. Therefore, an overlapped pooling window is only a way to smooth the signal in time. It is also observed that pooling in frequency and time is slightly better than pooling in frequency alone, but this gain is expected to diminish for large tasks.

Number of BLSTM and Fully Connected Layers
The proposed architecture is deep in both the temporal and the spatial domain. Different structures may affect its performance: the capability to learn temporal relationships depends on the BLSTM layers, and the capability to learn feature transforms depends on the number of FC layers.
To obtain optimal performance, seven different structures are designed by varying the number of BLSTM and FC layers. The non-linearity for the FC layers is maxout. The WER as a function of the number of BLSTM and FC layers is shown in Table 5. The results show that increasing the number of BLSTM layers up to 3 improves the performance; after that, performance starts to deteriorate. It is also observed that BLSTM layers offer improvements over FC layers for the same input feature set.

Analysis of Input Features
The features required for CNNs must be locally correlated in time and frequency. Mel-frequency cepstral coefficient (MFCC) features [7] are generally the preferred choice in speech recognition, but they lack locality in frequency and hence cannot be used with a CNN [1]. This property, locality in frequency, is offered by mel FB features [26]. Various researchers [10,25,40] have proposed speaker adaptation techniques to improve ASR performance, with marked improvements in recognition rate.
Appending extra time-dynamic information to the features is a common way to improve performance: adding the time-derivative features called delta (∆) and double-delta (∆∆) increases system performance [43]. The experiments discussed in Section 4 were performed with log-mel + ∆ + ∆∆ features. Another common method, vocal tract length normalization (VTLN), normalizes speaker differences such as accent and stress by normalizing for vocal tract length [40]. It preserves locality in frequency by applying the warping to the log-mel FB of each speaker, and hence looks fruitful. In this section, the performance of these richer feature sets is analyzed.
The improvements in recognition rate from including VTLN and delta information are presented in Table 6. Note that the same number of parameters is used for each input feature set. The different transformations of log-mel features are not tested for the DNN, because fMLLR has been observed to perform better than VTLN-warped log-mel features [10]. Sainath et al. [29] applied fMLLR to the CNN by transforming the VTLN-warped log-mel features, which showed no improvement. Multi-scale neural networks [34] combine input from different layers to improve results; the same idea is applied here to combine fMLLR features with CNN-BLSTM features in the FC layers. The output of the fMLLR FC layer is fed into the next FC layer, where it is combined with the CNN-BLSTM features. By combining fMLLR and CNN-BLSTM features, a 1.1% relative gain is achieved. It is also observed that feeding fMLLR features through an FC layer is more effective than fMLLR-transformed log-mel features.
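The delta and double-delta features appended to the log-mel stream are conventionally computed with the standard regression formula d_t = Σ_{n=1..N} n (c_{t+n} − c_{t−n}) / (2 Σ_{n=1..N} n²); a sketch follows. The window size N = 2 is a common choice, not a value stated in the paper, and edge frames are handled by replication.

```python
import numpy as np

def deltas(feats, N=2):
    """Delta features over a (T, D) array of frames; apply twice for ∆∆."""
    T = feats.shape[0]
    padded = np.pad(feats, ((N, N), (0, 0)), mode="edge")  # replicate edges
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = np.zeros_like(feats)
    for t in range(T):
        out[t] = sum(n * (padded[t + N + n] - padded[t + N - n])
                     for n in range(1, N + 1)) / denom
    return out

# 10 identical 40-dimensional frames: a time-constant signal has zero deltas.
frames = np.tile(np.arange(40.0), (10, 1))
d = deltas(frames)        # ∆
dd = deltas(d)            # ∆∆
full = np.hstack([frames, d, dd])   # log-mel FB + ∆ + ∆∆, dimension 120
print(full.shape, np.allclose(d, 0))
```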

Non-Linear Units and Dropout
In this section, the performance of different non-linear functions, i.e. sigmoid, maxout, rectified linear units (ReLU), and parameterized rectified linear unit (PReLU), is evaluated by varying the nature of fully connected layers with and without using dropout networks on the Hindi dataset.

Sigmoid Neurons
For acoustic modeling, the standard sigmoid has been the preferred choice; its fixed function shape and lack of adaptive parameters are its strengths. The sigmoid family of functions has been widely explored for many ASR tasks.
The parameterized sigmoid, p-sigmoid(η, γ, θ), is given in Eq. (4):

f(α) = η / (1 + e^(−γα + θ)),  (4)

where f(α) is a scaled logistic function and η, γ, and θ are the learnable parameters. The three parameters affect the curve f(α) differently. The curve is changed most strongly by η, because it scales the output linearly: |f(α)| is always less than or equal to |η|, and η can be any real number. If η < 0, the hidden unit makes a negative contribution; if η = 0, the hidden unit is disabled and outputs a constant value; and if η > 0, the hidden unit makes a positive contribution, which can be seen as a case of learning hidden unit contributions [32,40]. The parameter γ controls the slope of the curve: with γ near 1 and θ near 0, f(α) behaves approximately like the (scaled) input around 0. The horizontal shift of f(α) is managed by θ: if γ ≠ 0, then θ/γ is the α value of the curve's mid-point.
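The roles of the three parameters are easy to verify numerically. This is a small sketch of Eq. (4) as reconstructed above: η scales the output, γ controls the slope, and θ shifts the curve so that the mid-point sits at α = θ/γ.

```python
import numpy as np

def p_sigmoid(alpha, eta, gamma, theta):
    """Parameterized sigmoid of Eq. (4): eta / (1 + exp(-gamma*alpha + theta))."""
    return eta / (1.0 + np.exp(-gamma * alpha + theta))

print(p_sigmoid(0.0, 1.0, 1.0, 0.0))   # 0.5: the standard logistic at the origin
print(p_sigmoid(2.0, 1.0, 1.0, 2.0))   # 0.5: mid-point moved to theta/gamma = 2
print(p_sigmoid(0.0, 0.0, 1.0, 0.0))   # 0.0: eta = 0 disables the hidden unit
print(p_sigmoid(0.0, -1.0, 1.0, 0.0))  # -0.5: eta < 0 gives a negative contribution
```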

Maxout Neurons
Maxout neurons came as a good substitute for sigmoid neurons. The main issue with conventional sigmoid neurons is the vanishing gradient problem during SGD training; maxout neurons effectively resolve this issue by producing a constant gradient. Cai et al. [5] achieved a 1-5% relative gain using maxout neurons instead of ReLU on the Switchboard dataset. Each maxout neuron receives input from several alternative "piece" activations, and the maximum value among its piece group is taken as the neuron's output, as given in Eq. (5):

h_i^l = max_{j ∈ {1,...,k}} z_ij^l,  (5)
where h_i^l represents the output of the ith maxout neuron in the lth layer, k is the number of input activations (pieces) per maxout neuron, and z_ij^l is the jth input activation of the ith neuron in the lth layer, as given in Eq. (6):

z^l = (W^l)^T h^(l−1) + b^l,  (6)

where (W^l)^T is the transpose of the weight matrix for layer l, h^(l−1) is the output of the previous hidden layer, b^l is the bias vector for layer l, and z^l holds the piece (input) activations. The computation of z^l and h^l does not include any non-linear transform like sigmoid or tanh; the non-linearity of the maxout neuron is the selection of the maximum value in Eq. (5), a process also known as feature selection.
During training, the gradient for a maxout neuron is computed as given in Eq. (7):

∂h_i^l / ∂z_ij^l = 1 if z_ij^l = max_{j'} z_ij'^l, and 0 otherwise.  (7)

Equation (7) shows that the gradient is 1 for the piece with the maximum activation and 0 otherwise. Since the gradients are always either 0 or 1 during training, the vanishing gradient problem is easily avoided with maxout neurons; hence, deep maxout networks are easier to optimize than conventional sigmoid networks.
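A toy maxout layer makes the piece selection and its 0/1 gradient concrete. This numpy sketch follows Eqs. (5)-(7) with random weights and toy sizes; it is an illustration, not the paper's implementation.

```python
import numpy as np

def maxout_forward(h_prev, W, b):
    """Maxout layer: W has shape (k, in_dim, out_dim), b has shape (k, out_dim)."""
    z = np.einsum("kij,i->kj", W, h_prev) + b   # piece activations, Eq. (6)
    h = z.max(axis=0)                           # max over pieces, Eq. (5)
    grad = (z == h).astype(float)               # Eq. (7): 1 for the winning piece
    return h, grad

rng = np.random.default_rng(0)
k, d_in, d_out = 3, 5, 4                        # pieces, input dim, neurons (toy)
h, grad = maxout_forward(rng.normal(size=d_in),
                         rng.normal(size=(k, d_in, d_out)),
                         np.zeros((k, d_out)))
# Exactly one piece per neuron carries gradient 1; the rest carry 0,
# so the gradient never vanishes for the selected piece.
print(h.shape, grad.sum(axis=0))
```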

Rectified Linear Units
Neural network training is generally performed in two stages. First, the DNN is trained with a frame-discriminative SGD CE criterion. Second, a sequence-level objective function is used to readjust the CE-trained DNN weights [20]. Since speech is a non-stationary, sequence-level task, the second stage is the more relevant one for speech recognition. Various researchers have demonstrated that sequence training improves ASR performance over a CE-trained DNN by 10-15% relative [21,29]. Second-order HF optimization is relevant in terms of the performance gain from sequence training, though not as important for CE training [21]. Srivastava et al. [36] proposed a new way to regularize DNNs by the use of ReLU and dropout. Dahl et al. [6] showed a 5% relative reduction in WER for CE-trained DNNs using ReLU + dropout on a 50-h English Broadcast News large-vocabulary continuous speech recognition task. ReLU is a non-saturating linear activation function: its output is 0 for negative inputs and the input itself otherwise,

f(x) = max(0, x).  (8)

However, subsequent HF sequence training without dropout erased some of these gains relative to a DNN trained with a sigmoid non-linearity without dropout. In this paper, dropout is therefore applied together with HF sequence training.

Parameterized Rectified Linear Units
ReLU units generate zero gradients whenever they are not active, so gradient-based optimization does not update their weights, which slows down training. To overcome this issue, He et al. [16] proposed PReLU, an advanced version of ReLU that includes a learnable negative part to speed up learning; it successfully avoids the vanishing gradient problem. In this model, if the input is negative, the output is the input multiplied by a slope α; otherwise the output is the input itself. The PReLU function is defined as

f(x) = x if x > 0, and αx otherwise.  (9)
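The two activations are contrasted below: PReLU keeps a small slope α on the negative side, so inactive units still pass gradient. The value α = 0.25, the initial value suggested by He et al., is used here only for illustration.

```python
import numpy as np

def relu(x):
    """ReLU: 0 for negative inputs, identity otherwise."""
    return np.maximum(0.0, x)

def prelu(x, a=0.25):
    """PReLU: identity for positive inputs, slope a on the negative side."""
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))    # [ 0.     0.     0.     1.5  ]
print(prelu(x))   # [-0.5   -0.125  0.     1.5  ]
```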

Dropout
Maxout neurons expertly manage the problem of under-fitting, so they have better optimization performance [38]. However, CNNs using the maxout non-linearity are more prone to overfitting due to their high capacity. To address the overfitting problem, regularization methods like the Lp-norm, weight decay, weight tying, etc., have been proposed. Srivastava et al. [36] proposed a promising regularization technique, dropout, to efficiently reduce overfitting. In this method, a fraction of the activations within a layer is stochastically set to 0 for each training sample; the hidden units then cannot co-adapt to each other and learn better representations of the inputs. Goodfellow et al. [12] showed that dropout is an effective way to control overfitting for maxout networks because of better model averaging. Dropout is handled differently in the training and testing phases. Specifically, during the feed-forward operation in training, dropout ignores each hidden unit stochastically with probability p, which prevents complex co-adaptations between hidden units by making them independent of one another. With dropout, y^l is given in Eq. (10):

y^l = f(W^l (r ∘ y^(l−1)) + b^l),  (10)
where y_l is the activation of layer l, y_{l−1} is the input to layer l, W_l is the weight matrix for layer l, b_l is the bias vector for layer l, f(·) is a non-linear activation function such as sigmoid or maxout, and r is a binary mask whose entries are drawn from a Bernoulli distribution with probability (1 − p) of being 1. During decoding, dropout is not used. The hyper-parameter p, called the dropout rate, is the fraction of neurons omitted during training in order to improve generalization: a lower value of p retains more information, while a higher value imposes more aggressive regularization. At test time no units are dropped; instead, the activations (equivalently, the outgoing weights) are scaled by the factor (1 − p) so that the expected input to each layer matches that seen during dropout training. This is an intelligent way to perform model averaging and to improve the generalization ability of the model.
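The feed-forward rule of Eq. (10) and the test-time rescaling can be sketched as follows (a minimal NumPy illustration with names of our own choosing; it assumes the non-inverted dropout scheme in which activations are scaled by (1 − p) at test time):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_layer(y_prev, W, b, p=0.5, train=True, f=np.tanh):
    # Training: y_l = f(W_l (r * y_{l-1}) + b_l), where r is a binary
    # mask with P(r_i = 1) = 1 - p (p is the dropout rate).
    if train:
        r = rng.binomial(1, 1.0 - p, size=y_prev.shape)
        return f(W @ (r * y_prev) + b)
    # Testing: no units are dropped; inputs are scaled by (1 - p) so the
    # expected pre-activation matches what the layer saw in training.
    return f(W @ ((1.0 - p) * y_prev) + b)
```

At p = 0.5 each hidden unit is omitted on half of the training samples on average, which is the setting the experiments reported here found reasonable.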
The experimental results of the different non-linearities with and without dropout are shown in Table 7; they indicate that maxout networks converge faster than ReLU, PReLU, and sigmoidal networks. The WER for sigmoid is the same with and without dropout. During training, maxout networks show a better ability to fit the training dataset. During testing, maxout and PReLU networks show almost the same WER, while sigmoidal and ReLU networks perform worse. Sequence training is found to be more closely tied to the objective function than CE training. The same dropout rate is applied to each layer and is varied with a step size of 0.05; the experiments confirmed that a dropout probability of p = 0.5 is reasonable.
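For reference, the maxout non-linearity compared in Table 7 takes the maximum over small groups of linear pieces; a minimal sketch (the group size k and the example values are our assumptions):

```python
import numpy as np

def maxout(z, k=2):
    # Split the pre-activation vector into groups of k linear pieces
    # and output the maximum of each group.
    return z.reshape(-1, k).max(axis=1)

# Four pre-activations with k = 2 yield two maxout units.
print(maxout(np.array([0.3, -1.2, 2.0, 0.7]), k=2))
```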

Result Analysis and Comparison
Based on the experiments above, the best performance is achieved by the CNN-BLSTM hybrid system with two CNN, three BLSTM, and three FC layers. The results are best for LWS with 512/512 units, but its parameter count is very high (12.7 M). On the other hand, FWS with 256/256 units uses only 6.9 M parameters, roughly half as many as LWS, with a relative deterioration of only 2%. Therefore, FWS with 6.9 M parameters is the more appropriate choice. Increasing the number of hidden units does raise the recognition rate, but it also increases the complexity of the network. Maxout neurons also performed well on this clean speech; in a noisy environment, results may vary. Speaker-adaptation techniques improve the feature set and thereby raise the performance of the system. The WERs of the CNN-BLSTM hybrid systems are compared with CNN, DNN, and RNN in Table 8. The CNN-BLSTM hybrid trained with speaker adaptation and maxout + dropout achieves relative improvements of 24.25%, 10%, and 5.8% over the Gaussian-based hidden Markov model (HMM), DNN, and CNN, respectively. This strengthens the hypothesis that the CNN-BLSTM hybrid structure is better suited to the speech recognition task than the other models, an advancement attained due to its special structure.

Conclusion
In this paper, a more powerful hybrid acoustic model is proposed that combines the advantages of CNN, BLSTM, and fully connected layers. The strength of the CNN-BLSTM architecture is demonstrated on a speech recognition task. Various weight-sharing techniques and pooling strategies normally used in computer vision are explored; unfortunately, none of the pooling techniques showed significant improvement on the ASR task. The structure with two CNN, three BLSTM, and three fully connected layers, together with maxout neurons + dropout, is found to offer the best result. In addition, VTLN-warped mel-FB + ∆ + ∆∆ features are found to be the best locally correlated feature set for the CNN. We also incorporated fMLLR features into the CNN-BLSTM in a novel way. Overall, the proposed architecture achieved a relative improvement of 5.8% over the best-performing CNN and 10% over the DNN system. The hybrid system achieved this gain through the maxout neurons, dropout, and speaker-adapted features. The experimental results show that the CNN-BLSTM hybrid is computationally efficient as well as competitive with the existing baseline systems.