Online English writing teaching method that enhances teacher – student interaction

A significant component of the online learning platform is the online exercise assessment system, which has access to a wealth of past student exercise data that can be used for data mining research. However, the data from present online exercise systems are not efficiently used, making each exercise less relevant to students and decreasing their interest and their interaction with the teacher as the exercises are explained. In light of this, this research creates an exercise knowledge map based on the connections between exercises, knowledge points, and past events. The neural matrix decomposition exercise recommendation model was then improved using cross-feature sharing and feature enhancement units. The study also developed an interactive text sentiment analysis model based on the expansion of a self-associative word association network to assess how students interacted after the introduction of the personalized exercise recommendation teaching approach. The outcomes demonstrated that the proposed model's mean diversity value at completion was 0.93, an increase of 0.14 and 0.23 over the collaborative filtering algorithm and DeepFM (deep factorization machine), respectively, and that the proposed model's final convergence value was 92.3%, an improvement of 2.3 and 4.1% over the latter two models. The extended model used in the study outperformed the support vector machine (SVM) and Random Forest models in terms of accuracy by 5.9 and 1.7%, respectively. In terms of the F1 value indicator, the proposed model reached 90.4%, which is 2.5 and 2.1% higher than the SVM and Random Forest models; in terms of the recall indicator, the proposed model reached 94.3%, an increase of 6.2 and 9.8% compared to the latter two models. This suggests that the study's methodology has application potential and is advantageous in terms of personalized recommendation and interactive sentiment recognition.


Introduction
In a specific teaching and learning setting, classroom interaction is the process through which different components interact to accomplish a certain teaching task and teaching aim. For educators to truly engage students in teaching and learning activities, it requires interactive, communication, and feedback abilities on both sides of the interaction [1]. In the interactive model of teaching and learning, the teaching and learning process is viewed as a growing and changing process of mutual influence and activity, and teaching and learning activities are viewed as a form of mental contact and communication between teachers and students. The "teaching interaction" is optimized during this process to improve the efficiency of teaching and learning. However, there are not enough cutting-edge teaching strategies in use today to improve teacher-student engagement, and there are not enough digital tools available to do so. The main goal of individualized education is to maximize each student's unique abilities in light of their unique traits [2]. The conventional collaborative filtering method and the deep learning method are the two primary categories of individualized recommendation techniques. The majority of individualized learning resource recommendations now available are based on collaborative filtering, which has a "cold start" issue with new users and low recommendation accuracy because of the absence of structural data pertaining to the exercises. Since knowledge graphs can lay out the intricate relationships between knowledge points, they can be used to study the implicit links between exercises to improve the performance of the exercise recommendation model. Text sentiment analysis is an important research direction in opinion mining. It is the process of analyzing and summarizing emotional text content using natural language processing technology [3,4]. The main methods of sentiment analysis include dictionary-based methods and statistical methods. The dictionary-based method uses a semantically constructed sentiment word dictionary for sentiment analysis. This approach can effectively utilize existing data, reduce the computational complexity of machine algorithms, and also reduce the tedious work of manually annotating samples. However, how to dynamically construct a sentiment word polarity dictionary is a very important issue. The dictionary method maximizes the coverage of language rules over the various possibilities of text representation, but due to the diversity and complexity of language, text in online environments is often filled with network slang, making it difficult to cover all language rules with this method and resulting in low accuracy in text sentiment analysis [5]. Statistical methods are generally divided into traditional machine learning-based methods and deep learning-based methods. Traditional machine learning methods extract words that represent the features of a text, input them into a classifier, and finally use the classifier for sentiment polarity classification; examples include support vector machines (SVMs) and maximum entropy. These are not deep learners, and the data used in building the models are relatively simple. At the same time, traditional machine learners rely too heavily on the results of feature extraction, neglecting the connection between the partial and global semantics of the text, so they do not perform well in the accuracy of sentiment analysis. In order to meet the personalized training needs of online learning and understand the interaction between teachers and students, the study constructs a knowledge graph-enhanced exercise recommendation model. In order to optimize the effectiveness of sentiment analysis on short texts, an interactive text sentiment analysis model based on the expansion of self-associative word association networks is adopted to examine students' interaction.

Related work
Online English writing training has been the subject of extensive and diverse research by numerous academics. Chinese university students' attitudes toward written corrective feedback and their use of self-regulated writing strategies in an online English writing course during the COVID-19 pandemic were investigated by Xu using a mixed-methods approach. About 311 and 12 students, respectively, were given questionnaires and semi-structured interviews. According to the findings, students usually had favorable opinions of online WCF (written corrective feedback) during the pandemic, and teachers offered more tutorials and feedback and could evaluate students at any time, which made for a relaxing learning environment [6]. A descriptive qualitative study was employed by Fitria to offer an overview of Grammarly as an AI (artificial intelligence) tool for English writing. The findings revealed that students' test scores were 34 before the tool was used; after its use, students' texts scored 77 out of 100, showing that writing quality had improved [7]. Writing teachers' use of online formative assessment and the factors affecting it were the subject of an investigation by Zou et al. By analyzing the involvement of three English writing instructors at three Chinese institutions in online formative assessment during the COVID-19 pandemic, the qualitative case study established three types of teacher participation: interfering, supporting, and integrating [8]. To better understand how students felt about the use of online feedback as a formative evaluation to enhance their writing abilities in the English classroom, Prastikawati et al. employed a mixed-methods approach. The results showed favorable opinions of using the online backchannel as a formative assessment to advance writing abilities [9]. University students' perceptions of and difficulties in using synchronous online discussions were the subject of research by Rinekso et al. A number of university students participated in semi-structured interviews and virtual observations for the study. The findings indicated that the use of synchronous online discussions was well received by the students [10].
Numerous academics have created sophisticated algorithms or teaching-related strategies. Ramos et al. described interactive machine teaching and its potential to simplify the building of machine learning models, and also suggested an integrated teaching environment to help teachers [11]. A novel teaching-based marine predator algorithm was proposed by Zhong et al. The outcomes demonstrated that the algorithm outperformed other cutting-edge metaheuristics [12]. To achieve a good balance between local and global search and to raise the quality of the solution, Kumar and Singh proposed an algorithm for solving optimization problems based on teaching-learning and incorporating local search methods. According to experimental findings, the algorithm performs well in solving benchmark test functions. The technique was also applied to clustering problems and, in comparison with other methods, produced better clustering results [13]. Shukla et al. created a new teaching-based optimization technique. On 25 numerical test sets, it was compared with various metaheuristic algorithms such as PSO (particle swarm optimization) and GA (genetic algorithm). The approach beats the PSO and GA algorithms in terms of convergent solutions, according to the experimental results [14]. A feature selection and integrated teaching-based optimization technique was put forward by Allam and colleagues. The algorithm proved more reliable in tests on various data sets.
In light of this, this project develops a text analysis model based on sentiment relevance to study student interaction, as well as a personalized exercise recommendation model based on knowledge graph enhancement to increase students' interactive feedback.
Building a personalized writing exercise recommendation model for teaching interaction enhancement

As can be observed from Figure 1, the exercise knowledge map first establishes the exercise nodes, then establishes the exercise-knowledge point relationships, and finally establishes the exercise-event relationships [15]. Each exercise is constituted by a number of knowledge points. Knowledge points represent the smallest units of relatively independent information such as knowledge and theory. Events represent the set of exercises that students need to complete within a limited time frame.
The collection of entities, denoted E, includes the exercises, the knowledge points (written k), and the events; R stands for the logical connections between entities, and a knowledge triple is written with head entity h and tail entity t [16]. Exercises serve as test questions with reference answers that students can work through. When an exercise, a knowledge point, or an event is translated into the knowledge graph, an exercise node, a knowledge point node, or an event node arises, respectively, and a relationship edge is added between connected nodes. Figure 2 shows the INMD exercise recommendation model.
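As an illustrative sketch (the entity and relation names here are assumptions, not taken from the paper), the exercise, knowledge point, and event structure above can be assembled as (h, r, t) triples:

```python
# Hypothetical builder for the exercise knowledge graph: exercises link to the
# knowledge points they cover, and events link to the exercises they contain.

def build_exercise_kg(exercise_kps, event_exercises):
    """Return a list of (head, relation, tail) triples."""
    triples = []
    for exercise, kps in exercise_kps.items():
        for kp in kps:
            triples.append((exercise, "covers_knowledge_point", kp))
    for event, exercises in event_exercises.items():
        for exercise in exercises:
            triples.append((event, "contains_exercise", exercise))
    return triples

# Toy example: two exercises, two knowledge points, one event.
triples = build_exercise_kg(
    exercise_kps={"ex1": ["past_tense"], "ex2": ["past_tense", "conditionals"]},
    event_exercises={"week1_quiz": ["ex1", "ex2"]},
)
print(len(triples))  # 5 triples in total
```

Each triple then corresponds to one relationship edge between two nodes of the knowledge map.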
A feature sharing unit, a feature enhancement unit, a neural matrix decomposition unit, and a knowledge graph embedding unit make up the model, as shown in Figure 2. The enhanced neural matrix decomposition (INMD) exercise recommendation model has a multi-task learning architecture and is trained in an alternating learning mode, enabling the alternate training of the exercise recommendation task and the knowledge graph learning task [17,18]. The knowledge graph feature learning task is integrated into the training process of the recommendation task by the feature sharing unit through crossover and compression operations, enhancing the effectiveness of the recommendation task with the knowledge graph data. The feature sharing unit's structure is shown in Figure 3. A vector input layer, crossover operation processing, cross-feature matrix creation, compression operation processing, and a vector output layer are the components of the feature sharing unit. The unit carries out the vector compression operation after the vector crossover operation. At the crossover step, the two one-dimensional input vectors are multiplied to produce a two-dimensional matrix that merges their information [19]. At the compression step, weight vectors are applied to the two-dimensional matrix containing the input vector information to produce the compressed one-dimensional vectors used for sharing vector information. It is clear that the feature sharing unit is crucial to raising the effectiveness of the recommendation model. The crossover operation is

$$C_l = I_l H_l^{\top}, \qquad (1)$$

where $I_l$ and $H_l$ stand for the input one-dimensional vectors, $d$ denotes the vector length, and $C_l$ is the $d \times d$ two-dimensional cross-feature matrix. The compression operation is

$$I_{l+1} = C_l w_l^{I} + C_l^{\top} \tilde{w}_l^{I} + b_l^{I}, \qquad H_{l+1} = C_l w_l^{H} + C_l^{\top} \tilde{w}_l^{H} + b_l^{H}, \qquad (2)$$

where $w_l$ and $b_l$ denote the weight vectors and bias vectors required for the compression process, and $I_{l+1}$ and $H_{l+1}$ are the new vectors after compression. The compressed new feature vectors are input to a feature enhancement unit. This unit uses a factorization machine (FM) and a deep neural network (DNN) to capture the vectors' lower-order and higher-order features, respectively; the feature enhancement can be summarized as $U_L = \sigma\left(\sum_i \alpha_i x_i\right) + M(x)$, where $x_i$ stands for the attribute data of student learning, $\alpha_i$ denotes the weight coefficients, $\sigma$ represents the sigmoid function, $M$ stands for the fully connected neural network, and $U_L$ is the feature vector obtained by integrating the features from the FM and the DNN. The neural

matrix decomposition unit can calculate the learner's preferred exercise from the input $U_L$, combining a generalized matrix decomposition component $p_U^G$ and a multilayer perceptron component $p_U^M$ through vector multiplication ($\times$); the output $\hat{y}_{UI}$ indicates how much the learner liked the exercise. In the knowledge graph embedding unit, $\hat{t}$ stands for the predicted value of the vector corresponding to the subordinate (tail) knowledge node; to obtain $\hat{t}$, the unit takes the features of the superordinate knowledge entities and relationships and combines them. The similarity between the predicted and actual values is computed as $f_{KG}(\hat{t}, t)$, where $f_{KG}$ denotes the similarity calculation function and $(h, r, t)$ denotes the knowledge triple.
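A minimal numpy sketch of the two operations described above, the cross and compress feature sharing and a NeuMF-style preference score, might look as follows (the weight shapes, the toy one-layer MLP, and the fusion by concatenation are assumptions of this sketch, not the paper's exact design):

```python
import numpy as np

def cross_compress(I_l, H_l, w, b):
    """One cross & compress step: cross the two d-dim vectors into a d x d
    matrix, then compress back to two new d-dim vectors."""
    C = np.outer(I_l, H_l)                       # C_l = I_l H_l^T, shape (d, d)
    I_next = C @ w["II"] + C.T @ w["HI"] + b["I"]
    H_next = C @ w["IH"] + C.T @ w["HH"] + b["H"]
    return I_next, H_next

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_preference(u, v, W1, W2):
    """NeuMF-style score: fuse a GMF branch (element-wise product) with a
    one-hidden-layer MLP branch and squash to a preference in (0, 1)."""
    gmf = u * v                                  # generalized matrix factorization
    mlp = np.tanh(np.concatenate([u, v]) @ W1)   # toy multilayer perceptron
    return sigmoid(np.concatenate([gmf, mlp]) @ W2)

d = 4
rng = np.random.default_rng(0)
w = {k: rng.normal(size=d) for k in ("II", "HI", "IH", "HH")}
b = {k: np.zeros(d) for k in ("I", "H")}
I1, H1 = cross_compress(rng.normal(size=d), rng.normal(size=d), w, b)

W1 = rng.normal(size=(2 * d, d))
W2 = rng.normal(size=2 * d)
score = predict_preference(I1, rng.normal(size=d), W1, W2)
print(I1.shape, H1.shape)
```

In the multi-task setting, the cross and compress step is where the recommendation and knowledge graph embedding tasks exchange information, while the preference score drives the recommendation loss.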

Interactive short-text sentiment analysis model construction
The study began with textual sentiment analysis of students' attitudinal tendencies in learning comments in order to objectively analyze students' interactive qualities in online learning. The interactive remarks made by students belong to the short-text type. Due to the considerable feature sparsity of this text type, direct text sentiment analysis typically produces inaccurate results. To improve the extraction of sentiment data from short messages, a text expansion method is employed. The approach begins by pre-processing the short-text datasets. To increase data purity, pre-processing cleans and eliminates noisy information from the raw data, such as stop words and meaningless symbols. The pre-processed text data must then be converted into numerical input based on a vectorized representation, because it cannot be fed directly into the algorithm model. The Word2Vec model often serves as the foundation for the vectorized representation of text. It comprises the Skip-gram model and the Continuous Bag-of-Words (CBOW) model. The CBOW model was chosen because of its quick transformation of text properties into high-dimensional vectors, which was all that the investigation required. The CBOW model's structure is shown in Figure 4.
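The CBOW forward pass just described (one-hot inputs mapped through the weight matrix, averaged, then passed through a softmax output) can be sketched as follows; the toy dimensions and random weights are assumptions for illustration only:

```python
import numpy as np

def cbow_forward(context_ids, W_in, W_out):
    """One CBOW forward pass: average the context word embeddings, then
    softmax over the vocabulary to predict the center word."""
    h = W_in[context_ids].mean(axis=0)   # hidden vector: mean of context rows
    scores = h @ W_out                   # score every word in the vocabulary
    e = np.exp(scores - scores.max())    # numerically stable softmax
    return e / e.sum()

N, M = 10, 5                             # vocabulary size, embedding dimension
rng = np.random.default_rng(1)
W_in, W_out = rng.normal(size=(N, M)), rng.normal(size=(M, N))
probs = cbow_forward([2, 4, 7], W_in, W_out)
print(probs.shape)                       # (10,)
```

After training, the rows of `W_in` serve as the word vectors used in the later sentiment relevance computations.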
As seen in Figure 4, the CBOW model's input layer uses one-hot coding to convert the text into N-dimensional vectors, and the input layer contains n such vectors. The letter W stands for the mapping weight matrix, N stands for the size of the vocabulary, and M stands for the dimensionality of the word vector. The N-dimensional word vectors are averaged to get the hidden layer vector. A Softmax classifier is used as the output layer, and the output values are probabilities. The vectorized representation of the text features is then employed for the calculation of sentiment relevance. Before the sentiment relevance computation, lexical classification weights are calculated. The study uses term frequency-inverse document frequency (TF-IDF) to calculate the importance of different words and obtain their weights for classification purposes. The TF-IDF weight is expressed as

$$w_{i,d} = \frac{n_{i,d}}{\sum_k n_{k,d}} \times \log \frac{|D|}{|\{d : f_i \in d\}|}, \qquad (7)$$

where $n_{i,d}$ represents the number of instances of feature $f_i$ in document $d$, $\sum_k n_{k,d}$ represents the total number of feature occurrences in document $d$, and $|\{d : f_i \in d\}|$ indicates how many of the $|D|$ articles include feature $f_i$. The normalization of word frequencies in equation (8) prevents the detrimental effects of significant gaps in text length. The unimproved TF-IDF does not account for feature category information, which can reduce the algorithm's clustering power for short texts. As a result, the study uses the information gain to define the degree to which features contribute to the themes being categorized:

$$IG(Y) = H(X) - H(X \mid Y), \qquad (9)$$

where $X$ denotes the text set, $Y$ denotes a text feature, $H(X)$ denotes the entropy of the text set, describing the uncertainty of the classification of $X$, and $H(X \mid Y)$ denotes the remaining uncertainty of the classification given feature $Y$. When the information gain of all features has been calculated, comparing their sizes yields the maximum information gain $F_{ig\max}$. At this point, if $k$ subsets are obtained by random sampling from the original data set and $\{F\} - F_{ig\max}$ is used as the feature set, $k$ decision trees can be constructed. Let $N_{ij}$ and $N'_{ij}$ denote the numbers of positive classification samples before and after adding noise to the features, respectively; the quantified thematic relevance of a feature in each tree is derived from their difference, and the sentiment relevance is then calculated by combining the median thematic relevance with the median sentiment relevance $R(f_i)$. By ranking all sentiment relevance values in descending order, the top word vectors were chosen to create the keyword set. The density of the keyword set was then increased by applying an expansion method based on self-associative association. An illustration of a self-associative word association network may be found in Figure 5.
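A small sketch of the two weighting steps above, length-normalized TF-IDF and information gain, under the assumption that documents are given as token lists and that the IDF denominator carries the common +1 smoothing:

```python
import math
from collections import Counter

def tf_idf(docs, term):
    """Length-normalized TF-IDF of `term` in each document."""
    n_docs = len(docs)
    df = sum(1 for d in docs if term in d)       # documents containing the term
    out = []
    for d in docs:
        tf = d.count(term) / len(d)              # frequency normalized by length
        out.append(tf * math.log(n_docs / (1 + df)))
    return out

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_present):
    """IG = H(X) - H(X|Y): how much the feature reduces label uncertainty."""
    n = len(labels)
    groups = {}
    for lab, present in zip(labels, feature_present):
        groups.setdefault(present, []).append(lab)
    h_cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - h_cond

ig = information_gain([1, 1, 0, 0], [True, True, False, False])
print(ig)  # 1.0: the feature perfectly separates the two classes
```

Features with the highest information gain are then the candidates for $F_{ig\max}$ when building the decision trees.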
The word association network is made up of three components, as shown in Figure 5: graph nodes, connecting edges, and weight values. The weights are used to gauge how closely subsets are related. Co-occurrence probabilities and feature dependencies are computed to construct the word association network. The co-occurrence probability shows the likelihood that one feature will appear when another one does, while the feature dependency reflects the share of the feature set's feature sequences accounted for by a feature. The co-occurrence probability can be calculated as

$$P(f_j \mid f_i) = \frac{P(f_i, f_j)}{P(f_i)},$$

where $f_i$ and $f_j$ denote features, $P(f_i, f_j)$ denotes the probability of both features occurring at the same time, and $P(f_i)$ denotes the probability of feature $f_i$ occurring. The feature dependence is calculated from the feature frequency $freq$, and $I(f_i, f_j)$ indicates how closely features $f_i$ and $f_j$ are related.
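The co-occurrence weights of the association network can be sketched as follows; representing documents as token lists and keeping only keyword pairs is an assumption of this illustration:

```python
from collections import Counter
from itertools import combinations

def association_edges(documents, keywords):
    """Weighted edges of the word association network: for each keyword pair
    (f_i, f_j), the co-occurrence probability P(f_j | f_i) = P(f_i, f_j) / P(f_i)."""
    n = len(documents)
    single, pair = Counter(), Counter()
    for doc in documents:
        present = sorted(set(doc) & keywords)
        single.update(present)
        pair.update(combinations(present, 2))
    return {
        (fi, fj): (c / n) / (single[fi] / n)   # joint probability over P(f_i)
        for (fi, fj), c in pair.items()
    }

docs = [["good", "clear"], ["good", "clear"], ["good"]]
edges = association_edges(docs, {"good", "clear"})
print(edges)
```

Edges whose weight exceeds a chosen threshold would then supply the expansion words for the sparse short texts.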

Analysis of the results of a personalized exercise recommendation model for teaching interaction enhancement

Utility analysis of INMD models based on knowledge graph enhancement
Exercises, competitions, user data, user records, and knowledge data are all included in the dataset for the study, which was derived from the online testing platform Codeforces. There are 6,920, 1,315, 45,469, 89,365, and 262 entries, respectively. The event data comprise the event name and the event exercise set. The submission result, event name, and submission time are all included in the user record, as shown in Table 1. In this study, the data were split into training and test sets in a 3:1 ratio. Accuracy, recall, F1 value, and diversity mean were chosen as the measures for evaluating the model. The knowledge graph's vector length was set to 8, the knowledge graph embedding model's learning rate was set to 0.02, and the INMD exercise recommendation model's learning rate was set to 0.05. The recall and F1 value comparison curves for the models are shown in Figure 6.
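As a reference for how the evaluation measures above behave, here is a toy per-learner computation of precision, recall, and F1 over a recommended exercise set (the exercise IDs are hypothetical):

```python
def precision_recall_f1(recommended, relevant):
    """Precision/recall of a recommended set against the truly relevant
    exercises, plus their harmonic mean F1."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = precision_recall_f1(["ex1", "ex2", "ex3", "ex4"], ["ex1", "ex3", "ex5"])
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.5 0.667 0.571
```

Averaging these quantities over all learners yields the curves reported in Figure 6.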
As can be noted from Figure 6, the proposed model obtained the highest values in terms of recall as well as F1 value, with the collaborative filtering algorithm (CFA) and deep factorization machine (DeepFM) recording the second-highest and lowest values, respectively. All three models' recall and F1 curves exhibit a sharp ascent followed by a gradual turn toward convergence. The recall of the proposed model starts to converge at a value of 84.6%, whereas the corresponding values for the CFA and DeepFM methods are 70.1 and 58.2%. After 200 iterations, the recall of the proposed model had converged to 94.6%, as opposed to 90.2 and 88.5% for the other two algorithms; at the end of the iterations, the recall of the proposed model thus outperformed the CFA and DeepFM algorithms by 4.4 and 6.1%, respectively. Additionally, the proposed model's final F1 convergence value after 200 iterations was 92.3%, as opposed to 90.0 and 88.2% for the CFA and DeepFM methods, an improvement of 2.3 and 4.1%, respectively. This demonstrates that the proposed model can successfully incorporate the knowledge graph information of the exercises and enhance the efficacy of providing learners with personalized exercise recommendations. Figure 7 displays the mean diversity curves under various matching criteria as well as those of the comparison models.
Figure 7 illustrates the study's matching conditions: Pass, Interest, Suit, and Interest with Suit (IWS). The mean diversity values of the proposed model varied across the matching criteria. At 20 iterations, IWS exceeded the first three matching conditions on the diversity mean metric by 0.19, 0.16, and 0.09, respectively. At 120 iterations, when the iterations finished, the IWS matching condition had the highest diversity mean (0.83), compared with 0.74, 0.55, and 0.48 for the Pass, Interest, and Suit conditions; IWS is thus better than the other three by 0.09, 0.28, and 0.35. The diversity mean curves for the three models under the IWS matching condition are displayed in Figure 7(b). The experiments show that the mean diversity of the proposed model is 0.93 at the end of the iterations, while the corresponding means for CFA and DeepFM are 0.79 and 0.70; the proposed model therefore outperforms the other two algorithms by 0.14 and 0.23. This indicates that the model can simultaneously consider the learner's interest and difficulty adaptation in the exercises, making the diversity of the exercise recommendation results the most abundant. The accuracy and loss value comparison curves for the models are displayed in Figure 8. The test-set loss curve of the proposed model decreases at the fastest rate, that of the CFA method at a moderate rate, and that of the DeepFM algorithm at the slowest pace. The loss value of the CFA algorithm's test set began to converge before 200 iterations, eventually settling at 23.5%. The test-set loss curve of the proposed model begins to converge after 150 iterations, with a final convergence value of 15.8%. The proposed model also achieved the greatest recommendation accuracy on the test set. Figure 8(c) shows that the model's accuracy curve climbs more quickly and begins to converge at iteration 170, eventually reaching a maximum accuracy of 88.6% at iteration 250. By comparison, the CFA and DeepFM algorithms reached maximum accuracies of 78.2 and 68.9%, respectively, so the proposed model outperformed them on the accuracy metric by 10.4 and 19.7%. This proves that the model is better at recommending exercises tailored to each individual.

Analysis of the utility of the sentiment analysis model for interactive short texts
To evaluate the efficacy of the sentiment analysis model for interactive texts based on sentiment relevance, student interactive texts from the MOOC website's College English Writing course were chosen for analysis. The student interactive texts were separated into positive and negative attitude categories. Positive emotions comprised three dimensions: satisfaction, happiness, and enjoyment. Negative emotions likewise comprised three: dissatisfaction, dislike, and boredom. Accuracy, recall, and F1 value were chosen as the model evaluation criteria, and Random Forest and SVM were the comparison algorithms. The accuracy curves of the three models before and after the short-text expansion are shown in Figure 9.
Figure 9 shows that, regardless of the model, the accuracy curves grew both before and after the expansion. At 100 iterations, the pre-expansion accuracy convergence value for the SVM model was 69.3%; after expansion it increased by 14.1% to 83.4%. At the end of the iterations, the Random Forest model's pre- and post-expansion accuracy convergence values were 83.8 and 87.6%, respectively. The pre- and post-expansion accuracies of the study's model were 82.1 and 89.3%, respectively, an improvement of 7.2%. After expansion, the study's model thus outperformed the SVM and Random Forest models by 5.9 and 1.7%. Figure 10 displays the recall and F1 value curves of the three models after the short-text expansion. Figure 10(a) illustrates the overall growing trend in the recall curves of the expanded SVM, Random Forest, and the study's sentiment relevance-based interactive text sentiment analysis model. Among them, the SVM model reached its highest recall, 88.1%, after 90 iterations; the Random Forest model reached its highest recall, 84.5%, after 50 iterations; and the study's model reached its highest recall, 94.3%, after 90 iterations. The model's recall is thus up to 6.2 and 9.8% higher than that of the other two models. As shown in Figure 10(b), the F1 value curves of the expanded SVM and Random Forest models rise and then decline, while the F1 value curve of the study's model keeps increasing. The highest F1 value of the SVM model is 87.9%, that of the Random Forest model is 88.3%, and that of the study's model reaches 90.4%, which is 2.5 and 2.1% higher than the two comparison models. This indicates that the model has an advantage in interactive text sentiment analysis. Figure 11 shows the performance metrics of the proposed model under different expansion methods.

Figure 11 shows that, in terms of accuracy, recall, and F1 value on both the training and test sets, the self-associative word association expansion technique used in the study is superior to the synonym expansion approach and the external corpus expansion approach. The study's expansion method yields an accuracy of 90.1%, a recall of 96.8%, and an F1 value of 96.8% on the test set, which are 9.6, 10.5, and 8.9% higher than the synonym expansion method and 6.7, 7.2, and 9.2% higher than the external corpus expansion method, respectively. In summary, the study developed an exercise recommendation model based on knowledge graph enhancement and an interactive text sentiment analysis model based on sentiment relevance to increase students' interest in learning and improve the teaching interaction between students and teachers. In comparison with the CFA and DeepFM algorithms, whose recall convergence values were 90.2 and 88.5%, respectively, the proposed model had a recall convergence value of 94.6%, an improvement of 4.4 and 6.1%. The F1 convergence value of the proposed model was 92.3%, while those of the CFA and DeepFM algorithms were 90.0 and 88.2%, an improvement of 2.3 and 4.1%. The test-set loss convergence value of the proposed model was 15.8%, while those of the CFA and DeepFM algorithms were 23.5 and 28.2%, respectively, a decrease of 7.7 and 12.4%. In the post-expansion experiments on the interactive short texts, the F1 value curves of the SVM and Random Forest models rose and then declined, while the F1 value curve of the proposed model kept increasing.

Conclusion
Compared with the SVM and Random Forest models, the proposed model's F1 value increased by 2.5 and 2.1%, respectively. Additionally, the study's self-associative word association expansion method outperformed the synonym expansion method by 9.6, 10.5, and 8.9% in accuracy, recall, and F1 value, respectively. This showed that the study's proposed methodology is well suited both to sentiment analysis of interactive short texts and to personalized exercise recommendation. The innovation of the exercise recommendation model proposed in this study lies in the use of cross-compressed shared units and of the correlation information between exercises in the knowledge graph to alleviate the cold start and data sparsity problems of traditional recommendation algorithms. In addition, the innovation of the proposed interactive text sentiment analysis model lies in its being a short-text expansion algorithm based on emotional features, which helps to address the feature sparsity caused by the shortness of short texts.

Figure 1: Schematic diagram of the establishment process of exercise knowledge map.

Figure 3: Structural diagram of feature sharing unit.

Figure 4: Structural diagram of the CBOW model.

Figure 5: Example diagram of self-associative word association network.

Figure 6: Comparison curve of recall rate and F1 value of different models: (a) recall and (b) F1 value.

Figure 7: Diversity mean curve under different matching conditions and comparison model diversity mean curve: (a) mean value of diversity under different matching conditions and (b) comparison model diversity mean.

Figure 8: Comparison curves of accuracy and loss values of different models.

Figure 9: Accuracy curves of three models before and after short text expansion: (a) SVM, (b) Random Forest, and (c) model in this article.

Figure 10: Recall rate and F1 value curve of three models after short-text expansion: (a) recall and (b) F1 value.
Due to the lack of supervision of students, online learning frequently suffers from low interest in studying, limited classroom interaction, and low learning effectiveness; to address this, the study developed the INMD exercise recommendation model.

Figure 11: Performance index results of the model proposed in the study under different expansion methods: (a) training set and (b) test set.

Table 1: Codeforces user-exercise-related dataset

Exercise data: includes the event ID of the exercise, the list of knowledge points involved in the exercise, the name of the exercise, the number of passes, and other data items.