Abstract
Inefficient interaction, such as long and/or repetitive questionnaires, can be detrimental to user experience. This motivates us to investigate the computation of an intelligent questionnaire for a prediction task: given time and budget constraints (a maximum of q questions asked), the questionnaire adaptively selects the question sequence based on the answers already given. Several use cases that improve user and customer experience are presented.
The problem is framed as a Markov Decision Process and solved numerically with approximate dynamic programming, exploiting the hierarchical and episodic structure of the problem. The approach, evaluated on toy models and classic supervised learning datasets, outperforms two baselines: a decision tree with a budget constraint and a model that systematically asks the q best features. The online problem, which is critical for deployment, poses no particular issue under the right exploration strategy.
This setting is quite flexible and can easily incorporate initially available data and grouped questions.
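As a minimal illustration of the episodic structure exploited above, the sketch below solves a hypothetical toy instance by exact backward induction: three binary questions, a binary label, and a budget of q questions. The joint distribution (y = x1 XOR x2 with 10% noise, x3 irrelevant), the function names, and the terminal reward (probability of the most likely label given the answers) are all illustrative assumptions, not the paper's actual models or implementation.

```python
from itertools import product

# Hypothetical toy joint model (illustrative only): three binary questions
# x1, x2, x3 and a binary label y, with y = x1 XOR x2 plus 10% noise and
# x3 irrelevant. Any normalized joint table would work in its place.
JOINT = {}
for x1, x2, x3, y in product([0, 1], repeat=4):
    JOINT[(x1, x2, x3), y] = 0.125 * (0.9 if y == (x1 ^ x2) else 0.1)

def prob_evidence(evidence):
    """P(observed answers); evidence maps question index -> answer."""
    return sum(p for (x, _), p in JOINT.items()
               if all(x[i] == a for i, a in evidence.items()))

def posterior(evidence):
    """P(y = 1 | observed answers)."""
    num = sum(p for (x, y), p in JOINT.items()
              if y == 1 and all(x[i] == a for i, a in evidence.items()))
    return num / prob_evidence(evidence)

def value(evidence, budget):
    """Backward induction over the episodic question tree.

    Terminal reward is the probability of the most likely label given the
    answers; with budget remaining, pick the question maximizing expected
    downstream value. Returns (expected accuracy, best next question or None).
    """
    p1 = posterior(evidence)
    best = (max(p1, 1 - p1), None)  # value of stopping and predicting now
    if budget == 0:
        return best
    for q in range(3):
        if q in evidence:
            continue
        expected = 0.0
        for a in (0, 1):
            child = {**evidence, q: a}
            p_a = prob_evidence(child) / prob_evidence(evidence)
            expected += p_a * value(child, budget - 1)[0]
        if expected > best[0]:
            best = (expected, q)
    return best

v, first_question = value({}, budget=2)  # informative questions are 0 and 1
```

With budget 2 the policy asks x1 then x2 and reaches 0.9 expected accuracy, while x3 is correctly never chosen; for larger state spaces this exact recursion is what approximate dynamic programming would replace.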
About the authors

Trained in statistics, machine learning and programming, I am currently pursuing a PhD at the Centre de Mathématiques Appliquées (CMAP) of École Polytechnique. My current research interest is the application of Reinforcement Learning (RL) techniques to real-world problems. I am also interested in bandit problems and generative methods for data augmentation in RL.

I have been an associate professor in the Department of Applied Mathematics at École Polytechnique since September 2013. My research is carried out at the Centre de Mathématiques Appliquées (CMAP), where I work on data problems in signal processing, statistics and learning, with a taste for applications. I co-direct with Emmanuel Gobet the SIMPAS team (Signal IMage Probabilités numériques et Apprentissage Statistique). I am also a member of the Inria Xpop team. I am in charge of several training programs offered by the school: the PA in Data Science, the MScT Data Science for Business, and the Data Science Starter Program of Polytechnique Executive Education. I also contributed to the creation of the M2 Data Science at the Institut Polytechnique de Paris.

I’m an International Expert in Data Science and Digital who had the good fortune to believe, at the beginning of the 21st century, that the advances in Machine Learning and Artificial Intelligence could be of great help in solving scientific problems for industrial applications. It is now a truism that the whole world will be impacted in profound ways by the revolution of Artificial Intelligence. Lucky enough to have jumped early into this exciting field, I completed my PhD in Machine Learning and Data Mining in 2006, and have ever since been addressing various data science challenges both in academic and industrial settings.
© 2020 Walter de Gruyter GmbH, Berlin/Boston