Published by De Gruyter Oldenbourg April 26, 2019

Mapping platforms into a new open science model for machine learning

Thomas Weißgerber and Michael Granitzer

Abstract

Data-centric disciplines like machine learning and data science have become major research areas within computer science and beyond. However, the development of research processes and tools has not kept pace with the rapid advancement of these disciplines, leaving several insufficiently addressed challenges for achieving reproducibility, replicability, and comparability of results. In this discussion paper, we review existing tools, platforms, and standardization efforts that address these challenges. As a common ground for our analysis, we develop an open-science-centred process model for machine learning research, which combines openness and transparency with the core processes of machine learning and data science. Based on the features of over 40 tools, platforms, and standards, we discuss the, in our opinion, 11 platforms most central to the research process. We conclude that most platforms cover only part of the requirements for overcoming the identified challenges.
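To make the reproducibility challenge concrete: a minimal prerequisite for a repeatable machine learning experiment is that all sources of randomness are seeded and that the run's configuration is recorded alongside its result. The following sketch is illustrative only (it is not taken from the paper or from any of the reviewed platforms); the function name `run_experiment` and the toy "metric" are hypothetical stand-ins for a real training run.

```python
import json
import random

def run_experiment(seed: int, n_samples: int = 1000) -> dict:
    """Toy stand-in for an ML experiment run.

    Using an isolated, explicitly seeded RNG (rather than the global one)
    makes the run deterministic and independent of other code.
    """
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    score = sum(data) / n_samples  # pretend evaluation metric
    # Record everything needed to reproduce the run together with the result.
    return {"seed": seed, "n_samples": n_samples, "score": score}

if __name__ == "__main__":
    record_a = run_experiment(seed=42)
    record_b = run_experiment(seed=42)
    assert record_a == record_b  # same seed and config -> identical result
    print(json.dumps(record_a, indent=2))
```

Experiment-tracking tools such as Sacred or OpenML (both reviewed in the paper) automate exactly this kind of bookkeeping: capturing seeds, parameters, and environment details so that a result record can be re-derived later.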



Received: 2018-08-31
Revised: 2019-03-27
Accepted: 2019-04-05
Published Online: 2019-04-26
Published in Print: 2019-08-27

© 2019 Walter de Gruyter GmbH, Berlin/Boston