Barry, C., & Lardner, M. (2011). A study of first click behaviour and user interaction on the Google SERP. In J. Pokorny, V. Repa, K. Richta, W. Wojtkowski, H. Linger, C. Barry, & M. Lang (Eds.), Information Systems Development (pp. 89–99). New York, NY: Springer. http://doi.org/10.1007/978-1-4419-9790-6_7.
Beel, J., & Gipp, B. (2009). Google Scholar’s ranking algorithm: The impact of citation counts (an empirical study). In 2009 Third International Conference on Research Challenges in Information Science (pp. 439–446). IEEE. http://doi.org/10.1109/RCIS.2009.5089308.
Behnert, C. (2016). Evaluation methods within the LibRank project. Working Paper. http://www.librank.info/wp-content/uploads/2016/07/Working_paper_LibRank2016.pdf [1.12.2018].
Behnert, C., & Borst, T. (2015). Neue Formen der Relevanz-Sortierung in bibliothekarischen Informationssystemen: Das DFG-Projekt LibRank. Bibliothek Forschung und Praxis, 39(3), 384–393. http://doi.org/10.1515/bfp-2015-0052.
Behnert, C., & Lewandowski, D. (2015). Ranking search results in library information systems — Considering ranking approaches adapted from web search engines. The Journal of Academic Librarianship, 41(6), 725–735. http://doi.org/10.1016/j.acalib.2015.07.010.
Behnert, C., & Lewandowski, D. (2017). A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments. Journal of Documentation, 73(3), 509–527. http://doi.org/10.1108/JD-08-2016-0099.
Behnert, C., & Plassmeier, K. (2016). Results of evaluation runs and data analysis in the LibRank project. Working Paper. http://www.librank.info/wp-content/uploads/2016/10/AP4_Evaluierungsbericht.pdf [1.12.2018].
Blenkle, M., Ellis, R., Haake, E., & Zillmann, H. (2015). Nur die ersten Drei zählen! Optimierung der Rankingverfahren über Popularitätsfaktoren bei der Elektronischen Bibliothek Bremen (E-LIB). O-Bib, 2, 33–42. http://doi.org/10.5282/o-bib/2015H2S33-42.
Busa-Fekete, R., Szarvas, G., Élteto, T., & Kégl, B. (2012). An apple-to-apple comparison of Learning-to-rank algorithms in terms of Normalized Discounted Cumulative Gain. In 20th European Conference on Artificial Intelligence (ECAI 2012): Preference Learning: Problems and Applications in AI Workshop. IOS Press.
Carterette, B., & Soboroff, I. (2010). The effect of assessor errors on IR system evaluation. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval – SIGIR ’10 (pp. 539–546). New York, NY, USA: ACM Press. http://doi.org/10.1145/1835449.1835540.
Connaway, L. S., & Dickey, T. J. (2010). The digital information seeker: Report of findings from selected OCLC, RIN and JISC user behaviour projects. Higher Education Funding Council for England (HEFCE). http://www.jisc.ac.uk/media/documents/publications/reports/2010/digitalinformationseekerreport.pdf [1.12.2018].
da Costa Pereira, C., Dragoni, M., & Pasi, G. (2012). Multidimensional relevance: Prioritized aggregation in a personalized Information Retrieval setting. Information Processing & Management, 48(2), 340–357. http://doi.org/10.1016/j.ipm.2011.07.001.
Glänzel, W., & Schubert, A. (1988). Characteristic scores and scales in assessing citation impact. Journal of Information Science, 14(2), 123–127. http://doi.org/10.1177/016555158801400208.
Hennies, M., & Dressler, J. (2006). Clients information seeking behaviour: An OPAC transaction log analysis. In click 06, ALIA 2006 Biennial Conference, 19-22 September 2006. Perth, AU.
Höchstötter, N., & Koch, M. (2008). Standard parameters for searching behaviour in search engines and their empirical evaluation. Journal of Information Science, 35(1), 45–65. http://doi.org/10.1177/0165551508091311.
Jansen, B. J., & Spink, A. (2006). How are we searching the World Wide Web? A comparison of nine search engine transaction logs. Information Processing & Management, 42(1), 248–263.
Langenstein, A., & Maylein, L. (2009). Relevanz-Ranking im OPAC der Universitätsbibliothek Heidelberg. B.I.T. Online, 12(4), 408–413.
Lewandowski, D. (2010). Using search engine technology to improve library catalogs. Advances in Librarianship, 32, 35–54. http://doi.org/10.1108/S0065-2830(2010)0000032005.
Lewandowski, D. (2018). Suchmaschinen verstehen (2nd ed.). Berlin, Heidelberg: Springer. http://doi.org/10.1007/978-3-662-56411-0.
Lewandowski, D., & Sünkler, S. (2013). Designing search engine retrieval effectiveness tests with RAT. Information Services and Use, 33(1), 53–59. http://doi.org/10.3233/ISU-130691.
Metrikov, P., Pavlu, V., & Aslam, J. A. (2012). Impact of assessor disagreement on ranking performance. In Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval – SIGIR ’12 (p. 1091). New York, NY, USA: ACM Press. http://doi.org/10.1145/2348283.2348484.
Nottelmann, H., & Fuhr, N. (2003). From retrieval status values to probabilities of relevance for advanced IR applications. Information Retrieval, 6(3/4), 363–388. http://doi.org/10.1023/A:1026080230789.
Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google we trust: Users’ decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 12(3), 801–823. http://doi.org/10.1111/j.1083-6101.2007.00351.x.
Plassmeier, K. (2016). Relevance model. Working Paper. http://www.librank.info/wp-content/uploads/2016/10/AP3_Relevanzmodell.pdf [1.12.2018].
Plassmeier, K., Borst, T., Behnert, C., & Lewandowski, D. (2015). Evaluating popularity data for relevance ranking in library information systems. In Proceedings of the 78th ASIS&T Annual Meeting (Vol. 51). https://www.asist.org/files/meetings/am15/proceedings/submissions/posters/270poster.pdf [1.12.2018].
Robertson, S. E. (2010). The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends® in Information Retrieval, 3(4), 333–389. http://doi.org/10.1561/1500000019.
Sakai, T. (2014). Statistical reform in information retrieval? ACM SIGIR Forum, 48(1), 3–12. http://doi.org/10.1145/2641383.2641385.
Schaer, P., & Tavakolpoursaleh, N. (2016). Popularity ranking for scientific literature using the Characteristic Scores and Scale method. In Proceedings of the Twenty-Fifth Text REtrieval Conference (TREC 2016). http://trec.nist.gov/pubs/trec25/papers/THKoeln-GESIS-O.pdf [1.12.2018].
Schultheiß, S., Sünkler, S., & Lewandowski, D. (2018). We still trust in Google, but less than 10 years ago: An eye-tracking study. Information Research, 23(3), paper 799. http://www.informationr.net/ir/23-3/paper799.html [1.12.2018].
Surowiecki, J. (2005). Die Weisheit der Vielen: warum Gruppen klüger sind als Einzelne und wie wir das kollektive Wissen für unser wirtschaftliches, soziales und politisches Handeln nützen können (1st ed.). München: Bertelsmann.
Tavakolpoursaleh, N., Neumann, M., & Schaer, P. (2017). IR-Cologne at TREC 2017 OpenSearch Track: Rerunning popularity ranking experiments in a living lab. https://trec.nist.gov/pubs/trec26/papers/IR-Cologne-O.pdf [1.12.2018].
Webber, W. E. (2010). Measurement in information retrieval evaluation. Doctoral thesis, University of Melbourne. http://www.williamwebber.com/research/wew-thesis-PhD.pdf [1.12.2018].
Webber, W., Moffat, A., & Zobel, J. (2008). Score standardization for inter-collection comparison of retrieval systems. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval – SIGIR ’08 (p. 51). New York, NY, USA: ACM Press. http://doi.org/10.1145/1390334.1390346.