Strumentazioni e pratiche digitali nella valutazione degli apprendimenti

Journal: RIV Rassegna Italiana di Valutazione
Authors: Luca Salmieri, Orazio Giancola
Year of publication: 2019; Issue: 2018/70; Language: Italian
Pages: 23 (pp. 75-97); File size: 543 KB
DOI: 10.3280/RIV2018-070005

Digital tools and the wider world of ICT began some time ago to reshape the field of educational assessment, and they show every potential to transform it further. One aspect of this process concerns delivery modes, which are more efficient and less costly than traditional ones (Bridgeman 2009). A further point in their favour is the possibility of broadening and enriching assessments so that they better reflect the competence domains to be measured: some cognitive constructs are hard to assess with traditional instruments and have largely emerged as an integral part of the digital era (Kelley, Haber 2006). Moreover, digitally delivered tests make it possible to investigate more closely the dynamic interactions between the student and the assessment material, such as response times and the way answers are constructed or problems are solved. Beyond cost-effectiveness and strictly technical matters, however, digital assessment has opened a series of debates that are crucial for the immediate future of evaluation in education. Does the use of digital tools and practices in large-scale assessments constitute a progressive development of traditional assessment methodologies, or does it entail a profound pedagogical transformation of the way teaching and learning will be managed in the near future? The authors address a series of crucial limits and problems that emerge with the rapid spread of digital assessment practices.
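To make the point about process data concrete, here is a minimal illustrative sketch in Python (not taken from the article: the ItemSession class, the event names, and the item id are invented for illustration) of how a computer-based testing platform might log the dynamic interactions mentioned above, such as response times and answer revisions:

```python
import time
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ItemSession:
    """Hypothetical interaction log for one test item (illustrative only)."""
    item_id: str
    events: List[Tuple[float, str, str]] = field(default_factory=list)
    start: float = field(default_factory=time.monotonic)

    def record(self, action: str, payload: str = "") -> None:
        # Store elapsed seconds since item onset, the action name
        # (e.g. "answer_changed", "submit"), and its payload.
        self.events.append((time.monotonic() - self.start, action, payload))

    def response_time(self) -> float:
        # Time from item onset to the last submission event.
        submits = [t for t, action, _ in self.events if action == "submit"]
        return submits[-1] if submits else float("nan")

    def revisions(self) -> int:
        # Number of times the answer was changed before submission.
        return sum(1 for _, action, _ in self.events if action == "answer_changed")


# Example: a student picks B, switches to C, then submits.
session = ItemSession(item_id="MATH-042")
session.record("answer_changed", "B")
session.record("answer_changed", "C")
session.record("submit", "C")
print(f"response time: {session.response_time():.3f}s, revisions: {session.revisions()}")
```

Event logs of this kind are what allow analysts to reconstruct how an answer was built, rather than only recording whether it was correct.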

Keywords: Digital assessment; Large-scale assessments; Assessment practices; Computer-based tests; New competence constructs.

  1. Anderson R., Ainley J. (2010). Technology and learning: Access in schools around the world. In: McGaw B., Baker E., Peterson P. (Eds.), International encyclopedia of education. Amsterdam: Elsevier.
  2. Baker E.L., Niemi D., Chung G.K. (2008). Simulations and the transfer of problem-solving knowledge and skills. In: Baker E., Dickerson J., Wulfeck W., O’Neil H.F. (Eds.), Assessment of problem solving using simulations. New York: Erlbaum.
  3. Beauchamp G., Kennewell S. (2010). Interactivity in the Classroom and its Impact on Learning. Computers & Education, 54, 759-766.
  4. Beller M. (2013). Technologies in Large-Scale Assessments: New Directions, Challenges, and Opportunities. In: von Davier M., Gonzalez E., Kirsch I., Yamamoto K. (Eds.), The Role of International Large-Scale Assessments: Perspectives from Technology, Economy, and Educational Research. Dordrecht: Springer.
  5. Benadusi L. (2019). Le molte interpretazioni del concetto di competenze. Una maionese impazzita o ben assortita? Scuola democratica, 1, 41-61.
  6. Bennett R.E. (1998). Validity and automated scoring: It’s not only the scoring. Educational Measurement: Issues and Practice, 17(4), 9–17.
  7. Bennett R.E. (2010). Technology for large-scale assessment. In: Peterson P., Baker E., McGaw B. (Eds.), International encyclopedia of education. Oxford: Elsevier.
  8. Bennett R.E. (2015). The changing nature of educational assessment. Review of Research in Education, 39(1), 370–407.
  9. Bennett R.E., Braswell J., Oranje A., Sandene B., Kaplan B., Yan F. (2008). Does it matter if I take my mathematics test on computer? Journal of Technology, Learning and Assessment, 6(9).
  10. Bennett R.E., Jenkins F., Persky H., Weiss A. (2003). Assessing complex problem-solving performances. Assessment in Education, 10, 347–59.
  11. Biesta G.J. (2010). What is education for? Good education in an age of measurement: Ethics, politics, democracy. London: Taylor & Francis.
  12. Bridgeman B., Lennon M.L., Jackenthal A. (2003). Effects of screen size, screen resolution, and display rate on computer-based test performance. Applied Measurement in Education, 16, 191–205.
  13. Bridgeman B. (2009). Experiences from large-scale computer-based testing in the USA. In: Scheuermann F., Björnsson J. (Eds.), The transition to computer-based assessment. Luxembourg: European Communities.
  14. Buerger S., Kroehne U., Goldhammer F. (2016). The transition to computer-based testing in large-scale assessments: Investigating (partial) measurement invariance between modes. Psychological Test and Assessment Modeling, 58(4), 597.
  15. Chudowsky N., Pellegrino J.W. (2003). Large-scale assessments that support learning: What will it take? Theory Into Practice, 1, 75–83.
  16. Clariana R., Wallace P. (2002). Paper–based versus computer–based assessment: key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.
  17. Collis B., Moonen J. (2012). Flexible learning in a digital world: Experiences and expectations. London: Routledge.
  18. Colombo M. (2016). Introduction to the Special Section. The Digitalization of Educational Practices: How Much and What Kind? Italian Journal of Sociology of Education, 8(2), 1-10.
  19. Conole G., Warburton B. (2005). A review of computer-assisted assessment. Research in Learning Technology, 13(1), 17–31.
  20. Dehaene S. (2010). Les neurones de la lecture. La nouvelle science de la lecture et de son apprentissage, Paris: Odile Jacob.
  21. Dehaene S. (2011). The massive impact of literacy on the brain and its consequences for education, Human Neuroplasticity and Education, 117, 19-32.
  22. Di Gioacchino D., Lotti A., Tedeschi S. (2015). Digital Inequality in Italy and Europe. In: Strangio D., Sancetta G. (Eds.), Italy in a European Context. London: Palgrave Macmillan.
  23. European Commission (2019). Beyond achievement. A comparative look into 15-year-olds’ school engagement, effort and perseverance in the European Union. Luxembourg: European Communities.
  24. Eynon R. (2015). The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4), 407-411.
  25. Ferrari A. (2012). Digital competence in practice: An analysis of frameworks. Luxembourg: European Commission.
  26. Giancola O. (2015). Il nuovo scenario delle politiche educative: tra valutazione, quasi-mercato e l’emergere di nuovi attori. In: Moini G. (Ed.), Neoliberismi e azione pubblica. Il caso italiano. Roma: Edizioni Ediesse.
  27. Giancola O., Lovecchio D. (2018). Le indagini internazionali come standardizzazione delle competenze. In: Benadusi L., Molina S. (Eds.), Le competenze. Una mappa per orientarsi. Bologna: Il Mulino.
  28. Giancola O., Viteritti A. (2019). Le competenze nello spazio globale dell’educazione. Discorsi, modelli e misure. Scuola democratica, 1, 11-40.
  29. Gillies R.M., Ashman A.F. (2003). Cooperative Learning: The Social and Intellectual Outcomes of Learning in Groups. London: Falmer Press.
  30. Greenhow C., Robelia B., Hughes J.E. (2009). Learning, teaching, and scholarship in a digital age: Web 2.0 and classroom research: What path should we take now? Educational Researcher, 38(4), 246-259.
  31. Gui M., Argentin G. (2011). Digital skills of internet natives: Different forms of digital literacy in a random sample of northern Italian high school students, New Media & Society, 13(6), 963-80.
  32. Halldórsson A., McKelvie P., Bjornsson J. (2009). Are Icelandic boys really better on computerized tests than conventional ones: Interaction between gender, test modality and test performance. In: Scheuermann F., Björnsson J. (Eds.), The transition to computer-based assessment. Luxembourg: European Communities.
  33. Horkay N., Bennett R.E., Allen N., Kaplan B., Yan F. (2006). Does it matter if I take my writing test on computer? An empirical study of mode effects in NAEP. Journal of Technology, Learning and Assessment, 5(2).
  34. INVALSI (2018). Rapporto prove Invalsi 2018. Roma: INVALSI.
  35. Johnson M., Green S. (2006). On-line mathematics assessment: The impact of mode on performance and question answering strategies. Journal of Technology, Learning, and Assessment, 4(5), 311–26.
  36. Jonassen D.H., Land S.M. (2012). Theoretical Foundations of Learning Environments. New York: Routledge.
  37. Jonassen D.H., Peck K.L., Wilson G.B. (1999). Learning with technology. A constructivist perspective. Upper Saddle River, N.J.: Merrill.
  38. Koretz D. (2008). Measuring up. What educational testing really tells us. Cambridge, MA: Harvard University Press.
  39. Landri P. (2018). Digital Governance of Education. Technology, Standards and Europeanization of Education, London: Bloomsbury Academic.
  40. Leeson H. V. (2006). The mode effect: A literature review of human and technological issues in computerized testing. International Journal of Testing, 6(1), 1-24.
  41. Lingard B., Lewis S. (2016). Globalisation of the Anglo-American approach to top-down, test-based educational accountability. In: Brown G.T.L., Harris L.R. (Eds.), Handbook of human and social conditions in assessment. London: Routledge.
  42. Livingstone S. (2012). Critical reflections on the benefits of ICT in education. Oxford Review of Education, 38(1), 9-24.
  43. Martin R. (2008). New possibilities and challenges for assessment through the use of technology. In: Scheuermann F., Björnsson J. (Eds.), The transition to computer-based assessment. Luxembourg: European Communities.
  44. McDonald A. S. (2002). The impact of individual differences on the equivalence of computer-based and paper-and-pencil educational assessments. Computers in Education, 39(3), 299–312.
  45. Novak J.D., Gowin D.B. (1984). Learning how to learn. Cambridge: Cambridge University Press.
  46. OECD (2008). Issues arising from the PISA 2009 field trial of the assessment of reading of electronic texts. Paris: OECD Publishing.
  47. OECD (2015). Students, Computers and Learning: Making the Connection, Paris: OECD Publishing.
  48. OECD (2017). PISA 2015 Results (Volume V): Collaborative Problem Solving, PISA, Paris: OECD Publishing.
  49. OECD (2014). Technical background. PISA 2012 results. Paris: OECD Publishing.
  50. Pandolfini V. (2016). Exploring the Impact of ICTs in Education: Controversies and Challenges. Italian Journal of Sociology of Education, 8(2).
  51. Parshall C.G., Spray J.A., Kalohn J.C., Davey T. (2002). Practical considerations in computer-based testing. New York: Springer.
  52. Ripley M. (2009). Transformational computer-based testing. In: Scheuermann F., Björnsson J. (Eds.), The transition to computer-based assessment. Luxembourg: European Communities.
  53. Rivoltella P.C. (2006). Screen generation: gli adolescenti e le prospettive dell’educazione nell'età dei media digitali, Milano: Vita e Pensiero.
  54. Salmieri L. (2019). The Rhetoric of Digitalization in Italian Educational Policies: Situating Reception among Digitally Skilled Teachers. Italian Journal of Sociology of Education, 11(1), 162-183.
  55. Sharan Y. (2010). Cooperative Learning for Academic and Social Gains: valued pedagogy, problematic practice. European Journal of Education, 45(2), 300–13.
  56. Shute V.J., Leighton J.P., Jang E.E., Chu M-W. (2016). Advances in the Science of Assessment. Educational Assessment, 21(1), 34–59.
  57. Thompson N., Weiss D. (2009). Computerised and adaptive testing in educational assessment. In: Scheuermann F., Björnsson J. (Eds.), The transition to computer-based assessment. Luxembourg: European Communities.
  58. Tout D., Coben D., Geiger V., Ginsburg L., Hoogland K., Maguire T. (2017). Review of the PIAAC numeracy assessment framework: Final report. Camberwell, Australia: ACER.
  59. Van der Linden W.J., Hambleton R.K. (1997). Handbook of modern item response theory. New York: Springer.
  60. Wang S., Jiao H., Young M., Brooks T., Olson J. (2007). A meta-analysis of testing mode effects in grade K-12 mathematics tests. Educational and Psychological Measurement, 67(2), 219–38.
  61. Wang S., Jiao H., Young M., Brooks T., Olson J. (2008). Comparability of computer-based and paper-and-pencil testing in K-12 reading assessments. Educational and Psychological Measurement, 68(1), 5–24.
  62. Weiss D., Kingsbury G. (2004). Application of computer adaptive testing to educational problems. Journal of Educational Measurement, 21, 361–75.
  63. Williamson B. (2015). Digital education governance: data visualization, predictive analytics, and ‘real-time’ policy instruments. Journal of Education Policy, 31(2), 123-41.
  64. Wunenburger J.J. (1997). Philosophie des images. Paris: PUF.
  65. Yamamoto K., Shin H.J., Khorramdel L. (2018). Multistage Adaptive Testing Design in International Large-Scale Assessments. Educational Measurement: Issues and Practice, 37(4), 16-27.
  66. Yan D., von Davier A., Lewis C. (2014). Computerized Multistage Testing: Theory and Applications. Boca Raton: CRC Press.
