Cite

Hartmann, K. (2022). Exploring New Challenges for Street-Level Bureaucrats through the Implementation of ADM Systems. Sozialer Fortschritt, 71(6-7), 447–464. https://doi.org/10.3790/sfo.71.6-7.447
Hartmann, Kathrin. "Exploring New Challenges for Street-Level Bureaucrats through the Implementation of ADM Systems." Sozialer Fortschritt 71.6-7, 2022, pp. 447–464. https://doi.org/10.3790/sfo.71.6-7.447
Hartmann, Kathrin (2022): Exploring New Challenges for Street-Level Bureaucrats through the Implementation of ADM Systems, in: Sozialer Fortschritt, vol. 71, iss. 6-7, pp. 447–464, [online] https://doi.org/10.3790/sfo.71.6-7.447

Exploring New Challenges for Street-Level Bureaucrats through the Implementation of ADM Systems

Hartmann, Kathrin

Sozialer Fortschritt, Vol. 71 (2022), Iss. 6-7, pp. 447–464

Author Details

Hartmann, Kathrin, TU Kaiserslautern, Chair of Policy Analysis and Political Economy, Postbox 3049, 67653 Kaiserslautern.

References

  1. Allhutter, D./Cech, F./Fischer, F./Gand, G./Mager, A. (2020): Algorithmic Profiling of Job Seekers in Austria: How Austerity Politics Are Made Effective, Frontiers in Big Data 3:5, https://doi.org/10.3389/fdata.2020.00005.
  2. Angwin, J./Larson, J./Mattu, S./Kirchner, L. (2016): Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks, ProPublica, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  3. Arbeitsmarktservice Österreich (n.d.): Beratung im AMS, https://www.ams.at/arbeitsuchende/arbeitslos-was-tun/beratung-im-ams [accessed 30.07.2021].
  4. Berk, R. (2017): An impact assessment of machine learning risk forecasts on parole board decisions and recidivism, Journal of Experimental Criminology 13 (2): pp. 193–216, https://doi.org/10.1007/s11292-017-9286-2.
  5. Cohen, J. E. (2012): Configuring the Networked Self, New Haven: Yale University Press.
  6. Dietvorst, B./Simmons, J. P./Massey, C. (2015): Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err, Journal of Experimental Psychology: General 144 (1): pp. 114–126.
  7. Dietvorst, B./Simmons, J. P./Massey, C. (2016): Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them, Management Science 64 (3): pp. 1155–1170, http://dx.doi.org/10.1287/mnsc.2016.2643.
  8. Dietvorst, B./Simmons, J. P./Massey, C. (2018): Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them, Management Science 64: pp. 1155–1170.
  9. Gamper, J./Kernbeiß, G./Wagner-Pinter, M. (2020): Das Assistenzsystem AMAS. Zweck, Grundlagen, Anwendung, Wien: Synthesis Forschung GmbH.
  10. Grenet, J. (2018): Orientation postbac: Une question technique ou politique?, Administration & Éducation 159 (3): pp. 123–127, https://doi.org/10.3917/admed.159.0123.
  11. Hartmann, K./Wenzelburger, G. (2021): Uncertainty, risk and the use of algorithms in policy decisions: a case study on criminal justice in the USA, Policy Sciences 54: pp. 269–287, https://doi.org/10.1007/s11077-020-09414-y.
  12. Holl, J./Kernbeiß, G./Wagner-Pinter, M. (2018): Das AMS-Arbeitsmarktchancen-Modell, Wien: Synthesis Forschung GmbH.
  13. Juravle, G./Boudouraki, A./Terziyska, M./Rezlescu, C. (2020): Trust in artificial intelligence for medical diagnoses, Progress in Brain Research 253: pp. 263–282, https://doi.org/10.1016/bs.pbr.2020.06.006.
  14. Kerler, M./Steiner, K. (2018): Mismatch am Arbeitsmarkt: Indikatoren, Handlungsfelder und Matching-Strategien im Wirkungsbereich von Vermittlung und Beratung, AMS report No. 133, Wien: Arbeitsmarktservice Österreich, http://hdl.handle.net/10419/206696.
  15. Lepri, B./Oliver, A./Letouzé, E./Pentland, A./Vinck, P. (2018): Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges, Philosophy & Technology 31 (4): pp. 611–627.
  16. Logg, J. M./Minson, J. A./Moore, D. A. (2019): Algorithm appreciation: People prefer algorithmic to human judgment, Organizational Behavior and Human Decision Processes 151: pp. 90–103.
  17. Lopez, P. (2019): Reinforcing Intersectional Inequality via the AMS Algorithm in Austria, Conference Proceedings of the STS Conference Graz 2019, 6th–7th May 2019, pp. 289–309, https://doi.org/10.3217/978-3-85125-668-0-16.
  18. Mayring, P. (2010): Qualitative Inhaltsanalyse. Grundlagen und Techniken, Weinheim and Basel.
  19. Mittelstadt, B. D./Allo, P./Taddeo, M./Wachter, S./Floridi, L. (2016): The ethics of algorithms: Mapping the debate, Big Data & Society 3 (2): pp. 1–21.
  20. Österreichischer Rechnungshof (2017): Bericht des Rechnungshofes, p. 90, https://www.rechnungshof.gv.at [accessed 20.07.2021].
  21. Pasquale, F. (2016): The black box society: The secret algorithms that control money and information, Cambridge: Harvard University Press.
  22. Stevenson, M. (2018): Assessing risk assessment in action, Minnesota Law Review 103: pp. 303–384.
  23. van Zanten, A./Legavre, A. (2014): Engineering access to higher education through higher education fairs, in: Goastellec, G./Picard, F. (Eds.), Higher education in societies: A multi scale perspective, Rotterdam, pp. 183–203.
  24. Veale, M./Brass, I. (2019): Administration by Algorithm? Public Management meets Public Sector Machine Learning, in: Yeung, K./Lodge, M. (Eds.), Algorithmic Regulation, Oxford.
  25. Yeomans, M./Shah, A./Mullainathan, S./Kleinberg, J. (2019): Making Sense of Recommendations, Journal of Behavioral Decision Making 32 (4): pp. 403–414.
  26. Yeung, K. (2017): ‘Hypernudge’: Big Data as a mode of regulation by design, Information, Communication & Society 20 (1): pp. 118–136, https://doi.org/10.1080/1369118X.2016.1186713.
  27. Yeung, K. (2018): Algorithmic regulation: A critical interrogation, Regulation & Governance 12 (4): pp. 505–523, https://doi.org/10.1111/rego.12158.
  28. Zweig, K. A./Wenzelburger, G./Krafft, T. D. (2018): On chances and risks of security related algorithmic decision making systems, European Journal for Security Research, https://doi.org/10.1007/s41125-018-0031-2.

Abstract

The implementation of algorithms to inform decision-making has been shown to raise issues of quality, fairness, and accountability. However, the consequences of this technology can only be fully understood by focusing on the actors who use it in their daily working routines. It is therefore crucial to understand how decision-making processes change when algorithms are put into practice and to analyse street-level bureaucrats’ perceptions of these algorithms. This paper addresses these questions with a case study of the Austrian algorithm-based decision-making system AMAS, which is designed to assist street-level bureaucrats in the Austrian employment service (AMS) by profiling job seekers.

Table of Contents

Kathrin Hartmann: Exploring New Challenges for Street-Level Bureaucrats through the Implementation of ADM Systems
Abstract
Zusammenfassung: Die Untersuchung neuer Herausforderungen für ‚Street-Level Bureaucrats‘ durch die Implementierung von ADM-Systemen
1. Introduction
2. Human Motives for Algorithm Aversion and Algorithm Appreciation
3. Conducting the Case Study: Methods and Data
4. How Decisions are Made: From Structuring the Consulting Processes to Decision-Making in the AMS
4.1 ADM Systems in the Public Employment Agency: The Case of AMAS
4.2 The ADM Tool AMAS
4.3 Putting Actors at Stage
5. Conclusion
References