Bestuurskunde

Transparency and Explainable Artificial Intelligence: limitations and strategies

Keywords transparency, explainable artificial intelligence, algorithms
Authors Prof. mr. dr. Hans de Bruijn, Prof. dr. ir. Marijn Janssen en Dr. Martijn Warnier
DOI
Author information

Prof. mr. dr. Hans de Bruijn
Prof. mr. dr. J.A. de Bruijn is Professor of Organisation & Governance at TU Delft, Faculty of Technology, Policy and Management.

Prof. dr. ir. Marijn Janssen
Prof. dr. ir. M.F.W.H.A. Janssen is Professor of ICT & Governance at TU Delft, Faculty of Technology, Policy and Management.

Dr. Martijn Warnier
Dr. M.E. Warnier is Associate Professor of Systems Engineering at TU Delft, Faculty of Technology, Policy and Management.
  • Abstract

      This article contains a critical reflection on eXplainable Artificial Intelligence (XAI): the idea that decision-making using AI should be transparent to the people faced with these decisions. We discuss the main objections to XAI. XAI addresses a variety of explainees with different expectations and values; XAI is not a neutral activity but a highly value-sensitive one; AI is dynamic, so explanations quickly become obsolete; and many problems are ‘wicked’, which further complicates XAI. In addition, the context of XAI matters: a high level of politicization and a high perceived impact of AI-based decisions will often result in strong criticism of AI and will limit the opportunities for XAI. We also discuss a number of alternative or additional strategies, such as more attention to negotiated algorithms, to competing algorithms, or to value-sensitive algorithms, which may contribute to more trust in AI-based decision-making.
