Towards metaphors for cascading AI
Authors: Jonas Oppenlaender (University of Oulu, Oulu, Finland); Jesse Josua Benjamin (University of Twente, Enschede, Netherlands)
Online Access: PDF Full Text (PDF, 2.8 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2020120198829
Publish Date: 2020-12-01
In the future, more and more systems will be powered by AI. This may exacerbate existing blind spots in explainability research, such as focusing on the outputs of an individual AI pipeline rather than taking a holistic and integrative view of the system dynamics of data, algorithms, stakeholders, context, and their respective interactions. AI systems will increasingly rely on patterns and models of other AI systems, which will likely introduce a major shift in the desiderata of interpretability, explainability, and transparency. In this world of Cascading AI (CAI), AI systems will use the output of other AI systems as their inputs. The typical formulations of desiderata for explaining AI decision-making, such as post-hoc interpretability or model-agnostic explanations, may simply not hold in a world of cascading AI. In this paper, we propose two metaphors that may help designers frame their efforts when designing Cascading AI systems.
Pages: 1–3
Published in: Metaphors for Human-Robot Interactions. International workshop held in conjunction with the 12th International Conference on Social Robotics (ICSR 2020), 16 November 2020, online.
Conference: International Conference on Social Robotics (ICSR)
Field of Science: 113 Computer and information sciences
© The Authors 2020. Licensed under CC BY 4.0 (Creative Commons Attribution 4.0 International).