Evolving Generative AI: Entangling the Accountability Relationship
Marc T.J. Elliott, Deepak P, Muiris Maccarthaigh

Since ChatGPT's debut, generative AI technologies have surged in popularity within the AI community. Recognized for their cutting-edge language processing capabilities, these systems excel at generating human-like conversations, enabling open-ended dialogue with end-users. We consider that the future adoption of generative AI for critical public domain applications will transform the accountability relationship. Previously characterized by the relationship between an actor and a forum, accountability dynamics are complicated by the introduction of generative systems, as the initial interaction shifts from the actor to an advanced generative system. We conceptualise a dual-phase accountability relationship involving the actor, the forum, and the generative AI as a foundational approach to understanding public sector accountability in the context of these technologies. Focusing on the integration of generative AI to assist healthcare triaging, we identify potential challenges to maintaining effective accountability relationships, highlighting concerns that these technologies relegate actors to a secondary phase of accountability and create a disconnect between government actors and citizens. We offer recommendations aimed at disentangling the complexities that generative systems bring to the accountability relationship. As we speculate on the technologies' disruptive impact on accountability, we urge public servants, policymakers, and system designers to deliberate on the potential accountability implications of generative systems prior to their deployment.