An organization that advocates for accountability should ensure that algorithmic decisions do not create discriminatory or unjust impacts. It is important to know whether a particular group may be advantaged or disadvantaged when an algorithm is deployed. The organization therefore needs to define and quantify the potentially damaging effect of uncertainty and errors on different groups (Yiannis Kanellopoulos, Cutter Business Technology Journal, Vol. 32, No. 2).
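One way to make "quantify the damaging effect of errors on different groups" concrete is to measure error rates per group and compare them. The sketch below, a minimal illustration not taken from the cited article, computes the false positive rate for each group and the disparity between the best- and worst-off group; the data, group labels, and metric choice are all assumptions for the example.

```python
# Hypothetical sketch: measuring how prediction errors fall on different
# groups, assuming binary labels (1 = positive) and a group attribute
# per record. The metric choice (false positive rate) is illustrative.
from collections import defaultdict

def false_positive_rate_by_group(y_true, y_pred, groups):
    """Return the false positive rate per group: FP / (FP + TN)."""
    fp = defaultdict(int)  # negatives wrongly flagged, per group
    tn = defaultdict(int)  # negatives correctly passed, per group
    for truth, pred, g in zip(y_true, y_pred, groups):
        if truth == 0 and pred == 1:
            fp[g] += 1
        elif truth == 0 and pred == 0:
            tn[g] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}

# Toy data: two groups "a" and "b" (purely illustrative)
y_true = [0, 0, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
groups = ["a", "a", "a", "b", "b", "b", "b", "a"]

rates = false_positive_rate_by_group(y_true, y_pred, groups)
# A simple disparity measure: gap between the highest and lowest rate.
disparity = max(rates.values()) - min(rates.values())
```

A large disparity would signal that errors burden one group more than another, which is exactly the kind of evidence an accountability review needs.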
According to Prof. Diakopoulos, accountability means the degree to which one decides when and how an algorithmic system should be guided (or restrained), given the risk of crucial or costly errors, discrimination, unfair denials, or censorship. Simply put, holding a system accountable means that we should control it at the technical as well as the organizational level (book chapter from 97 Things About Ethics Everyone in Data Science Should Know by Bill Franks, O'Reilly Media, Inc.).
The Ethics Guidelines for Trustworthy AI, published by the EU Commission's High-Level Expert Group on AI (AI HLEG) in April 2019, lists transparency as one of seven key requirements for trustworthy AI. For an algorithm to be transparent, it needs to be visible and accessible so that bias can be identified. In practice, this means it has to follow the human-in-the-loop principle.
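The human-in-the-loop principle can be sketched as a routing rule: the system only applies decisions it is confident about, and defers the rest to a human reviewer. The threshold value and the `Decision` structure below are illustrative assumptions, not part of the AI HLEG guidelines.

```python
# Hypothetical sketch of a human-in-the-loop gate: automated decisions
# below a confidence threshold are deferred to a human reviewer instead
# of being applied directly. Threshold and structure are assumptions.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.9  # assumed cutoff, chosen for illustration

@dataclass
class Decision:
    label: str          # the model's proposed outcome
    confidence: float   # the model's confidence score
    route: str          # "automated" or "human_review"

def route_decision(label: str, confidence: float,
                   threshold: float = REVIEW_THRESHOLD) -> Decision:
    """Defer low-confidence predictions to a human reviewer."""
    if confidence >= threshold:
        return Decision(label, confidence, "automated")
    return Decision(label, confidence, "human_review")

print(route_decision("approve", 0.95).route)  # high confidence: automated
print(route_decision("deny", 0.60).route)     # low confidence: human review
```

Keeping the deferral rule explicit in code also makes it auditable, which ties the technical control back to organizational accountability.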