Authority once grounded in human intuition and reflection is gradually being handed to algorithms, and the weight of the decisions those automated systems make keeps growing.
As we see daily, algorithms gone awry cause serious consequences for businesses and people. In some areas, errors can be particularly dire:
- Cyber-Security & Privacy
- Threat Analysis
- M&A Due Diligence
- Autonomous Decision-Making with Social Impact | Public Sector
Algorithm design and auditing, even in the hands of brilliant coders, does more harm than good when those coders have little to no experience in (1) designing a bias-free system and (2) auditing it to check for gaps.
We need humans in the loop to keep algorithms as bias-free as possible. And those humans must have deep experience auditing software systems, supported by ML tooling and guided by practitioners who know the process well.
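One common first check such an auditor might run is a demographic-parity gap: the spread in positive-decision rates across groups. A minimal sketch, assuming decisions arrive as (group, approved) pairs; the function name and sample data are illustrative, not from any particular tool:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest difference in positive-decision rates across groups.

    `decisions` is a list of (group_label, approved) pairs, where
    `approved` is True or False. A gap near 0 suggests similar
    treatment; a large gap flags the system for human review.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outcomes, labeled by group:
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(round(demographic_parity_gap(sample), 3))  # → 0.333
```

A metric like this doesn't replace the human auditor; it tells the human where to look.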