Home: Who Watches The Watchmen

Authority once grounded in human intuition and reflection is gradually being handed over to algorithms, and the gravity of the decisions these automated systems make keeps increasing.

As daily examples show, algorithms gone awry have serious consequences for businesses and people. In some areas, errors can be particularly dire:

  • Cyber-Security & Privacy
  • Threat Analysis
  • M&A Due Diligence
  • Autonomous Decision-Making with Social Impact | Public Sector

Algorithm design and auditing, even in the hands of wickedly smart coders, does more harm than good when those coders have little to no experience in (1) designing a bias-free system and (2) auditing it to check for gaps.

We need humans in the loop to keep algorithms as bias-free as possible. And those humans must have deep experience in auditing software systems, supported by ML tooling and guided by deep familiarity with the auditing process.
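To make the idea of an algorithmic audit concrete, one common automated check is a demographic parity test: comparing positive-decision rates across groups. The sketch below is a minimal illustration with invented data and a made-up scenario; it is not Code4Thought's actual tooling or methodology.

```python
# Minimal sketch of one bias-audit check: the demographic parity gap,
# i.e. the difference in positive-decision rates between groups.
# The loan-decision data below is an invented illustration.

def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-decision rates across groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved) for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")  # 0.750 - 0.375 = 0.375
```

A human auditor still has to decide whether such a gap is acceptable, explainable by legitimate factors, or a sign of bias: the metric flags, the human judges.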

About

The founder of Code4Thought, Yiannis Kanellopoulos, has spent the better part of two decades analyzing and evaluating software systems to help organizations address the risks and flaws related to them. In his experience, these risks and flaws are always due to human involvement.

With Code4Thought, Yiannis is turning his deep auditing expertise into a Technology-Guided-By-Humans solution to ensure algorithmic technology is Explainable, Accountable, and Transparent.

Yiannis holds a Ph.D. in computer science from the University of Manchester and is a founding member of Orange Grove, a business incubator supported by the Dutch Embassy in Greece to promote entrepreneurship and counter youth unemployment.

Blog

AI | What CEOs, Boards, and Investors Must Keep Top of Mind

  • Human rights defenders across the world are fighting facial recognition surveillance
  • Researchers Find Racial Bias in Hospital Algorithm
  • Viral Tweet About Apple Card Leads to Goldman Sachs Probe
  • Google using dubious tactics to target people with ‘darker skin’ in facial recognition project: sources
  • It’s time we faced up to AI’s race problem
  • Tell HUD: Algorithms …

Using explanations for finding bias in black-box models

There is no doubt that machine learning (ML) models are being used to solve many business and even social problems. Every year, ML algorithms get more accurate, more innovative and, consequently, applicable to a wider range of problems. From detecting cancer to banking and self-driving cars, the list of ML applications is never …
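One way explanations can surface bias, sketched below as a toy example: for a linear scoring model, each feature's contribution to a prediction is simply weight × value, and if a proxy feature dominates the explanations for one group but not another, that is a red flag worth auditing. The model weights, the "zip_code_risk" feature, and the applicant data are all invented for illustration; this is not the method from the post above.

```python
# Toy explanation-based bias check for a hand-written linear model.
# WEIGHTS, feature names, and applicant data are invented assumptions.

WEIGHTS = {"income": 0.5, "debt": -0.3, "zip_code_risk": -0.8}

def explain(applicant):
    """Per-feature contributions to the model score (weight * value)."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

def mean_contribution(applicants, feature):
    """Average contribution of one feature across a group."""
    return sum(explain(a)[feature] for a in applicants) / len(applicants)

group_a = [{"income": 1.0, "debt": 0.2, "zip_code_risk": 0.1},
           {"income": 0.9, "debt": 0.3, "zip_code_risk": 0.2}]
group_b = [{"income": 1.0, "debt": 0.2, "zip_code_risk": 0.9},
           {"income": 0.9, "debt": 0.3, "zip_code_risk": 0.8}]

# "zip_code_risk" pulls group_b's scores down far more than group_a's,
# which an auditor would investigate as a possible proxy for a
# protected attribute.
for feature in WEIGHTS:
    a = mean_contribution(group_a, feature)
    b = mean_contribution(group_b, feature)
    print(f"{feature}: group_a {a:+.2f}, group_b {b:+.2f}")
```

Real black-box models need model-agnostic explanation techniques rather than reading off weights, but the auditing logic is the same: compare what drives the model's decisions for different groups.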

Ethos 1.0, or the need to build software with intrinsic human values

The Black Box Society (among other books) was a source of inspiration for this article. For the last 14 years I have been conducting research and then practising consultancy on software quality matters. I was merely trying to find answers to questions like: What defines good software? How can we measure it? How can we make …

Contact

Should you want to reach us, drop us an email at contact@code4thought.eu