At Code4Thought we are committed to helping society address the challenges and injustices imposed by automated decision-making technology.
As the adoption of AI/ML models increases, so do the criticality and gravity of their decisions. For that reason, Code4Thought is developing PyThia, a technology that ensures AI/ML systems are Fair, Accountable and Explainable.
This technology is designed to be guided by humans, adhering to the Human-in-the-Loop principle, and has the following characteristics:
At Code4Thought we analyze and evaluate large-scale enterprise software systems, enabling organizations to address IT-related flaws at the root and manage associated risks and costs. Our solutions aim to improve application health in all stages of the software lifecycle – whether you’re building, buying or operating it.
For that reason we collaborate with Software Improvement Group (SIG), whose assurance platform, Sigrid, combines deep source code analysis based on ISO 25010 with our team's unparalleled expertise, enabling you to measure, evaluate and monitor your software quality throughout the lifecycle.
Sigrid continuously monitors the health of your software applications on critical aspects such as maintainability, security, scalability and reliability, and makes these findings and metrics easily accessible to CIOs, architects and developers alike. Based on these results as well as your own business context, our experts develop actionable, prioritized recommendations to ensure your IT can fuel your organizational objectives.
Together with SIG, the Code4Thought team is developing the market in Greece and Cyprus, advising market leaders in the financial, insurance and telecom sectors.
Authority once grounded in human intuition and reflection is gradually being handed to algorithms, and the gravity of the decisions these automated systems make keeps increasing. As we see daily, algorithms gone awry cause serious consequences for businesses and people. In some areas, errors can be particularly dire:
Algorithm design and auditing, even in the hands of highly skilled coders, does more harm than good when those coders have little to no experience in (1) designing a bias-free system and (2) auditing it to check for gaps.
We need humans in the loop to ensure algorithms are as bias-free as possible. Those humans must have deep experience in auditing software systems, supported by ML tooling and guided by practitioners with deep experience in the process.
Our team is ready to help and to advise on best practices for setting up the processes and infrastructure that will ensure your AI is Responsible and can be Trusted.
73 Aiolou Street, Athens, GR10551, Greece