- Fairness (a minimal metric sketch follows this group):
  - Machine Learning in the wild: in particular on the topic of fairness
  - Building Fair and Transparent Machine Learning via Operationalized Risk Management, by Quantum Black: a very detailed risk-analysis approach; to follow during 2020 if they publish their risk platform
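These references approach fairness at the process level. At the metric level, here is a minimal sketch of one common quantitative check, demographic parity; it assumes binary predictions and a binary protected attribute as NumPy arrays, and all names are illustrative rather than taken from the references above.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.
    0 means parity; large absolute values signal disparate impact.
    y_pred: binary predictions (0/1); group: protected attribute (0/1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()
```

Other criteria (equalized odds, predictive parity) also condition on the true labels; choosing between them is itself an ethical decision, not a purely technical one.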
- Word Embedding and gender bias: see the minimal probe sketch below
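To make the topic concrete, here is a minimal sketch of the kind of probe used in this literature (projecting words onto a he/she direction, in the spirit of Bolukbasi et al., 2016). It assumes word vectors are already loaded as NumPy arrays; all names are illustrative.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def gender_lean(word_vec, he_vec, she_vec):
    """Project a word onto the he-she direction: positive values lean 'he',
    negative lean 'she'. Occupation words often show systematic leanings,
    which is one way gender bias shows up in embeddings."""
    return cosine(word_vec, he_vec - she_vec)
```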
- AI safety:
  - A Roadmap for Robust End-to-End Alignment, Lê Nguyên Hoang, EPFL: "AI alignment problem. This is the problem of aligning an AI’s objective function with human preferences."
  - Concrete problems in AI safety. Abstract: "[...] the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). [...]"
- Trust in AI systems, explainability and interpretability:
  - La confiance des utilisateurs dans les systèmes impliquant de l’Intelligence Artificielle (user trust in systems involving AI), Octo Technologies blog, October 2019
  - Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, Christoph Molnar (a sketch of permutation importance, one of the book's model-agnostic techniques, follows this group)
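One of the model-agnostic techniques covered in Molnar's guide is permutation feature importance: how much does the score drop when one feature's values are shuffled? A minimal sketch, assuming a fitted model exposing a `predict` method and a metric where higher is better (e.g. accuracy); all names are illustrative.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Average score drop when one feature's column is shuffled: the larger
    the drop, the more the model relies on that feature."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = X[rng.permutation(len(X)), j]  # break feature j
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances
```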
- Protection of data confidentiality (minimal code sketches for these items follow the list):
  - The secret-sharer: evaluating and testing unintended memorization in neural networks, A. Colyer, 2019
  - Membership Inference Attacks against Machine Learning Models, R. Shokri, M. Stronati, C. Song, V. Shmatikov, 2017, and a follow-up analysis, Demystifying the membership inference attack, Disaitek, 2019. A tool called ML Privacy Meter, which quantifies the privacy risks of machine learning models with respect to inference attacks, is also available
  - Tools for differential privacy: the Google differential privacy library and its Python wrapper PyDP by OpenMined
  - Model distillation, beyond the compression it brings, can also be used as a protection measure for the model and the training data used; see for example Knowledge Distillation: Simplified, Towards Data Science, 2019, and Distilling the Knowledge in a Neural Network, G. Hinton, O. Vinyals, J. Dean, 2015
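The secret-sharer paper measures unintended memorization with an "exposure" metric for canaries (secrets deliberately inserted into the training data). A minimal sketch of that metric as we read it, assuming you have already computed a loss such as log-perplexity for the canary and for a sample of random candidate sequences; the function name is ours.

```python
import numpy as np

def exposure(canary_loss, candidate_losses):
    """Exposure = log2(number of candidates) - log2(rank of the canary),
    where rank 1 means the canary has the lowest loss of all candidates.
    High exposure means the model finds the canary unusually likely,
    i.e. it has memorized it."""
    losses = np.asarray(candidate_losses)
    rank = 1 + np.sum(losses < canary_loss)
    return np.log2(len(losses)) - np.log2(rank)
```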
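The full shadow-model attack of Shokri et al. is beyond a short sketch, but a simpler confidence-thresholding baseline conveys the intuition: overfitted models tend to be more confident on their training points than on unseen ones. A minimal sketch with illustrative names; in practice the threshold would be tuned.

```python
import numpy as np

def membership_guess(probs, threshold=0.9):
    """Guess 'training-set member' when the model's top predicted probability
    exceeds a threshold. probs: array of shape (n_samples, n_classes)."""
    return np.asarray(probs).max(axis=1) > threshold

def attack_advantage(member_probs, nonmember_probs, threshold=0.9):
    """True-positive rate minus false-positive rate of the membership guess:
    0 means the attacker learns nothing, 1 is total leakage."""
    tpr = membership_guess(member_probs, threshold).mean()
    fpr = membership_guess(nonmember_probs, threshold).mean()
    return tpr - fpr
```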
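Both differential privacy libraries are built on mechanisms such as the Laplace mechanism, the basic building block of differential privacy. A standalone sketch (this is not the PyDP API): a counting query has sensitivity 1, since adding or removing one record changes the count by at most 1, so Laplace noise of scale 1/epsilon yields epsilon-differential privacy.

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of records satisfying a predicate:
    true count plus Laplace noise of scale sensitivity/epsilon = 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: a noisy count of people over 60, with a modest privacy budget.
ages = [34, 67, 71, 45, 62, 58]
print(dp_count(ages, lambda a: a > 60, epsilon=0.5))
```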
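For the distillation reference, a minimal sketch of the loss from Hinton et al. (2015), written here with PyTorch as an assumption; the temperature T and mixing weight alpha are illustrative hyperparameters.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Mix the usual cross-entropy on hard labels with a KL term pulling the
    student's softened distribution towards the teacher's. The T*T factor
    keeps the soft-target gradients comparable across temperatures."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```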
- Complete lifecycle:
  - En route vers le cycle de vie des modèles ! (towards a model lifecycle), G. Martinon, January 2020
- "Performance is not outcome", errors, crises:
  - Google’s medical AI was super accurate in a lab. Real life was a different story, MIT Technology Review
- Various scandals and/or controversies:
  - Awful AI: a curated list to track current scary usages of AI, hoping to raise awareness of its misuses in society, David Dao
  - An AI hiring firm says it can predict job hopping based on your interviews: the idea of “bias-free” hiring, already highly misleading, is being used by companies to shirk greater scrutiny of their tools’ labor issues beyond discrimination, MIT Technology Review, July 2020
  - Faulty Facial Recognition Led to His Arrest—Now He’s Suing, vice.com, September 2020
  - Argentina: Child Suspects’ Private Data Published Online - Facial Recognition System Uses Flawed Data, Poses Further Risks to Children
  - Minneapolis prohibits use of facial recognition software by its police department
- The Institute for Ethical AI & Machine Learning maintains a very comprehensive overview of regulatory initiatives, reports, guidelines and frameworks of all kinds related to the practice and use of AI and data science: see their Awesome AI Guidelines repository on GitHub
- Meta-study: The Ethics of AI Ethics: An Evaluation of Guidelines, T. Hagendorff, October 2019
- Meta-study: The global landscape of AI ethics guidelines, A. Jobin, M. Ienca, E. Vayena, June 2019
- Meta-study: A Unified Framework of Five Principles for AI in Society, L. Floridi, J. Cowls, July 2019
- Meta-study: Principled Artificial Intelligence, Berkman Klein Center, February 2020
- UNESCO - Recommendation on the ethics of artificial intelligence:
  - The purpose of this Recommendation is to formulate ethical values and principles as well as concrete recommendations for the research, design, development, deployment and use of AI, with a view to putting AI systems at the service of humanity, individuals, societies and the environment
  - Current status: draft, with an open public consultation in progress (as of August 2020)
- EU Draft Ethics guidelines for trustworthy AI and pilot assessment survey. 7 key requirements:
  - Human agency and oversight
  - Technical robustness and safety
  - Privacy and data governance
  - Transparency
  - Diversity, non-discrimination and fairness
  - Societal and environmental well-being
  - Accountability
- OECD AI Principles, focused on 'Responsible stewardship of trustworthy AI'. The Recommendation identifies five complementary values-based principles:
  - AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  - AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  - There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  - AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  - Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
- The Institute for Ethical AI & Machine Learning: Awesome AI Guidelines and The Responsible Machine Learning Principles:
  - Human augmentation: I commit to assess the impact of incorrect predictions and, when reasonable, design systems with human-in-the-loop review processes.
  - Bias evaluation: I commit to continuously develop processes that allow me to understand, document and monitor bias in development and production.
  - Explainability by justification: I commit to develop tools and processes to continuously improve transparency and explainability of machine learning systems where reasonable.
  - Reproducible operations: I commit to develop the infrastructure required to enable a reasonable level of reproducibility across the operations of ML systems.
  - Displacement strategy: I commit to identify and document relevant information so that business change processes can be developed to mitigate the impact on workers being automated.
  - Practical accuracy: I commit to develop processes to ensure my accuracy and cost metric functions are aligned to the domain-specific applications.
  - Trust by privacy: I commit to build and communicate processes that protect and handle data with stakeholders that may interact with the system directly and/or indirectly.
  - Data risk awareness: I commit to develop and improve reasonable processes and infrastructure to ensure data and model security are taken into consideration during the development of machine learning systems.
- 6 themes:
  - Strengthen AI security with validation, monitoring and verification
  - Create AI models that are transparent, extensible and provable
  - Build systems that are ethical, understandable and lawful
  - Improve governance with AI operating models and processes
  - Test for bias in data, models and human use of algorithms
- Google recommended practices for AI: Fairness, Interpretability, Privacy, Security
- Montreal Declaration for Responsible AI
- The Holberton-Turing Oath
- Hippocratic Oath for data scientists
- Future of Life's AI principles
- International Charter for Inclusive AI
- ADEL - an ethics label for big data use
- ALTAI - The Assessment List on Trustworthy Artificial Intelligence
- Livre blanc Data Responsable (white paper on responsible data)
- Responsible AI Licenses
- FAT ML: appears inactive since late 2018
- AI for social good workshops and research papers
- Building Fair and Transparent Machine Learning via Operationalized Risk Management: Towards an Open-Access Standard Protocol
- Public-sector algorithms:
  - Guide des algorithmes publics à l'usage des administrations (guide to public-sector algorithms for administrations), Etalab
  - Report Éthique et responsabilité des algorithmes publics (ethics and accountability of public-sector algorithms), Etalab / ENA, January 2020
- ISO is in the process of defining standards for the Artificial Intelligence sector; this work will need to be monitored
- Ethics and Algorithms Toolkit
- AI Now Report 2019
- Much of this work focuses on ethics through use cases and the non-reproduction of discrimination
- By contrast, little is available on the lifecycle of building a model (see for example the Quantum Black paper)
- The most comprehensive resource is perhaps the EU assessment questionnaire, but it is far from actionable and operational (63 questions, many of them very open-ended), and its elaboration and evolution process is relatively closed
- Much more general information systems security frameworks could be used as references to avoid redundancy on certain points, for example the CNIL's guide to the security of personal data