Cristian Perez,
Published on 25/01/2019
Last 10th October, the World Summit AI took place in Amsterdam with more than 4,000 attendees, double last year's number. This annual event brings together practitioners, influencers and users of applied artificial intelligence.
The European Union has made some progress on this subject with the GDPR, which defines how personal data must be processed in order to protect the privacy of all individuals. However, it only applies to European data, so questions remain for companies elsewhere in the world when it comes to full transparency.
In most cases, the software behind these systems remains a black box: is it acceptable to rely on systems whose decisions cannot be explained? How can we make sure that no programming flaws, corrupted data or silent errors have affected the decisions if no real diagnosis is possible?
We think that AI applications shouldn't be black box algorithms. One step forward is to rely on open source initiatives and make algorithms public in order to show how they treat data. This is why, at Kernix, we use open source technology for our projects, so that our clients can access the code at delivery time. When we implement a machine learning model, we also care about the explainability of its results: we use Python libraries such as eli5, which let our clients know which variables the model uses and which ones contribute most to explaining its results.
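As an illustration, here is a minimal sketch (not our production code) of how eli5 can expose a model's weights and explain an individual prediction. The dataset and model choice are assumptions made purely for the example.

```python
# Minimal sketch: inspecting a scikit-learn model with eli5.
# The breast-cancer dataset and logistic regression are illustrative choices.
import eli5
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
clf = LogisticRegression(max_iter=5000).fit(data.data, data.target)

# Global view: the weight of each input variable in the trained model.
print(eli5.format_as_text(
    eli5.explain_weights(clf, feature_names=list(data.feature_names))))

# Local view: why the model scored one particular individual the way it did.
print(eli5.format_as_text(
    eli5.explain_prediction(clf, data.data[0],
                            feature_names=list(data.feature_names))))
```

This kind of output can be shared with clients directly, so the discussion is about concrete variables rather than an opaque score.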
Many people have criticized the fact that most voice assistants have feminine voices: Alexa, Siri or Google Home. Dr Stephen Cave and Dr Kanta Dihal from the @Leverhulme Centre for the Future of Intelligence argued that AI is the product of the white male imagination, in which the woman is the subordinate assistant who helps the man accomplish most of his tasks, and that this is why these voices are chosen. This example illustrates how human biases can be transferred from designers to AI systems.
In fact, female voice assistants are just the tip of the iceberg of the problem of fairness in AI. Most people see artificial intelligence as more rational and objective than human intelligence. But we live in an imperfect society that generates data with sexist or racist biases. The problem is that AI is based on data, and biased data leads to biased algorithms.
At Kernix, we encourage our clients to think about the issues caused by such biases. Each time we develop predictive models, most often based on machine-learning algorithms, we warn our clients about their retraining strategies. Imagine that a client puts a model in production and uses it to select a subset of their “most promising new customers” in order to focus their activity on them. If they then retrain the model on fresh data, that new data will be restricted to this sub-population of “most promising new customers” and will no longer be representative of the full customer population. The danger is to end up retraining models on biased data. We therefore advise our clients to adopt random sampling strategies and to keep monitoring predictive performance continuously during the exploitation period, as sketched below.
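To make the idea concrete, here is a minimal sketch of one way to reserve a small random slice of the selection so that retraining data and performance monitoring stay representative. The names (`select_customers`, `EXPLORATION_RATE`) are illustrative assumptions, not our actual procedure.

```python
# Sketch: mix model-selected customers with a random exploration sample
# so that outcomes collected in production are not restricted to the
# model's own favourites.
import numpy as np
import pandas as pd

EXPLORATION_RATE = 0.1  # fraction of contacts chosen at random, not by the model

def select_customers(customers: pd.DataFrame, scores: np.ndarray, budget: int,
                     rng: np.random.Generator) -> pd.DataFrame:
    """Pick `budget` customers: mostly top-scored, plus a random exploration slice."""
    n_random = int(budget * EXPLORATION_RATE)
    n_top = budget - n_random

    top_idx = np.argsort(scores)[::-1][:n_top]               # model's best guesses
    remaining = np.setdiff1d(np.arange(len(customers)), top_idx)
    random_idx = rng.choice(remaining, size=n_random, replace=False)

    selected = customers.iloc[np.concatenate([top_idx, random_idx])].copy()
    # Flag the random slice: its outcomes give an unbiased estimate of model
    # performance and an unbiased sample for the next retraining round.
    selected["exploration"] = [False] * n_top + [True] * n_random
    return selected
```

The exploration flag is what matters: monitoring the model only on that random slice avoids the feedback loop described above.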
Most AI applications are used for business or marketing purposes. For example, Ciaran Jetten from the @Heineken group presented how they use AI to improve operations, marketing and advertising, and Oscar Celma from @Pandora showed how they combine 70 different AI models to improve music recommendation and increase retention and user satisfaction.
We believe that AI should serve a wide range of applications, not only for private companies but also for public organizations and non-profit associations. For this, it is important that AI be accessible throughout the world.
In one of the panel discussions, Ambassador Amandeep Singh Gill from the @United Nations stated that everyone should have the opportunity to apply AI, which is not yet the case. In many cases, people lack access to the hardware needed to build AI, or they lack the required coding skills. The United Nations has been pushing initiatives to make knowledge available to everyone, such as the right to Internet access.
This issue is already being addressed by the globalization of education, and online resources and courses are a good place to start. At Kernix, we believe this is an important issue, which is why we regularly give talks to demystify AI: https://www.linkedin.com/feed/update/urn:li:activity:6466252981984600064.