
Thu, 12/10/2020 - 13:58

COMPANIES INTERVIEWS: MEDIAMONKS


Investigating how companies are approaching ethics in Europe, in this article we hear the experience of Geert Eichhorn, Innovation Director at MediaMonks, a global creative production company headquartered in the Netherlands.

Tell us about your company.

I'm the Innovation Director of MediaMonks Labs. Since 2001 MediaMonks has worked with the world’s leading agencies, biggest brands, governments, and innovative media and technology companies to turn ideas, strategy, and IP into award-winning campaigns, film, content, products, and platforms.  
At the Innovation Lab, we steer and drive MediaMonks’ global solutions focused on technology and innovation, and we pay our learnings forward with results-oriented solutions that delight and add value in any medium.  
By tackling and testing emerging tech, we consistently deliver fresh, exciting, and delightful experiences that redefine what is possible. 
Personally, I keep an extra eye on all the creative work we do around MediaMonks and push to keep innovating in that space, whether it's using a new technology or using an existing platform in a different way.


There is great hype around AI, and some expectations are probably exaggerated. What impact should we expect from this innovation, in your opinion? Is there any particular example of a positive impact of AI from your company or your field that you would like to share?

I think the hype and expectations aren't so much exaggerated as too early; we don't have to worry about this stuff too much yet. We need to keep in mind that things are moving fast, but even so, 'general AI' is probably still quite a while away. So worrying about being ruled by AI-controlled robots is probably a waste of your time, though that doesn't mean we can rule it out as a possibility in the future.

In general, we can use AI to make a better user experience; visual search and image recognition especially come to mind here. Pointing your camera at a flower and having the app tell you the species, based on computer vision and machine learning, is so much easier than flipping through an encyclopedia.

At MediaMonks we've made a 'paint defect tool' for AkzoNobel, which uses computer vision to detect different types of paint defects and helps identify what the cause might be. Very simple and efficient.

One of the core topics of the European Strategy is Trustworthy AI, with its human-centric vision. What does Trustworthy AI mean in your view, and how does it translate into your everyday research and business?

AI is beginning to show above-human levels of skill, from detecting breast cancer to playing games like Go or StarCraft. This naturally makes humans suspicious, or at least alert. However, in these cases it's very clear we're dealing with an AI, and people know what to expect and what the goals and motivations are. When things get more ambitious and we end up in grey territory, people have the right to know what's going on behind the scenes.

Could a financial trading AI figure out the economy and become unstoppable? What is happening behind the scenes of facial recognition? Is there a database of every person in the world who has ever had their face on the internet? Deepfakes are a clear example of this grey area: what are they used for, and how do you know whether a 'superhuman' fake is real? There might need to be a consumer label that says 'this was made with the help of AI', something to create more transparency about what it is, how it was made, and what its goals are.

As a matter of fact, working on AI raises ethical issues. What are the ethical challenges in your job, and what efforts do you and your team make to address them? Ideally, what resources (e.g. expert advice, training, software tools) would help you address them better?

Ethical challenges are a lot harder to see than you might think; for example, using any huge dataset might mean inheriting biases that aren't clear at the start. The main ethical issues we see arise around gender and race. How do we address the user when their gender might be undefined or unclear, and how do we deal with facial recognition, which notoriously works worse for people of color?
All of these can be addressed quite easily, but there needs to be industry-wide consensus that this is an issue, and together we need to work on diversifying training data as well as creating guidelines on how to deal with issues like gender or even storing personal/biometric data.
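As a concrete illustration of the kind of dataset audit this implies (a hypothetical sketch, not a MediaMonks tool — the attribute names and threshold are made up for the example), a small function can report each group's share of a labeled dataset and flag groups that fall below a minimum share:

```python
from collections import Counter

def audit_representation(samples, group_key, min_share=0.15):
    """Report each group's share of the dataset and flag groups
    whose share falls below min_share.

    samples: list of dicts, each carrying a demographic attribute
    under group_key. Returns {group: (share, flagged)}.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    return {
        group: (count / total, count / total < min_share)
        for group, count in counts.items()
    }

# Toy face dataset, deliberately skewed toward one skin-tone category:
dataset = (
    [{"skin_tone": "light"}] * 80
    + [{"skin_tone": "medium"}] * 15
    + [{"skin_tone": "dark"}] * 5
)
report = audit_representation(dataset, "skin_tone", min_share=0.15)
# "dark" holds a 5% share, below the 15% threshold, so it is flagged.
```

A check like this won't fix a biased dataset, but it makes the imbalance visible early, before a model trained on it inherits the skew.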

 
In general, I like how in science a paper needs to be peer-reviewed before it can be published; this is something we could consider for datasets/training data. If you're making a dataset to train an AI on facial recognition, be sure to send it to peers around the world and have them diversify your data for a more accurate and ethical result.

Would you like to share a personal reflection or experience in your work that made you realize the importance of an ethical approach? Which values inspire you most at work?

We ran into an issue, or at least a grey area, around image recognition. These are provided as a service by the likes of Google, Amazon, Baidu, etc. However, none of them tells you what it is good at detecting, or which objects are in its database. It's basically a black box: you throw an image in and get a result out, with no clue how that result was produced or what was considered.

For a project in the Netherlands, MediaMonks needed to use computer vision to detect school lockers; you know, the ones where you store your books between classes. These lockers vary greatly around the world, and we didn't know whether Google, Amazon, or Baidu would have the most accurate off-the-shelf solution to recognize the various lockers. To determine this, we made a tool (link to the tool) into which you upload a photo, which is then classified by several image recognition services. We did this to determine accuracy, but also to reveal biases.
Something we noticed was that Amazon is very good at detecting specific objects, while Google often has more context on the place, activity, or social setting of the image. These biases can be traced back to their training data (Google Photos/Images vs. the Amazon marketplace), and something like this tool can reverse engineer ethical issues like the biases inside the black boxes of image recognition.
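A minimal sketch of the comparison such a tool performs (the provider names and label responses below are hard-coded stand-ins, not real API calls): given the label sets each service returns for the same image, show which labels all providers agree on and which are unique to one provider — the unique labels are where the differing training data shows through.

```python
def compare_labels(results):
    """Compare label sets from several image recognition services.

    results: dict mapping provider name -> set of labels returned
    for the same image. Returns the labels every provider agrees on,
    plus each provider's unique labels (a hint at its training bias).
    """
    shared = set.intersection(*results.values())
    unique = {
        provider: labels
        - set.union(*(v for k, v in results.items() if k != provider))
        for provider, labels in results.items()
    }
    return shared, unique

# Hard-coded stand-ins for two services' responses to the same photo
# of a school locker:
results = {
    "provider_a": {"locker", "metal", "furniture"},           # object-focused
    "provider_b": {"locker", "school", "hallway", "indoor"},  # context-focused
}
shared, unique = compare_labels(results)
# Both agree on "locker"; only provider_b reports the scene context.
```

Run over many images, the per-provider unique labels accumulate into a profile of what each black box was trained to notice, which is exactly the kind of bias the tool was built to surface.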

AI has been shown to have important social consequences (e.g. racial bias) and environmental consequences (e.g. energy consumption). What is your position on that, and how does it influence your job?

From what I've seen, the impact of AI is overwhelmingly positive; in general, it's a way to do things more efficiently and accurately, leading to less waste, from energy to agriculture and everything in between.

I think in the years to come we'll see AI help index and label our planet, from every tree and every oil spill to wildlife (like creating a database of tigers based on their unique stripes). All this data will give us more agency and insight for the conservation and management of the environment. I see all of these movements as positive.

On the other side, AI will mean a lot of jobs become automated. I think this is largely a natural progression; we don't have a blacksmith in every town anymore either. However, we do need to prepare socially for the implications: do we need a universal basic income? What kinds of jobs will there be in an AI-powered future? What is the role of humans in general if AI overtakes us in terms of brainpower? These are things I think about when playing with these ideas in my mind. We haven't stopped playing chess, even though an AI can do it better than us, and I think in broader terms that comes down to craft: we enjoy making, playing, imagining, and these things bring quality and joy to others, AI or not.

 

Source: Geert Eichhorn