AI4EU Media Pilot
Face 2 Face translation from an input video to a different one.

This is a first solution for performing face-to-face translation using deep neural networks (autoencoders, GANs). The code is packaged inside a Docker image so it can easily be downloaded and tested with your own videos. The algorithm takes as input a source video (for example, a video of yourself) and a second video whose face the expressions are transferred to (a politician, a famous person, or any other face on video). The output is the second video with the facial expressions of the input face translated onto it.
Hardware architecture: x86-64
Install & Run:To download the docker use the following command line:
docker pull alberdibu/test_ai4eu:0.1
Sample files and videos are provided for testing. Create an empty folder and uncompress the provided data inside it. To run the experiment, use the following command (a fuller setup sketch follows below):
docker-compose up
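As a sketch of the full setup, assuming the sample data is distributed as an archive (the archive name, folder name, and paths below are illustrative assumptions) and the pilot's docker-compose.yml is placed in the working folder:

mkdir face2face_test && cd face2face_test    # create an empty working folder
unzip /path/to/provided_data.zip             # uncompress the attached sample videos here (archive name is an assumption)
cp /path/to/docker-compose.yml .             # the compose file shipped with the pilot
docker-compose up                            # start the container and run the experiment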
By default, the finetuning value is set to 200 iterations. Retraining the network with the input data will take some time, depending on whether a GPU is available. You can reduce this value; it must be a non-negative integer (any value from 0 upwards). To modify the input parameters, open "docker-compose.yml" and change the paths to your input videos and the finetuning value, for example as in the sketch below.
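A minimal sketch of what the relevant part of docker-compose.yml might look like. The service name, mount point, and environment variable names (SOURCE_VIDEO, TARGET_VIDEO, FINETUNE_ITERATIONS) are assumptions made for illustration; check the compose file shipped with the pilot for the exact keys.

version: "3"
services:
  face2face:                               # hypothetical service name
    image: alberdibu/test_ai4eu:0.1
    volumes:
      - ./videos:/data                     # assumed mount point for the input videos
    environment:
      SOURCE_VIDEO: /data/input.mp4        # path to your own video (assumed variable name)
      TARGET_VIDEO: /data/target.mp4       # path to the face to be animated (assumed variable name)
      FINETUNE_ITERATIONS: "200"           # default finetuning value; lower it to speed up retraining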
Additional information: This is the first version of the Media Pilot within the AI4EU project. The Docker image runs on CPU, or on GPU if one is available. A finetuning process should be performed at the end to properly adapt the input faces.