The Next Generation

Frederik De Bleser, Lieven Menschaert

For the opening of the academic year, research group The Algorithmic Gaze (Sint Lucas Antwerp, KdG) organized an experiment in which elementary school children were filmed and questioned about their interests and dreams. The images were then processed by an AI algorithm and synthesized into new images, new faces, the new generation.

Project movie (in Dutch):



Method

After recording all faces, we used Figment with Google's MediaPipe to extract the face mesh from the video recordings.

Extracting the face mesh from a video recording.
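The extraction step can be reproduced in a few lines of Python. The sketch below is not the project's Figment setup, but a minimal stand-in: OpenCV reads a recording (the file name is illustrative) and MediaPipe's FaceMesh solution detects the landmarks in every frame.

```python
# A sketch of the face-mesh extraction step: OpenCV reads the recording,
# MediaPipe's FaceMesh detects the landmarks, and the tesselated mesh is
# drawn over each frame for visual inspection.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture("recording.mp4")  # hypothetical input file

with mp_face_mesh.FaceMesh(
    static_image_mode=False,      # treat the frames as a video stream
    max_num_faces=1,
    min_detection_confidence=0.5,
) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            mp_drawing.draw_landmarks(
                frame,
                results.multi_face_landmarks[0],
                mp_face_mesh.FACEMESH_TESSELATION,
            )
        cv2.imshow("face mesh", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```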

We used NVIDIA's pix2pixHD, a high-resolution conditional GAN, to learn the mapping between segmented face masks and the recorded video footage.

The generated output gradually improved over the course of training.
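Training a conditional GAN like pix2pixHD requires paired examples: a label image (here, the rendered face mesh) and the corresponding target frame. The sketch below shows one way such pairs could be written to disk; the directory names echo the train_label / train_img layout of pix2pixHD's example dataset, and the file names and rendering style are assumptions rather than the project's actual pipeline.

```python
# A sketch of preparing paired training data for a conditional GAN:
# for every frame we save (a) the face mesh rendered on a black canvas as
# the label image and (b) the original frame as the target image.
import os

import cv2
import numpy as np
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils

os.makedirs("dataset/train_label", exist_ok=True)  # mesh renderings (input)
os.makedirs("dataset/train_img", exist_ok=True)    # video frames (target)

cap = cv2.VideoCapture("recording.mp4")  # hypothetical input file
index = 0

with mp_face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not results.multi_face_landmarks:
            continue  # skip frames where no face was detected
        # Render the mesh on a black canvas so the network only sees geometry.
        label = np.zeros_like(frame)
        mp_drawing.draw_landmarks(
            label,
            results.multi_face_landmarks[0],
            mp_face_mesh.FACEMESH_TESSELATION,
        )
        cv2.imwrite(f"dataset/train_label/{index:06d}.png", label)
        cv2.imwrite(f"dataset/train_img/{index:06d}.png", frame)
        index += 1

cap.release()
```

With pairs like these in place, training can then run through pix2pixHD's own train.py; its README documents options such as --label_nc 0 and --no_instance for feeding RGB label images rather than class-index maps.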

We performed extensive testing and re-training, checking difficult conditions (e.g. facing sideways, looking up, blinking) and probing the limitations of the segmentation algorithm. We discovered a "sweet spot": placing your face too close to or too far from the camera introduced distortions.

Errors in training
Typical errors occurring in training: misplaced eyes and noses, and distorted faces due to incorrect segmentation

Through a custom-built app, we could control the trained model interactively via a webcam (a sketch of such a loop follows the images below):

Researcher Imane Benyecif experimenting with the trained model
Researcher Lieven Menschaert experimenting with the trained model
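A minimal sketch of such an interactive loop is shown below, assuming a trained generator is available: each webcam frame is reduced to its face mesh, rendered as a label image, and handed to the model. The generate function is a hypothetical placeholder that simply echoes the label image back; it is not the project's app.

```python
# A sketch of an interactive webcam loop: each frame is reduced to its face
# mesh, rendered as a label image, and handed to a generator. `generate` is
# a placeholder that echoes the label image back; in the real app it would
# wrap the trained pix2pixHD model.
import cv2
import numpy as np
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils


def generate(label: np.ndarray) -> np.ndarray:
    # Placeholder: replace with a forward pass through the trained generator.
    return label


cap = cv2.VideoCapture(0)  # default webcam

with mp_face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1) as face_mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            label = np.zeros_like(frame)
            mp_drawing.draw_landmarks(
                label,
                results.multi_face_landmarks[0],
                mp_face_mesh.FACEMESH_TESSELATION,
            )
            cv2.imshow("synthesized face", generate(label))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```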

We invited the elementary school students back to our campus to present the results of the training, allowing them to experience and play with the model through the webcam. They discovered they could find their own likeness in the model, but also that of their friends:

Presentation of the interactive AI at Sint Lucas Antwerpen

A production movie of the project was presented during the academic opening on Thursday, September 29th, in the Stadsschouwburg Antwerpen.

Credits

  • Isabelle De Ridder (University of Antwerp) — project lead
  • Frederik De Bleser, Lieven Menschaert — machine learning and development
  • Mathias Mallentjer, Brent Meynen (Production Office) — general production
  • Alexandra Fraser (Sint Lucas Antwerpen) — feedback and interviews
  • Ine Vanoeveren, Imane Benyecif — testing and support