What happens when you take a perfectly good neural network and, figuratively, stick a screwdriver in its brain? You get melancholy glitch-art music videos that turn talking heads into digital puppets. A machine learning developer named Jeff Zito made a series of music videos using a deep learning network based on Face2Face. Face2Face was originally developed to produce stunningly realistic facial reenactments, such as controlling a digital Obama in real time with your own facial movements; Zito’s project takes the technique in a different direction. Sometimes the best AI isn’t good enough. When it comes to art, for example, computations and algorithms often don’t matter…

This story continues at The Next Web