AI Helps Generate Interactive Demo of Salvador Dali

To help create a unique experience for museum attendees, the Salvador Dali Museum in St. Petersburg, Florida, has developed a deep learning-based version of Dali himself.

Using thousands of hours of archival interview footage, developers from Goodby, Silverstein & Partners trained a convolutional neural network to regenerate Dali’s facial expressions, using NVIDIA GeForce GTX 1080 Ti GPUs and the cuDNN-accelerated PyTorch deep learning framework. The developers then took the facial features generated by the AI and superimposed them onto an actor with Dali’s body proportions.
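The article names only "a convolutional neural network" trained in PyTorch; the actual architecture is not public. As a rough illustration of the two steps described (generating a face, then superimposing it on an actor), here is a minimal sketch assuming a hypothetical encoder-decoder face model and a simple alpha-blend composite:

```python
import torch
import torch.nn as nn

class FaceReenactor(nn.Module):
    """Hypothetical stand-in for the (unpublished) face-generation network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def composite(actor_frame, generated_face, mask):
    """Alpha-blend the generated facial region onto the actor's frame."""
    return mask * generated_face + (1 - mask) * actor_frame

model = FaceReenactor()
frame = torch.rand(1, 3, 64, 64)       # stand-in for one video frame
face = model(frame)                    # regenerated facial expression
mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0          # assumed facial region
out = composite(frame, face, mask)
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Outside the masked facial region, the output frame is untouched, which mirrors the described approach of keeping the actor's body and replacing only the face.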

The exhibit, called Dali Lives, features a life-size, 5-foot-8 Dali inside an interactive video panel.

“Authenticity was probably one of the key words we stood by. You want to be careful, obviously. We want to display the cool factor of this. It’s AI. It’s not a video of him from yesteryear,” explains Kathy Greif, chief operating officer of The Dalí Museum. “At the same time, we didn’t want to put words into his mouth. We have a lot of tenure with the expertise that goes into something like this–extreme familiarity with Dalí, and comfort to debate, ‘Would he say it like that? No, I think he’d say it like this.’”

According to the developers, the interactions change based on the day and the weather. Altogether, the video panels contain about 45 minutes of new Dali footage.

“This morning you hear the true story of Dali and his art, and you see the greatest paintings you have ever seen,” the lifelike AI-based version of Dali says. “Modesty is not my specialty.”

In one of the panels, attendees can even take a selfie with the renowned artist.

The exhibit is now open at the museum in Florida. The demo itself is powered by an NVIDIA GeForce GTX 1060 GPU, running TouchDesigner on a Windows machine.
