Domino’s Takes Delivery to a Whole New Level with NVIDIA DGX-1

Have you ever wondered what kind of technology is used to get your delicious cheese pizza, and perhaps an order of Domino’s Chocolate Lava Crunch Cake, from a store to your home?

In a new video posted to LinkedIn previewing a talk at GTC Digital, the company reveals the magic behind their AI platform: an NVIDIA DGX-1 system. The system is helping Domino’s train their delivery prediction models, along with the other proprietary AI models the company says are part of their secret sauce. It is the second DGX-1 in the company’s cluster.

“When our customers think about the order experience, they think about what they want on their order, and ‘I want it to show up at my house as a delivery’,” said Zack Fragoso, a data science and AI manager at Domino’s. “They might not think about all the effort and technology that is required to get you that pizza from the store to your house.”

In the video, the company takes delivery and unboxing videos to a whole new level, showing how they received their new NVIDIA DGX-1 system and explaining how they plan to use it.

“The DGX allows us to automate decisions that are going on in the store, or give customers better information,” Fragoso said. 

In January of this year, Fragoso told NVIDIA that his team had increased the accuracy of their predictions of when an order will be ready from 75% to 95%. The model considers the number of employees working, the complexity of the orders in the queue, and current traffic conditions.
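The article doesn’t describe Domino’s actual model, but a toy sketch shows how the three inputs mentioned above could feed a readiness estimate. Everything here — the function name, the weights, and the formula — is hypothetical and purely for illustration; the real system is a large learned model trained on in-store data.

```python
def predict_ready_minutes(n_employees, queue_complexity, traffic_delay_min):
    """Hypothetical estimate of minutes until a delivery order arrives.

    n_employees       -- staff currently working in the store
    queue_complexity  -- rough score for the orders ahead in the queue
    traffic_delay_min -- extra driving time due to current traffic, in minutes

    The weights below are invented for illustration only.
    """
    base_prep = 12.0                                  # baseline prep time, minutes
    queue_penalty = 2.5 * queue_complexity            # busier queue -> longer wait
    staffing_relief = 1.5 * max(n_employees - 2, 0)   # extra staff speed things up
    prep = max(base_prep + queue_penalty - staffing_relief, 5.0)
    return prep + traffic_delay_min                   # total time to arrival

# A quiet, well-staffed store beats a slammed, short-staffed one:
quiet = predict_ready_minutes(n_employees=6, queue_complexity=1, traffic_delay_min=8)
busy = predict_ready_minutes(n_employees=2, queue_complexity=10, traffic_delay_min=8)
```

The point of the sketch is the feature set, not the arithmetic: staffing, queue state, and traffic each push the estimate in an intuitive direction, which is what a trained model learns from historical data.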

“Domino’s does a very good job cataloging data in the stores, but until recently we lacked the hardware to build such a large model,” Fragoso told NVIDIA in the post Life of Pie: How AI Delivers at Domino’s, on the NVIDIA corporate blog. “Once we had our DGX server, we could train an even more complicated model in less than an hour,” he said, referring to a 72x speed-up. “That let us iterate very quickly, adding new data and improving the model, which is now in production in a version 3.0,” Fragoso added.

As mentioned in that earlier NVIDIA blog post, the company’s next step in their AI deployment is to use NVIDIA T4 GPUs, built on the Turing architecture, to accelerate AI inference for all company tasks that involve real-time predictions.

“Model latency is extremely important, so we are building out an inference stack using T4s to host our AI models in production. We’ve already seen pretty extreme improvements with latency down from 50 milliseconds to sub-10 ms,” he told NVIDIA.

The company has recorded a talk about their AI “recipe” for success for GTC Digital: Domino’s Recipe for Enterprise AI: Blending GPUs, Collaboration, and Speed. GTC Digital is NVIDIA’s online developer conference.