An AI-driven imaging system allows users to try on virtual outfits in real time
A team of researchers has created a way for people to visualize themselves wearing items of clothing they don’t have direct physical access to. The virtual try-on system uses a unique capture device and an artificial intelligence (AI)-driven method to digitize items of clothing.
Using a corresponding imaging and display system, a user can then view themselves on a screen wearing anything from the digital wardrobe. An AI engine synthesizes photorealistic images, allowing movement and details such as folds and ripples to be seen as if the user were really wearing the virtual clothing.
There have been some attempts over the years to create a sort of digital mirror that can visualize a user and superimpose an item of clothing on top of them. However, these systems lack functionality and realism. Professor Takeo Igarashi from the User Interface Research Group at the University of Tokyo and his team explore different ways humans can interact with computers. They felt they could create their own digital mirror that resolves some limitations of previous systems. Their answer is the virtual try-on system, and the team hopes it can change how we shop for clothes in the future.
“The problem of creating an accurate digital mirror is twofold,” said Igarashi. “Firstly, it’s important to model a wide range of clothes in different sizes. And secondly, it’s essential these garments can be realistically superimposed onto a video of the user. Our solution is unique in the way it works by using a bespoke robotic mannequin and a state-of-the-art AI that translates digitized clothing for visualization.”
To digitize garments, the team designed a mannequin that can move, expand, and contract in different ways to reflect different body poses and sizes. A manufacturer would need to dress this robotic mannequin in a garment, then allow it to cycle through a range of poses and sizes while cameras capture it from different angles. The captured images are fed to an AI that learns how to translate them to fit an as-yet-unseen user. At present, image capture for one item takes around two hours, but once someone has dressed the mannequin, the rest of the process is automated.
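The capture process described above amounts to sweeping the mannequin through a grid of poses and sizes while recording from multiple camera angles. A minimal sketch of that schedule follows; the specific pose names, size labels, and angle counts are illustrative assumptions, not values from the research.

```python
from itertools import product

# Illustrative capture parameters (assumptions, not from the paper).
POSES = ["arms_down", "arms_raised", "torso_twist"]
SIZES = ["S", "M", "L", "XL"]
CAMERA_ANGLES = [0, 90, 180, 270]  # degrees around the mannequin

def capture_schedule(poses, sizes, angles):
    """Enumerate every (pose, size, angle) frame the rig would record."""
    return [
        {"pose": p, "size": s, "angle_deg": a}
        for p, s, a in product(poses, sizes, angles)
    ]

frames = capture_schedule(POSES, SIZES, CAMERA_ANGLES)
print(len(frames))  # 3 poses x 4 sizes x 4 angles = 48 frames
```

Once the mannequin is dressed, a loop like this runs unattended, which is why only the initial dressing step needs a human in the process.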
“Though the image capture time is only two hours, the deep-learning system we’ve made to train models for later visualization takes around two days to finish,” said graduate student Toby Chong, one of the paper’s co-authors. “However, this time will naturally decrease as computational performance increases.”
Then comes the user interaction. Someone wishing to try on different clothes would need to come to the store and stand in front of a camera and screen. They would put on a specially made outfit with an unusual pattern on it called the measurement garment. The pattern is a nonrepeating arrangement of different colored squares. As the user moves, the computer synthesizes a plausible image of the garment that follows the user’s motion.
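The measurement garment’s nonrepeating pattern lets the system identify which part of the garment any small image patch belongs to. One common way to achieve this property (a hedged sketch, not necessarily the paper’s exact construction) is to require that every local 2×2 window of colored squares be unique across the grid:

```python
import random

# Assumed palette and grid size, chosen for illustration only.
PALETTE = ["red", "green", "blue", "yellow", "magenta", "cyan"]

def make_pattern(rows, cols, seed=0):
    """Generate a color grid where every 2x2 window of squares is unique,
    so any visible patch of four adjacent squares pinpoints its location
    on the garment."""
    rng = random.Random(seed)
    while True:  # retry random grids until the uniqueness property holds
        grid = [[rng.choice(PALETTE) for _ in range(cols)] for _ in range(rows)]
        windows = [
            (grid[r][c], grid[r][c + 1], grid[r + 1][c], grid[r + 1][c + 1])
            for r in range(rows - 1)
            for c in range(cols - 1)
        ]
        if len(windows) == len(set(windows)):
            return grid

pattern = make_pattern(6, 6)
```

Because each window is distinct, a camera tracking the user can recover which region of the measurement garment it is seeing, and how that region deforms as the user moves.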
“An advantage of our system is that the virtual garment matches the motion of the measurement garment, including things like wrinkles or stretching. It can look very realistic indeed,” said Igarashi. “However, there are still some issues we wish to improve. For example, at present the lighting conditions in the user’s environment need to be tightly controlled. It would be better if it could work well in less-controlled lighting situations.”
The team has several possible applications in mind, online shopping being the most obvious. A side benefit is that it could prevent a large amount of fashion waste: buyers would have greater assurance about their purchases, and clothing could even be made on demand rather than at scale.
Source: University of Tokyo