Highlights

Algorithms can help you visualize how clothes worn by others might look on you.

© Pexels

Taking fashion from streets to shops

11 Nov 2020

Researchers have created a new computational framework that is set to transform the online shopping experience.

When it comes to fashion, online shopping is the new black. Convenience, competitive prices and a seemingly endless array of the season’s latest trends are driving more consumers to online stores than ever before. From a customer’s perspective, the online shopping experience is vastly different from walking into a brick-and-mortar fashion outlet. With no fitting rooms, it is nearly impossible to answer the question, “Do these jeans look good on me?”

In a bid to make e-shopping more interactive and intuitive, the fashion industry is looking to next-generation clothing image synthesis technologies. While current fashion image generation platforms work for basic items, clothes with intricate textures, patterns or logos are often too much for existing imaging technologies to handle. The rendered image ends up looking distorted and fine details are lost, making it frustrating for retailers and shoppers alike.

To overcome this bottleneck in imaging quality, a team led by computer vision and machine learning researcher Huijing Zhan from A*STAR’s Institute for Infocomm Research (I2R) has created the Pose-Normalized and Appearance-Preserved Generative Adversarial Network (PNAP-GAN). This computational framework could revolutionize how shoppers interface with online stores, bringing the retail experience into their living rooms.

“Three scenarios motivate our research into clothing imaging technologies: Firstly, can we generate a novel image from a street image captured on a mobile phone? Secondly, can we find online listings based on user-submitted photos? Finally, can we visualize the clothes on the customer virtually?” Zhan explained.

The team focused on designing an algorithm to teach computers how to ‘scan’ fashion elements from a visual input source, such as a photograph. In Stage I of their two-stage approach, the algorithm guides the system to capture the global structure of the garment, recognizing particular landmarks and creating a general image representation. In Stage II, this visual is refined: fine details are added, allowing the system to accurately depict how the same piece would look either on a model or on a virtual clothes rack.
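
To make this coarse-to-fine idea concrete, the sketch below shows how such a two-stage generator pipeline might be wired together in PyTorch. It is a minimal illustration, not the authors’ implementation: the module names, layer sizes, landmark-heatmap encoding and residual refinement step are all assumptions, and the discriminators and losses that would train each stage adversarially are omitted for brevity.

    # Illustrative two-stage, coarse-to-fine generator pipeline (PyTorch).
    # All names, shapes and layers here are assumptions for illustration,
    # not the actual PNAP-GAN architecture.
    import torch
    import torch.nn as nn


    class CoarseGenerator(nn.Module):
        """Stage I: street photo + pose-landmark heatmaps -> coarse,
        pose-normalized garment image capturing only global structure."""

        def __init__(self, landmark_channels=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 + landmark_channels, 64, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
                nn.Tanh(),
            )

        def forward(self, street_image, landmark_heatmaps):
            # Stack the photo and one heatmap per landmark along the channel axis.
            x = torch.cat([street_image, landmark_heatmaps], dim=1)
            return self.net(x)


    class RefinementGenerator(nn.Module):
        """Stage II: refine the coarse output, restoring textures, patterns
        and logos while keeping the normalized pose."""

        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 64, 3, padding=1),  # coarse output + original photo
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 3, padding=1),
                nn.Tanh(),
            )

        def forward(self, coarse_image, street_image):
            x = torch.cat([coarse_image, street_image], dim=1)
            # Predict a residual correction on top of the coarse image.
            return torch.clamp(coarse_image + self.net(x), -1.0, 1.0)


    if __name__ == "__main__":
        street = torch.randn(1, 3, 128, 128)    # stand-in for a street photo
        heatmaps = torch.randn(1, 8, 128, 128)  # stand-in for landmark heatmaps
        coarse = CoarseGenerator()(street, heatmaps)
        refined = RefinementGenerator()(coarse, street)
        print(coarse.shape, refined.shape)      # both torch.Size([1, 3, 128, 128])

In a full GAN setup, each stage would typically be paired with its own discriminator and trained with adversarial and reconstruction losses; separating global structure (Stage I) from appearance (Stage II) lets the second stage concentrate on preserving fine details such as textures and logos.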

“For example, when customers go window shopping and see a cool blazer worn by others, they would like to know how the blazer looks on them. PNAP-GAN makes it possible to try on the item virtually by pasting your profile image onto the synthesized image, which is also free of deformation regardless of pose,” said Zhan.

While this study describes PNAP-GAN’s exciting potential across an array of sweaters, tops and dresses, further enhancements to the technology are already in the pipeline. Zhan says follow-up studies will focus on further fine-tuning the platform to preserve fabric textures and patterns in more elaborate pieces.

The A*STAR-affiliated researcher contributing to this research is from the Institute for Infocomm Research (I2R).

References

Zhan, H., Yi, C., Shi, B., Jie, L., Duan, L.Y., and Kot, A.C. Pose-Normalized and Appearance-Preserved Street-to-Shop Clothing Image Generation and Feature Learning. IEEE Transactions on Multimedia (2020).

About the Researcher

Huijing Zhan

Research Scientist

Institute for Infocomm Research

Huijing Zhan is a Research Scientist at A*STAR’s Institute for Infocomm Research (I2R). She received her BEng degree in 2012 from the Special Class for the Gifted Young at Huazhong University of Science and Technology, and her PhD degree in 2018 from Nanyang Technological University, Singapore. Zhan’s research interests include personalized fashion recommendation and retrieval, and vision-based fashion analysis.

This article was made for A*STAR Research by Wildtype Media Group