When it comes to fashion, online shopping is the new black. Convenience, competitive prices and a seemingly endless array of the season’s latest trends are driving more consumers to online stores than ever before. From a customer’s perspective, the online shopping experience is vastly different from walking into a brick-and-mortar fashion outlet. With no fitting rooms, it is nearly impossible to answer the question, “Do these jeans look good on me?”
In a bid to make e-shopping more interactive and intuitive, the fashion industry is looking to next-generation clothing image synthesis technologies. While current fashion image generation platforms work for basic items, clothes with intricate textures, patterns or logos are often too much for existing imaging technologies to handle. The rendered image ends up looking distorted and fine details are lost, making it frustrating for retailers and shoppers alike.
To overcome this bottleneck in imaging quality, a team led by computer vision and machine learning researcher Huijing Zhan from A*STAR’s Institute for Infocomm Research (I2R) has created the Pose-Normalized and Appearance-Preserved Generative Adversarial Network (PNAP-GAN). This computing framework could revolutionize how shoppers interface with online stores, bringing the retail experience into their living rooms.
“Three scenarios motivate our research into clothing imaging technologies: Firstly, can we generate a novel image from a street image captured on a mobile phone? Secondly, can we find online listings based on user-submitted photos? Finally, can we visualize the clothes on the customer virtually?” Zhan explained.
The team focused on designing an algorithm to teach computers how to ‘scan’ fashion elements from a visual input source, such as a photograph. In Stage I of their two-stage approach, the algorithm guides the system to capture the global structure of the garment, recognizing key clothing landmarks and producing a coarse image representation. In Stage II, this visual is refined: fine details are added, allowing the system to accurately depict how the piece would look either on a model or on a virtual clothes rack.
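To make the coarse-to-fine idea more concrete, the sketch below illustrates a two-stage generation pipeline in the spirit of the description above. It is a minimal, assumed structure rather than the actual PNAP-GAN architecture: the module names (CoarseGenerator, RefinementGenerator), the number of landmark heatmaps and the tiny convolutional stacks are all placeholders chosen for illustration.

```python
import torch
import torch.nn as nn


class CoarseGenerator(nn.Module):
    """Stage I (illustrative): maps the source photo plus clothing-landmark
    heatmaps to a coarse, pose-normalized garment image."""
    def __init__(self, in_channels: int = 3 + 8):  # 3 RGB channels + 8 hypothetical landmark heatmaps
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, photo, landmark_heatmaps):
        # Condition the generator on both the photo and the landmark layout.
        x = torch.cat([photo, landmark_heatmaps], dim=1)
        return self.net(x)


class RefinementGenerator(nn.Module):
    """Stage II (illustrative): adds fine appearance detail to the coarse
    output, conditioned again on the original photo."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, coarse, photo):
        return self.net(torch.cat([coarse, photo], dim=1))


# Toy forward pass with random tensors, just to show the data flow.
photo = torch.randn(1, 3, 256, 256)              # street photo of the garment
landmarks = torch.randn(1, 8, 256, 256)          # hypothetical landmark heatmaps
coarse = CoarseGenerator()(photo, landmarks)     # Stage I: global structure
refined = RefinementGenerator()(coarse, photo)   # Stage II: fine details
print(coarse.shape, refined.shape)               # both torch.Size([1, 3, 256, 256])
```

In a real system the two stages would be trained adversarially with discriminators and appearance-preserving losses; the sketch only captures the coarse-then-refine flow described in the article.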
“For example, when customers go window shopping and see a cool blazer worn by someone else, they would like to know how the blazer looks on them. PNAP-GAN makes it possible to try on the item virtually by pasting your profile image onto the synthesized image, which also remains free of deformation regardless of pose,” said Zhan.
While this study describes PNAP-GAN’s exciting potential across an array of sweaters, tops and dresses, further enhancements to the technology are already in the pipeline. Zhan says follow-up studies will focus on further fine-tuning the platform to preserve fabric textures and patterns in more elaborate pieces.
The A*STAR-affiliated researcher contributing to this research is from the Institute for Infocomm Research (I2R).