CASE STUDY - FASHION INTELLIGENCE

Tagwalk

Paris-based Tagwalk is the world’s first search engine for fashion shows, or, as they describe themselves, “Google for fashion”. As search volumes rapidly picked up, Tagwalk needed an automated solution to annotate over 1,000 attributes across the millions of images pouring in every week from fashion runways around the world, and that’s where Labyrinth came in.

The Problem Statement

During our discussions with Tagwalk, it was clear that their immediate challenge was finding people with relevant expertise and knowledge of the latest trends to peruse thousands of images every day and categorize them for further use.

Tagwalk is in the business of reporting fast-changing trends in real time, as new designs are continually revealed during fashion weeks around the world. This put them in a constant crunch for talent that could analyze images.

Another problem Tagwalk faced was maintaining consistency in how images were annotated: because they were analyzed by different people at different times, it was impossible to strictly enforce uniform annotation guidelines. Given the fashion industry’s ever-changing nature, a capable solution was needed to detect and study new trends and efficiently put the resulting learnings into practice.

The Solution

The final solution we designed and built for Tagwalk includes the following components:

  • A hierarchical model that can quickly produce a list of fashion attributes corresponding to a given input image

  • A feedback loop that learns new attributes and corrections made by the human experts, improving overall performance as trends change

  • An engine that can find images similar to a given one, to be used as a base for recommendations
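The similarity engine described above can be sketched in miniature: embed each image as a vector, then rank the indexed images by cosine similarity to the query. The function name, embedding size, and toy data below are illustrative assumptions, not Tagwalk’s actual implementation.

```python
import numpy as np

def most_similar(query_vec, index_vecs, k=5):
    """Rank indexed image embeddings by cosine similarity to the query."""
    # Normalize everything so a dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    scores = m @ q
    top = np.argsort(scores)[::-1][:k]
    return list(zip(top.tolist(), scores[top].tolist()))

# Toy example: 4 "image embeddings" and a query close to the first one.
rng = np.random.default_rng(0)
index = rng.normal(size=(4, 8))
query = index[0] + 0.01 * rng.normal(size=8)
print(most_similar(query, index, k=2))  # image 0 should rank first
```

In production such an index would hold embeddings from a trained vision model and use an approximate nearest-neighbor structure rather than a full scan, but the ranking principle is the same.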

The Approach

Our first few steps involved sanitizing Tagwalk’s data, which included defining the taxonomy and ontology for roughly 1,000 attributes. A hierarchical group of models was then created based on the hierarchy of the defined attributes. Each model within the hierarchy was built, trained, and tweaked to provide the highest possible performance. As a result, the system could now predict a set of attributes applicable to each given image.
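The hierarchical arrangement can be sketched as follows: a top-level model routes an image to the relevant branch of the taxonomy, and models further down predict the leaf attributes. The two-level taxonomy and the feature-matching stand-ins for trained classifiers below are made up for illustration; the real system used trained image models over Tagwalk’s full ontology.

```python
# Illustrative two-level taxonomy (not Tagwalk's actual ontology).
TAXONOMY = {
    "outerwear": ["trench coat", "puffer jacket"],
    "footwear": ["ankle boot", "loafer"],
}

def predict_top_level(features):
    # Stand-in for a trained top-level classifier.
    return "outerwear" if "coat" in features or "jacket" in features else "footwear"

def predict_attributes(features):
    """Route through the hierarchy, then pick matching leaf attributes."""
    group = predict_top_level(features)
    return [leaf for leaf in TAXONOMY[group]
            if any(word in features for word in leaf.split())]

print(predict_attributes({"coat", "trench"}))  # ['trench coat']
```

Splitting prediction across the hierarchy this way keeps each model small and focused on one level of the taxonomy, which is what allows each one to be tuned independently.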

This system was plugged into the firm’s annotation workflow. The human annotation team’s task changed from analyzing images and attaching multiple attributes to confirming the attributes predicted by the system we built. Since the model was configured to achieve a low error rate, Tagwalk’s team would also add any attributes the model missed.

The confirmations and corrections made by the human annotation team were used to retrain the models every week. We adopted a champion-challenger system to replace the production models with better-performing, retrained ones, without this ever leading to a fall in performance.
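The champion-challenger gate mentioned above reduces to a simple rule: the retrained model (the challenger) only replaces the production model (the champion) when it scores better on held-out data. The metric, scores, and function below are hypothetical, shown only to make the promotion logic concrete.

```python
def promote_if_better(champion_score, challenger_score, margin=0.0):
    """Promote the retrained model only when it beats the production
    model on held-out data, so quality never drops after a retrain."""
    return challenger_score > champion_score + margin

# Hypothetical weekly retrain: keep the champion unless the challenger wins.
champion_f1, challenger_f1 = 0.91, 0.93
if promote_if_better(champion_f1, challenger_f1):
    champion_f1 = challenger_f1
print(champion_f1)  # 0.93
```

An optional `margin` guards against promoting a challenger whose apparent gain is within evaluation noise.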

Since we had control over the model’s parameters, we were able to ensure that the internal annotation guidelines were uniformly applied across all images.

 Want to work with us?