As fashion resets, its algorithms should too

Posted: June 24, 2020 at 6:24 am

In 2018, futurist and academic Karinna Nobbs worked with a major cosmetics brand on an augmented reality try-on tool. During user testing, Nobbs noticed that parts of the technology worked more effectively on white and Asian faces. For darker skin tones and older users, the tool struggled to track the face and place content on it accurately, and lipstick would wobble on the lips.

The brand solved the problem by training the algorithm to recognise more types of people, which also enabled the tool to better calibrate colour cosmetics. The experience demonstrated the importance of inclusivity in building an effective product, Nobbs says. It also showed that artificial intelligence and algorithms can be flawed.

Just like the humans that design it, AI can have bias. In fashion and beauty, this might manifest as online searches showing only certain types of models, or an image-matching engine mistaking legs for dark jeans. It might mean missing out on an entire customer segment or reinforcing harmful stereotypes. So as brands undergo a reckoning to be more inclusive and diverse, their data and algorithms are due for a closer look as well.

Brands have come under fire for using creative that is offensive and discriminatory. "If your AI has seen that in the data set and thinks it's acceptable, it will most likely reproduce some of that offensive design," says Ashwini Asokan, CEO of retail automation platform Vue.ai, which works with Thredup and Zilingo. "This is why there is no one-size-fits-all in AI. AI must adapt to your business, your goals, your aspirations as a company."

AI and machine learning let machines perform human-like tasks: reading text, seeing images, making decisions. They do this through algorithms, or sets of rules, that are applied to data. For example, if you supply an algorithm with many images of people wearing red dresses, it could eventually learn to identify red dresses.
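As a rough illustration of that learning process, the sketch below trains a simple classifier on labelled examples. The random feature vectors are hypothetical stand-ins for real image embeddings, and none of it reflects any particular vendor's system.

```python
# A minimal, hypothetical sketch of the idea above: an algorithm "learns"
# to identify red dresses only from the examples it is shown.
# The feature vectors here are random stand-ins for real image embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each image is summarised as a 64-dimensional embedding.
red_dress_images = rng.normal(loc=1.0, size=(200, 64))   # label 1
other_images = rng.normal(loc=-1.0, size=(200, 64))      # label 0

X = np.vstack([red_dress_images, other_images])
y = np.array([1] * 200 + [0] * 200)

# The "algorithm" is just a set of rules fitted to the data it was given.
model = LogisticRegression(max_iter=1000).fit(X, y)

new_image = rng.normal(loc=1.0, size=(1, 64))
print("red dress?", bool(model.predict(new_image)[0]))
```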

"The AI is only as good as the data it learns from," Asokan says. This can be a problem if a brand uses limited data, whether it's restricted to a specific skin colour or body type. If every image of the red dress was on a tall, white model, for example, the algorithm might only work within those limitations, meaning that a petite or Black person may never be recommended clothes that are relevant to them.
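The same toy setup can illustrate Asokan's point: if every "red dress" example in the training data is photographed on one narrow group of models, the classifier can latch onto who is wearing the garment rather than the garment itself, and fail on everyone else. The data and features below are synthetic and purely illustrative.

```python
# A hypothetical sketch of the failure mode described above: a classifier
# trained on a narrow data set degrades for people it has never seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def embed(colour, model_signal, n):
    """Fake image embedding: a noisy colour signal plus a clean signal for
    who is wearing the garment."""
    return np.column_stack([
        rng.normal(colour, 1.0, n),        # noisy "is it a red dress?" signal
        rng.normal(model_signal, 0.3, n),  # clean "who is wearing it?" signal
    ])

# Training set: every red dress is photographed on the same narrow group of
# models (model_signal = 1), everything else on a different group (= -1).
X_train = np.vstack([embed(1.0, 1.0, 300), embed(0.0, -1.0, 300)])
y_train = np.array([1] * 300 + [0] * 300)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test: the same red dresses, now worn by people absent from the training data.
X_test = embed(1.0, -1.0, 300)
print("share of red dresses still recognised:", clf.predict(X_test).mean())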


This type of bias is evident even in a Google image search for "dresses", says Yael Vizel, co-founder of Zeekit, a computer vision startup that works with Asos and Bloomingdale's to digitally dress diverse models. She points out that most of the Google results show models that look similar, even if the brand offers items in a range of sizes.

"If you ask Google what the average customer looks like, one out of 25 is plus-size or has dark skin, which is not reality," Vizel says. "When a machine scans a catalogue, the system is biased from the beginning because of the way brands present their products. These are the visuals that represent the data set of the internet. It's our responsibility, as leaders of companies, to pay attention to the fact that we are biased."

Zeekit has a library of models that it can dress using a single product image, which is how Asos is able to show a diverse range of models wearing the same garment. Because brands still end up making a biased selection, Zeekit has a Diversity Matrix that charts representation of body type and skin tone.
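A diversity audit of this kind can be as simple as cross-tabulating catalogue imagery by the attributes a brand cares about. The sketch below is hypothetical; the column names and categories are illustrative and do not reflect Zeekit's actual Diversity Matrix.

```python
# A hypothetical "diversity matrix" style audit, assuming a brand keeps simple
# metadata about which model appears in each product image.
import pandas as pd

images = pd.DataFrame({
    "body_type": ["straight", "straight", "plus", "straight", "petite", "plus"],
    "skin_tone": ["light", "light", "dark", "medium", "light", "dark"],
})

# Cross-tabulate how often each combination appears across the catalogue,
# so gaps in representation are visible at a glance.
matrix = pd.crosstab(images["body_type"], images["skin_tone"], normalize="all")
print(matrix.round(2))
```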

Bias in search is a challenge because user behaviour often reinforces biases, says Jill Blackmore Evans, community and editorial manager of stock photography library Pexels, which has intentionally created algorithms that generate diverse results. To combat this, Evans recommends that brands use human curators to review and adjust any automated content.

Otherwise, for example, computer vision may not accurately recognise gender in an image, and if searches for "love" only returned straight couples, this would reinforce the notion that only heterosexual couples are normal, Evans says. "Companies can be part of the push for change by diversifying the creators they work with and increasing the diversity of people depicted in the imagery they use."

"Avoiding the explicit use of sensitive attributes, such as gender or skin tone in a content image, is known as fairness through unawareness," says Nadia Fawaz, the technical lead for fairness in AI at Pinterest, adding that this approach may not be sufficient because it ignores implicit correlations. The platform is investigating several approaches to improving the relevance and diversity of its pins, in addition to letting users customise their beauty searches by skin tone range. "If the data we input is not diverse, the AI models may learn implicit biases and serve biased results, leading to the collection of more biased training data and the creation of a biased feedback loop," Fawaz says.
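A small, synthetic example shows why unawareness alone may not be enough: dropping the sensitive column does nothing about other features that correlate with it. Everything below is illustrative, not Pinterest's implementation.

```python
# A hypothetical sketch of why "fairness through unawareness" can fall short:
# removing the sensitive attribute does not remove features that track it.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5000

sensitive = rng.integers(0, 2, n)          # e.g. a protected attribute
proxy = sensitive + rng.normal(0, 0.3, n)  # a feature that correlates with it

df = pd.DataFrame({"sensitive": sensitive, "proxy": proxy})
unaware = df.drop(columns=["sensitive"])   # "unawareness": drop the attribute

# The remaining feature still carries most of the sensitive information.
print("correlation with dropped attribute:",
      round(np.corrcoef(df["sensitive"], unaware["proxy"])[0, 1], 2))
```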

It also means being smarter with the data. Data science firm Dstillery says brands often miss whole sets of customers because they look only at the dominant signal, which typically reinforces stereotypes. Dstillery chief data scientist Melinda Han Williams cites a soccer-focused apparel client that knew it had customers from Spanish-speaking countries but missed that these fell into two culturally distinct generations. By looking more closely at the data, the team saw that one customer group was older and had moved to the US recently; another was younger and had grown up in the US.
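In practice, that kind of second look often means clustering customers on behavioural features rather than a single demographic label. The sketch below is a hypothetical illustration of the idea, not Dstillery's methodology.

```python
# A hypothetical sketch of looking past the "dominant signal": clustering the
# same audience on behavioural features can reveal distinct segments that one
# demographic label would hide. The data is synthetic and illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# One audience, two very different profiles (e.g. age, years in the US,
# share of Spanish-language content consumed) merged under a single label.
segment_a = rng.normal(loc=[45, 3, 0.2], scale=0.5, size=(300, 3))
segment_b = rng.normal(loc=[22, 18, 0.8], scale=0.5, size=(300, 3))
customers = np.vstack([segment_a, segment_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print("customers per segment:", np.bincount(labels))
```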

Pexels updated its algorithm so that photos representing all races, genders and identities populate tags such as "couple" or "holding hands".


Racism plays out in how marketing departments target customers within certain demographics, says Jessica Graves, founder and chief data officer of Sefleuria, who advises fashion and luxury companies like Outlier and Fortune 500 brands on using algorithms. One brand, for example, might ask to target Black Gen Z customers because of assumptions they make about that demographic, like that they prefer a certain style of clothing, even if race is not a relevant data point. Or a brand might use location to distribute discount codes, giving different amounts to certain neighbourhoods, which happen to be communities of colour, even though that might not correlate with purchasing behaviour.

Instead, she advises that brands focus on customer behaviour, using algorithms that adapt to how people shop on a brand's website: what they click on, what they never buy and which search terms they use. "It's way more effective. You make so much more money if you do this," she says. "Marketing based on demographics should just stop unless you can justify that customers want this incorporated."
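A behaviour-first approach can be as simple as scoring items against a shopper's own clicks and searches, with no demographic inputs at all. The sketch below is a toy illustration of that idea, not a production recommender.

```python
# A hypothetical sketch of ranking on behaviour rather than demographics:
# recommend items similar to what a shopper actually clicked and searched for.
# The catalogue and tags are illustrative.
from collections import Counter

catalogue = {
    "slip dress": {"dress", "satin", "midi"},
    "utility jacket": {"jacket", "cotton", "oversized"},
    "satin midi skirt": {"skirt", "satin", "midi"},
}

clicked = ["slip dress"]
searched_terms = {"satin", "midi"}

# Build a profile purely from the shopper's own signals.
signals = Counter(searched_terms)
for item in clicked:
    signals.update(catalogue[item])

# Score each unseen item by how strongly its tags overlap with that profile.
scores = {
    item: sum(signals[tag] for tag in tags)
    for item, tags in catalogue.items()
    if item not in clicked
}
print(max(scores, key=scores.get))  # -> "satin midi skirt"
```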

Ultimately, technology cannot solve biased teams or products that do not serve certain demographics. Zena Digital Group founder and CEO Zena Hanna, who has worked with clients including Versace, Creed, Farfetch and the Fashion Institute of Technology, advises brands to go against their own biases and test strategies even if they think they won't work. Often, she says, a team might design and plan media only for people who look like them, and perpetuate that assumption in photo shoots, influencer selection or marketing copy. "A lot of people don't realise that they have segments not just in their minds, but also in the way they push [content] out, so ads are not going to be inclusive of who the actual audience is."

"It comes down to who is behind these systems and who is building these systems. If the diversity around the table who is testing and training them is all homogeneous? Good luck. You are almost guaranteeing that you will end up with a biased system," says Falon Fatemi, founder of AI service platform Node, which helps companies like venture capital firm Clearbanc generate predictions from data.

Of course, racist missteps from brands like Prada, Gucci and Dolce & Gabbana likely wouldn't have been prevented by AI, so the first step is diversifying teams, Hanna says. She also suggests using social listening algorithms to alert brands to any racist, transphobic, homophobic or sexist rhetoric about them online.
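A basic version of such an alert could be a keyword filter over brand mentions, though real social-listening tools rely on trained classifiers rather than word lists. The sketch below is purely illustrative.

```python
# A hypothetical, very simplified social-listening alert: flag brand mentions
# containing terms a team wants a human to review. Illustrative only.
FLAGGED_TERMS = {"racist", "transphobic", "homophobic", "sexist"}

def needs_review(mention: str) -> bool:
    words = set(mention.lower().split())
    return bool(words & FLAGGED_TERMS)

mentions = [
    "love the new campaign",
    "this ad reads as sexist to me",
]
alerts = [m for m in mentions if needs_review(m)]
print(alerts)
```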

She adds that diverse representation builds loyalty. "In this day and age, that is super important and diminishing quickly, so whatever brands can do on the tech side and on the human side ... is really important."

