How AI is driving powerful new Photoshop features and shaping Adobe's product strategy – The Next Web

Posted: November 1, 2021 at 6:40 am

This article is part of our series that explores the business of artificial intelligence.

Like every year, Adobe's Max 2021 event featured product reveals and other innovations happening at the world's leading computer graphics software company.

Among the most interesting aspects of the event is Adobe's continued integration of artificial intelligence into its products, an avenue the company has been exploring over the past few years.

Like many other companies, Adobe is leveraging deep learning to improve its applications and solidify its position in the video and image editing market. In turn, the use of AI is shaping Adobe's product strategy.

Sensei, Adobe's AI platform, is now integrated into all the products of its Creative Cloud suite. Among the features revealed at this year's conference is an auto-masking tool in Photoshop, which enables you to select an object simply by hovering your mouse over it. A similar feature automatically creates mask layers for all the objects it detects in a scene.

The auto-mask feature saves a lot of time, especially in images where objects have complex contours and colors and would be very difficult to select with classic tools.

Adobe has also improved Neural Filters, a feature it added to Photoshop last year. Neural Filters use machine learning to add enhancements to images. Many of the filters are applicable to portraits and images of people. For example, you can apply skin smoothing, transfer makeup from a source image to a target image, or change the expression of a subject in a photo.

Other Neural Filters make more general changes, such as colorizing black-and-white images or changing the background landscape.

The Max conference also unveiled some preview and upcoming technologies. For example, a new feature for Adobe's photo collection product called "in-between" takes two or more photos that were captured within a short interval of each other and creates a video by automatically generating the frames that fall in between the photos.

Another feature being developed is "on point," which helps you search Adobe's huge library of stock images by providing a reference pose. For example, if you provide it with a photo of a person sitting and reaching out their hand, the machine learning models will detect the pose of the person and find other photos where people are in similar positions.

AI features have been added to Lightroom, Premiere, and other Adobe products as well.

When you look at Adobe's AI features individually, none of them are groundbreaking. While Adobe did not provide any architectural or implementation details at the event, anyone who has been following AI research can immediately relate each of the features presented at Max to one or more papers and presentations made at machine learning and computer vision conferences in the past few years. Auto-masking uses object detection and segmentation with deep learning, an area of research that has seen tremendous progress recently.
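Conceptually, features like "mask all objects" boil down to turning a segmentation model's per-pixel output into one selection mask per detected object. Here is a minimal sketch of that last step in NumPy; the label map is hard-coded for illustration (Adobe has not disclosed its pipeline, and in practice the labels would come from an instance segmentation model such as Mask R-CNN):

```python
import numpy as np

# Hypothetical per-pixel label map, as an instance segmentation model
# might produce: 0 = background, 1 and 2 = two detected objects.
label_map = np.array([
    [0, 0, 1, 1],
    [0, 2, 1, 1],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
])

# One binary mask layer per detected object -- roughly what Photoshop's
# auto-masking exposes to the user as separate mask layers.
mask_layers = {
    obj_id: (label_map == obj_id)
    for obj_id in np.unique(label_map)
    if obj_id != 0  # skip the background
}

print(sorted(mask_layers))    # [1, 2]
print(mask_layers[1].sum())   # 4 pixels selected for object 1
```

The hard part, of course, is producing an accurate label map in the first place; converting it into editable mask layers, as above, is straightforward.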

Style transfer with neural networks is a technique that is at least four years old. And generative adversarial networks (GANs), which power several of the image generation features, have been around for more than seven years. In fact, a lot of the technologies Adobe is using are open source and freely available.

The real genius behind Adobe's AI is not superior technology, but the company's strategy for delivering the products to its customers.

A successful product needs to have a differentiating value that convinces users to start using it or switch from their old solutions to the new application.

The benefits of applying deep learning to different image processing applications are very clear. They result in improved productivity and lower costs. The assistance provided by deep learning models can help lower the barrier to artistic creativity for people who don't have the skills and experience of expert graphic designers. In the case of auto-masking and neural filters, the tools make it possible even for experienced users to solve their problems faster and better. Some of the new features, such as the in-between feature, address problems that other applications had not solved.

But beyond superior features, a successful product needs to be delivered to its target audience in a way that is frictionless and cost-effective. For example, say you develop a state-of-the-art, deep-learning-powered neural filter application and want to sell it on the market. Your target users are graphic designers who are already using a photo-editing tool such as Photoshop. If they want to apply your neural filter, they'll have to constantly port their images between Photoshop and your application, which causes too much friction and degrades the user experience.

You'll also have to deal with the costs of deep learning. Many user devices don't have the memory and processing capacity to run neural networks and require cloud-based processing. Therefore, you'll have to set up servers and web APIs to serve the deep learning models, and you'll also have to make sure your service remains online and available as usage scales. You only recoup such costs when you reach a large number of paying users.

You'll also have to figure out how to monetize your product in a way that covers your costs while also keeping users interested in using it. Will your product be an ad-supported free product, a freemium model, a one-time payment, or a subscription service? Most clients prefer to avoid working with several software vendors that have different payment models.

And you'll need an outreach strategy to make your product visible to its intended market. Will you run ads on social media, make direct sales to design companies, or use content marketing? Many products fail not because they don't solve a core problem but because they can't reach the right market and deliver their product in a cost-efficient manner.

And finally, you'll need a roadmap to continuously iterate on and improve your product. For example, if you're using machine learning to enhance images, you'll need a workflow to constantly gather new data, find out where your models are failing, and fine-tune them to improve their performance.

Adobe already has a very large share of the graphics software market. Millions of people use Adobe's applications every day, so the company has no problem reaching its intended market. Whenever it has a new deep learning tool, it can immediately use the vast reach of Photoshop, Premiere, and the other applications in its Creative Cloud suite to make it visible and available to users. Users don't need to pay for or install any new applications; they just need to download the new plugins into their applications.

The company's gradual transition to the cloud in the past few years has also paved the way for a seamless integration of deep learning into its applications. Most of Adobe's AI features run in the cloud. To its users, the experience of the cloud-based features is no different from using filters and tools that run directly on their own devices. Meanwhile, the scale of Adobe's cloud makes it possible for the company to run deep learning inference in a very cost-effective way, which is why most new AI features are made available for free to users who already have a Creative Cloud subscription.

Finally, the cloud-based deep learning model provides Adobe with the opportunity to run a very efficient AI factory. As Adobe's cloud serves deep learning models to its users, it will also gather data to improve the performance of its AI features in the future. For example, the company acknowledged at the Max conference that the auto-masking feature does not work for all objects yet but will improve over time. The continued iteration will in turn enable Adobe to enhance its AI capabilities and strengthen its position in the market. The AI in turn will shape the products Adobe will roll out in the future.

Running applied machine learning projects is very difficult, which is why most companies fail to bring them to fruition. Adobe is an interesting case study of how bringing together the right elements can turn advances in AI into profitable business applications.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
