Adobe is working on a camera app designed to take your smartphone photography to the next level.
Within the next two years, the company plans to release an app that combines the computing prowess of modern phones with the creative controls serious photographers often desire, said Marc Levoy, who joined Adobe two years ago as a vice president to lead the effort.
Levoy has impeccable credentials: He was previously a Stanford University researcher who coined the term “computational photography” and helped lead Google’s respected Pixel camera app team.
“What I did at Google was democratize good photography,” Levoy said in an exclusive interview. “What I’d like to do at Adobe is democratize creative photography, where there’s more of a conversation between the photographer and the camera.”
If successful, the app could extend the smartphone photography revolution beyond the mainstream capabilities that companies like Apple, Google and Samsung are targeting. Computational photography has worked wonders in improving the image quality of small, physically limited smartphone cameras. And it has unlocked features like panorama stitching, portrait mode for blurring backgrounds and night modes for better shots in low light.
Camera app ‘dialogue’ with the photographer
Adobe isn’t making an app for everyone, but rather for people willing to put in a little more effort up front to get the photo they want. That should suit the enthusiasts and professionals who are often already customers of Adobe’s Photoshop and Lightroom photography software. Such photographers have more experience playing with traditional camera settings such as autofocus, shutter speed, color, focal length and aperture.
Several camera apps, such as Open Camera for Android and Halide for iPhones, offer manual controls similar to those found on traditional cameras. Adobe itself offers some of these controls in the camera built into its Lightroom mobile app. But with its new camera app, Adobe is going in a different direction: more of a “dialogue” between the photographer and the camera app while snapping a photo, to help you get the shot you want.
Adobe is targeting “photographers who want to think a little bit more carefully about the photo they’re taking and who are willing to interact with the camera a little more while they’re taking it,” Levoy said. “That just opens up a lot of possibilities. That’s something I’ve always wanted to do and something I can do at Adobe.”
By contrast, Google and its smartphone competitors don’t want to confuse their more mainstream audiences. “Every time I suggested a feature that would require more than a single button press, they said, ‘Let’s focus on the consumer and the single button press,’” Levoy said.
Adobe camera app features and ideas
Levoy won’t be pinned down on his app’s features just yet, though he did say Adobe is working on a feature to remove distracting reflections from photos taken through windows. Adobe’s approach adds new artificial intelligence methods to the challenge, he said.
“I’d like to remove specular reflections,” Levoy said. “I’d like to ship that, because it ruins a lot of my photos.”
But there are plenty of areas where Levoy expects improvements:
- “Relighting” an image to fix problems such as harsh shadows on faces. The iPhone’s lidar sensor, or other ways of creating a 3D “depth map” of the scene, can help inform the app’s lighting decisions.
- A new approach to “super resolution,” the computational generation of new pixels to deliver higher-resolution, more detailed photos when zooming digitally. Google’s Super Res Zoom combines multiple shots to do this, as does Adobe’s AI-based image enhancement tool, but the multiframe and AI approaches can be merged, Levoy said. “Adobe is working to improve it, and I’m working with the people who wrote that,” he said.
- Merging multiple photos into one digital montage that keeps the best elements of each, such as making sure everyone is smiling and no one is blinking in a group shot. It’s a tough technology to run reliably: “Google launched it in Google Photos a long time ago. We un-launched it, of course, after people started posting all kinds of horrible creations,” Levoy said.
- Computational video. The same tricks now common with still photos have “barely scratched” the surface of what’s possible with video, Levoy said. For example, he’d like to see an equivalent of the Google Pixel’s Magic Eraser feature for removing distractions from videos. Video is only becoming more important, as the rise of TikTok illustrates, he said.
- Photos that adapt to the screens where people view them. People naturally prefer more contrast and richer colors when viewing photos on small phone screens, but the same photo can look gaudy on a laptop or TV. Adobe’s DNG file format could let viewers dial such adjustments up or down to suit the display, Levoy said.
- A mix of real images and synthetic ones, like those generated by OpenAI’s DALL-E AI system, a technology Levoy calls “great.” Adobe has a strong interest in creativity, and AI-generated images can be conjured up not only from text but also from your own photos, he said.
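For the curious, the multiframe idea behind super resolution rests on a simple statistical fact: several noisy captures of the same scene, once aligned, can be averaged into a cleaner estimate than any single frame provides. The toy Python sketch below illustrates only that core intuition on a 1D signal; it is not Adobe’s or Google’s actual pipeline, and the scene, noise level and frame count are all made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

# A smooth 1D signal stands in for a real scene.
scene = np.sin(np.linspace(0, 4 * np.pi, 256))

# Sixteen "captures" of the same scene, each corrupted by sensor noise.
frames = [scene + rng.normal(0.0, 0.2, scene.shape) for _ in range(16)]

single = frames[0]                 # what one shot gives you
merged = np.mean(frames, axis=0)   # simple aligned-frame average

# Root-mean-square error against the true scene, before and after merging.
err_single = float(np.sqrt(np.mean((single - scene) ** 2)))
err_merged = float(np.sqrt(np.mean((merged - scene) ** 2)))
print(f"RMS error, one frame:      {err_single:.3f}")
print(f"RMS error, 16-frame merge: {err_merged:.3f}")
```

Averaging N frames cuts the noise standard deviation by roughly a factor of the square root of N, which is why the 16-frame merge lands at about a quarter of the single-frame error. Real burst pipelines add the hard parts this sketch skips, chiefly aligning handheld frames and exploiting sub-pixel shifts to recover detail beyond the sensor grid.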
Professional photographers can be picky
Adobe’s success is not guaranteed. A more discriminating market of serious photographers will be less forgiving of the computational errors that can occur when, for example, merging multiple frames into one or artificially blurring backgrounds.
At the same time, the mainstream camera apps that come with phones have been steadily improving, adding features such as computational raw image formats for greater editing flexibility. And Adobe doesn’t get as deep access to camera hardware as phone makers do, which adds performance challenges.
But Levoy is clearly fascinated by what computational photography can bring.
“It just gets exciting,” Levoy said. “We are far from the end of this road.”