The Samsung Galaxy S9 camera lets in 28 per cent more light than the S8 (pictured)
For the world’s largest mobile industry trade show, Mobile World Congress 2018 didn’t contain much in the way of new smartphone announcements. Huawei and HTC both held off on announcing any new phones, while LG opted to reveal the most minor of upgrades to its six-months-old V30 flagship.
By a wide margin, the biggest launches of MWC 2018 were the Samsung Galaxy S9 and S9+, but even they didn’t offer much that we hadn’t already seen in the Galaxy S8. The S9 has exactly the same 5.8-inch Quad HD Super AMOLED screen and 3,000mAh battery as last year’s phone. The processor, too, received only the slightest of upgrades, while the design changes amounted to little more than shifting the fingerprint scanner half an inch to sit beneath the camera.
Back in 2016, LG and Google’s experiments with modular phones hinted that the future lay in more than just shaving off precious millimetres of bezel, but poor sales of the LG G5 and shifts in Google’s hardware strategy soon put an end to that dream. Instead, the industry has coalesced around a familiar idea of the perfect smartphone design: big screens and small bezels. As Rick Osterloh, senior vice president of hardware at Google, said in October, "the playing field for hardware components is levelling off". Manufacturers are racing to achieve parity with each other rather than going out on a limb to introduce genuinely exciting features.
One of the few areas where innovation isn't slowing down is the smartphone camera. With the Galaxy S9, Samsung may finally have cracked the formula for detailed low-light photography. By combining two aperture settings in a single lens, the S9’s 12-megapixel dual-pixel sensor can soak up more light when conditions are dim. The camera also takes 12 separate frames with every press of the shutter button, automatically combining them in post-processing to reduce image noise by 30 per cent compared to the S8.
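The arithmetic behind both claims can be sketched in a few lines. Light gathered scales with the square of the ratio of f-numbers, and averaging N independent frames of the same scene shrinks random noise by roughly the square root of N. The f-numbers below (f/1.7 for the S8, f/1.5 for the S9's wide setting) are assumed for illustration, and the noise model is a deliberately simplified toy, not Samsung's actual pipeline:

```python
import random

# Light gathered scales with the square of the aperture ratio.
# Assumed f-numbers: S8 at f/1.7, S9's wide setting at f/1.5.
def light_gain(f_old, f_new):
    return (f_old / f_new) ** 2

print(f"S9 vs S8 light gain: {light_gain(1.7, 1.5):.0%}")  # ~128%, i.e. ~28% more light

# Toy multi-frame noise reduction: average N noisy readings of one pixel.
def denoise_pixel(true_value, noise_sigma, n_frames, rng):
    frames = [true_value + rng.gauss(0, noise_sigma) for _ in range(n_frames)]
    return sum(frames) / len(frames)

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

rng = random.Random(42)
single = [denoise_pixel(100.0, 10.0, 1, rng) for _ in range(5000)]
burst = [denoise_pixel(100.0, 10.0, 12, rng) for _ in range(5000)]

print(f"single-frame noise: {std(single):.1f}")
print(f"12-frame noise:     {std(burst):.1f}")  # roughly 10 / sqrt(12)
```

The same sqrt-of-N logic is why burst photography, rather than ever-bigger sensors, has become the industry's main route to cleaner low-light shots.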
And the code behind cameras is becoming just as important as the lenses and sensors themselves. In February 2018, Google opened up the Pixel Visual Core chipset in its Pixel 2 flagship to third-party apps such as Snapchat, WhatsApp, Instagram and Facebook. Before then, photos taken through those apps looked worse because they couldn’t access the extra post-processing heft of Google’s dedicated eight-core visual chipset. Now that it’s switched on, the chipset enables HDR+ and improves zoom quality in any app connected via the Visual Core API.
But the benefits of better software go far beyond sharper photos. The Google Translate app lets people live-translate foreign-language text through the smartphone camera, a feature that Samsung has now incorporated into Bixby Vision – the South Korean company’s attempt at bringing more AI into its camera app. As well as live camera translation, Bixby Vision uses image recognition to detect a plate of food and estimate how many calories it contains. While this combination of software and hardware is still in its infancy, it hints at what our cameras might be able to do once seriously powerful software capabilities are put behind them.
Augmented reality is another area of camera-centric smartphone innovation that is still in its early stages. Taking its lead from Apple, which used AR to let anyone become the poop emoji, the S9 now lets you turn yourself into a slightly terrifying animated emoji. Samsung says this is all part of building software “for the way we communicate today”, but really this slightly strange feature is a way of demonstrating the company’s growing AR skills.
On Monday, Google officially launched ARCore, allowing Android developers to publish AR apps to the Google Play Store for the first time – a signal to Apple that Google, too, expects AR to become a big part of how we interact with our phones.
While the shape and feel of our devices haven’t changed much over the last half-decade, the growing role of artificial intelligence and augmented reality in our phones means that manufacturers can’t afford to get lazy when it comes to new camera technology. If AR does become as big as the likes of Apple and Google expect, no manufacturer will want to lag behind in having the cameras, and the software, capable of exploiting it.
MWC 2018 might have been a dud for new smartphone launches, but the new camera wars are only just getting started.