The iPhone XS still has a 12-megapixel sensor. Or rather, two of them: a telephoto and a wide-angle. Yet the flagship's photographic capabilities far exceed those of the iPhone X.
The developers of the Halide app set out to understand how the iPhone XS and XS Max cameras work, and immediately noticed a significant difference in how they operate.
The point is that when designing the flagship's cameras, Apple relied on software. Developers now concentrate not on hardware but on software, opening the “era of computational photography.”
In automatic enhancement of selfies, the algorithm works to minimize the noise level. At the same time, merging several exposures reduces sharpness and flattens the sharp contrasts between light and shadow.
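The noise reduction described above can be sketched with a toy example. This is not Apple's actual pipeline, just a minimal illustration of the principle: averaging several noisy exposures of the same scene suppresses random sensor noise, at the cost of some detail.

```python
import random

def merge_exposures(frames):
    """Average per-pixel values across a burst of frames (toy merge)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def rms_error(img, reference):
    """Root-mean-square deviation of an image from the true scene."""
    return (sum((a - b) ** 2 for a, b in zip(img, reference)) / len(img)) ** 0.5

random.seed(0)
true_scene = [100.0] * 1000  # a flat gray patch, 1000 pixels
# simulate nine exposures, each with independent Gaussian sensor noise
frames = [[v + random.gauss(0, 10) for v in true_scene] for _ in range(9)]

merged = merge_exposures(frames)
# averaging nine frames cuts random noise by roughly sqrt(9) = 3x
print(rms_error(frames[0], true_scene), rms_error(merged, true_scene))
```

The trade-off the article mentions follows from the same averaging: any slight misalignment between frames blurs fine detail, which is why the merged result looks softer.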
The essence of this processing is to trick the brain. What has already been lost in image quality cannot be restored, but you can make the brain believe the photograph is fine. All it takes is playing with contrast.
Simply put, the iPhone XS combines exposures, reducing brightness in especially bright areas while lifting the shadows. All the original detail remains in the shot, but the human eye perceives such a picture as less sharp.
In fact, the iPhone XS applies more aggressive noise reduction than previous models, shooting at higher ISO values with faster shutter speeds.
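The ISO/shutter trade-off behind this can be shown with simple arithmetic. Assuming aperture is fixed and exposure is kept constant, halving the shutter time requires doubling the ISO, and higher ISO amplifies sensor noise along with the signal:

```python
def iso_for_same_exposure(base_iso, base_shutter, new_shutter):
    """Keep ISO * shutter_time constant (aperture fixed, toy model)."""
    return base_iso * base_shutter / new_shutter

# going from 1/30 s to 1/120 s needs four times the ISO
print(iso_for_same_exposure(100, 1 / 30, 1 / 120))  # 400.0
```

This is why the faster shutter speeds that make shots feel instantaneous also raise the noise level that Smart HDR then has to suppress.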
As a result, shots are captured almost instantly, but the noise level is very high. The Smart HDR feature was introduced precisely to combat these shortcomings. [ 9to5 ]