iPhone 11 Pro Max camera


Your camera is a computer, and your computer probably has a camera. Today we're going to talk about computational photography, where phones use software and processing to wring impressive images out of tiny smartphone sensors. Welcome to Upscaled, our show where we explain how your favorite tech works.

In our last episode we took on high dynamic range video, and I'm super glad that piece helped a lot of you understand what's going on with HDR video, especially the dreaded gamma curve. And yes, we didn't make that video in HDR, because if I learned anything from researching it, it's that correctly mastering HDR is a pain, and the last thing I wanted to do was screw up how all of those beautiful black and white gradients looked on your screens. One correction came in from a disconcertingly senior person at NBC: HLG, the high dynamic range standard designed for backwards compatibility with older SDR TVs, will only render properly on TVs that support the Rec. 2020 color space, which few current SDR sets do, and essentially none that are older than three or four years. Other TVs may get some pretty gnarly color shifts. Also, a new broadcast standard called ATSC 3.0 would allow for 4K HDR over-the-air, but it's been slow to deploy, and you'll probably need either a new TV or an external tuner to use it if it ever does become popular.

But enough HDR video. Today we're here to talk about computational photography. Technically, all digital photography is computational: the data coming off the sensor is just an array of voltage values, almost like a spreadsheet, and it takes software and processing to turn that into an image. So computational photography is a pretty broad term, but these days it's most often used to refer to any photography that uses algorithms or processing to produce an image beyond the capabilities of just the lens and the sensor by themselves. The new iPhone 11 and Pixel 4 in particular have some impressive processing tricks to yield superior photos, boosting dynamic range, capturing clear images in the dark, and enhancing resolution. One note: this is not a review or comparison of these phones. Feel free to sound off in the comments with your pick for the best smartphone camera, I'd be curious to hear what you think, but for now this is just about the tech.

Smartphones in particular have challenges when it comes to photography, namely their tiny, tiny sensors. You may have noticed the megapixel wars are pretty much over, and most flagship smartphone sensors now are 8 to 16 megapixels. There's a reason for this: one, 12 megapixels is actually enough for most uses, and two, the more pixels you cram onto a sensor, especially a tiny one, the smaller those pixels have to be. A smaller pixel, especially coupled with a tiny lens, captures less light. This means the camera ends up essentially building the scene from less information, and that can lead to a noisier image. Digital noise is the colored static or fuzz that we see in some photos, especially in low light. It's essentially the background electricity in the camera mistakenly getting recorded as image data. The stronger the signal the camera can record, i.e. the incoming light, the less of the noise we see in the final image. This is especially a problem in low light, where the camera has to increase the ISO to capture a bright image. ISO is a measure of gain: it amplifies the signal coming from the sensor, but it also ends up amplifying the noise. So if the signal isn't particularly strong, i.e. the picture was too dark, the noise will end up making the image fuzzy. It's like turning up the volume on an old recording: the song gets louder, but so does the hiss and crackle of the tape.
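If you want to see that gain idea in code, here's a tiny Python/NumPy sketch, nothing a phone actually runs, with made-up numbers. It simulates a dim patch of pixels plus some background noise, then multiplies everything by a gain factor the way raising ISO does. The picture gets brighter, but the signal-to-noise ratio doesn't budge, because the noise gets amplified right along with the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# a dim patch of the scene: few photons per pixel, plus background electronics noise
signal = 20.0          # mean photo-electrons per pixel (made-up number)
read_noise = 5.0       # std dev of the sensor's background noise (made-up number)
pixels = rng.poisson(signal, 100_000) + rng.normal(0.0, read_noise, 100_000)

def snr(x):
    return x.mean() / x.std()

gain = 8.0             # "raising the ISO": amplify everything that was recorded
brightened = pixels * gain

print(f"SNR before gain: {snr(pixels):.2f}")
print(f"SNR after gain:  {snr(brightened):.2f}")   # unchanged: the noise scaled too
```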
For a 12-megapixel smartphone sensor, the pixels typically have an area on the order of 2 square microns, while Sony's A7S II, generally considered one of the best low-light cameras around, has pixels of a whopping 70 square microns. Taking a photo of the same scene, the A7S II could in theory capture 35 times more light per pixel, giving it a huge advantage in signal-to-noise ratio. In reality, the larger sensor also means the Sony can use a much larger lens to deliver way more light to the sensor, versus the tiny lenses on smartphone cameras. Take all these pieces together and phone cameras just can't capture as much light, making noise a bigger issue and degrading their performance in the dark.

Larger sensors also tend to improve dynamic range, which we talked about in the last episode on HDR video. This is the difference between the brightest and the darkest parts of an image. At low ISO, i.e. with plenty of light, smartphones actually do pretty OK; they can still end up with noise in the shadows, though, if you try to brighten the dark parts of the image. At higher ISO, dynamic range typically goes way down and the sensor gets worse at handling contrast. This means parts of the scene will either end up totally black, or clip and go totally white, either way losing a lot of color and detail.

So here's where phones have a great trick: HDR photography. In a high-contrast scene, like a subject backlit against the sky, a bright lamp in a dark room, or my cat in the window, the camera takes multiple images and combines them into one improved photo. This trick of combining images has actually been around since film photography. Photographers would take a series of images at different exposures, from super dark all the way to super bright, and combine them, originally in the darkroom and these days in a program like Photoshop. In recent decades this has been driven by real estate and architectural photography, because by combining light and dark images you can show both an interior scene and the bright outdoors through the windows. Traditional HDR like this required the images to be nearly identical, or the composite would end up smeared or full of ghosts. Smartphones started using this same process a while ago, using software to automatically align multiple images taken at different exposures and correct for any hand shake while you're photographing. But this method does have problems in low light: the long exposure times required can make for blurry images, and the exposures do need to vary enough to capture all the detail, from the brightest parts of the scene to the darkest. If you want to see how this works, the Open Camera app will let you save not only the final HDR image but all of the images that were taken and combined in the process.

The big change here came with Google's HDR+ mode, released in 2014. HDR+ still merges multiple images, but the big change is that it doesn't vary exposure time, and it tries to keep the shutter speed quick enough to eliminate motion blur. The camera exposes for the brightest part of a scene, taking a series of images that in other circumstances would be far too dark. But here's where the magic comes in: normally, boosting the exposure of those dark parts of the image would increase the noise, but because most noise is fairly random, by taking a bunch of images and averaging them together you end up averaging out the noise and vastly reducing it.
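Here's a minimal NumPy sketch of that averaging idea. It isn't Google's actual merge, which aligns and blends real image tiles; it just shows the statistics: average a burst of identically exposed noisy frames and the random noise drops by roughly the square root of the number of frames.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 20.0      # the "real" brightness of a dark patch (arbitrary units)
noise_std = 6.0        # per-frame random noise
n_frames = 9           # a typical HDR+ burst length

# a burst of identically exposed frames, each with its own random noise
burst = true_value + rng.normal(0.0, noise_std, size=(n_frames, 100_000))

single = burst[0]              # what one frame looks like
merged = burst.mean(axis=0)    # what the averaged burst looks like

print(f"single-frame noise: {single.std():.2f}")
print(f"merged noise:       {merged.std():.2f}")   # roughly noise_std / sqrt(n_frames)
```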
It should be said again: this is not a new trick, but the magic is making it happen seamlessly and automatically. Other companies have adopted the same technique as well, but just because it's become common, that shouldn't downplay how impressive it actually is. To do all this, a phone has to set its exposure for the scene, capture a burst of images, align them together, discard any blurry frames or ones that can't be matched, identify the parts of the scene that are too dark, boost the exposure in those regions, and average together the burst of images to reduce noise. You could do all of this in Photoshop with a normal camera, but your phone does it automatically in about a second. To speed things up, the Pixel and other phones now start capturing images the second you open the camera app, and when you hit the shutter button, the previous 9 to 15 images are saved and merged together into the final photo you see. So in a way, the big advancement here isn't traditional HDR with multiple exposures from dark to bright; it's using multiple images to improve the dark parts of a scene.

And it's not a stretch to see how we get from HDR+ to Night Sight on the Pixel. HDR+ won't use a shutter speed slower than one fifteenth of a second, no matter what, to keep the image sharp and eliminate motion blur. Night Sight sets the shutter speed using not only the scene brightness but also how much the scene is moving and how much the phone is shaking. Handheld, it'll go as slow as a third of a second; put it on a tripod with a perfectly still scene and it'll shoot as slow as one second per frame. These images are combined using similar tricks to boost exposure and average out the noise, and presto: a clean, sharp image in the dark. There's a bit more to the magic here, though. Google essentially built a custom machine learning algorithm to do white balance, trained on hand-corrected smartphone photos, which keeps these night pictures from having weird color shifts. Again, other companies have adopted these techniques too, and the results are usually impressive. This mode is more susceptible to ghosting, though, because any motion in the scene can end up smeared or distorted, but in the right circumstances it looks like magic.

One odd side effect of Night Sight that people noticed right away was that if you used it during the day, especially if the phone was steady, your photos sometimes looked way sharper than normal. In fact, even HDR+ produced surprisingly detailed photos. I remember comparing the Pixel 2 to the iPhone X and Galaxy S9+, and I was blown away by how detailed the Pixel's HDR shots were. It turns out these modes are getting a little boost to resolution from something called demosaicing. Here's what's going on: almost all camera sensors use what is called a Bayer filter. This is a screen over the sensor's pixels that only lets one color of light, red, green or blue, through to each pixel, in a grid pattern. Because each pixel can only see one color, after the image is captured the pixels are analyzed and averaged together to create the final true colors of the image. The algorithms that do this are seriously impressive, but still, some of the information has to be inferred, or, well, made up. Because of this, depending on the scene, your camera is actually only capturing around 50 to 75 percent of the resolution of the sensor.
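To make that concrete, here's a toy Python sketch of what a Bayer filter throws away. It assumes an RGGB layout (the most common pattern) and strips a full-color image down to what the sensor actually measures: one color value per photosite, with the other two thirds left for the demosaicing algorithm to reconstruct from neighboring pixels.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer filter: each photosite keeps only one color channel."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites
    return mosaic

rgb = np.random.default_rng(0).random((4, 4, 3))   # stand-in for a real image
raw = bayer_mosaic(rgb)

# the sensor recorded 16 values, but the full-color image needs 48;
# the missing 32 have to be interpolated (demosaiced) from neighbors
print(raw.size, "values measured vs", rgb.size, "values needed for full color")
```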
The thing is, if you could shift the sensor by just a pixel and capture multiple images, you could get the full red, green and blue values for every pixel and create a higher-resolution image. Some cameras, like the Panasonic S1R, can do this automatically by moving their sensor minutely, though only on a tripod. HDR+ and Night Sight already benefited from this to a degree, simply from the effect of stacking multiple images together, but new Super Res Zoom and super-resolution modes on some phones take advantage of this effect to increase sharpness and detail, even in digitally zoomed images. They use the natural motion of your hands to create the pixel shift, and will even use the phone's optical stabilizer if the phone is totally steady or on a tripod. Again, the challenge is properly merging the frames, and these techniques definitely work best on unmoving scenes.

We've been talking a lot about Google here, but Apple has really taken things to the next level with its Deep Fusion mode, which essentially combines everything we've been talking about. It shoots nine images: before you press the shutter button it has already captured four short exposures and four secondary images, and when you press the shutter button it takes one long exposure. Up until this point, most of these techniques have relied on traditional algorithms, but Apple says Deep Fusion goes all-in on AI. Machine learning algorithms analyze each part of the scene, using the fast shots to increase detail and sharpness in areas like fabric, foliage or architecture, and using the longer exposure to reduce noise and improve color in the shadows. The results can be impressive, but they're also sometimes kind of hard to see without zooming all the way in on the image. This may actually be a testament to how good the normal HDR shots are on the iPhone 11, but Deep Fusion didn't always seem like a giant improvement to me.

So why is this restricted to phones? Surely high-end cameras have some processing power too, right? Well, sort of. Most professional digital cameras do have ARM processors in them that aren't that different from smartphone processors, but they lean much more heavily on their ISPs, or image signal processors. These are relatively inflexible processors that are really good at doing one thing: turning sensor data into image files. Sony's A7 III can shoot 24-megapixel RAW photos at 10 frames per second with the help of a powerful ISP, but it's paired with a relatively puny four-core ARM Cortex-A5 processor. It was hard to even find comparisons here, because the A5 was designed a decade ago, but the chip in the Pixel 4 is conservatively 30 times faster than the one in that Sony camera. This isn't to say that smartphones don't have ISPs as well, they definitely do, but the point is they have a lot more general computing power available to them than the average pro camera. However, some companies are starting to bridge the gap: Fujifilm's just-announced X-Pro3 has an HDR mode that will combine three images into one shot, in camera, to boost dynamic range, and we expect to see more companies following suit in the near future.
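For that kind of bracketed, in-camera HDR merge, here's a heavily simplified Python sketch. It's not Fujifilm's algorithm or anyone else's shipping pipeline, just the textbook idea: weight each pixel by how well exposed it is, scale every frame back to a common scene brightness, and blend. The scene values and shutter speeds below are invented for illustration.

```python
import numpy as np

def merge_bracket(frames, exposure_times):
    """Toy HDR merge: trust well-exposed pixels, ignore clipped or crushed ones,
    and blend the frames after normalizing them to a common brightness."""
    frames = np.asarray(frames, dtype=np.float64)            # pixel values in 0..1
    times = np.asarray(exposure_times, dtype=np.float64)
    # triangle weighting: mid-tones get the most weight, clipped pixels almost none
    weights = np.clip(1.0 - 2.0 * np.abs(frames - 0.5), 1e-4, None)
    # undo each exposure so all frames describe the same scene radiance (up to a constant)
    radiance = frames / times[:, None, None]
    return (weights * radiance).sum(axis=0) / weights.sum(axis=0)

# a high-contrast scene: a dark room (0.02) next to a bright window (5.0)
scene = np.full((8, 8), 0.02)
scene[:, 4:] = 5.0

times = [1/500, 1/60, 1/8]                                 # bracketed shutter speeds
shots = [np.clip(scene * t * 60, 0.0, 1.0) for t in times]  # each shot clips somewhere
hdr = merge_bracket(shots, times)   # recovers both the dim room and the bright window
```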
This is just a bit of what can be considered computational photography. Technically, even things like instant panoramas are a type of computational photo, and we didn't even talk about portrait mode. Most of the techniques a phone pulls off here are doable in editing software, but the real innovation is having these processes happen instantly and seamlessly on the device itself. Software and processing will only get better, but so will the cameras: new nanomaterials could vastly improve the tiny lenses of smartphone cameras, curved sensors could increase light gathering and sharpness, and some prototype sensors made from graphene or organic compounds need far less light to function. On the totally weird side, researchers have even demonstrated lensless cameras that focus light with interference patterns, and so-called compressive imaging, which can capture an image with a single-pixel sensor. As someone who still has to carry around a big, heavy camera most days, I can't wait.

Let us know what you think in the comments. Do you like these new camera modes, or do they kind of feel like cheating? And what do you actually think about these phones? I've been an Android user since my first smartphone, but I've got to admit the iPhone 11 Pro is real, real nice. Anyone else feel the same way? Let us know, and be sure to tune in next time.
