
Why didn't my experiment let me see phase detection pixels?

Recommended Posts

Ever the curious tinkerer, I just tried an experiment. I put my Fuji 50 mm f/1.0 lens (minus the lens hood) on my X-T4, manually focused on infinity, held a barrier over the left half of the front of the lens right up against the hood mount ring, and took a RAW photo of a smoothly illuminated surface just a few inches away. Then I looked at the RAW image using RawDigger, selecting just the green channel.

What I expected to see was that some pixels would be darkened, because they were phase detection pixels aimed to the left. But I could not find any standout pixels, only a gradation from lighter on the right to darker on the left. I also tried a photo holding the barrier over the bottom half, in case they only aimed phase detection pixels up and down, and tried the other color channels too. I zoomed way in to easily see individual pixels, and panned around in both dimensions for a while. Nothing.

Why didn't this show me a matrix of special pixels? Do they smooth over these pixels in the camera before storing the RAW image? Did I not look hard enough? Do I misunderstand how phase detection pixels work (I'm pretty sure I don't)???



Link to post
Share on other sites

If you are able to split the image into separate RGB channels, your software was not really giving you a view into the actual raw data. It was giving you its own version of the secret sauce behind the Fujifilm magic decoder ring that produces the beautiful images we get, i.e. de-mosaiced data (de-mosaicing is the name given to translating Fujifilm X-Trans sensor data into RGB values, similar to de-Bayering for Bayer sensor data). That process most likely includes methods for dealing with the phase-detect and contrast-detect focus pixels.

The actual raw data is linear (gamma = 1) grayscale. The raw file's image data contains the information raw converters use to turn this into the color images we see.
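To illustrate what "linear, gamma = 1" means, here is a rough sketch in Python (the 14-bit depth and the bare power curve are illustrative assumptions, not Fujifilm's actual pipeline):

```python
# Hypothetical sketch: linear raw counts -> gamma-encoded 8-bit display values.
# A linear count at 50% of full scale lands well above middle gray once
# display gamma is applied -- which is why untreated raw data looks so dark.

MAX_RAW = 2**14 - 1  # assume a 14-bit sensor for illustration

def linear_to_display(raw_count, gamma=2.2):
    """Normalize a linear raw count and apply a simple display gamma curve."""
    normalized = raw_count / MAX_RAW        # 0.0 .. 1.0, still linear
    encoded = normalized ** (1.0 / gamma)   # gamma-encode for display
    return round(encoded * 255)             # scale to 8 bits

print(linear_to_display(MAX_RAW // 2))  # roughly 186, not 128
```

Real converters use tone curves far more involved than a single power function, but the linear-in, gamma-out shape is the core idea.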

Here is a recent image I posted in the Flowers thread over in the Nature section, to illustrate zooming into a linear (raw) image file. Keep in mind these are cropped, quick screen captures, no in-depth analysis or anything like that.

Processed jpeg file>

[image attachment, visible to registered members]

Linear image>


Software like RawTherapee may still give you the means to look at the linear data.

Edited by jerryy
Link to post
Share on other sites

I think I'm getting raw linear data. Here are the three color channels zoomed in; each raw photosite carries only one channel. All the values are integers.

[image attachments, visible to registered members]


I'm using this software:


Link to post
Share on other sites

This may help or not :)



The photosites record a certain amount of intensity; the filter array decides the "color". The camera records all of the associated information, and the raw converters / displayers / analyzers read that information and give us an image to work with. The very "rawest" data is grayscale, which then becomes raw color data when the filter information is applied to it, as @BobJ says.

Link to post
Share on other sites

I'm only referring to "colors" here in the sense of the primary color channels associated with the various filters in the array, not visual colors derived from a combination of those primaries.

With respect, software can give an image of just the pixels -- or perhaps better, just the photosites -- that have green filters in front of them, based on their locations, or just the ones associated with blue or red filters. Look at the photos I posted: that's what they are, and that's why so much of those photos is black, since photosites associated with the other colors carry no information in that channel and are plotted black. Look at the small table of statistics I posted; the only values that aren't integers are the averages and standard deviations, which are usually non-integers when you average a bunch of integers.

It would be a lot of work to dig through all the integers pixel by pixel searching for something special, even though the software lets you do exactly that if you want. I think it's most reasonable to look at a channel plotted as grayscale and hunt for brighter or dimmer spots, and the green channel is the best candidate because there are more green photosites and green matters most for focus as far as human eyes are concerned.
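To make that concrete, here is a minimal sketch of the kind of per-channel view I mean (plain Python, using an ordinary Bayer RGGB tile instead of Fujifilm's 6x6 X-Trans tile, purely to keep it short):

```python
# Hypothetical sketch: display one color channel of a mosaic, leaving
# photosites of the other channels black (zero). A Bayer RGGB 2x2 tile is
# used for brevity; X-Trans repeats a 6x6 tile but the idea is identical.

BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def channel_view(mosaic, channel):
    """Return a copy where photosites of other channels are zeroed (black)."""
    view = []
    for y, row in enumerate(mosaic):
        out_row = []
        for x, value in enumerate(row):
            color = BAYER_TILE[y % 2][x % 2]
            out_row.append(value if color == channel else 0)
        view.append(out_row)
    return view

mosaic = [[10, 20, 30, 40],
          [50, 60, 70, 80]]
print(channel_view(mosaic, "G"))  # [[0, 20, 0, 40], [50, 0, 70, 0]]
```

Half the sites in a Bayer tile are green, and the X-Trans tile has an even higher green fraction, which is another reason the green channel is the best place to hunt.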

It IS raw data I am digging around in here. That's why they named the software RawDigger.

Of course they're going to associate the names R, G and B with the separate raw grayscale values -- for example, your Cambridge cite says "This also explains why noise in the green channel is much less than for the other two primary colors (see 'Understanding Image Noise' for an example)." Being associated with one color or another is relevant for grayscale, individual pixels, histograms of raw values, etc. It doesn't mean the thing being described necessarily has a color per se.

What I'm hunting for here is individual picture elements ("pixels") or photosites that are brighter or dimmer because the tiny lens in front of that photosite is aimed at one part of the main lens rather than at the whole lens area, as most of the tiny lenses are. That's what phase detection picture elements or photosites are -- or at least that's my understanding; can anybody set me straight?


For anybody interested, here are a few descriptions gleaned from the rawdigger.com web site:

RawDigger is a tool that allows you to view, study, and analyze pure raw data as recorded by digital photo cameras and certain video cameras.
RawDigger is a microscope of sorts that lets you drill down into raw data both on Mac OS X and Windows.
RawDigger doesn’t alter the raw data in any way. RawDigger is not a raw convertor. Instead, it allows you to see the data that will be used by raw convertors.
In-depth Analysis of the Camera/Sensor:
Study of the white (fully over-exposed) and black frames to check the pixel variation
Calculating well depth
Visualize sensor stitches
Check the black level
Determining the geometry and analyzing the optically black frame
Check the accuracy of the histogram in RAW converters
Determine the amount of vignetting from the lens and from the sensor (digital vignetting), check for any skews in the lens mount and/or sensor mount
Dark Current
DSNU (Dark Signal Non-Uniformity)
PRNU (Photo Response Non-Uniformity)
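Several of those analyses come down to simple statistics over a dark frame (lens cap on). A toy sketch with invented values:

```python
# Hypothetical sketch: estimate the black level and the pixel-to-pixel
# variation from a dark frame. The counts below are invented.
from statistics import mean, pstdev

dark_frame = [1023, 1025, 1024, 1022, 1026, 1024, 1023, 1025]  # raw counts

black_level = mean(dark_frame)  # the offset the converter subtracts
spread = pstdev(dark_frame)     # variation across sites (noise / DSNU)

print(black_level, round(spread, 2))  # 1024.0 1.22
```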

Link to post
Share on other sites

For a description of the tiny lenses in general, and of phase detection pixels specifically, see this patent, especially Image 2, which shows Fig. 1 and Fig. 2: "FIG. 1 illustrates a cross-sectional view of some embodiments of a backside illumination (BSI) image sensor with a phase detection auto focus (PDAF) pixel comprising a microlens for good angular response discrimination. FIG. 2 illustrates a graph of some embodiments of angular response curves (ARCs) for PDAF and image capture pixels in FIG. 1."

One thing that may be tripping things up here is the terminology "pixel". Originally it meant "picture element", but it has taken on different specific meanings in color imaging systems, depending on whether one means a single photo-detection site; a minuscule region of the image assigned a color that might be derived from separate red-, green- and blue-filtered signals; or a minuscule region demosaiced from an array (and therefore inherently having explicit information about one primary color and information interpolated from the other two primaries nearby, or perhaps some more complicated modeled information that might, e.g., also accomplish antialiasing). The RawDigger folks, on their forum, prefer "photosite" for the tiny photosensitive region and "pixel" for the software construct, whereas Chou et al. use "pixel" for the tiny photosensitive region.

Link to post
Share on other sites

Here is an older article that adds a new term “sensel”.


In the article, the person from Fujifilm says that in addition to focusing, sometimes the photo-site/pixel/sensel is also used for image data, and sometimes it is ignored, which could be interpreted in different ways. If it is used for imaging, it would be difficult to distinguish from the surrounding photosites because it is doing the same thing they are. If it is being ignored, then there is the possibility that instead of leaving a black dot -- i.e. a cold pixel spot -- the software blends the surrounding pixels together and uses that for the spot, similar to how hot-pixel mapping is done.

Or something else entirely.
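The blending idea can be sketched like hot-pixel mapping: replace the flagged site with an average of its nearest same-channel neighbours. A simplified, hypothetical illustration (one row, neighbours left and right only):

```python
# Hypothetical sketch: conceal a flagged (PDAF or hot) photosite by
# interpolating from its two nearest same-channel neighbours.

def conceal(row, flagged_index):
    """Replace row[flagged_index] with the mean of its two neighbours."""
    left, right = row[flagged_index - 1], row[flagged_index + 1]
    patched = list(row)
    patched[flagged_index] = (left + right) // 2
    return patched

row = [100, 102, 7, 101, 99]   # index 2: a suspiciously dark PDAF site
print(conceal(row, 2))         # [100, 102, 101, 101, 99]
```

If the camera does something like this before writing the raw file, there would be nothing left for RawDigger to find.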

Perhaps try overlaying the selectable focus-point map onto your image and see if you can determine which are the PDAF sensels.

Link to post
Share on other sites

Even with RawDigger you are not seeing the output from the sensel. You are looking at the data after it has been converted into pixels. Each sensel has a photodiode which converts the amount of light it is exposed to into a voltage -- it is an analogue device. This voltage is amplified and applied to an analogue-to-digital converter in order to get a digital value for that pixel.

Two things come from this. Digital cameras are actually analogue at the start of the imaging chain, and it is not possible to see the actual photodiode output. RawDigger is looking at the output data -- the only thing it can look at. It is showing you the digital output from the converter. If there is a loss of value associated with a phase-detect sensel, this will most likely have been dealt with in the amplifier or during conversion. If that sensel is not being used for imaging at all, then yes, RawDigger should see it as missing.

Strangely, film is digital. Each grain of silver is either there or it is not; it is the number of grains that have been exposed that makes up the tonal value. Digital is analogue and film is digital!
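As a toy model of that chain (reference voltage and bit depth invented for illustration):

```python
# Hypothetical sketch of the end of the imaging chain: an analogue
# photodiode voltage is clipped to the ADC's range and quantized to an
# integer count. Only the integer ever reaches the raw file.

V_REF = 1.0   # full-scale ADC reference voltage (invented)
BITS = 12

def adc(voltage):
    """Clip an analogue voltage to full scale and quantize it."""
    v = min(max(voltage, 0.0), V_REF)
    return round(v / V_REF * (2**BITS - 1))

print(adc(0.5))      # 2048
print(adc(0.50001))  # 2048 -- tiny analogue differences vanish in quantization
```

Anything done to the voltage before this step, including any per-site gain tweak, is invisible in the file.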

Link to post
Share on other sites

I think the phase detection elements should show up -- that is, I suppose they are being used for imaging. If they are not used for imaging, they might be missing from the raw channels I'm examining, or they might still be there; either way, that should be visible.

But if they are shown one way or the other, holding a card in front of half the lens should make the phase detection pixels aimed at that half darker. That's the principle I'm trying to exploit.

Like it says in the article jerryy cited: "Fujifilm's system provides a seemingly simple solution - masking-off half of a sensel means it only receives light from one side of the lens. By creating strips of these sensels, half 'looking' one way, half 'looking' the other, the camera gets the two distinct images necessary for phase detection AF." Though "strips" doesn't sound right to me; I'd have thought they'd just be pairs, one pair at each focus point. Then again, they're talking about tens of thousands of these, not a few hundred (two for each focus point). Still trying to figure out what's up here....
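Here is the logic of my experiment as a toy model (all numbers invented): an ordinary photosite integrates light from the whole aperture, while a left-masked PDAF site sees only the left half, so covering the left half should darken it far more than its neighbours:

```python
# Hypothetical sketch of the half-aperture experiment.
# left_open / right_open: fraction of each aperture half left uncovered.

def ordinary(left_open, right_open):
    """Ordinary photosite: integrates light from both aperture halves."""
    return 0.5 * left_open + 0.5 * right_open

def pdaf_left(left_open, right_open):
    """Left-looking PDAF site: masked so only the left half contributes."""
    return 0.5 * left_open

print(ordinary(1.0, 1.0), pdaf_left(1.0, 1.0))  # 1.0 0.5 (uncovered lens)
print(ordinary(0.0, 1.0), pdaf_left(0.0, 1.0))  # 0.5 0.0 (card over left half)
```

Note the uncovered case: the masked site already reads only half the light, which is exactly the shortfall the camera would have to compensate for somewhere.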

Link to post
Share on other sites

It also depends on the camera: Fujifilm has changed the number of sensels and their locations in the X-T line as new generations of sensors are released, for example:


It would not be difficult to hide the masked photo-site being darker: just boost the amplification that @BobJ is explaining, then make allowances for any noise being introduced, and the resulting pixel will look the same as the surrounding ones. But finding out exactly how they do that may require reading their patent applications or white papers.
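A toy sketch of that compensation (gain and reference values invented): with a 2x analogue gain, the half-masked site's digital count matches its neighbours exactly, so the raw file shows nothing unusual:

```python
# Hypothetical sketch: a 2x analogue gain applied before the ADC hides the
# light lost to the PDAF mask. Values are invented for illustration.

V_REF = 1.0
BITS = 12

def digitize(voltage, gain=1.0):
    """Amplify, clip to the ADC reference, and quantize to an integer count."""
    v = min(max(voltage * gain, 0.0), V_REF)
    return round(v / V_REF * (2**BITS - 1))

normal_site = digitize(0.40)           # full-aperture photosite
pdaf_site = digitize(0.20, gain=2.0)   # half-masked site, compensated
print(normal_site, pdaf_site)          # identical counts
```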

Edited by jerryy
Link to post
Share on other sites
