Everything posted by bhu

  1. What uses are there for the three-megapixel gap between Sony's 43 Mpix and the X-H2's 40 Mpix? 1. Cropping headroom for lens quality? 2. An intellectual-property gap? 3. Mosaic (X-Trans) efficiency vs Bayer? 4. Adding 1.5 more Mpix of phase-detect elements or (O_O) time-of-flight array elements; pure fantasy, I know.
  2. 1. Integrate the "selfie" feature into the display's arm and limit its flexibility for people behind the camera or, 2. Create self-standing selfie monitors between 50 mm and 300 mm diagonal that plug into the video port, run off their own batteries and show all of the camera info and live image as if it were a viewfinder. People can use a remote or phone application to control the camera when facing it.
  3. Fan of X-Trans since P&S days. Are RGB or RGGB mosaics better than Fujifilm's GBGGRGRB? How many bits of dynamic range and how few fractions of arc-seconds of resolution are needed for the more common pixel mosaics versus the more complicated X-Trans? Moiré decreases with pixel size, but photopic luminance for the color channels and spatial frequency still matter. Debate, now, for your favorite mosaic!
  4. Looking over release dates and release order after today's X-E4 release and its specifications, I wondered if Fujifilm might be working on a new sensor and processor, what camera might get the next generation sensor/proc combination and thought an X-Pro4 might be in early development. Fall of 2021 might be too soon but I was curious. Thoughts?
  5. No, infrared cameras typically need a sensor without the red, green and blue color filters integrated into it. Infrared photography and black-and-white photography are not the same, and they often get confused, partly because so many inaccurate marketing materials intentionally mislead customers into thinking a consumer product is a real IR camera. A "fake" or pseudo-infrared camera is a standard sensor without the color filters plus a visible-light-blocking filter over the lens. That filter looks black to the eye because it blocks the visible colors but lets "heat" through. Typically, this is enough for consumer-grade equipment. Take off the black-looking visible-light filter and you get a black-and-white sensor that needs an IR-blocking filter, and you can now guess what other filters are needed on cameras designed for visible light. Visible-light sensors often have IR filters; then people buy UV filters to block frequencies above blue. However, some products out there advertise themselves as IR cameras when they are really only consumer-grade sensors that got neither the color filters nor an IR filter. That kind of sensor is a mess because it mixes visible light with high-frequency IR light into a misleading black-and-white image. This kind of camera is what I think of as "trash" used for cheap game-cameras that require an IR LED flash to work; IR LEDs are not that far from red and are relatively high frequency compared to the rest of the IR spectrum. A "real" infrared camera has a sensor specifically designed to receive wide-band or narrow-band infrared light, plus that same visible-light-blocking filter mentioned previously. These "camera" systems are more industrial, astronomy, military or aerospace grade and can be extremely expensive. Infrared light sits between the color red so deep that it looks dim (but really is not) and microwaves.
As the target IR wavelength gets longer (toward microwave), the sensor's pixel elements must grow to receive it, which either increases the sensor size or pushes the sensor to a lower resolution. Also, an IR camera's ability to see through dust, fog and other materials decreases the closer its target band is to visible red.
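One way to see why longer wavelengths force larger pixels is diffraction: the Airy disk (the smallest spot a lens can focus) scales with wavelength, so pixels much smaller than it gain no detail. A minimal sketch, using the standard Airy-disk formula; the wavelengths and f-number are illustrative values I chose, not figures from the post:

```python
def airy_disk_um(wavelength_um: float, f_number: float) -> float:
    """Diffraction-limited Airy disk diameter in microns: 2.44 * lambda * N.
    Pixels much smaller than this resolve no additional detail."""
    return 2.44 * wavelength_um * f_number

# Visible green light (0.55 um) vs long-wave IR (10 um), both at f/2.8:
visible_spot = airy_disk_um(0.55, 2.8)   # ~3.8 um -> small pixels work
lwir_spot = airy_disk_um(10.0, 2.8)      # ~68 um -> pixels must be huge
```

This is why long-wave thermal sensors end up physically large yet low-resolution compared to visible-light sensors of the same format.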
  6. Be careful, here. The human brain is 50% image processor. Low noise can be the reason your images do not pop, but film simulation can add graininess by introducing spatial and chromatic noise. Our brains pick up on this pointillism. Add temporal noise to the array for even more realism. Some things can be done to abnormally-perfect sensor scans to make them more life-like. That is what Fujifilm does with film simulation.
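The spatial-plus-chromatic-noise idea above can be sketched in a few lines: add a shared per-pixel luminance component (grain all channels see) and a smaller independent per-channel component (color speckle). This is my own toy illustration of the principle, not Fujifilm's actual film-simulation pipeline, and the sigma values are arbitrary:

```python
import numpy as np

def add_film_grain(img, luma_sigma=0.02, chroma_sigma=0.01, seed=None):
    """Add grain to a float RGB image in [0, 1].

    luma noise: one value per pixel, shared by R, G, B (spatial grain).
    chroma noise: independent per channel (chromatic speckle).
    """
    rng = np.random.default_rng(seed)
    luma = rng.normal(0.0, luma_sigma, img.shape[:2])[..., None]
    chroma = rng.normal(0.0, chroma_sigma, img.shape)
    return np.clip(img + luma + chroma, 0.0, 1.0)
```

Running it on successive video frames with a fresh seed each time would add the temporal noise mentioned above as well.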
  7. Stevie, from your post you may not have as much background in the physics or electronics behind modern photography as some others here. Autofocus may be (motion) contrast-detect or phase-detect. Contrast, or motion-contrast, autofocus examines the highest spatial (contrast) frequency, while phase-detect autofocus examines the spatial offset between two light rays at similar angles. (I am simplifying, somewhat.) A processor then computes and tracks a pattern across a large sensor area, designating it an "object." The processor may look for "eyes" and assign a "face" to an image. As for "selfie": this is a completely made-up term for a portrait with random garbage in the background. A selfie is a portrait. Treat it like one. Get a remote shutter button.
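The contrast-detect half of the explanation above can be sketched simply: step the lens through focus positions, score each captured frame by its high-frequency content, and keep the position with the highest score. A minimal sketch under those assumptions; `capture_at` is a hypothetical stand-in for "move the lens and grab a frame", not a real camera API:

```python
import numpy as np

def contrast_score(img):
    """Sum of squared horizontal gradients: peaks when the image is in focus,
    because defocus blur suppresses high spatial frequencies."""
    return float(np.sum(np.diff(img, axis=1) ** 2))

def contrast_detect_af(capture_at, positions):
    """Sweep the (hypothetical) lens through focus positions and return the
    position whose captured frame has the highest contrast score."""
    return max(positions, key=lambda p: contrast_score(capture_at(p)))
```

Phase-detect skips the sweep entirely: the offset between the two ray bundles tells the processor which direction and how far to move in one measurement, which is why it focuses faster.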
  8. Realistically, both the 50 and 56 will be stopped down a bit. For portraits, you should have enough light, so it comes down to the subject-to-backdrop distance difference (δl, i.e. bokeh). For night shots, sensor generation has been most important. Start with the least expensive option and swap if you need more?
  9. Fujifilm is not good at vlogging, IMO. I hope it gets there but I will not hold my breath. The phone application is crude and limited. If you need something for work, get what others recommend and go with that. Video and remote camera control are not "meta" for Fujifilm products, and they keep trailing the competition on this ever-more-critical tech. Honestly, I am following light-field imaging for future photo/video tech, and Fujifilm does not figure in it anywhere. That is how big I feel the gap is. Fujifilm's phone applications are way behind the technology curve. It does not matter what generation Fujifilm camera you have; the apps are not good.
  10. Why can I not connect to Kinko's or another national print chain directly from my camera through my phone and push a print button on my X-T? The print could go to a queue at a location of my choice for pickup of the drafts later depending on my data connection. It seems like an easy and profitable partnership between a Fujifilm cloud service and printing companies.
  11. It is the internal orientation sensor(s); I am not sure how many there are, if there is more than one. Treat the camera like a phone and watch how you handle it. Your camera's sensor may be slightly off in how it calibrates orientation. Although it is not a solution, you may get better results starting it from level and sneaking up on where you want to hold it, or giving it a gentle shake at level.
  12. This is a really difficult question! ( T_ T) I have the 23 1.4 myself but also have too many others. If you want an all-purpose lens, the 55-200 or 18-135 will get you some zoom at a fair price, while the 50-140 is the high-quality version. If you want a fixed focal length, think about what kind of pictures you take most often. For myself, I use the 14 2.8 indoors (architecture) and for outdoor landscapes, the 56 1.2 for portraits and other FF 80-ish stuff, and the 80 macro for a wide range of close and far shots. The 23 1.4 sits in that 35mm full-frame spot, so you have to decide what kind of photos you will take a lot of that the 23 cannot do with a little walking: wide angle, large-aperture portrait, macro, or zoomy one-lens. This is how people end up with a heavy bag.
  13. Maybe add the X-S10 in your candidate list. It has a nicer price-point. On the other hand, consider what IBIS does to battery life and make sure you are willing to have spares. I am waiting for the next generation sensor and processor and will switch back to an X-Ex or X-PROx but that is just me.
  14. The 18-135 at $250 is a good deal unless there is something wrong with it. If you need one lens to leave on the camera for casual use with a fair range of magnification, this lens is a good candidate. The quality of the 18-55 and 55-200 is great but, if you do not like switching between lenses because you want something that works at 23, 40 and 80, the 18-135 is handy that way and it comes with a weather seal. We all become lens collectors or switch to another platform and do the same with the new one. Lenses last a lot longer than sensors, processors and battery styles. Fuji lenses are great, relatively speaking, and APS-C is a step down in size from full frame. I like to trail the sensor technology curve by a couple of years in exchange for the reduced weight of appropriate-sized glass. It is the image/video processor I worry most about. (Cell phones have amazing processors, GPUs and massive amounts of DRAM, along with fat batteries, that need integration into mirrorless cameras, but I fear that would double the price point.)
  15. The XC15-45 is much smaller than the XC16-50, yet both are f3.5 - f5.6. Does that mean the XF zooms can receive similar size reductions or become faster for the same sizes? I hope someone will also compare images that are not corrected by the camera to see how much in-camera correction is being performed between the two XC zooms.
  16. Great reply, Tikus. Thank you. As to the price discrepancy on the XF kit zoom, that might be due to price elasticity for the X-E market. That is, X-E buyers may be less price conscious than X-T buyers, which allows Fujifilm to push the price up a bit, which is also more of my unfounded speculation. The X-T line seems like it is Fujifilm's spearhead into full-time photographers' kits so system price may be a larger factor in adoption.
  17. Any rumors on dimensions? For myself, the lens would have to be at least 10-20 mm shorter than the XF18-55 to even be considered. Also, as an XC lens, I would expect the cost to be roughly the same as the XC16-50 or half that of the XF18-55. Then there is the crowd of lenses in this focal range: several fast primes, pancakes and the new compact primes, the kit zoom, and the constant-aperture zooms. Designating the pancake zoom to the XC family helps differentiate the product, but I am curious about Fujifilm's roadmap strategy. If it were me, I would obsolete the XC16-50 and replace it with the "pancake" XC15-45 as the XC kit for X-A cameras. I doubt Fujifilm expects the lens to be used much by the X-E/T crowd, but this may be a first step in revamping the older zoom lenses for both XC and XF, though that thought is pure speculation.
  18. You know, Fujifilm could keep the X-mount and still offer mechanical IBIS by using a software crop of the image. If a lens is not equipped with OIS, the camera can shave pixels off the four sides to prevent the luminance-uniformity problem caused by the sensor moving near the edge of the mount's occlusion. Still, software-based IBIS is more likely, though I suspect it will reduce battery life, since the sensor may have to capture multiple images and analyze each for the one with the least blurriness. Ideally, small-area sharpness and contrast (MTF) would be analyzed over the entire sensor for several exposures and the best areas from each re-composed into a single image. That would need a lot of processing power, though. Alternatively, low-res images, including PDAF elements, could be captured in real time until the processor declares a "Goldilocks" moment (with minimal motion blur) has been found and locks in the full-res image. For video, the camera would probably take a more traditional processing approach where edge detection is used to line up consecutive images and crop the jagged edges.
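The "Goldilocks" idea above — capture a burst, score each frame's blurriness, keep the best — can be sketched with a standard sharpness metric, variance of the Laplacian (higher variance means more high-frequency detail, i.e. less motion blur). This is my own minimal illustration of the selection step, not how any Fujifilm body actually implements it:

```python
import numpy as np

def sharpness(img):
    """Variance of a discrete Laplacian; higher = sharper (less blur)."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def pick_goldilocks(frames):
    """Return the index of the sharpest frame in a burst."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```

The full-blown variant described above (best *regions* from each exposure, re-composed) would run this score per tile instead of per frame, which is where the processing-power cost comes from.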
  19. What if the XF18-55 was redesigned for compatibility with the XF1.4X TC? That would allow conversion to roughly an XF25-77 at the usual 1.4x-converter cost of one stop, making the kit zoom about f4 to f5.6 with the converter. Add in WR and newer OIS and linear focus motors for the mark 2 kit lens. Short of making another fast zoom between the current XF18-55 and XF55-200, or just using the XF18-135, would the option to add the smaller tele-converter and roll in a few additional updates entice you to trade your current kit lens for a new version? Please pardon the odd question.
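The teleconverter arithmetic behind that hypothetical is simple: a 1.4x TC multiplies both the focal length and the f-number by 1.4, and multiplying the f-number by sqrt(2) ≈ 1.4 is exactly a one-stop light loss. A worked sketch (the XF18-55 + 1.4x pairing is the post's hypothetical, not an existing combination):

```python
def with_teleconverter(focal_mm: float, f_number: float, tc: float = 1.4):
    """A teleconverter multiplies focal length and f-number by its factor;
    a 1.4x factor costs one stop, a 2.0x factor costs two stops."""
    return focal_mm * tc, f_number * tc

wide = with_teleconverter(18, 2.8)   # ~ (25.2, f4)
tele = with_teleconverter(55, 4.0)   # ~ (77, f5.6)
```

So the f2.8-4 kit zoom becomes roughly a 25-77 f4-5.6, which is where the "about f4 to f5.6" figure comes from.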
  20. Thanks for the link to the article, Konzy. It does a good job explaining how camera makers express lens quality to photographers.
  21. After following an article Patrick linked, I wondered what the community understands the term micro-contrast to mean. MTF was mentioned once in the article and not well explained, so I thought the Fuji-x-forum would be a good place to discuss the relationship between micro-contrast and MTF. It is an important measure of glass quality, polishing, coating, and lens design. Please share your thoughts on micro-contrast.
  22. For reference, a circular polarizer cuts a bit more light than an ND2 filter but less than an ND3. Its spectral transmission will also be slightly different from an ND filter's. ND filters may be called neutral, yet have varying spectral power reduction/attenuation; look for the power-versus-wavelength curves in the filter specification. That said, color reproduction can be fixed in post processing.
  23. Shot noise is an inherent limitation in all sensors; it can be reduced, though never eliminated, by software filtering. The focus peaking may have interfered with the SW filter or worsened the visible effect of the shot noise.
  24. "Shadow" settings adjust gamma near the dark end of the luminance range. This is typically used to artificially increase luminance in low-luminance areas so you can see more detail in dark areas when the image is reproduced. Shadow enhancement can be applied to any photograph, even photos taken with "DR" enhancement. Those settings usually just re-map gray levels. For example, gray levels 1, 2, and 3 may be re-mapped to gray levels 2, 4, and 6, making them artificially brighter. Dynamic range, by contrast, is supposed to be an increase in the sensor's range of counting photons. That is, if a pixel element in the sensor needs 10 or more photons to be not-black and 10,000 or fewer photons to be not-white (saturated), the dynamic range is 1000:1. Now say some new sensor can read only 5 photons for not-black and accept 50,000 photons for not-white, resulting in 10,000:1 dynamic range. True dynamic range is a function of the photo-receptors on the camera's sensor. Please note that dynamic range has nothing to do with bit-depth - the number of gray levels. Bit-depth simply divides the sensor's operating range of detectable luminance into gray levels. A sensor with 1,000,000:1 dynamic range can have an 8-bit-per-color conversion or a 14-bit-per-color conversion, but you should expect the 8-bit to show gradations, or banding, where the photon count breaches the next gray level. Spatial dithering can help smooth "chunky" gray levels, but the best way is, obviously, to add more bit-depth, which divides the sensor's luminance range into finer and finer slices. There are ways of fudging this, though, so beware of the difference between true dynamic range and "features" that call themselves Dynamic Range, or DR. Fujifilm has been adjusting gain in pixel elements so that, even if an element's range is small, the range can be moved, shifted, so that some elements are "tuned" to be more sensitive and some are "tuned" to be less.
This technique effectively increases a picture's dynamic range at the cost of spatial dithering; that is, half the pixels scattered throughout the image may be exposed more, leading to blooming in bright areas just to capture detail in dark areas, while the other pixels might be exposed less to preserve detail in bright areas at the cost of losing detail in shadows. Using a DR setting sprinkles darker pixels in with brighter ones to (theoretically) increase the dynamic range of the picture (but not of individual pixel elements). (I said "theoretically" because the increased dynamic range must be saved in the picture, then reproduced on some media capable of showing it.) There is nothing inherently wrong with this approach, and it can actually help give a photo film-like grain. Of course, there is another way to increase dynamic range: take two photographs with different exposure settings and overlay them. That is the common "HDR" stuff you read about. Typical HDR photography increases dynamic range by using temporal dithering (in a very simplistic sense).
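The arithmetic in the post above — photon-count dynamic range, its size in stops, and the toy gray-level remap — can be made concrete in a few lines. This is just a worked restatement of the numbers already given (10 → 10,000 photons = 1000:1; gray levels 1, 2, 3 → 2, 4, 6), not any camera's actual tone pipeline:

```python
import math

def dynamic_range_ratio(floor_photons: float, saturation_photons: float) -> float:
    """Sensor dynamic range as a ratio: e.g. 10 -> 10,000 photons is 1000:1."""
    return saturation_photons / floor_photons

def ratio_to_stops(ratio: float) -> float:
    """Express a dynamic range ratio in stops (doublings of light)."""
    return math.log2(ratio)

def remap_shadows(levels, gain=2, max_level=255):
    """Toy 'shadow' remap: multiply dark gray levels, e.g. [1, 2, 3] -> [2, 4, 6].
    Real shadow controls use a smooth gamma curve, not a flat gain."""
    return [min(level * gain, max_level) for level in levels]
```

Note that `ratio_to_stops(1000)` is just under 10 stops while the hypothetical 10,000:1 sensor is a bit over 13 — which is why a 14-bit conversion leaves headroom where an 8-bit one bands.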
  25. The electronic IBIS looks like it needs a new sensor design, hence the "s" instead of being a camera firmware update. I did not really have much hope of a software-only solution for my X-T2 but I can still dream of it. (Hmm... using the phase-detection (auto-focus) elements to aid eIBIS is also possible.)