bhu

Members
  • Content Count: 87
  • Joined
  • Last visited
  1. The XC15-45 is much smaller than the XC16-50, yet both are f/3.5-5.6. Does that mean the XF zooms could see similar size reductions, or become faster at the same sizes? I also hope someone compares images that are not corrected by the camera to see how much in-camera correction is being applied to each of the two XC zooms.
  2. Great reply, Tikus. Thank you. As to the price discrepancy on the XF kit zoom, that might be due to price elasticity in the X-E market. That is, X-E buyers may be less price conscious than X-T buyers, which lets Fujifilm push the price up a bit; that is also unfounded speculation on my part. The X-T line seems to be Fujifilm's spearhead into full-time photographers' kits, so system price may be a larger factor in adoption.
  3. Any rumors on dimensions? For myself, the lens would have to be at least 10-20 mm shorter than the XF18-55 to even be considered. Also, as an XC lens, I would expect the cost to be roughly the same as the XC16-50, or half that of the XF18-55. Then there is the crowd of lenses in this focal range: several fast primes, pancakes and the new compact primes, the kit zoom, and the constant-aperture zooms. Designating the pancake zoom to the XC family helps differentiate the product, but I am curious about Fujifilm's roadmap strategy. If it were me, I would obsolete the XC16-50 and replace it with the "pancake" XC15-45 as the XC kit lens for X-A cameras. I doubt Fujifilm expects the lens to be used much by the X-E/X-T crowd, but this may be a first step in revamping the older zoom lenses for both XC and XF, though that thought is pure speculation.
  4. You know, Fujifilm could keep the X-mount and still offer mechanical IBIS by applying a software crop to the image. If a lens is not equipped with ILIS (in-lens IS), the camera could shave pixels off the four sides to prevent the luminance-uniformity problem caused by the sensor moving near the edge of the mount's occlusion. Still, software-based IBIS is more likely, though I suspect it will reduce battery life, since the sensor may have to capture multiple images and analyze each for the one with the least blur. Ideally, small-area sharpness and contrast (MTF) would be analyzed over the entire sensor for several exposures and the best areas from each re-composed into a single image. That would need a lot of processing power, though. Alternatively, low-res images, including the PDAF elements, could be captured in real time until the processor declares a "Goldilocks" moment (with minimal motion blur) and locks in the full-res image; a rough sketch of that frame-selection idea is below. For video, the camera would probably take a more traditional processing approach where edge detection is used to line up consecutive images and crop the jagged edges.
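     A minimal sketch of that best-frame selection idea, assuming a burst of grayscale frames as NumPy arrays (the scoring method and function names are my own illustration, not anything Fujifilm has described):

     ```python
     # Toy sketch of the "Goldilocks" idea: from a burst of frames, keep
     # the one with the least motion blur, scored by the variance of the
     # Laplacian (a common sharpness proxy -- blurred frames have weaker
     # edges, so their Laplacian response has lower variance).
     import numpy as np

     def laplacian_variance(gray: np.ndarray) -> float:
         """Sharpness score: variance of a 3x3 Laplacian response."""
         k = np.array([[0, 1, 0],
                       [1, -4, 1],
                       [0, 1, 0]], dtype=np.float64)
         # Same-size convolution via reflected padding and shifted views.
         h, w = gray.shape
         padded = np.pad(gray.astype(np.float64), 1, mode="reflect")
         resp = sum(k[i, j] * padded[i:i + h, j:j + w]
                    for i in range(3) for j in range(3))
         return float(resp.var())

     def pick_sharpest(frames: list[np.ndarray]) -> np.ndarray:
         """Return the burst frame with the highest sharpness score."""
         return max(frames, key=laplacian_variance)
     ```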
  5. What if the XF18-55 were redesigned for compatibility with the XF1.4X TC? That would allow conversion to an XF25-77 at a cost of one stop (a 1.4x converter multiplies the f-number by 1.4), making the kit zoom roughly f/4 to f/5.6 with the converter; the arithmetic is sketched below. Add in WR plus newer OIS and linear focus motors for a mark 2 kit lens. Short of making another fast zoom between the current XF18-55 and XF55-200, or just using the XF18-135, would the option to add the smaller teleconverter, with a few additional updates rolled in, entice you to trade your current kit lens for a new version? Please pardon the odd question.
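     For reference, a quick sketch of that conversion arithmetic (standard teleconverter math, nothing Fujifilm-specific):

     ```python
     # A teleconverter multiplies focal length and f-number by the same
     # factor; each doubling of the f-number is two stops of light lost.
     import math

     def with_teleconverter(focal_mm: float, f_number: float, tc: float = 1.4):
         """Effective focal length and f-number behind a teleconverter."""
         return focal_mm * tc, f_number * tc

     for focal, f_no in [(18, 2.8), (55, 4.0)]:   # XF18-55 wide and long ends
         new_focal, new_f = with_teleconverter(focal, f_no)
         stops_lost = 2 * math.log2(new_f / f_no)  # 2 * log2(1.4) ~= 0.97
         print(f"{focal}mm f/{f_no} -> {new_focal:.0f}mm f/{new_f:.1f} "
               f"({stops_lost:.2f} stops lost)")
     ```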
  6. Thanks for the link to the article, Konzy. It does a good job explaining how camera makers express lens quality to photographers.
  7. After following an article Patrick linked, I wondered what the community understands the term micro-contrast to mean. MTF was mentioned once in the article and not well explained, so I thought the Fuji-X-Forum would be a good place to discuss the relationship between micro-contrast and MTF; a short illustration of what MTF measures is below. MTF is an important measure of glass quality, polishing, coatings, and lens design. Please share your thoughts on micro-contrast.
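     To make the discussion concrete, here is the textbook definition in a toy example (my own numbers, purely illustrative):

     ```python
     # MTF measures how much of a test pattern's modulation (Michelson
     # contrast) a lens preserves at a given spatial frequency.
     def modulation(i_max: float, i_min: float) -> float:
         return (i_max - i_min) / (i_max + i_min)

     target = modulation(1.00, 0.00)  # perfect black/white pattern -> 1.0
     image = modulation(0.80, 0.20)   # what the lens delivers -> 0.6
     print(f"MTF at this frequency: {image / target:.2f}")  # 0.60
     # "Micro-contrast" loosely maps to MTF at mid-to-high spatial
     # frequencies: how much fine modulation survives the lens.
     ```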
  8. For reference, a circular polarizer costs a bit more light than an ND2 filter but less than an ND3 (the conversion from filter factor to stops is sketched below). Its spectral transmission will also be slightly different from an ND filter's. ND filters may be called neutral, yet have varying spectral power reduction/attenuation; look for the transmission-versus-wavelength curves in the filter specification. That said, color reproduction can be fixed in post-processing.
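     The filter-factor math, for anyone who wants it (standard photographic arithmetic; the CPL factor here is a ballpark figure for illustration, not a measured value):

     ```python
     # Stops of light lost = log2(filter factor), since each stop halves
     # the transmitted light.
     import math

     def factor_to_stops(filter_factor: float) -> float:
         return math.log2(filter_factor)

     print(factor_to_stops(2))    # ND2 -> 1.0 stop
     print(factor_to_stops(3))    # ND3 -> ~1.58 stops
     print(factor_to_stops(2.5))  # typical CPL ballpark -> ~1.32 stops
     ```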
  9. Shot noise is an inherent limitation of all sensors; it can be reduced with software filtering but never truly eliminated, because it comes from the random arrival of the photons themselves (a small demonstration of the statistics is below). The focus peaking may have interfered with the software filter or made the shot noise more visible.
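     A small demonstration (my own toy numbers) of why it is inherent:

     ```python
     # Photon arrival is a Poisson process, so the signal-to-noise ratio
     # only grows as the square root of the photon count -- no sensor
     # design can escape that floor, only collect more photons.
     import numpy as np

     rng = np.random.default_rng(0)
     for mean_photons in (10, 100, 10_000):
         counts = rng.poisson(mean_photons, size=100_000)
         snr = counts.mean() / counts.std()
         print(f"{mean_photons:>6} photons/pixel -> SNR ~= {snr:.1f} "
               f"(sqrt(n) = {np.sqrt(mean_photons):.1f})")
     ```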
  10. "Shadow" settings adjust gamma near the dark end of the luminance range. This is typically used to artificially increase luminance in low-luminance areas so you can see more detail in dark areas when the image is reproduced. Shadow enhancement can be applied to any photograph, even photo's taken with "DR" enhancement. Those settings usually just re-map gray levels. For example, gray levels 1, 2, and 3 may be re-mapped to gray levels 2, 4, and 6 making them artificially brighter. Dynamic range is supposed to be an increase in the sensor's range of counting photons. That is, if a pixel element in the sensor needs 10, or more, photons to be not-black and 10,000, or fewer, photons to be not-white (saturated), the dynamic range is 1000:1. Now, say that some new sensor may be able to read only 5 photons for not-black and can accept 50,000 photons for not-white resulting in 10,000:1 dynamic range. True dynamic range is a function of the photo-receptors on the camera's sensor. Please note that dynamic range has nothing to do with bit-depth - the number of gray levels. Bit-depth simply divides the sensor's operating range of detectable luminance into gray levels. A sensor with 1,000,000:1 dynamic range can have an 8-bit per color conversion or a 14-bit per color conversion but you should expect the 8-bit to show gradations, or lines, where the photon count breaches the next gray level. Spacial dithering can help smooth "chunky" gray levels but the best way is, obviously, to add more bit-depth, which divides the sensor's luminance range into finer and finer slices. There are ways of fudging this, though, so beware of the difference between true dynamic range and "features" that call themselves Dynamic Range, or DR. Fujifilm has been adjusting gain in pixel elements so that, even if the element's range is small, the range can be moved, shifted, so that some elements are "tuned" to be more sensitive and some are "tuned" to be less. This technique effectively increases a picture's dynamic range at the cost of... spacial dithering; that is, half the pixels scattered throughout the image may be exposed more, leading to blooming in bright areas just to capture detail in dark areas, while the other pixels might be less exposed to preserve detail in bright areas at the cost of losing detail in shadows. Using a DR setting sprinkles darker pixels in with brighter ones to (theoretically) increase the dynamic range of the picture (but not individual pixel elements). (I said, "theoretically," because the increased dynamic range must be saved in the picture, then reproduced on some media capable of showing it.) There is nothing inherently wrong with this approach and it can actually aid in giving a photo film-like grain.\ Of course, there is another way to increase dynamic range: Take two photographs with different exposure settings and overlay them. That is the common "HDR" stuff you read about. Typical HDR photography increases dynamic range by using temporal dithering (in a very simplistic sense). Shadow enhancement can be applied to any photograph, even photo's taken with "DR" enhancement. Those settings usually just re-map gray levels. For example, gray levels 1, 2, and 3 may be re-mapped to gray levels 2, 4, and 6 making them artificially brighter.
  11. The electronic IBIS looks like it needs a new sensor design, hence the "s", rather than being a camera firmware update. I did not really have much hope of a software-only solution for my X-T2, but I can still dream of it. (Hmm... using the phase-detection (autofocus) elements to aid eIBIS is also possible.)
  12. The Summer Savings sale is ending in a few days; Fujifilm will be considering whether to launch another sale before the end of the first half of the fiscal year to move more product, or to wait until the holidays. Keep your fingers crossed.
  13. More exciting may be the application that uses Bluetooth instead of Wi-Fi.
  14. I use a PC with Windows 10 and it has not glitched on me. Apple always charges more, often a lot more, for their products, but they do put extra effort into software compatibility, security, and user interface. The software interface for your favorite applications is what you should weigh more heavily than price. For many people, familiarity with the interfaces and workflow is more important than hardware specifications. If, for example, you use Photoshop for Mac, can you get comfortable going through Windows to launch it, use the PC version of Photoshop, find your photo archive through the PC's Windows Explorer, and link to your NAS via the PC? If so, then a switch should not be a problem. If not, perhaps sticking with your current system and saving up for a new Mac is the way to go. PCs are generally cheaper than Macs, so chasing raw performance for the price often means a shorter replacement cycle than Mac users have, but Mac users generally care less about having the latest hardware and more about a consistent, trouble-free user interface to get tasks done with minimal mental stress. It is kind of similar to people who buy or lease new cars every 2-3 years compared to those who buy used cars or build/mod their own.
  15. A software-based IBIS is certainly possible. For video, this is "old" technology where a processor looks for edges (high luminance gradients) and tries to pixel-shift from one frame to the next to keep low-motion areas of the picture overlapping. To apply this technique to a photograph, the camera would "merely" have to snoop images before and after the shot, do a similar process as with video, and re-compose the image.

      There are difficulties, though, having to do with exposure time. The sensor exposure time is a limiting factor because, if the image is blurred during exposure, there is no good way to un-blur it. Sensor hardware may be able to push past this limit by taking multiple fractional exposures, processing the edge detection and alignment, then summing them into a properly exposed image; a rough sketch of that align-and-sum idea is below. This would be an image-processing function for a very high-speed electronic shutter.

      Another method might be to snoop live images and, after the shutter button is pressed, wait until the focus area shows high luminance gradients, discarding the undesired earlier images. The camera's processor might not need to examine image captures across the whole sensor if the region of interest were in the center or found through face detection. Introducing a short (but varying) delay in image capture from when the button is pushed might be acceptable if the delay is tolerably short. Again, this requires a very fast sensor, processor, and everything in between.

      To make IS work on a body/lens without it, having a fast, sharp lens helps ensure the sensor gets enough photons, quickly enough, for the two methods mentioned above. Every non-mechanical IBIS method I know of relies on a very fast sensor and software to simulate mechanical IBIS or lens-based OIS. Fujifilm could be relying on a sensor maker to make a software solution viable. If Sony does IBIS on a camera without mechanical motion compensation, then Fujifilm should be able to get that too, though I suspect Fujifilm will be doing their own software.
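      A rough sketch of the align-and-sum idea (my own toy version using whole-frame phase correlation, not any camera maker's pipeline, which would need sub-pixel and local alignment):

      ```python
      # Estimate the global shift between fractional exposures with FFT
      # phase correlation, undo it, and sum the aligned frames into one
      # properly exposed image.
      import numpy as np

      def estimate_shift(ref: np.ndarray, frame: np.ndarray):
          """Integer (dy, dx) such that frame ~= ref rolled by (dy, dx)."""
          cross = np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))
          corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Peaks past the halfway point correspond to negative shifts.
          if dy > ref.shape[0] // 2:
              dy -= ref.shape[0]
          if dx > ref.shape[1] // 2:
              dx -= ref.shape[1]
          return dy, dx

      def align_and_sum(frames: list[np.ndarray]) -> np.ndarray:
          """Shift every fractional exposure onto the first, then sum."""
          ref, total = frames[0], frames[0].astype(np.float64)
          for frame in frames[1:]:
              dy, dx = estimate_shift(ref, frame)
              total += np.roll(frame.astype(np.float64), (-dy, -dx), axis=(0, 1))
          return total  # summed exposure; edges would be cropped in practice
      ```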