Everything posted by mjh
-
There are lots of things I refrain from commenting on, either because it is hearsay, because I don’t care, or just because I don’t have anything valuable to add to the debate. In this case I could add some facts about the different ways (and their pros and cons) film simulations can be implemented, as that falls within my field of expertise. And so I did; this should suffice to dispel some misconceptions. I haven’t seen the source code of the firmware, so I cannot say what exactly Fuji has done and cannot comment on it. With regard to firmware updates, I understand that I buy a camera with a certain specification and that I can expect it to conform to that specification. If it does not, I expect the vendor to issue a firmware update to make good on the promised specs. Everything else is a bonus, not something I’m entitled to. ‘Kaizen’ is largely misunderstood, I believe. Striving for a ‘change for the better’ doesn’t mean that manufacturers should continuously add new features via firmware updates (free of charge). If a product gets replaced by a successor with new and/or improved features, this is a perfect example of ‘kaizen’ but probably not what many people using the term have in mind.
-
See above – if film simulations are implemented using look-up tables then it is ‘just’ a matter of adding another table – i.e. data – while the code stays the same. There would be code to apply a look-up table to each pixel, and that code would work with any number of look-up tables (read: film simulations). On the other hand, film simulations could also be implemented programmatically, and I don’t know which one it is. In the latter case the firmware would probably be more compact (the additional code required for each film simulation would most likely take up less space than a look-up table achieving the same result) and be faster, but look-up tables are simpler to implement. Adobe’s DNG camera profiles (.dcp files), which are used for emulating film simulations, are mostly based on look-up tables. Such a look-up table typically comprises thousands (23,040 being a typical value) of points within the HSL (Hue, Saturation, Lightness) colour model, specifying some shift in hue, saturation, and lightness to be applied at that point within the colour space. With about 16.8 million colours and just 23,040 entries in the look-up table, the required shift needs to be interpolated between the nearest colours within the look-up table. This is how Adobe does it; Fuji may be doing it differently.
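To make the idea concrete, here is a minimal sketch of how such a coarse HSL look-up table could be applied per pixel, with the shift interpolated between the nearest grid entries. This is purely illustrative – the function and grid layout are my own invention, not Adobe’s or Fuji’s actual code:

```python
def apply_hsl_lut(hsl_pixel, lut, grid):
    """Trilinearly interpolate the (dh, ds, dl) shift stored in a coarse
    HSL look-up table and apply it to one pixel.

    hsl_pixel: (h, s, l), each component in [0, 1)
    lut:       nested list of shape grid + (3,) holding per-point shifts
    grid:      number of samples per axis, e.g. (8, 8, 8)
    """
    idx, frac = [], []
    for v, n in zip(hsl_pixel, grid):
        x = v * (n - 1)          # position within the coarse grid
        i = min(int(x), n - 2)   # lower grid index (clamped for the edge)
        idx.append(i)
        frac.append(x - i)       # fractional distance to the next grid point
    # accumulate the 8 surrounding grid entries, weighted trilinearly
    shift = [0.0, 0.0, 0.0]
    for dh in (0, 1):
        for ds in (0, 1):
            for dl in (0, 1):
                w = ((frac[0] if dh else 1 - frac[0]) *
                     (frac[1] if ds else 1 - frac[1]) *
                     (frac[2] if dl else 1 - frac[2]))
                entry = lut[idx[0] + dh][idx[1] + ds][idx[2] + dl]
                for k in range(3):
                    shift[k] += w * entry[k]
    h = (hsl_pixel[0] + shift[0]) % 1.0             # hue wraps around
    s = min(max(hsl_pixel[1] + shift[1], 0.0), 1.0)  # clamp saturation
    l = min(max(hsl_pixel[2] + shift[2], 0.0), 1.0)  # clamp lightness
    return (h, s, l)

# A ‘film simulation’ is then just data: an all-zero (identity) table
# leaves every colour unchanged, a different table gives a different look.
grid = (8, 8, 8)
identity = [[[[0.0, 0.0, 0.0] for _ in range(8)] for _ in range(8)]
            for _ in range(8)]
print(apply_hsl_lut((0.5, 0.5, 0.5), identity, grid))
```

Note how the interpolation code is identical for every table – which is exactly why adding another film simulation would only require shipping more data, not more code.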
-
As a matter of fact I was arguing you should take care to expose each shot correctly, but that spot metering might not be the best way to achieve that (the reasons for both are outlined in my original post). I also argued that metering is for making sure the sensor captures all the tonal values worth capturing, something that cannot possibly be done in post-processing. But that is just the first part of a two-part process.
-
While I agree that implementing Classic Chrome would probably be possible, this would require much more code than you could cram into just one line. The simplest case would be if film simulation modes were implemented as 3D look-up tables; in that case the code would be the same for all the modes, but you would still have to store another look-up table. There is much more to this than just setting a few parameters.
-
Olympus supports spot metering for either highlights or shadows, which comes in handy in situations like these, and this could make for a nice addition to the existing metering options. Still, these days I don’t think spot metering is as useful as it used to be. The original idea of spot metering was that you metered for 18% grey (or whatever) and all the other zones would take care of themselves. Now with sensors clipping harshly at some point, it should be obvious that the highlights emphatically do not take care of themselves; they require the photographer’s special attention. Or the camera’s – which implies using matrix metering, evaluating highlights and shadows individually. Or you just check the histogram to make sure you have captured all the tonal values worth capturing. It doesn’t matter if they don’t come out right at first; that is what raw development is for. This isn’t that different from the original zone system, where developing the film and printing from the negative were just as important as getting the metering right.
-
This would be relatively easy to implement (though it would still require some resources), but how many photographers would want to fiddle with a thumb wheel or whatever whenever they change the f-stop with the aperture ring? As usual it is a matter of evaluating (usefulness of a feature × number of users for whom it would be useful) / resources required for implementing it. It doesn’t look too promising to me.
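That rule of thumb can be written out as a one-line scoring function; the numbers below are made up purely to illustrate the shape of the trade-off:

```python
def feature_score(usefulness, users, cost):
    """The rule of thumb from the post:
    (usefulness of a feature * number of users it helps) / implementation cost.
    All three inputs are subjective estimates, not measurable quantities."""
    return usefulness * users / cost

# Illustrative only: a mildly useful feature for a small audience
# that would be expensive to implement scores low.
print(feature_score(usefulness=2, users=500, cost=10000))  # 0.1
```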
-
Because it was invented elsewhere (by Minolta; the first camera using sensor-based stabilisation was the DiMAGE A2, introduced in 2004), at a time when both Canon and Nikon had already developed and perfected their lens-shift systems and implemented those in lots of lenses. For many years lens-based stabilisers were widely believed to be superior. By now sensor-based stabilisers have not only caught up; some have even added a third stabilised axis, namely the optical axis, which is impossible to replicate with lens-based stabilisers.
-
Isn’t meter-and-recompose – the standard approach to spot metering – much faster and more flexible than shifting the spot? Since there is no recomposition error as there is with focusing, what would be the downside? Note that the spot you are focusing on may not be suitable for spot metering which assumes an 18% reflectivity.
-
The Leica M does this by guesstimating the f-stop, based on the brightness measured by the TTL meter and the ambient light sensor. The guesstimated f-stop is usually within ±1 EV of its actual value but can be further off the mark if filters are used, for example, so it is far from reliable. And as the X-E2 doesn’t feature an ambient light sensor, it wouldn’t be possible to adapt this solution anyway.
-
Wouldn’t that be an argument for zone focusing rather than hyperfocal focusing? Under some circumstances it may amount to the same thing but again: when is sharpness at infinity something to worry about? When photographing children or pets I would always go for an additional 1 metre in front and gladly sacrifice sharpness at infinity if that is what it requires.
11 replies · Tagged with: X-T1, Hyperfocal distance (and 3 more)
-
This popular ‘rule’ is generally wrong, save for one distance (depending on both focal length and f-stop, so it’s not even always the same distance). Quite obviously it is wrong for the case in question, namely the hyperfocal distance, as the depth of field then reaches from half that distance all the way to infinity. What’s 1/3 of infinity supposed to be? Having said that, I have rarely found a use for hyperfocal focusing. With street photography, for example, you are usually better off focusing at the typical distance your subjects will be at; who cares about infinity with this kind of shot? And as Harold P. Merklinger argued decades ago, for landscape shots it is usually preferable to just focus at infinity rather than fiddle with formulas for the hyperfocal distance.
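The arithmetic behind this is easy to check. With the standard formula H = f²/(N·c) + f, focusing at the hyperfocal distance H puts the near limit of sharpness at H/2 and the far limit at infinity – so the ‘1/3 in front, 2/3 behind’ split is plainly meaningless here. A quick sketch (the circle-of-confusion value is a common assumption for APS-C, not a Fuji specification):

```python
def hyperfocal(f_mm, aperture, coc_mm=0.02):
    """Hyperfocal distance in mm: H = f^2 / (N * c) + f.
    coc_mm: circle of confusion; ~0.02 mm is a commonly assumed value
    for APS-C sensors, but opinions on the 'right' value differ."""
    return f_mm ** 2 / (aperture * coc_mm) + f_mm

# Example: a 23 mm lens at f/8 (illustrative values).
H = hyperfocal(23, 8)
print(f"hyperfocal: {H / 1000:.2f} m")
print(f"depth of field: from {H / 2000:.2f} m to infinity")
```

Note how the far limit is always infinity when focused at H – there is no finite ‘2/3 behind’ zone to speak of.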
-
Of course it is possible. Owners of Olympus cameras with in-body stabilisation have been known to occasionally use Panasonic lenses with built-in stabilisation for quite some time. One can also use stabilised lenses on a Pentax DSLR with IBIS. But as it is now, Fuji doesn’t have a sensor-shift based image stabiliser. Olympus, Sony (originally Minolta), and Pentax are years ahead in this game. Fuji would have to catch up, navigate the quagmire of existing patents or pay the licensing fees (if licensing is even an option). Successfully competing against Olympus’ excellent 5-axis IBIS would be no mean feat. Personally I would prefer sensor-shift to lens-shift, but I am not at all convinced that Fuji will make the switch.
-
Fuji never manufactured Hasselblads (except the X-Pan, of course). Fuji’s contribution to the H series cameras is the prism viewfinder, and they have also designed (to Hasselblad’s specs) and manufacture most, but not all, of the H system lenses. The 28 mm, for example, is Hasselblad’s own design, as is the PC adapter. The central shutter and aperture assembly of each lens is Hasselblad’s design, manufactured in Gothenburg, Sweden, then shipped to Japan for the lens assembly. By the way, the Fuji-branded version of the Hasselblad H was also mostly manufactured in Gothenburg and Copenhagen.
-
No, the nice thing about employing apodization to improve the bokeh of a sharp, well corrected lens is that it will still be quite sharp within the plane of sharpness, yet produce a rather pleasant, creamy bokeh both in front and behind that plane.
-
My wish list for the successor of the X-Pro1!!
mjh replied to oblomov's topic in Fuji X-Pro 1 / Fuji X-Pro 2 / Fuji X-Pro 3
Leica says even they couldn’t build lenses that small if these were to be equipped with state-of-the-art electronics and AF, so there you are … -
X-T1 ... autofocus works different with different buttons ...
mjh replied to Adam Woodhouse's topic in Fuji X-T1 / Fuji X-T10
For many years now, Instant AF has been a feature of (many) Fuji cameras, and it always behaved differently from focusing by half-pressing the shutter release button. This is by design. -
Generally the ISO setting increases the analogue amplification up to a certain point – say ISO 1600. From that point onward the amplification stays the same, but the camera will multiply the digitised signal to account for the higher ISO setting. The reason for the switch from analogue to digital amplification (i.e. multiplication) is that analogue amplification reduces quantisation noise in the eventual image, but only up to a certain point; when the ISO setting is higher than that there is nothing to be gained from amplifying the analogue signal and one could just as well multiply the digital values. For example, let’s assume that base ISO is 200, the amplification is increased up to ISO 1600, and the chosen ISO setting is 6400. In that case the amplification factor would be 8 times higher than at base ISO, the amplified signal would be digitised, and the digital value multiplied by 4: ISO 200 * 8 * 4 = ISO 6400. This is how the majority of cameras work.

Fuji’s X models are different in that they don’t use multiplication. Like other models they increase the analogue gain up to a certain point (ISO 1600), but above that point they don’t multiply the values by a factor accounting for the increased ISO.

The problem with both analogue amplification and multiplication is that it reduces dynamic range. Doubling the values, be it analogue or digital, results in a one stop reduction in dynamic range. Now analogue amplification is still a good idea as it reduces quantisation noise, but multiplication has nothing going for it. Of course, if you don’t crank up the gain above ISO 1600 and you don’t multiply either, the images would look underexposed. This can be rectified by applying a gamma curve brightening the midtones but preserving the highlights. This way one gets a correctly exposed image, but without the usual loss of dynamic range.
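The two pipelines described above can be summarised in a few lines. This is a schematic model built only from the numbers in the post (base ISO 200, analogue gain up to ISO 1600) – real camera firmware is of course far more involved, and the switchover point varies per model:

```python
BASE_ISO = 200          # assumed base ISO from the example above
MAX_ANALOGUE_ISO = 1600  # assumed point where analogue gain stops increasing

def gains(iso, fuji_style=False):
    """Split an ISO setting into an analogue gain factor and a digital
    multiplier. Purely schematic -- real pipelines differ per camera."""
    analogue = min(iso, MAX_ANALOGUE_ISO) / BASE_ISO
    if fuji_style:
        # Fuji-style: no digital multiplication; brightening is done later
        # with a tone curve that lifts midtones but preserves highlights.
        digital = 1
    else:
        # conventional: multiply the digitised values to reach the set ISO
        digital = max(iso // MAX_ANALOGUE_ISO, 1)
    return analogue, digital

print(gains(6400))                   # conventional: (8.0, 4)
print(gains(6400, fuji_style=True))  # Fuji-style:   (8.0, 1)
```

In the conventional case the product works out to ISO 200 × 8 × 4 = ISO 6400, exactly as in the worked example; in the Fuji-style case the raw data above ISO 1600 stays the same and only the rendering changes.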
-
DxOMark is unable to deal with cameras that don’t increase amplification in proportion to the ISO setting, so according to their measurements, the sensitivity of the X100 at ISO 3200 and 6400 is just the same as the sensitivity at ISO 1600.
-
As explained in #9, a lower native sensitivity doesn’t improve dynamic range. If Sony could manage to increase the native sensitivity to ISO 160 or 200 the dynamic range would still be as high. The reduced sensitivity is the price to pay for the increased resolution but that may change in the future.
-
Given that this is common knowledge, why should I? Ask Fuji if you are doubtful. This reluctance to increase amplification (not just analogue amplification but also digital multiplication) above a certain ISO level has been the hallmark of Fuji cameras ever since the X100.
-
Sure, but the lower sensitivity of that sensor doesn’t confer any advantages. It is a fine sensor but it isn’t its lower native sensitivity that is responsible for that.
-
This characteristic isn’t specific to the X-Trans sensor; it is a property of all recent Sony sensors (and some others). I think the discussion started with an article by Guillermo Luijk (http://translate.google.com/translate?u=http%3A%2F%2Fwww.guillermoluijk.com%2Farticle%2Fsnr%2Findex.htm&langpair=es%7Cen&hl=EN&ie=UTF-8) in 2011. This was about Nikon, Pentax, and Sony cameras originally, but much the same applied to other cameras using similar sensors. When the X100 was introduced, some people wondered why it didn’t increase amplification over ISO 1600 – whether you choose ISO 1600, 3200, or 6400, the camera stores the same data in the raw file, only the ISO value in the Exif metadata changes. This way Fuji was exploiting the ISO-lessness of the sensor, but back then few people were aware of the concept of an ISO-less sensor.
-
In that case we would have to wait for Sony to develop a sensor with the desired specs … Sensors, as a rule, have the same native sensitivity regardless of the pixel size – somewhere between ISO 100 and 200. Larger pixels receive more photons so more electrons can be collected, but they also store a larger electric charge before they overflow. As a result their ISO figure is roughly the same as that of a sensor with smaller pixels. Now if you wanted to reduce the native sensitivity, you could make the pixels less sensitive, say by reducing the light-sensitive area or by doing away with microlenses. But all that would give you is a less sensitive sensor, without any gains in signal-to-noise ratio or dynamic range. Alternatively you could try to increase the pixels’ capacity for electric charges so more electrons could be stored, resulting in less noise and more dynamic range. But how would that be achieved? There is only so much space on the silicon chip. There are a couple of technologies that look promising, the quantum film sensor for example, but as these technologies could also increase the efficiency of converting light into electricity, the ISO number wouldn’t necessarily go down – it might just as well go up. In the analogue era we learned to associate low ISO figures with high resolution, less grain (noise), and high dynamic range. With sensors these rules don’t apply anymore, not in exactly the same way anyway.
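The argument that native sensitivity is roughly independent of pixel size can be shown with a toy model: both the number of photons a pixel captures and its full-well capacity scale with the pixel’s area, so the exposure at which the pixel clips – and hence the native ISO – cancels out. The per-area constants below are invented for illustration only:

```python
def full_well_fraction(pitch_um, photons_per_um2=1000.0,
                       fullwell_per_um2=2000.0):
    """Toy model: for a given exposure, return the fraction of the pixel's
    full-well capacity that has been filled. Both captured photons and
    full-well capacity are assumed proportional to pixel area, so the
    fraction -- and with it the clipping point and native ISO -- is the
    same for any pixel size. The per-area constants are made up."""
    area = pitch_um ** 2
    captured = photons_per_um2 * area   # electrons collected at this exposure
    fullwell = fullwell_per_um2 * area  # electrons the pixel can hold
    return captured / fullwell

# A 4 micron and an 8 micron pixel clip at the same exposure:
print(full_well_fraction(4.0), full_well_fraction(8.0))
```

The larger pixel still wins on signal-to-noise ratio (more electrons means less shot noise relative to signal), but its saturation exposure – the thing the ISO rating hangs on – stays put.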
-
Reducing the sensor’s native sensitivity from ISO 200 to 100 would render the sensor one f-stop less sensitive, increasing noise at every ISO setting. I rather doubt anyone would want that.
