Everything posted by Astigmatism

  1. No rumors to report, but I've long been surprised at the lack of telephoto primes. The 200mm f/2 looks fantastic, but it's so costly I'm not getting it. Otherwise I think there's nothing beyond the 90mm.
  2. Yes, I saw that after I posted. Well, not surprising. I'd have thought over $1000 or under $400 would have been surprising. Somewhere I found an article about it that called it a great all-around lens, a general-purpose prime. I've been pondering the wonder of that statement ever since. My take: the person who said that does not specialize in portraits.
  3. The new XF 8mm F3.5 lens is supposed to be announced tomorrow, May 24. I want one, and decided when I first saw it on an updated lens map that I would buy it (though rude surprises are always a possibility). Presuming a price is announced too, what do you think it will be? I'll start the guessing: I'm thinking maybe $600. It's not a zoom, and it's not fast by modern standards. But it will be the widest Fuji prime, and one of the widest X-mount primes out there by any maker (excepting fisheyes, of course). What do you say, am I thinking too high or too low?
  4. I take it the announcement announcement was on May 19. Was there an announcement announcement announcement?
  5. What is the GFX line like, for an X System user? I could research it and haven't, and it's only mild curiosity because I don't think I'd pay the price of entry. But I am curious. If you feel like addressing it: Is it like an X System camera, but bigger and heavier and with way more resolution? Or are there many differences? For example, do controls work differently? Is the lens lineup more limited? Does it necessarily bump you up in how much computer you need to handle the images? Is it a RAW + JPG world, or do the image formats change? Is it more work; is doing everything a bit harder? If somebody gave me a nice body and a half dozen nice lenses, what would surprise me most?
  6. Welcome to the forum! I've been thinking about this exact question lately. I did a LOT of amateur photography in about 1978-1985, including my own darkroom with some simple color processes. Much of my attention was on how to do the wet chemistry and use the enlarger. Polycontrast paper, which involved purple and yellow filters on the enlarger, was new, and I tried a lot with that, including burning and dodging with different filters to locally increase or decrease contrast. For a while I was on a sepia toning kick.

     On the camera side of things, I liked macrophotography, including a bellows and special bellows lenses, and I worked pretty hard to make depth of field work. Generally I tried to practice better focusing technique, and had about 4 or 5 different focusing screens. I tried to practice better holding technique, too, using tips from archery to control my breath and get less blurry pictures when struggling with long shutter times.

     I got into Fuji X cameras within the last couple of years. This was my introduction to digital cameras with interchangeable lenses. What changed the most was that all the wet chemistry went away, including a lot of work that had nothing to do with controlling the images I made. Do I need to improve my temperature control? How fresh are all my batches of chemicals, and how fresh do they need to be? Do I need to add a fan because the fumes are bothering me? Can I make a homemade vacuum easel to keep the paper from curling under the enlarger? Can I load some more cartridges today, or is it so hot I will sweat inside the changing bag and ruin them all? ALL of that stuff just went away.

     Lots more evolved. Autofocus mostly made focusing technique go away, or reduced it to thinking about what part of the image I wanted sharp. Rather than having to decide whether to accept the grain of Tri-X or the speed of Pan-X or compromise on Plus-X, and having to stick with that choice for the whole session, I get sensitivity that is somewhere between better and way, way better. Handheld shots can be so much slower now without shake. And the lenses are faster -- I used to have one lens that went to f/1.4, and now I have several that can do that, and one that incredibly goes to f/1.0. Not only that, I can do focus stacking now, and get what used to be flat-out impossible shots.

     Long story short: mostly, the hard stuff went away, or at least got several stops better. I guess the downside is that now I sometimes struggle with software, installations that don't go right, needing to track updates, and camera instructions that are 10 or 100 times more complex. Before automatic exposure and other microprocessor-driven features came along, there just weren't that many details. My favorite camera, the Canon F-1, did have a battery for the light meter, but other than the meter not functioning, the user experience was the same with the battery out. Sunny 16 and I was good to go.
  7. It's hard to make a lens sharp over its entire focusing and aperture range. Extending the focusing range way down on a fast lens that wasn't designed to do that is likely to bring out a lot of aberrations (and correcting them would add significant cost). Also, the 56mm f/1.2 is one of those lenses designed to create smoother bokeh, right? That has to be done either by introducing aberrations, or by an apodization element that shades the edge of the light path, or both. After all, visible bokeh is just bright spots that are out of focus, so if you want those to look different from a sharp image of the aperture, you have to introduce aberrations for images at extreme focus positions. Just a general comment; I have never tried the lens you mention.
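     To make the shaded-edge idea concrete, here is a minimal sketch of how apodization changes a defocused highlight. It is my own illustration, not Fuji's actual design: it assumes numpy and matplotlib are available, and the Gaussian shading profile is just a stand-in for whatever curve a real apodization element uses.

```python
# Compare an ideal defocus "bokeh disc" with an apodized one.
# The hard-edged disc is what a plain aperture produces for an
# out-of-focus point; shading its edge is what smooths bokeh.
import numpy as np
import matplotlib.pyplot as plt

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r = np.hypot(x, y)

uniform = (r <= 1.0).astype(float)            # sharp image of the aperture
apodized = uniform * np.exp(-(r / 0.6) ** 2)  # assumed Gaussian edge shading

fig, axes = plt.subplots(1, 2)
for ax, img, title in zip(axes, (uniform, apodized), ("uniform", "apodized")):
    ax.imshow(img, cmap="gray")
    ax.set_title(title)
    ax.axis("off")
plt.show()
```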
  8. I think it'd be very neat: high pixel count and no demosaicing. I'd want one! But I agree it might be hard to make a go of it as a niche product. And of course I'd want it configured like the X-T cameras, with traditional controls on top! Wow, I shot a lot of Pan-X, Plus-X and Tri-X -- so much that I was winding my own cartridges and developing them two at a time, back-to-back on the wire spool, the way Minor White taught.
  9. Interesting! I'm mystified by the statement "A 35mm equivalency would give you something like a 60mm at f1.2, except, it’s still going to be 1.5 stops brighter than an f1.2.". Why would a 35mm equivalent imply a specific f-number? The equivalency is simply about focal lengths that would give the same angle of view.
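     For what it's worth, here is the arithmetic as I understand it, in a small sketch (the 1.5x crop factor for Fuji APS-C and the function name are my own choices): angle-of-view equivalence scales the focal length, depth-of-field equivalence scales the f-number as a separate matter, and the f-number engraved on the lens never changes with sensor size.

```python
# "35mm equivalence" arithmetic, assuming a 1.5x crop factor for
# Fuji APS-C. The f-number is focal length over entrance-pupil
# diameter, so it does not change with sensor size; only the
# depth-of-field behavior scales.
CROP = 1.5

def equivalents(focal_mm: float, f_number: float) -> tuple[float, float]:
    """Return (angle-of-view-equivalent focal length, DoF-equivalent f-number)."""
    return focal_mm * CROP, f_number * CROP

eq_focal, eq_dof = equivalents(40, 1.2)
print(f"40mm f/1.2 on APS-C frames like a {eq_focal:.0f}mm on full frame,")
print(f"with depth of field resembling f/{eq_dof:.1f} there; the lens is still f/1.2.")
```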
  10. I already have the Fuji 50mm f/1.0, which I like very much. It's a straightforward lens, not one of those that use filters or odd optical corrections to modify the bokeh for artistic effect. In an ideal rectilinear lens, the bokeh would have uniform illumination and take on the shape of the iris diaphragm, and I think that can only be modified by introducing aberrations (primarily spherical) or vignetting filters. There's no accounting for taste; the heart wants what the heart wants. My personal taste is for minimizing aberrations of all types, and my very rare toying with departures from that only happens in post, on long winter evenings. Given that I already own the lens I own, I don't think I'm buying -- but it is interesting! And I'd love to learn about even faster lenses....
  11. Pardon the lengthy backstory? Nay, I appreciate it all! About the rapidly moving water confusing the IS: I had the strong impression that IS is based entirely on inertial sensing of rotation about three axes, and that it doesn't use any image information at all. This would be why, when using a non-Fuji lens that doesn't communicate its focal length, you have to tell the camera the focal length explicitly -- it sets the scaling between the sensed angular motion and the counteracting motion of the IS mechanism. How sure are you that the camera pays attention to motion in the image when doing stabilization? Anybody?
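     Here's a minimal sketch of why the focal length matters under that inertial-only picture of IS (my own back-of-envelope model, not Fuji's firmware): for a small rotation, the image at the sensor shifts by roughly the focal length times the tangent of the angle, so the required correction scales directly with focal length.

```python
# For a small camera rotation of shake_deg degrees, the image moves
# about f * tan(theta) at the sensor, so the stabilizer must know f
# to convert sensed rotation into a correction distance.
import math

def sensor_shift_mm(focal_mm: float, shake_deg: float) -> float:
    """Image displacement the IS mechanism must counteract."""
    return focal_mm * math.tan(math.radians(shake_deg))

for f in (23, 50, 200):
    shift_um = sensor_shift_mm(f, 0.1) * 1000
    print(f"{f}mm lens, 0.1 degree of shake -> {shift_um:.0f} microns of image motion")
```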
  12. I noticed this today on Amazon: "Handevision HVIB4085FX IBELUX 40mm f/0.85 High-Speed Lens for Fuji X Digital Cameras". I know nothing about the lens or the maker, save what it says on the Amazon page, and of course I'm not promoting it. I just thought it was interesting that there's a lens this fast; I don't think I've seen its equal for our mount. I did have a single-element asphere, meant just for one wavelength for collimating light (so its optical performance would deteriorate rapidly away from the center), that was f/0.67. And I have a paper about a microscope objective design that gave an NA of 0.92, which corresponds to f/0.2 and could be a world record as far as I know. The paper is here: https://arxiv.org/pdf/1611.02159.pdf But neither is anything for mounting on Fuji X cameras. Just thought it was interesting.
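     In case anyone wants to check the f/0.2 figure, here is a quick sketch of the NA-to-f-number conversion (my own arithmetic; the two formulas are the standard paraxial and exact-aperture conventions). The f/0.2 value comes out of the exact version.

```python
# Convert numerical aperture to f-number under two conventions:
# paraxial N = 1/(2*NA), and exact N = 1/(2*tan(theta)) with
# NA = sin(theta) (in air).
import math

def f_number_paraxial(na: float) -> float:
    return 1.0 / (2.0 * na)

def f_number_exact(na: float) -> float:
    theta = math.asin(na)  # half-angle of the light cone
    return 1.0 / (2.0 * math.tan(theta))

print(f"NA 0.92 -> paraxial f/{f_number_paraxial(0.92):.2f}, "
      f"exact f/{f_number_exact(0.92):.2f}")
```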
  13. The cards I have are 170 MB/s, so I ordered a 300 MB/s card from the list elsewhere on Fuji X Forum. Bet that fixes it.
  14. I'm just now trying to learn about movies, on an X-T4 I've been using for still photography for a few months now. But most of the movies I try to shoot give me a "WRITE ERROR" during "STORING", and leave nothing on the card. Sometimes I get a movie on the card in spite of the error message. Some of the movies I do get have weird spots in them; sometimes the whole screen mostly goes a single color with some different-looking areas here and there.

     I've experimented with different movie modes, formatting the card in the camera, and power cycling the camera, but have not found a way to make most of the movies work. I'm using SanDisk Extreme PRO SDXC 128 GB cards that I bought specifically because something from Fuji recommended them, and I'm not using them for anything else. The cards go back and forth between my cameras (I also have an X-T30 II) and my iMac. I haven't had any problem like this when doing still photography, or when using the cards for camera body or lens firmware updates (which I think are all current).

     Any suggestions for how I can figure out what's going wrong? Thank you!
  15. I think the phase detection elements should show up -- that is, I suppose they are also being used for imaging. If they are not used for imaging, they would either be missing from the raw channels I'm examining or still present there, and either way that should be visible. And if they do show up one way or the other, holding a card in front of half the lens should make the phase detection pixels aimed at that half darker. That's the principle I'm trying to exploit. Like it says in the article jerryy cited, "Fujifilm's system provides a seemingly simple solution - masking-off half of a sensel means it only receives light from one side of the lens. By creating strips of these sensels, half 'looking' one way, half 'looking' the other, the camera gets the two distinct images necessary for phase detection AF." Though "strips" doesn't sound right; I'd have thought they'd just be pairs, one pair at each focus point. Then again, they're talking about tens of thousands of these, not a few hundred (two for each focus point). Still trying to figure out what's up here....
  16. https://patents.google.com/patent/US10002899B2/en
  17. For a description of the tiny lenses in general, and on specific phase detection pixels, see this patent, especially Image 2, which shows Fig. 1 and Fig. 2: "FIG. 1 illustrates a cross-sectional view of some embodiments of a backside illumination (BSI) image sensor with a phase detection auto focus (PDAF) pixel comprising a microlens for good angular response discrimination. FIG. 2 illustrates a graph of some embodiments of angular response curves (ARCs) for PDAF and image capture pixels in FIG. 1."

     One thing that may be tripping things up here is the terminology "pixel". Originally it meant "picture element", but in color imaging systems it took on different specific meanings: a single photo detection site; a minuscule region of the image assigned a color that might be derived from separate red, green, and blue filtered signals; or a minuscule region demosaiced from an array (and therefore inherently having explicit information about one primary color and inferred information, typically interpolated from the other two primaries nearby, or perhaps some more complicated modeled information that might, e.g., also accomplish antialiasing). RawDigger on their forum prefer "photosite" for the tiny photosensitive region and "pixel" for the software construct, whereas Chou et al. use "pixel" for the tiny photosensitive region.
  18. I'm only referring to "colors" here in the sense of the primary color channels associated with the various filters in the array, not visual colors deriving from a combination of those primaries. With respect, software can give an image of just the pixels (or perhaps better, just the photosites) having green filters in front of them, based on their locations, or just the ones associated with blue or red filters. Look at the photos I posted -- that's what they are, and that's why so much of those photos is black: photosites associated with the other colors have no information in that channel and are plotted black. Look at the small table of statistics I posted; the only values that aren't integers are the averages and standard deviations, which are usually non-integers when you average a bunch of integers.

     It would be a lot of work to dig through all the integers pixel by pixel searching for something special, even though the software lets you do exactly that if you want. I think it's most reasonable to look at a channel plotted as grayscale to find brighter or dimmer spots, and I think the green channel is the best candidate because there are more green photosites and green focus would matter more than the others for human eyes.

     It IS raw data I am digging around in here. That's why they named the software RawDigger. Of course they're going to associate the names R, G and B with the separate raw grayscale values, for example in your Cambridge cite when they say "This also explains why noise in the green channel is much less than for the other two primary colors (see "Understanding Image Noise" for an example)." Being associated with one color or another is relevant for grayscale plots, individual pixels, the histograms of raw values, etc. It doesn't mean the thing being described necessarily has a color per se.

     What I'm hunting for here is individual picture elements ("pixels") or photosites that are brighter or dimmer because the tiny lens in front of that photosite is aimed more at one part of the lens than another, while most of the tiny lenses take in the entire lens area. That's what phase detection picture elements or photosites are. Or at least that is my understanding; can anybody educate me differently?

     For anybody interested, here are a few descriptions gleaned from the rawdigger.com web site: "RawDigger is a tool allowing to view, study, and analyze pure raw data as recorded by digital photo- and certain video cameras. RawDigger is a microscope of sorts that lets you drill down into raw data both on Mac OS X and Windows. RawDigger doesn't alter the raw data in any way. RawDigger is not a raw convertor. Instead, it allows you to see the data that will be used by raw convertors." Their list of in-depth camera/sensor analyses:
       • Study of the white (fully over-exposed) and black frames to check the pixel variation
       • Calculating well depth
       • Visualize sensor stitches
       • Check the black level
       • Determining the geometry and analyzing the optically black frame
       • Check the accuracy of the histogram in RAW converters
       • Determine the amount of vignetting from the lens and from the sensor (digital vignetting), check for any skews in the lens mount and/or sensor mount
       • Dark Current
       • DSNU (Dark Signal Non-Uniformity)
       • PRNU (Photo Response Non-Uniformity)
  19. I think I'm getting raw linear data. Here are the three color channels zoomed in; each raw photosite belongs to only one channel. All the values are integers. I'm using this software: https://www.rawdigger.com
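     If anyone wants to poke at the same data programmatically, here is a minimal sketch of pulling one raw channel yourself. It assumes the Python rawpy package (LibRaw bindings); the file name is a placeholder, and I'm only guessing this mirrors what RawDigger does internally.

```python
# Read undemosaiced raw values and isolate the green photosites.
# Assumes the rawpy package; "DSCF0001.RAF" is a placeholder name.
import numpy as np
import rawpy

with rawpy.imread("DSCF0001.RAF") as raw:
    data = raw.raw_image.astype(np.float64)  # linear values, one per photosite
    planes = raw.raw_colors                  # color-plane index for each photosite
    print("color planes:", raw.color_desc)   # e.g. b'RGBG'

    # Indices 1 (and 3, on Bayer sensors) are the green planes.
    green = np.where(np.isin(planes, (1, 3)), data, np.nan)
    print("green mean:", np.nanmean(green))
    print("values are integers:", bool(np.all(data == np.round(data))))
```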
  20. Ever the curious tinkerer, I just tried an experiment. I put my Fuji 50mm f/1.0 lens (minus the lens hood) on my X-T4, manually focused on infinity, held a barrier over the left half of the front of the lens right up against the hood mount ring, and took a RAW photo of a smoothly illuminated surface just a few inches away. Then I looked at the RAW image using RawDigger, selecting just the green channel. What I expected to see was that some pixels would be darkened, because they were phase detection pixels aimed to the left. But I could not find any standout pixels, only a gradation from lighter on the right to darker on the left. I also tried a photo holding the barrier over the bottom half, in case the phase detection pixels are only aimed up and down, and tried the other color channels too. I zoomed way in to easily see individual pixels, and panned around in both dimensions for a while. Nothing. Why didn't this show me a matrix of special pixels? Do they smooth over these pixels in the camera before storing the RAW image? Did I not look hard enough? Do I misunderstand how phase detection pixels work (I'm pretty sure I don't)? Thanks!
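     One thing I may try next, instead of panning around by eye: scan the green channel for statistical outliers in software. Here is a hedged sketch of what I mean (it assumes rawpy and scipy; the window size and thresholds are guesses, and the local median crudely mixes color planes, which should be tolerable on a smooth flat target).

```python
# Flag green photosites that deviate sharply from their local
# neighborhood, which is what masked PDAF sites should do under a
# half-blocked lens. Assumes rawpy and scipy; thresholds are guesses.
import numpy as np
import rawpy
from scipy.ndimage import median_filter

with rawpy.imread("half_blocked.RAF") as raw:  # placeholder file name
    data = raw.raw_image.astype(np.float64)
    is_green = np.isin(raw.raw_colors, (1, 3))

local = median_filter(data, size=9)  # local background (crudely mixes planes)
ratio = np.where(is_green & (local > 0), data / local, 1.0)

suspects = np.argwhere((ratio < 0.7) | (ratio > 1.3))  # >30% off the neighbors
print(f"{len(suspects)} candidate photosites; first few:", suspects[:10])
```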
  21. I thought for a while about how to do it, but it's complicated. If there's a huge difference it would become apparent, but if it's at all subtle, no. I thought about how to reproduce a given level of camera shake, and how to measure its effect in an image. Any thoughts? Do you have any lenses already that would let you test this? It sounds like you wouldn't have the Tamron and the Fuji 16-80 at the same time, so you'd have to design tests on whichever lens you have first that would still pick up the difference when you repeat them on whichever you have second.
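     One idea for making "measure it in an image" concrete, offered as a sketch rather than a recipe: shoot a handheld burst of the same static target with each lens and compare a blur score such as the variance of the Laplacian across the bursts. This assumes OpenCV, and the folder names are hypothetical.

```python
# Compare shake blur between two lenses: score every frame in each
# handheld burst with the variance of the Laplacian (higher = sharper)
# and report the median per lens. Folder names are hypothetical.
import glob
import cv2

def sharpness(path: str) -> float:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

for lens in ("tamron", "fuji_16_80"):
    scores = sorted(sharpness(p) for p in glob.glob(f"{lens}/*.jpg"))
    if scores:
        print(lens, "median sharpness:", scores[len(scores) // 2])
```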
  22. The way I remember them, the camera body had a little window above the lens looking down on the top of the barrel, and in the viewfinder you could see the aperture. But this was strictly optical and didn't couple to the exposure control system in any way. If you put your finger in the wrong place, you'd see that in the viewfinder instead. I had an F-1 and an AT-1 and don't remember if one or both of them worked this way. Anybody know more specifically?
  23. Are the screws there so you can rotate whatever you're mounting to put the right things (such as aperture or focus scale) on top? The M42 thread itself doesn't have a means to do that -- you screw the lens in and when it's tight, it's tight, no matter where it stops. Maybe you can fudge it a bit, but not a whole rotation's worth. Loosen the screws, orient your mounted lens the way it should be, tighten the screws.
  24. Yes, same here, mine does this and it works fine. I think there is a lot moving around in there to autofocus a big lens over such a big range.
  25. I understand your friend's logic. And I have the 10-24 and like it. The ability to go wider is important -- you can always crop narrower afterwards, but if you can't fit the whole subject when shooting, it's lost. However, you are clearly asking about "when you only have a single lens", and the 16-80 does sound better for that (I've never had it). For what you say you want, I think the 10-24 plus the glass you already have isn't much of an option. I do have the 18-135 and I really like it; there's so much it can do. I guess it's pretty big and heavy, so it may or may not be worth your consideration. Again, you can always crop -- maybe travel photos don't mind cropping?