It’s not just Canon. I’ve been testing cameras and lenses for years, and that includes lens IS as well as in-body stabilization (IBIS). My testing is informal and real-world, because there’s no reproducible way to test stabilizers unless you’re a testing body like CIPA. More on this shortly.
The lens that’s prompted this article is the RF 35mm F1.8 Macro IS STM. I own one, so I’ve had plenty of time to test it. And while Canon claims shake compensation of 5 stops – a figure measured using the independent CIPA test – I can’t even get close to that.
I reckon I get, at best, maybe 1-2 stops of useful shake compensation, and that’s the biggest gap between claimed figures and practical experience I’ve yet come across. With the IS off, I can expect most shots to be sharp at 1/60sec. With it on, 1/30sec is pretty safe, and shots taken at 1/15sec can be sharp if I’m careful. Nothing is sharp at 1/8sec or slower.
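For anyone who wants to check how I’m counting stops: each stop of compensation simply doubles the exposure time you can get away with. Here’s a quick Python sketch using my own figures above (the function name is mine, purely for illustration):

```python
from math import log2

def stops_gained(limit_without_is_s, limit_with_is_s):
    """Each stop of compensation doubles the usable exposure time."""
    return log2(limit_with_is_s / limit_without_is_s)

# My real-world figures for the RF 35mm F1.8 Macro IS STM:
print(stops_gained(1/60, 1/30))  # 1.0 stop (the 'pretty safe' case)
print(stops_gained(1/60, 1/15))  # 2.0 stops (if I'm careful)

# Canon's claimed 5 stops would mean sharp handheld shots at roughly:
print((1/60) * 2**5)  # ~0.53 seconds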
I also own the RF 24mm F1.8 Macro IS STM, which also claims 5 stops of shake compensation and, in my tests, achieves around 1-2 stops.
Other Canon RF lenses fare better. I think the RF 24-50mm F4.5-6.3 IS STM retracting kit lens might be good for around 2-3 stops, and the underrated RF 24-105mm F4-7.1 IS STM maybe as much as 3-4 stops. This is how they perform for me. I know the stated figures are higher.
It’s not those zoom lenses I have an issue with, even though they still don’t quite deliver on the claimed stabilization performance (I’ve got used to that). It’s the underperforming RF 35mm F1.8 Macro IS STM and RF 24mm F1.8 Macro IS STM which are bothering me.
So that set me thinking. Just how does CIPA test these stabilization systems, and why are my results so different?
How CIPA tests stabilization
I’m going to summarise and paraphrase a lot of technical information in the CIPA testing documentation. The bottom line is that using humans to hand-hold cameras for testing introduces too many variables, so CIPA uses a mechanical vibration rig driven by waveforms developed to mimic the kind of shake introduced by human hand-holding.
In other words, it’s a machine mimicking a human, so it’s not going to be quite the same as when you or I shoot handheld. But it is consistent, so it eliminates all the variations from different human testers with different skill (or caffeine) levels.
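As I understand it – and this is my own simplified reconstruction, not CIPA’s actual procedure – the headline figure boils down to finding how long an exposure stays acceptably sharp on that rig with the stabilizer on versus off, and expressing the gap in stops. A rough Python sketch, with made-up blur measurements and a made-up sharpness threshold:

```python
from math import log2

def stabilization_effect_stops(exposures_s, blur_is_off_px, blur_is_on_px, blur_limit_px):
    """Longest exposure that stays under a blur threshold, with and without IS,
    expressed as a difference in stops. Purely illustrative, not the CIPA procedure."""
    t_off = max(t for t, b in zip(exposures_s, blur_is_off_px) if b <= blur_limit_px)
    t_on = max(t for t, b in zip(exposures_s, blur_is_on_px) if b <= blur_limit_px)
    return log2(t_on / t_off)

# Hypothetical blur readings (pixels) at each shutter speed on the rig:
exposures = [1/125, 1/60, 1/30, 1/15, 1/8, 1/4, 1/2]
blur_off  = [2, 4, 9, 18, 35, 70, 140]
blur_on   = [1, 1, 1, 2, 3, 4, 8]
print(stabilization_effect_stops(exposures, blur_off, blur_on, blur_limit_px=5))
# ~3.9 stops with these invented numbers
```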
So that got me thinking. If the test is the same every time, and the camera makers know what the test procedure is, it’s inevitably going to drive technical development towards hardware that delivers the best test results.
CIPA does have this to say in its documentation:
“Also, it is technically possible to produce a camera that is intentionally designed to generate favorable measurement results without regard to practicality by giving special settings that are adapted to the characteristics of the camera vibration waveforms specified in this standard.”
CIPA
By ‘practicality’, CIPA means that we aren’t all equally good at handheld photography, that we haven’t all drunk the same amount of coffee, and that some of us snatch at the shutter release instead of gently squeezing it.
So can we trust official camera and lens stabilization figures?
Well, I don’t, but that’s based on testing a lot of cameras and on knowing my own hand-holding shortcomings. I’m used to cameras delivering about half their claimed stabilizing potential in my hands, so the higher the claimed figure, the better I expect the stabilization to be, allowing for that discount.
But the stabilization performance gap of my RF 35mm F1.8 Macro IS STM and RF 24mm F1.8 Macro IS STM lenses was just a little too large for me to ignore.
Now I’m not suggesting for one moment that camera makers are behaving dishonestly, partly because I genuinely believe they are not and partly because I don’t want to get sued.
But if you are developing stabilization systems for which there is a recognized independent test, and your results will be measured against rival makers using the same test, then you have to imagine that camera makers will want to do well in it. To some degree, that is bound to drive development.
And since these tests are only a laboratory simulation of human variations and weaknesses, we perhaps shouldn’t be too surprised that real-world results are different. Disappointed? Oh yes, definitely disappointed.