2014/09/30

What makes a photograph?

If you were looking for a recipe for an outstanding image, this is the right address.

One of the other possible answers to the question in the title is, of course: a camera.
As simple as it seems, though, there is much more below the surface of the process by which a camera makes a photograph.
It seems that every autumn, on top of the flood of photos taken during the summer holiday, I write something about a device I no longer use for that purpose (Nikon 1 J1 in 2012, Olympus OM-D E-M5 in 2013). If I actually wrote about each of them it would be a quarterly rather than an annual cycle, so this time here are a few thoughts based on my quest for the best camera for me, as well as some predictions and recommendations from a potential customer to the manufacturers.
Photokina has just finished and we know pretty much where we are heading over the next months, so this also feels like the best moment to sum up what I have learnt over the last two years.

In the film days a camera was all about usability: with image quality largely dictated by the loaded roll itself, what differentiated products was how you operated them. Then came the digital era, with varying sensor sizes, rapid progress in lens design, an increase in computing power and, finally, connectivity.

This is a challenging period for the photographic industry, as smartphones have redefined it (and keep influencing it) on many levels. First of all, the integration of a tiny, fixed-focal-length imaging module into every one of those ubiquitous devices drove the whole species of compact point-and-shoot cameras (small sensor, small zoom range, simple controls) to extinction. On the other hand, many people got drawn into taking photos thanks to them, and many of those found such devices limiting, prompting them to look for a dedicated camera. Multimedia and social media on one hand pull users away from traditional specialized cameras, but equally draw them toward new, more multifunctional devices. A big part of this functionality comes from a tremendous increase in microprocessor capabilities, changes in user interfaces and mobile networking - all thanks to phones.

On top of that, progress in sensor and lens technology has made many compact and reasonably priced devices good enough that buyers no longer feel forced to look at traditional large-sensor DSLR offerings, causing trouble in this very established sector of the photographic industry.

Where are we, and where do we go from here? There is no doubt digital cameras will become cheaper to produce and more capable. Some signs of those changes are already visible, especially in the sensor department:
- further progress in manufacturing processes to increase efficiency and reduce noise: the BSI approach used on 1" sensors (Sony) and now moved to APS-C (Samsung) will likely force the other players to up their game and make the field even again, but at a higher level; photo-organic (rather than semiconductor) detectors and other ways of colour discrimination (to replace RGB filtering) are already in the works
- new ideas to overcome Bayer filter limitations (beyond the already introduced Sigma Foveon and Fuji X-Trans)
- curved/spherical sensors, with new optical designs to match (already in the works - Sony)
New lens designs benefitting from computer-aided design and from material and process engineering are also popping up, so let's focus on the less obvious things.

Modularity - this is actually happening right now, although it isn't that visible. Take the Sony A7x: one body, three cameras, to a large extent thanks to internal design. Sony went one step further with the QX camera modules. They are marketed as a smartphone companion, but in principle anything that communicates with them will work, which opens the door to remote and automation applications. Olympus followed with its Open Platform project. Modularity is natural from a production-efficiency point of view, but in specific cases it can lead to unexpected success (see the IBM PC story).

4K - heavily driven by TV manufacturers, it is approached by many as just another way to convince us to part with hard-earned money after project '3D' failed. It can backfire for TV makers (lack of UHD content and technical problems with delivery), but it brings obvious benefits for the cinema industry (so it will be present in movie and hybrid cameras for sure), and it also greatly improves classic HD quality thanks to full sensor readout and subsequent downscaling. See it and you'll believe it. What about photos? Panasonic (did I mention TV manufacturers?), leading the pack of 4K-equipped digital cameras, introduces in its newest firmware updates a '4K Photo Mode', further blending video and stills shooting.
The opportunity I can see, though, is the end of the megapixel war: so far, with pretty much every generation, the pixel population for a given sensor size has grown. With 4K in mind (whether UHD 3840x2160 or Cinema 4096x2160), two approaches seem obvious for a 3:2-proportioned sensor: about 12 Mpix (for direct readout) and about 48 Mpix (for 2x downsampling). That would nicely divide cameras into a high-sensitivity group and a high-resolution group. The former has actually already materialized in the form of the Sony A7s; as for the latter, for the last two years we have been at 36 Mpix, 48 is not far, it makes technical sense, and marketing (likely pushing it just above 50 Mpix) will love it. There are of course other options, but with consequences, like cropping (GH4) or downsampling not based on a simple factor of 2 (like the new 28 Mpix sensor of the Samsung NX1).
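The arithmetic behind those two numbers is simple enough to check; here is a quick back-of-the-envelope sketch (my own rounding, not any manufacturer's readout spec) of how the 16:9 crop of a 3:2 sensor relates to UHD:

```python
# Rough check of the ~12 / ~48 Mpix idea: dimensions of a 3:2 sensor and its
# 16:9 crop, compared with UHD (3840x2160). Numbers are approximate.

def sensor_dims(mpix, aspect=(3, 2)):
    """Approximate pixel width/height of an mpix-megapixel sensor with the given aspect ratio."""
    ax, ay = aspect
    height = round((mpix * 1e6 * ay / ax) ** 0.5)
    return round(height * ax / ay), height

for mpix in (12, 48):
    w, h = sensor_dims(mpix)
    crop_w, crop_h = w, round(w * 9 / 16)      # a 16:9 video crop uses the full sensor width
    print(f"{mpix} Mpix: sensor ~{w}x{h}, 16:9 crop ~{crop_w}x{crop_h}")

# ~12 Mpix: the crop lands just above 3840x2160, so near-direct readout (the A7s sits right there).
# ~48 Mpix: the crop is roughly twice the 12 Mpix crop in each direction, so a clean 2x2
#           binning/downsampling brings it back to the same near-UHD readout.
```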
The necessarily increased readout rate will also bring benefits in the form of reduced distortion when using the electronic shutter, and that method of exposing gains popularity in proportion to the influence of internal vibrations on sharpness.

Handling - this seems obvious, but it is actually shocking how many designers/manufacturers clearly do not realize that a camera is a specific device in the sense that it is usually held in a human hand to serve its purpose. This is one of the areas where the DSLR manufacturers (Canon, Nikon, Pentax) excel - each new model is externally pretty much a clone of the previous one. Because that works! Competitors seem to want to differentiate themselves so much that they experiment with everything BUT an ergonomic approach. It took Panasonic three generations of the GH line to get the grip depth right and settle on the body shape (sadly the GH4 is still not a perfect camera to hold and control). Things are even worse with small cameras: for whatever reason they lack a grip, or it is merely symbolic (and useless). The lens, being the most protruding part of a camera, defines how far a grip could stick out, but there is some anxiety about using that space. Marketing will still quote the dimension at the thinnest place in its materials, so no difference there. Maybe they worry the camera will not be 'pocketable'? Well, if the lens won't fit the pocket, it doesn't really matter - unless keeping a camera in one's pocket thin end in, with the lens sticking out, is only ever simulated on a computer model, because in real life it won't stay there for long.

External controls - this is a very important area: dials and buttons are what differentiate cameras from touchscreen-only phones, and they allow better control of the process in non-optimal conditions (where phone modules and automatic control suffer) or when going for specific results.
Control button layout has largely crystallized by now, but that doesn't mean every approach, even if 'traditionally' established or based on 'classic' solutions, is good from a practical perspective, and a lot depends on the execution. Examples of good ideas made useless are the spongy D-pad and inaccessible buttons on the E-M5 or the slippery dials on the Panasonic G/GH.
Analysing the pros and cons of each manufacturer's approach to controls is a separate subject; let's talk about future possibilities:
- Haptic touchscreen response: by miniaturizing cameras, manufacturers save money and users save their backs, necks and shoulders, but the area available for control buttons shrinks dramatically. The biggest advantage of physical buttons is their tactile response. Once that is offered by touchscreens themselves (possibly in conjunction with another feedback method), current interfaces built around the semicircular motion of the right thumb might successfully replace some of the less critical buttons.
- Focus point selection: a 4-way controller was fine with 9-point AF modules, but it struggles with 49 AF areas of adjustable size and customizable patterns. It struggles all the more as cameras shrink, and left-eye shooters in particular end up putting their thumbs in their right eye. Among the solutions I can see are: a joystick-type controller located near the grip, operated with the index finger; a dedicated touchpad; or a dial with a pad integrated around the shutter button. The touchpad could be the surface of the top LCD, where a camera is equipped with one.
The coolest solution, however, would be an eye-tracking mechanism, activated with a button, allowing the AF area to be pointed just by looking through the EVF.
- Dynamic button information: have you ever wondered what you set Fn5 to? Integrating a tiny display into the button surface would help with that (especially with a more dynamic camera setup, see below) as well as provide illumination. The idea has already been executed in virtual Fn buttons on touchscreens, so again haptic touchscreen technology could serve here as well.
- Multifunctional selection wheels: I love the drive mode wheel, but on many occasions it only saves half the hassle of diving into menus, as it selects the default value of a setting. I'd love to see the option to spin to select a mode, and then, if necessary, press and spin again to adjust its setting (e.g. spin to pick the self-timer, press to activate adjustment, spin to choose 2s where 10s is the default); see the sketch below. A dynamic display would again be nice, but even an EVF preview of the adjustments should be sufficient.
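To make the wheel idea concrete, here is a tiny sketch of that 'spin to select, press and spin to adjust' logic (invented mode names and values, no real camera firmware involved):

```python
# Hypothetical two-level mode wheel: spinning selects the mode, pressing toggles
# into adjustment, and spinning then tweaks that mode's own setting.

DRIVE_MODES = {
    "single": None,
    "burst": {"options": ["low", "mid", "high"], "default": "mid"},
    "self_timer": {"options": [2, 10, 12], "default": 10},
}

class ModeWheel:
    def __init__(self):
        self.modes = list(DRIVE_MODES)
        self.index = 0
        self.adjusting = False   # False: spin changes mode; True: spin changes the mode's setting
        self.settings = {m: (cfg["default"] if cfg else None) for m, cfg in DRIVE_MODES.items()}

    @property
    def mode(self):
        return self.modes[self.index]

    def press(self):
        """Toggle between mode selection and adjustment of the current mode's setting."""
        if DRIVE_MODES[self.mode]:          # only modes with options have a second level
            self.adjusting = not self.adjusting

    def spin(self, steps):
        if self.adjusting:
            opts = DRIVE_MODES[self.mode]["options"]
            i = (opts.index(self.settings[self.mode]) + steps) % len(opts)
            self.settings[self.mode] = opts[i]
        else:
            self.index = (self.index + steps) % len(self.modes)

# Example: pick the self-timer, press, then spin once to get 2 s instead of the default 10 s.
w = ModeWheel()
w.spin(2)      # single -> burst -> self_timer
w.press()      # enter adjustment
w.spin(-1)     # 10 s -> 2 s
print(w.mode, w.settings[w.mode])   # self_timer 2
```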

Internal controls: from simple microcontrollers, the electronics in modern digital cameras have grown into respectable computing machines, but this is hardly reflected in their interfaces. New options added to the existing menu systems make them clogged and unintuitive (of course I'm talking about Olympus, but the rest aren't much better). On top of that, some settings work only in conjunction with others, making setup possible only with a manual at hand. Many approaches (horizontal/vertical menus, specific button-combination requirements) limit accessibility. The list is so long that it might be easier to postulate what it should be:
  • Fully customizable menus. The camera cares what value is being fed, not where it comes from. The manufacturer should provide a basic set of tools and a default setup, with a method (e.g. an external application, see below) to reposition options within a menu, add new options (within hardware limitations), format the user interface etc. Pretty much XML/CSS in a camera (see the sketch after this list).
  • Named presets, with an option to transfer them to/from a camera (for modification, backup, or use on other devices).
  • Simple, Guided and Expert modes (effectively manufacturer-prepared presets) on some camera lines: not everybody needs access to all the options, and exposing them is not beneficial in every case.
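Purely as an illustration (an invented structure, not any manufacturer's format), a user-editable menu layout pushed from an external application could be as simple as a declarative description that the camera merely renders and maps onto settings it already knows:

```python
# Hypothetical user-defined menu layout: the camera only cares about the setting
# names and values it already supports; where they sit in the menu, their labels
# and their order come from a file the user edits on a computer or phone.

MY_MENU = {
    "Shooting": [
        {"setting": "drive_mode",  "label": "Drive",        "show": True},
        {"setting": "iso_limit",   "label": "Max Auto ISO", "show": True},
    ],
    "Video": [
        {"setting": "4k_photo",    "label": "4K Photo",     "show": True},
    ],
    "Hidden": [
        {"setting": "beep_volume", "show": False},          # removed from the UI entirely
    ],
}

def render(layout):
    """Print the menu the way an on-camera UI (or an EVF overlay) might list it."""
    for page, items in layout.items():
        visible = [i for i in items if i.get("show")]
        if not visible:
            continue
        print(page)
        for item in visible:
            print(f"  {item['label']}  ->  {item['setting']}")

render(MY_MENU)
```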
Communication with other devices: smartphone control now seems to be a must in all WiFi-equipped cameras, but tethering has so far been the domain of Canon and Nikon only. Olympus and Panasonic were loud in using all the 'pro' words, but only now seem to realize that some studio setups are laptop-based. I'm not sure there is much to be thankful for, as it seems they want to ship their own custom control applications (effectively wired versions of their wireless phone control apps) rather than API or protocol details that would allow integration into software (Lightroom, Helicon etc.).
Data exchange should not be limited to camera control, though, but should also cover setup (customizing menus, defining presets).
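What third-party software actually needs is not another vendor app but a documented protocol. A minimal sketch of what a hypothetical HTTP-based tethering client could look like (invented address and endpoints, resembling no real manufacturer API):

```python
# Hypothetical tethering client: the value is not in this code but in the fact
# that the endpoints and parameters would be documented, so Lightroom, Helicon
# or a focus-stacking script could all drive the camera with the same few calls.

import json
import urllib.request

CAMERA = "http://192.168.0.10:8080"      # assumed address of a WiFi/USB-connected camera

def set_setting(name, value):
    """Push a single named setting (e.g. aperture, ISO, focus position) to the camera."""
    body = json.dumps({"setting": name, "value": value}).encode()
    req = urllib.request.Request(f"{CAMERA}/settings", data=body,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req).read()

def capture(save_as):
    """Trigger an exposure and download the resulting file."""
    with urllib.request.urlopen(f"{CAMERA}/capture") as resp, open(save_as, "wb") as f:
        f.write(resp.read())

# Example: a trivial focus-bracketing loop a studio tool could drive.
if __name__ == "__main__":
    for i, focus in enumerate(range(0, 100, 10)):
        set_setting("focus_position", focus)
        capture(f"stack_{i:02d}.jpg")
```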

Image processing feedback: at the moment there are two choices: in-camera processing or manual development from raw. With increasing computing power and more sophisticated algorithms (localized denoising and sharpening, multi-parameter lens corrections), camera output could produce results technically better than the 'hand-made' ones, yet still satisfy the photographer's perceptual preferences. The issue is that in-camera processing can be controlled only by modifying a few values, with results visible on a small display only, and the setup remains static for all images processed with that setting. The solution I propose is an 'intelligent' mechanism analyzing the modifications a photographer applies 'to taste' to the default output and feeding them back into the in-camera processing pipeline - a kind of personalized Auto Awesome.
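A crude version of that loop is easy to imagine. Here is a minimal sketch (invented parameter names, and a plain median where a real implementation would need something smarter, e.g. per-scene grouping):

```python
# Hypothetical feedback loop: look at how the photographer corrected the camera's
# default output in the raw developer, aggregate those corrections, and feed the
# result back to the camera as its new processing defaults.

from statistics import median

# Adjustments the user applied on top of the camera JPEG defaults, per image
# (deltas, so 0 means "the default was already to taste").
edits = [
    {"exposure": +0.3, "shadows": +15, "sharpening": -1, "wb_shift": -2},
    {"exposure": +0.2, "shadows": +20, "sharpening":  0, "wb_shift": -3},
    {"exposure": +0.4, "shadows": +10, "sharpening": -1, "wb_shift": -1},
]

def learned_profile(edits):
    """Aggregate per-image deltas into one correction profile (median is robust to outliers)."""
    return {k: median(e[k] for e in edits) for k in edits[0]}

profile = learned_profile(edits)
print("Feed back into the in-camera pipeline:", profile)
# e.g. {'exposure': 0.3, 'shadows': 15, 'sharpening': -1, 'wb_shift': -2}
```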

Some manufacturers already provide raw processing tools (Olympus Viewer, Nikon NX2); together with improved connectivity, that creates the option of a device-and-software ecosystem.

Scene depth sensor: I'm actually surprised that no camera manufacturer has (so far) picked up HTC's idea. I don't have three-dimensional output in mind, but rather the ability to produce a scene depth map and use it to adjust depth of field. That would make a great weapon for small-sensor cameras against bigger competitors, overcoming one of their physical limitations (very often quoted as the decisive reason for choosing a specific format). It is of course easier to implement in a fixed-focal-length camera, and the additional sensor doesn't have to be of very good (= pricey) parameters.
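As a sketch of the principle only (a naive layered blur, far from a production-quality lens-blur algorithm), given a depth map from such an auxiliary sensor, the 'larger sensor' look reduces to blurring each pixel by its distance from the focused plane:

```python
# Naive synthetic depth-of-field: blur strength grows with distance from the
# focused depth. A real implementation would handle occlusion edges and bokeh
# shape; this only shows why even a cheap, low-resolution depth sensor is enough
# to fake a larger sensor's shallow depth of field.

import numpy as np
from scipy.ndimage import gaussian_filter

def fake_dof(image, depth, focus_depth, max_sigma=8.0, layers=5):
    """image: HxWx3 float array; depth: HxW array normalized to 0..1."""
    out = np.zeros_like(image)
    # Pre-blur the image at a few strengths and pick per pixel by depth error.
    sigmas = np.linspace(0.0, max_sigma, layers)
    blurred = [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]
    error = np.abs(depth - focus_depth)                  # 0 at the focused plane
    layer = np.clip((error / max(error.max(), 1e-6) * (layers - 1)).astype(int), 0, layers - 1)
    for i in range(layers):
        mask = (layer == i)[..., None]
        out = np.where(mask, blurred[i], out)
    return out

# Example with synthetic data: a horizontal depth ramp, focus set to mid-distance.
img = np.random.rand(120, 180, 3)
depth = np.tile(np.linspace(0, 1, 180), (120, 1))
result = fake_dof(img, depth, focus_depth=0.5)
```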


Enough of wishful thinking :) Let's see where competition, technology development and customers' choices (I deliberately do not mention customer feedback, as it seems the companies know better...) will drive the products, and then I'll be able to judge whether I need a new crystal ball :D
