41 megapixel camera! Where does it end – gigapixel cameras? Terapixel?

29 February 2012

So, a “41 megapixel camera phone from Nokia”:http://www.tomsguide.com/us/Nokia-808-PureView-41-megapixel-Camera-Phone,news-14288.html – pretty amazing. The improvement in camera phones over the last five years has been remarkable. Moore’s Law has driven the cost of camera chipsets into the ground, and their performance has continued to climb. Just as the earlier digital camera wave destroyed the film/processing/prints business, the smartphone+software combo is now destroying the digital point-and-shoot camera market. Moore’s Law is a powerful force.

Higher-end cameras are being transformed as well. DSLRs are under assault from the new breed of mirrorless camera bodies. Sensors are getting good enough, as are the LED/LCD viewfinders, permitting a shift to these smaller platforms. This shift will take a little longer because of people’s investments in lenses, but it is underway.

Both of these shifts are about software and silicon, driven by Moore’s Law, eating away the mechanics of the camera. I suspect we are in for even more dramatic changes; Moore’s Law is still hard at work. There are still a lot of mechanical parts in these cameras, and a lot of error-prone human involvement in composing, aiming, and timing image capture. As the cost of processing and memory continues to drop, how else might picture-taking be transformed?

* The Lytro (supposed to arrive this month) is attacking some of the lens mechanism via silicon. Rather than having a complex mechanism to direct just the photons you want to the capture surface, the Lytro captures a broader set of photons and does all the focusing post-capture. It is early days, but we seem to be heading for cameras that capture all the incident photons (frequency, phase, angle of incidence) and let you assemble the photo you want later (a rough sketch of the idea follows this list).
* Photo timing still requires a lot of human involvement and is the source of many lost photos, from both exposure mistakes and mistiming. This seems like a great opportunity area – the camera could use the shutter button as a hint, continually grab an image stream, save the couple of seconds around the hint, and use software to find the best frame (see the second sketch below). The realities of battery life may be the limiting factor here.
* Cameras can also take a hint from computers. Rather than making bigger and faster processors, we’ve moved to 4-core and 8-core and beyond. At the whole-system level, we get better graphics performance by using SLI or other techniques to spread work across multiple GPUs. Rather than having bigger and bigger sensors, it seems likely that cameras will move to multiple sensors, bonded together to create one image or spread around the camera body. Why? This could be used for 3D cameras – Fuji has some commercial 3D cameras, and there are a lot of “research efforts”:http://adsabs.harvard.edu/abs/2010ITEIS.130.1561N. Or to create HDR cameras – cameras that capture multiple exposures at once (see the fusion sketch below). Or crazy “spider eye-inspired 3D and focus”:http://www.petapixel.com/2012/01/27/jumping-spiders-eyes-may-inspire-new-camera-technologies/.
* Maybe cameras can eliminate the whole sighting and composition step: you could just point your camera in the broad direction you want and snap. Maybe the camera can have sensors on all sides, and you could just wave your camera cube around. We are headed for a point where sensors are basically free, so I’d expect a lot of innovation in their placement and number.
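
To make the light-field idea concrete: once the capture is decoded into a grid of sub-aperture views, refocusing is just shift-and-sum. This is a minimal sketch of that well-known technique, not Lytro’s actual pipeline; the views layout and alpha parameter here are assumptions for illustration.

```python
import numpy as np

def refocus(views, alpha):
    """Synthetic refocus by shift-and-sum (assumed inputs, for illustration).

    views: dict mapping sub-aperture offsets (u, v) to HxW or HxWx3 arrays,
           i.e. the per-direction images decoded from a light-field capture.
    alpha: refocus parameter; varying it moves the focal plane.
    """
    out = None
    for (u, v), img in views.items():
        # shift each view in proportion to its angular offset, then average;
        # points at the chosen depth line up, everything else blurs away
        shifted = np.roll(img, (int(round(alpha * u)), int(round(alpha * v))), axis=(0, 1))
        out = shifted.astype(np.float64) if out is None else out + shifted
    return out / len(views)
```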
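
The shutter-as-hint idea is also easy to sketch: keep a rolling buffer of frames and let software pick the winner. Here is a toy version, assuming grayscale frames and using variance of the Laplacian as a crude stand-in for “best” (a real ranker would also weigh exposure, faces, and motion):

```python
from collections import deque
import numpy as np

FPS = 30
ring = deque(maxlen=2 * FPS)  # keep roughly the last two seconds of frames

def sharpness(gray):
    # variance of a Laplacian response: in-focus frames score higher
    g = gray.astype(np.float64)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
    return lap.var()

def on_new_frame(frame):
    ring.append(frame)

def on_shutter_hint():
    # the button press is only a hint; return the sharpest nearby frame
    return max(ring, key=sharpness)
```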
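
And for the multi-sensor HDR case, a stripped-down exposure fusion that weights each pixel by how well exposed it is – loosely in the spirit of Mertens-style fusion, not any shipping camera’s pipeline, and assuming the shots are already aligned:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Blend differently-exposed shots of the same scene.

    images: list of HxWx3 float arrays scaled to [0, 1], e.g. simultaneous
            captures from multiple sensors (assumed perfectly aligned).
    """
    acc = np.zeros_like(images[0])
    wsum = np.zeros(images[0].shape[:2])
    for img in images:
        # weight pixels by closeness to mid-gray so well-exposed data dominates
        w = np.exp(-((img.mean(axis=2) - 0.5) ** 2) / (2 * sigma ** 2))
        acc += img * w[..., None]
        wsum += w
    return acc / wsum[..., None]
```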

So if a future camera is taking kaboodles of images in all directions all the time because sensors and local memory and processing power are free, what will be the constraining factors in taking and using pictures? Battery life and bandwidth will still be realities. And software. We will need software that can deal with an explosion of photo and video content. I have a lot of photos today – 50K or so – and they are already a management struggle. What if I have 500K? 5M? What if a business has billions of photos, billions of minutes of video? How do people find their way through the flood to find the best pictures, or stitch together pictures and videos from different sources into a coherent whole? What post-processing takes place to clean up the pictures, fix composition, correct errors, etc.? And how do you search across everyone’s gigantic photo streams to find the photos you really want to see? Investing in “big data for pictures/video” should be a durable investment thesis.
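
Even simple tricks go a long way at that scale. A perceptual “average hash”, for instance, lets you find near-duplicate shots across millions of photos with a cheap bit comparison; this is a toy sketch of that well-known technique, assuming images arrive as 2D grayscale arrays:

```python
import numpy as np

def average_hash(gray, size=8):
    # shrink to size x size by block-averaging, then threshold at the mean;
    # near-duplicate photos produce hashes that differ by only a few bits
    h, w = gray.shape
    small = gray[:h - h % size, :w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return sum(1 << i for i, b in enumerate(bits) if b)

def hamming(a, b):
    # count of differing bits; a small distance means "probably the same shot"
    return bin(a ^ b).count("1")
```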

I’m not sure how it all plays out, but I feel pretty certain that Moore’s Law will ensure that the way we take and use pictures is dramatically different in 20 years. A gigapixel camera might be nice, but I suspect the silicon and software will be used not just to crank up resolution, but to address the other steps in taking pictures – composition, timing, exposure, aiming, post-processing, finding, sharing, etc.