41 megapixel camera! Where does it end — gigapixel cameras? Terapixel?

So, a “41 megapixel camera phone from Nokia”:http://www.tomsguide.com/us/Nokia-808-PureView-41-megapixel-Camera-Phone,news-14288.html, pretty amazing. The improvement in camera phones over the last five years has been remarkable. Moore’s Law has driven the cost of camera chipsets into the ground, and their performance has continued to increase. Just as the earlier digital camera wave destroyed the film/processing/prints business, the smartphone+software combo is now destroying the digital point-and-shoot camera market. Moore’s Law is a powerful force.

Higher-end cameras are being transformed as well. DSLRs are under assault by the new breed of mirrorless camera bodies. Sensors are getting good enough, as are the LED/LCD viewfinders, permitting a shift to these new, smaller platforms. This shift will take a little longer because of people’s investments in lenses, but it is underway.

Both of these shifts are about software and silicon, driven by Moore’s Law, eating away the mechanics of the camera. I suspect that we are in for even more dramatic changes; Moore’s Law is still hard at work. There are still a lot of mechanical parts in these cameras, and a lot of error-prone human involvement in composing, aiming, and timing image capture. As the cost of processing and memory continues to drop, how else might picture-taking be transformed?

* The Lytro (supposed to arrive this month) is attacking some of the lens mechanism via silicon. Rather than having a complex mechanism to direct just the photons you want to the capture surface, the Lytro captures a broader set of photons and does all the focusing post-capture. It is early days but we seem to be heading for cameras that capture all the incident photons (frequency, phase, angle of incidence) and let you assemble the photo you want later.
* Photo timing still requires a lot of human involvement and is a source of many lost photos, whether from exposure problems or mistiming. This seems like a great opportunity area — the camera could use the shutter button as a hint, continually grab an image stream, save the couple of seconds around the hint, and use software to pick the best frame (a rough sketch of this idea follows the list). The realities of battery life may be the limiting factor here.
* Cameras can also take a hint from computers. Rather than making bigger and faster processors, we’ve moved to 4-core and 8-core and beyond. At the whole-system level, we get better graphics performance by using SLI or other techniques to use multiple GPUs. Rather than having bigger and bigger sensors, it seems likely that cameras will move to multiple sensors, bonded together to create one image or spread around the camera body. Why? Well, this could be used for 3D cameras — Fuji has some commercial 3D cameras, and there are a lot of “research efforts”:http://adsabs.harvard.edu/abs/2010ITEIS.130.1561N. Or to create HDR cameras — cameras that capture multiple exposures at once. Or crazy “spider eye-inspired 3D and focus”:http://www.petapixel.com/2012/01/27/jumping-spiders-eyes-may-inspire-new-camera-technologies/.
* Maybe cameras can eliminate the whole sighting and composition step: you could just point your camera in the broad direction you want and snap. Maybe the camera could have sensors on all sides, so you could just wave your camera cube around. We are headed for a point where sensors are basically free, so I’d expect a lot of innovation in their number and placement.
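
To make the shutter-as-a-hint idea from the second bullet concrete, here is a minimal sketch in Python. Everything in it is assumed for illustration: the frame stream, the preview frame rate, and the crude gradient-variance sharpness score are stand-ins, not any real camera API.

```python
from collections import deque

import numpy as np

BUFFER_SECONDS = 2        # keep roughly two seconds of frames around the hint
FRAMES_PER_SECOND = 30    # assumed preview frame rate


def sharpness(frame):
    """Crude focus measure: variance of the image gradient (higher = sharper)."""
    gy, gx = np.gradient(frame.astype(float))
    return float((gx ** 2 + gy ** 2).var())


def best_frame_around_hint(frame_stream, hint_index):
    """Buffer recent frames; once the stream reaches half a window past the
    shutter 'hint', score everything in the window and return the sharpest."""
    window = deque(maxlen=BUFFER_SECONDS * FRAMES_PER_SECOND)
    for i, frame in enumerate(frame_stream):
        window.append((i, frame))
        if i >= hint_index + window.maxlen // 2:
            return max(window, key=lambda item: sharpness(item[1]))
    # Stream ended before the window filled; pick the best of what we have.
    return max(window, key=lambda item: sharpness(item[1]))


# Usage with synthetic frames standing in for a real sensor stream.
frames = (np.random.rand(120, 160) for _ in range(300))
index, keeper = best_frame_around_hint(frames, hint_index=150)
print(f"kept frame {index}")
```

The point is just the shape of the idea: the shutter press marks a window in a rolling buffer, and software, not the photographer’s timing, picks the keeper.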

So if a future camera is taking kaboodles of images in all directions all the time because sensors and local memory and processing power are free, what will be the constraining factors in taking and using pictures? Well, battery life and bandwidth will still be realities. And software. We will need software that can deal with an explosion of photo and video content. I have a lot of photos today, 50K or so, and it is already a management struggle. What if I have 500K? 5M? What if a business has billions of photos, billions of minutes of video? How do people find their way thru the flood to the best pictures, and stitch together pictures and videos from different sources into a coherent whole? What post-processing takes place to clean up the pictures, fix up composition, correct errors, etc.? And how do you search across everyone’s gigantic photo streams to find the photos you really want to see? Investing in “big data for pictures/video” should be a durable investment thesis.

I’m not clear how it all plays out, but I feel pretty certain that Moore’s Law will ensure that the way we take and use pictures will be dramatically different in 20 years. A gigapixel camera might be nice, but I suspect the silicon and software will be used not just to crank up resolution, but to address the other steps in taking pictures — composition, timing, exposure, aiming, post-processing, finding, sharing, etc.

I just ordered my Lytro camera.

Available February/March next year. The “Lytro”:http://www.Lytro.com features a technology they call “light field” — they grab sufficient photon data at capture time to allow refocusing, zooming, etc. as a post-capture option. The Lytro is a simple step on the way to a full software-defined lens — I first wondered about such a lens in 2003 and should have filed a bunch of patents. Other people are pushing the idea ahead; see for instance “Software Defined Lensing”:http://www.creative-technology.net/CTECH/SDL.html.

As the writeup points out, you can view a traditional glass lens as a kind of quantum computer with a single fixed purpose, established at manufacture time. The lens captures all the incident photons, does some photonic/quantum computation, and spits an answer out on the CCD. But if we can replace the lens with something that has much more dynamic, programmable behavior, very cool things could be done — arbitrary refocusing and zooming being just the simplest examples. A much broader set of incident radiation could be captured, spectral analysis of the image could be performed, filtering applied, incredible levels of zoom achieved, etc.

The Lytro is a very modest step in this direction, but an exciting one.

Photostream is cute, but what I really want is Aperture/iPhoto in the cloud

So “I don’t really get iCloud storage yet”:http://theludwigs.com/2011/10/im-struggling-to-understand-why-i-would-ever-use-icloud-storage/, and “Photostream doesn’t really accommodate all my DSLR pictures well”:http://theludwigs.com/2011/10/icloud-photostream-and-dslrs-dont-seem-to-be-a-great-fit/. Rather than just whine about what I don’t have, what do I really want?

First — I have a 203G (gigabyte) Aperture library today; that is where my primary photo storage lives. Digging into this a little:

* 54G is thumbnails, previews, and caches of various sorts. 27G of thumbnails alone! Impressive use of disk space, Aperture. Clearly the team has embraced the idea that disk space is cheap and getting cheaper. There are probably some settings I could tweak to trim the size of all this at the cost of performance, but whatever, disk space IS cheap, and 30% overhead is probably not a ridiculous design objective. This is all derived data tho and could be trimmed, dropped, whatever, as I think about cloud storage.
* My masters are 149G. A mix of RAW and JPG, depending on which camera/scanner I used and how long ago the photo was taken — tending towards more RAW over time.
** 19G from this year
** 34G from 2010
** 25G from 2009
** 71G from earlier years.

Let’s assume I continue to take pictures at the average rate of the last three years for some time; that is about 25 gig of new photos every year, not accounting for inflation in photo size due to better quality capture chips, “light field cameras”:http://www.lytro.com/cameras, etc. OK, so you probably have to assume some growth in that 25 gig of new storage a year.
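
A quick back-of-the-envelope on where that leaves the library in a few years, with the yearly size inflation being purely an assumption for illustration:

```python
# Back-of-the-envelope library growth, using the numbers above: ~149G of
# masters today, the last three years' average of new photos per year, and an
# assumed (purely illustrative) 15% yearly inflation in photo size.
masters_gb = 149.0
yearly_new_gb = (19 + 34 + 25) / 3   # roughly 26G of new masters per year
inflation = 1.15                     # hypothetical growth in per-photo size

for year in range(1, 6):
    masters_gb += yearly_new_gb * inflation ** year
    print(f"year {year}: ~{masters_gb:.0f}G of masters")
```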

Cloud storage of photos — is it important? Hugely so. If my house is burning down, I do not want to be running back in to save a hard disk; photos are emotionally very important. And I do NOT want to have to pick and choose which photos I store in the cloud — too many photos, not enough time; I just want the entire set up in the cloud. I really just want my entire Aperture (and iPhoto) collection replicated to the cloud automagically. And then I need some modest access control features on the folders in the cloud so that I can share selected photo sets with family members, etc.

So I want a cloud storage solution that gives me ~200 gig of storage today at a reasonable price, a clear path to 300-400 gig over the next couple of years, and good web access with some security. What are my choices today? (A rough cost-per-gig comparison follows the list.)

* iCloud doesn’t begin to work. Aperture doesn’t really talk to it except via Photostream. The max storage I can buy is 55 gig. There are no access controls. It falls short along almost every dimension.
* Dropbox. I can get 100G for $240 a year with a nice web interface and some sharing controls. I could even get the team license and store up to 350G, but for $795 a year. If I had this, I could just move my Aperture library into my Dropbox folder and voila, it would be in the cloud, on my other machines, etc. However — the Aperture library folder is not really meant to be browsed by humans; the masters are chopped up into a funky balanced tree of directories. Seems like Aperture needs to learn how to work with shared storage. So I could get everything into Dropbox with a very easy UI, but at a high price, and the ability to share folders with family members would probably be hard to realize.
* Box.net. Well, I get 50G free with their iPad offer, so they pretty much trump iCloud. I could get up to 500G on a business plan for $180/year per user. Similar pros and cons as Dropbox, but the pricing seems better.
* “Smugmug”:http://www.smugmug.com. This is what I use today. There is an Aperture plugin, so I can save directly from Aperture. The bad part is that it is not automagic — I have to intentionally move folders up there, and I’m not happy about that. But — unlimited storage, at $40-150 per year for JPG (some extra cost, but still cheap, if you want RAW), a great interface for sharing, complete customizability, printing integration, etc.
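
As a sanity check on the prices above, a rough cost-per-gig comparison (plan sizes and prices as quoted in this post; since Smugmug is unlimited, I price it against roughly my current library size):

```python
# Rough cost-per-gig comparison, using the plan sizes and prices quoted above.
# Smugmug is effectively unlimited, so it is priced against ~200G (my library).
plans = {
    "Dropbox 100G":          (100, 240),
    "Dropbox team 350G":     (350, 795),
    "Box.net business 500G": (500, 180),
    "Smugmug (~200G used)":  (200, 150),   # top-end yearly price
}
for name, (gigs, dollars_per_year) in plans.items():
    print(f"{name}: ${dollars_per_year / gigs:.2f} per gig per year")
```

On raw dollars per gig, Box.net and Smugmug come out well ahead, which lines up with the conclusion below.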

For now, Smugmug is the way to go, but as storage costs drop, I can see flipping to Box.net or Dropbox at some point. I’d be giving up some of Smugmug’s great interface in exchange for admin control, but that interface is overkill for me anyway. If Apple made this all work natively in Aperture at a competitive cost, that would be fine too. For people with a more modest set of photos, the Box.net 50G free offer for iPad/iPhone users seems like an awesome option.

My Current Digital Photography Workflow

Rich summarized his “current photography workflow”:http://www.tongfamily.com/archives/2010/05/panoramas/, and there is lots of good stuff there. My flow is different; it is interesting how much divergence there is between our solutions. We have similar camera gear and take similar numbers of photos, I suspect, but the way we process them is radically different. I bet our workflows for other digital tasks are not nearly as divergent; the photo software and storage market is just very diverse.

* I also shoot in RAW and JPEG, but I don’t do much with the RAW. It has been hard to find consistent RAW support in tools, so I have tended to ignore the RAW files. Tho that may change…
* Aperture is the core of my process. I import all photos off my storage cards into Aperture and manage everything as Aperture libraries. I organize libraries in a Year/Month/Event hierarchy, which seems to work well. Aperture exposes this structure in the file system and thru the common dialogs on the Mac, so I can get at photos easily from any app.
* My first line of backup defense is Backblaze. It does trickle backups constantly and transparently in the background, so even if I fail to do more explicit backup operations, I have this protection.
* I also dump photo albums to Smugmug using the Aperture plugin on an irregular basis. This gives me another level of backup and a way to share with family.
* Finally, I copy the Aperture libraries to a USB drive every once in a while for additional protection (a tiny sketch of this step follows the list).
* Aperture is pretty fast at previewing photos and has fine basic editing tools for cropping, touchup, color and exposure correction, etc. Good enough that I never feel the need for Photoshop or other expensive tools. And there are a ton of plugins available if I really felt like more photo munging.
* Aperture 3.0 also has RAW support which I have yet to play with but need to try.
* I don’t do any HDR or panorama or other deep processing today. No time.
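
For the copy-to-USB step mentioned above, a trivial sketch of what that periodic copy could look like; the paths are hypothetical, and a dated destination folder keeps older copies around:

```python
# A trivial sketch of the periodic copy-to-USB step. The paths are assumed
# for illustration; a dated destination folder keeps older copies around.
import shutil
from datetime import date
from pathlib import Path

library = Path.home() / "Pictures" / "Aperture Library.aplibrary"   # assumed
backup_root = Path("/Volumes/BackupDrive")                           # assumed

dest = backup_root / f"aperture-{date.today():%Y-%m-%d}"
shutil.copytree(library, dest)
print(f"copied {library} -> {dest}")
```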

That is pretty much it. My solution is a little more expensive than Rich’s; I pay for Aperture, Smugmug, and Backblaze. But I find it all to be pretty fast. It does demand a reasonable MacBook; I just updated to the new i7 MacBooks with 8GB of RAM and the biggest hard disk I could get.