One primary issue we've seen with using a smartphone as your primary camera is that you're forced to always shoot with a large depth of field, which means you miss out on that sought-after bokeh for your close-up or portrait shots.
I've actually been working with a team of former post-doc researchers who've been using SIFT flow techniques to estimate depth information in an image and apply filters to emulate different depths of field - effectively allowing an image captured with a smartphone to look like it was shot with a different lens (see https://www.dropbox.com/s/7qhfgnwl08vtk63/compare.png?dl=0). We've got some pretty cool demos if anyone's interested.
Google's approach requires capturing multiple images of the same subject from different angles in order to triangulate depth - our SIFT flow based approach can achieve similar performance using only a single image. This makes capturing an image with an editable depth of field as simple as capturing a normal one.
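To give a feel for the rendering half (the depth extraction is where the actual research is), here's a toy Python sketch - not our pipeline, just the general idea - that assumes a per-pixel depth map is already available and blends each pixel between precomputed blur levels according to its distance from a chosen focal plane:

```python
# Toy depth-dependent blur: given an estimated per-pixel depth map
# (random stand-in here, playing the role of the SIFT-flow output),
# blend each pixel between the sharp image and progressively blurred
# copies according to its distance from the chosen focal plane.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_shallow_dof(image, depth, focal_depth, max_sigma=8.0):
    """image: HxWx3 float array; depth: HxW array scaled to [0, 1]."""
    # Blur strength grows with distance from the focal plane.
    sigma_map = max_sigma * np.abs(depth - focal_depth)
    # Precompute a small stack of increasingly blurred images.
    sigmas = np.linspace(0.0, max_sigma, 5)
    stack = [image] + [
        np.stack([gaussian_filter(image[..., c], s) for c in range(3)], axis=-1)
        for s in sigmas[1:]
    ]
    stack = np.stack(stack)                      # (levels, H, W, 3)
    # For each pixel, pick the two nearest blur levels and interpolate.
    idx = sigma_map / max_sigma * (len(sigmas) - 1)
    lo = np.clip(np.floor(idx).astype(int), 0, len(sigmas) - 2)
    frac = (idx - lo)[..., None]
    ys, xs = np.indices(depth.shape)
    return (1 - frac) * stack[lo, ys, xs] + frac * stack[lo + 1, ys, xs]

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
depth = np.tile(np.linspace(0, 1, 32), (32, 1))  # synthetic depth ramp
result = fake_shallow_dof(img, depth, focal_depth=0.0)  # focus nearest plane
```

Real systems have to be much more careful at depth discontinuities (the halo problem mentioned below), but it shows why a single depth map is enough to re-render the shot at different focus settings.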
I assume that only goes for situations where you're reducing depth of field. As in, sharp backgrounds or foregrounds can be made blurry, but not the other way around.
We've only looked at reducing the depth of field, as the narrow aperture on smartphone cameras leads to an extended depth of field (meaning that everything in the frame is usually in focus).
That's fine, unless the last 5+ years of bokeh-fetishism has turned it into the modern-day equivalent of lens-flare Photoshops. It's basically a big "look how much lens I can afford!" dick-waving contest.
Since you're doing research, I'm sure you've seen the Lytro cameras. Maybe if they manage to get that technology into phones, it could really change the world. Samsung is already doing some stuff like recording photos several ms before/after the button was pressed, so you can do minor time-stream corrections. It's not long before DoF corrections through hardware or software come along.
Bokeh can be overused, sure, but it still has its place. It helps immensely with separating the subject from the background and some (much better than me) photographers can do great things with narrow DoF. Beyond that, you don't even need that expensive a lens to get good narrow DoF. A nifty fifty is enough to get you there.
Nice, but I don't think the problem with soft edges is ever going to be solved satisfactorily. For instance, check the halo on the guy's hair next to the cabinet in your example, or the shirt/cabinet edge. Besides, the background looks like Gaussian blur, not lens blur.
Even manually Photoshopped images often don't look perfect.
I think we simply need more data; it can't be done convincingly with a single lens/aperture shot. Maybe with multiple shots/focus points, but even then, I'm skeptical.
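For what it's worth, the "multiple shots/focus points" idea is the classic depth-from-focus trick: take a stack of shots focused at different distances and assign each pixel the depth of the frame where it looks sharpest. A toy sketch on synthetic data (no claim it holds up on real scenes):

```python
# Toy depth-from-focus: per-pixel depth = index of the frame in a focus
# stack where local sharpness (smoothed squared Laplacian) peaks.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack):
    """stack: (n_frames, H, W) grayscale focus stack -> HxW index map."""
    # Local sharpness per frame: squared Laplacian, smoothed over a window.
    sharpness = np.stack([uniform_filter(laplace(f) ** 2, size=5) for f in stack])
    return np.argmax(sharpness, axis=0)

# Synthetic stack: frame 0 is sharp on the left half, frame 1 on the right.
rng = np.random.default_rng(1)
scene = rng.random((32, 32))
blurred = uniform_filter(scene, size=7)
f0 = np.where(np.arange(32) < 16, scene, blurred)   # left columns in focus
f1 = np.where(np.arange(32) < 16, blurred, scene)   # right columns in focus
depth_idx = depth_from_focus(np.stack([f0, f1]))
```

Even this needs the subject to hold still between shots, which is part of why I'm skeptical about it on phones.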
We're primarily targeting depth-of-field adjustment tools for video, as having multiple frames of data to work with obviously makes the task easier (although we can still get great results with a single image).
We're also experimenting with other filters, not just Gaussian - it's the depth extraction part that we've been trying to innovate on.
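As a rough illustration of the non-Gaussian direction: a flat disc kernel approximates a lens's circle of confusion, so out-of-focus point highlights render as hard-edged discs rather than Gaussian smudges. A toy sketch (not our actual filter):

```python
# Toy lens blur: convolve with a flat circular ("disc") kernel, which
# mimics a lens's circle of confusion better than a Gaussian does.
import numpy as np
from scipy.signal import fftconvolve

def disc_kernel(radius):
    """Flat circular kernel of the given pixel radius, normalized to sum 1."""
    size = 2 * radius + 1
    ys, xs = np.indices((size, size)) - radius
    kernel = (xs**2 + ys**2 <= radius**2).astype(float)
    return kernel / kernel.sum()

def lens_blur(image, radius):
    """Convolve each channel with the disc kernel (uniform defocus)."""
    k = disc_kernel(radius)
    return np.stack(
        [fftconvolve(image[..., c], k, mode="same") for c in range(3)],
        axis=-1,
    )

img = np.zeros((33, 33, 3))
img[16, 16] = 1.0                     # a single bright point highlight
out = lens_blur(img, radius=5)        # the point spreads into a flat disc
```

A Gaussian would smear that highlight into a soft blob; the disc keeps the crisp bokeh circles people expect from real glass.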
Right, because it's the depth extraction part that's really hard. Particularly, the transitions. I haven't seen anything that struck me as natural, but I'd love to be proven wrong.
I'm pretty leery of faking depth of field since you can't blur in pixels that aren't there. What this really shows me is that sensor size is almost irrelevant when it comes to image quality now. Certainly a 1" sensor should be enough to satisfy most needs. It's all down to lenses and ergonomics now.
Even my six-year-old 5DMII shoots far better than my iPhone 6. Lenses, yes, but sensors too. With my 5DMII I've been able to pull a shop name off a reflection in the eye of a guy I was shooting a normal portrait of. There's far less photo-AI magic going on in DSLRs to fake high-quality results. Don't get me wrong, I'm very happy with the 6 for casual shots, but I still think it's much easier to get high quality from larger sensors.
If you're shooting with post-processing in mind, then that's another thing entirely.
> I've been able to pull the shop name off of a reflection of a guy's eye
Most needs, not occasional attempts to prove a point ;-)
We're at a point now where the latest M43 cameras can match the previous generation APS-C sensors (which exceed the 5D Mk II, for example, in dynamic range and color depth, if not low-light performance) -- if you're willing to lose a stop, that gets you below a 1" sensor.
I think Nikon's decision to make the 1-series was actually very good strategically (skating to where the puck is heading). I just think they've totally wasted their first mover advantage by failing to provide good bodies or lenses for enthusiasts. (I don't know why Nikon uses Aptina sensors instead of Sony -- it may be for the fast readout, since the first gen 1-series cameras could shoot full resolution at 60fps.)
I remember how satisfying it was to get that demo to work back in 2011 (thanks to the help of Dave Emett). Feels like an incredibly long time ago now! I wonder what framerate people will be able to achieve...
DoF Adjustment: https://www.dropbox.com/s/7qhfgnwl08vtk63/compare.png?dl=0