Vlad Savov, writing for The Verge:

I’ve been reading about Gcam, the Google X project that was first sparked by the need for a tiny camera to fit inside Google Glass, before evolving to power the world-beating camera of the Google Pixel.

Well, OK. So you think Google’s camera is better than Apple’s. It’s a free country, so we’ll move on.

What if your phone knew when, where, and what you’re pointing it at; what if it had a library of trillions of images; and what if it could intelligently account for things like weather and time of day? Would it need to have eyes to see the scene you’re trying to capture? […]

Tell your phone how tall you are and, with the help of the same orientation sensors it uses to know its place on a map, the device will be able to calculate both your subject and your point of view when you want to shoot a photo. And if it’s something as ubiquitously photographed as, say, the Roman Forum, all a future Googlephone would need to know are the cloud formations and Sun position at the particular time of the shot, and it’d have enough data to synthesize a photo.
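
For what it’s worth, the geometry Savov is gesturing at here is straightforward trigonometry: eye height plus the device’s pitch and compass bearing give you a line of sight, and intersecting that line with the ground tells you roughly where the camera is aimed. Here’s a minimal sketch of that calculation, assuming flat ground and a subject at ground level; the function name and inputs are hypothetical illustrations, not anything Google has described or shipped.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def subject_location(lat_deg, lon_deg, eye_height_m, pitch_deg, bearing_deg):
    """Project the camera's line of sight onto flat ground.

    pitch_deg: tilt below the horizon (positive = pointing down),
        as reported by the phone's orientation sensors.
    bearing_deg: compass heading of the camera (0 = north, 90 = east).
    Returns (lat, lon) of the estimated subject, or None if the camera
    is aimed at or above the horizon (no ground intersection).
    """
    if pitch_deg <= 0:
        return None  # sight line never meets the ground plane

    # Horizontal distance to where the sight line hits the ground.
    ground_dist_m = eye_height_m / math.tan(math.radians(pitch_deg))

    # Convert that offset to degrees of latitude/longitude
    # (small-angle approximation, fine at photographic distances).
    d_lat = (ground_dist_m * math.cos(math.radians(bearing_deg))) / EARTH_RADIUS_M
    d_lon = (ground_dist_m * math.sin(math.radians(bearing_deg))) / (
        EARTH_RADIUS_M * math.cos(math.radians(lat_deg))
    )
    return lat_deg + math.degrees(d_lat), lon_deg + math.degrees(d_lon)

# Example: a 1.7 m-tall photographer at the Roman Forum, tilting the
# phone 5 degrees downward while facing east.
print(subject_location(41.8925, 12.4853, 1.7, 5.0, 90.0))
```

That part is plausible; the leap from “we know where you’re pointing” to “we can synthesize the photo” is where the trouble starts.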

I don’t know where to begin with this, other than to say that it’s never, ever going to happen. Ever. Phones aren’t going to lose their cameras because of artificial intelligence. You can already do what Vlad is suggesting: go to Google Images, find a photo of the scene, and paste people’s faces into it with Adobe Photoshop. All he’s describing is an automated way to do that. Using it exclusively, in lieu of a camera, is madness. How is it anything other than a huge step back from where we are now? The whole point of a camera is to say, “I was here, and this happened.” Neither of those claims holds with Vlad’s idea: you don’t have to be anywhere, and you won’t be able to capture anything special. The author of this Verge piece doesn’t seem to comprehend the basic reason mobile photography exists.