The consumer electronics giant has explored putting a camera in its device, which could come in the form of a smart speaker like Amazon’s Echo, according to people familiar with Apple’s plans. The device would be “self aware” and detect who is in the room using facial recognition technology. That would let the device automatically pull up a person’s preferences, such as the music and lighting they like, the sources said.
I have a bone to pick with this idea. I think it’s highly unlikely that Apple would ship a camera in its omnipresent device for the room.
First, there’s the lighting issue: what if I want to talk to it (Siri? VocalIQ?) at night? Second, the inclusion of a camera means I would have to be looking at the device for said personalisation features to work, which doesn’t sound ideal at all. Third, Apple can tell me all it wants that it doesn’t constantly collect data from this hypothetical always-on camera, but I don’t think I would be comfortable with an internet-connected camera constantly pointed in my general direction.
The only way to solve the ‘you must look at it’ problem is to put the camera in the Apple TV’s remote, which raises the chances of the camera always being pointed at you. But if I’m required to hold a device to make the thing work, why wouldn’t I just use my iPhone instead?
Apple already validates users by voice with ‘Hey, Siri’ on the iPhone; ideally, ‘Hey, Siri’ is supposed to respond only to the voice it’s trained on. Sure, it’s incredibly hit-or-miss, but at least you can see where Apple is laying its foundation. I think voice validation is undoubtedly the way to go.
(Remember the MacBook Pros with TouchID rumour from a few days ago? I talked about the positioning of the TouchID sensor and figured the trackpad was the most likely place, albeit with a few quirks. What if the face-recognition abilities are for the MacBook Pro instead? MacBooks already have a camera in the right place, and the Pros have better cameras than the Retina MacBook.)