Introducing Google Glass’s inevitable face recognition API


Have you ever forgotten someone’s name? How about slightly more detailed information, such as their birthday? Maybe a few of you have even forgotten your anniversary with your significant other?

What if you could pull in all this data automatically just by looking at someone’s face? It’s a creepy prospect, but a seemingly inevitable reality thanks to Lambda Labs’ API for Google Glass.

Currently, Google has no rules against this kind of usage, but it does restrict live streaming to a remote server. Users would therefore have to snap a photo first, send it to the server, and pull back the resulting data, which introduces some delay.
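
With live streaming ruled out, a Glassware app would roughly follow a snap-send-receive pattern. The sketch below is purely illustrative: the endpoint URL, authentication scheme, field names and response shape are all assumptions rather than Lambda Labs’ documented API.

    import requests

    # Hypothetical snap-then-recognise workflow. Endpoint, parameters and
    # response format are assumptions for illustration only.
    API_KEY = "your-api-key"                              # assumed key-based auth
    RECOGNISE_URL = "https://api.example.com/recognise"   # hypothetical endpoint

    def recognise(photo_path):
        """Send a snapped photo to the recognition server and return any matches."""
        with open(photo_path, "rb") as photo:
            response = requests.post(
                RECOGNISE_URL,
                params={"api_key": API_KEY},
                files={"image": photo},   # assumed multipart field name
            )
        response.raise_for_status()
        # e.g. {"matches": [{"name": "...", "confidence": 0.87}]}
        return response.json()

    if __name__ == "__main__":
        for match in recognise("snapshot.jpg").get("matches", []):
            print(match)

The round trip (capture, upload, wait for the server’s answer) is where the delay mentioned above comes from.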

While we edge ever closer to Terminator-style AR (if the military isn’t already working on it, some targeting system is surely in development), the API – currently in beta – appears to have some recognition issues.

You can test out the web demo here, but I’ve had mixed results, ranging from it failing to detect faces wearing glasses to mistaking Arnold Schwarzenegger for Jennifer Aniston.

Part of the limitation is that Lambda Labs cannot access any random individual’s personal information, so the API needs some source data to work from.
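
In practice, that means supplying labelled reference photos of the people you want recognised before the service can match against them. Here is a minimal sketch of that enrolment step, again assuming a hypothetical endpoint and parameter names rather than anything documented by Lambda Labs.

    import requests

    # Hypothetical enrolment step: the recogniser needs labelled reference
    # photos ("source data") before it can identify anyone. Endpoint and
    # field names are assumptions for illustration only.
    API_KEY = "your-api-key"
    ENROL_URL = "https://api.example.com/enrol"   # hypothetical endpoint

    def enrol(name, photo_paths):
        """Upload labelled reference photos for one person."""
        for path in photo_paths:
            with open(path, "rb") as photo:
                requests.post(
                    ENROL_URL,
                    params={"api_key": API_KEY, "person": name},
                    files={"image": photo},
                ).raise_for_status()

    enrol("Jane Doe", ["jane_1.jpg", "jane_2.jpg"])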

Google+ offers similar functionality for tagging photos, suggesting people to tag across groups of photos at once. Perhaps a future extension will hook into your own “Circles” on the service to pull in information about the people you actually know.

Can you think of any innovative uses for facial recognition in your future Google Glass apps?
