Google has released a beta of its latest cloud-based application programming interface, which can detect faces, signs, landmarks, objects, text and even emotions within a single image.
Google Cloud Vision API, powered by much of the same tech that goes into Google’s world-renowned search engine, allows developers and users alike to sort through photos using a variety of criteria.
Vision API can pick out objects in a picture, from text on a sign to food on a plate to faces in a crowd. Using Google’s SafeSearch technology, the API can also be programmed to identify images that have inappropriate content, or filter other images based on their subject matter.
The API can also detect facial features, allowing it to find images that display certain emotions. This means that, technically speaking, Google’s cloud platform can sense fear just as well as it can identify a taco or a goldfish.
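As a rough sketch of how a developer might ask for those detections, the snippet below builds an `images:annotate` request body combining label, face (with emotion likelihoods) and text detection. The endpoint URL, the `YOUR_API_KEY` placeholder and the `maxResults` value are illustrative assumptions, not details from Google's announcement; check the Vision API documentation before sending real requests.

```python
import base64
import json

# Assumed v1 REST endpoint; an API key would be appended in practice.
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY"

def build_annotate_request(image_bytes):
    """Build a JSON body asking Vision API for labels, faces, and text.

    Images are sent inline as base64; each "feature" entry names one
    detection type to run on the same image.
    """
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 5},
                {"type": "FACE_DETECTION", "maxResults": 5},  # face results include emotion likelihoods
                {"type": "TEXT_DETECTION", "maxResults": 5},
            ],
        }]
    }

# Placeholder bytes stand in for a real image file read from disk.
body = build_annotate_request(b"\x89PNG placeholder")
print(json.dumps(body, indent=2))
```

The same body could then be POSTed to the endpoint with any HTTP client; the response would contain one annotation list per requested feature.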
While Cloud Vision could be of use to shutterbugs looking to clean up a backlog of photos saved to their Google Cloud Storage, the API is mostly intended for developer use.
That said, we’re already imagining ways apps could use this image-recognition tech: an app that identifies a brand of clothing from a quick pic of a mannequin, or one that lists the exact make and model of a car or computer you’re selling on Craigslist just from an uploaded photo.
Google pointed out some other potential uses for its Vision API, such as helping image hosting sites flag copyrighted or offensive images, or building machines that identify objects or faces on sight, as seen with the Raspberry Pi-powered robot in the video below:
YouTube : https://www.youtube.com/watch?v=eve8DkkVdhI&feature=youtu.be
Developers can begin working with Google Cloud Vision API starting today, with the first 1,000 uses of each feature free per month. Those looking to sift through even more photos than that can expect to pay between $0.60 and $5 per month per feature, depending on usage.