Google Announces New API That Can Detect and Identify Objects Using Images
This API could lead to advancements in facial recognition, landmark detection, and, most obviously, object identification.
Attention all developers, researchers, and enthusiasts: Google has announced that it will be releasing a new object detection API. An API is, simply put, a set of rules and tools that helps developers build software. Google’s new TensorFlow object detection API is designed to make it easier to identify objects in images. The API includes models designed to run even on comparatively simple devices, like smartphones.
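In practice, a detection model of this kind returns candidate bounding boxes, each with a class label and a confidence score, and the application then keeps only the detections above some score threshold. Below is a minimal pure-Python sketch of that filtering step; the function name and the detection values are invented for illustration and are not part of Google’s API.

```python
# Hypothetical post-processing of object-detection output.
# A detection model typically returns, for each candidate box,
# a class label, a confidence score, and box coordinates
# (ymin, xmin, ymax, xmax) normalized to [0, 1].

def filter_detections(detections, score_threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["score"] >= score_threshold]

# Example raw model output (values invented for illustration):
raw = [
    {"label": "dog",    "score": 0.92, "box": (0.10, 0.20, 0.60, 0.55)},
    {"label": "person", "score": 0.81, "box": (0.05, 0.55, 0.95, 0.90)},
    {"label": "kite",   "score": 0.12, "box": (0.00, 0.40, 0.15, 0.50)},
]

kept = filter_detections(raw, score_threshold=0.5)
print([d["label"] for d in kept])  # the low-confidence "kite" is dropped
```

Thresholding like this is cheap enough to run on a smartphone, which is part of why lightweight on-device models are practical.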
Simplifying machine learning models is proving essential for advancing machine learning technology. We don’t all have massive desktop setups with servers capable of handling just about anything. While it’s possible to run these models in the cloud, that is usually abysmally slow and requires an internet connection. So to make these models more accessible to the average consumer, they’ll need to be simplified.
Keeping that in mind, Google intends for this new API to be extremely user-friendly, allowing anyone and everyone with a basic computer or smartphone to explore the world of machine learning.
Applying the API
We know that this new API can be used to identify objects in images, but beyond being amusing, could that actually be useful in our everyday lives? As it turns out, yes, it likely could. This type of API could lead to advancements in facial recognition, landmark detection, and, most obviously, object identification. These seemingly basic tools will become essential across many fields. From information services to law enforcement to everyday digital tasks, such small strides in the progression and simplification of machine learning will only continue to push us forward.
Aside from Google’s development of the API and its launch of TensorFlow Lite, a streamlined version of the machine learning framework, other companies have been building mobile models, too: Facebook used the technology to build its Caffe2Go framework, which powers its style transfer feature, and Apple released Core ML, which aims to help run these models on iOS devices. Piece by piece, machine learning is moving closer to individual accessibility.