Google has revealed a machine-learning system that can automatically generate captions by recognizing the content of images and describing it using natural language processing (NLP). It may soon be easy for computers to summarize a complex scene in a few words.
Google aims to build a system that could eventually help visually impaired people understand pictures, provide alternative text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search for images on Google.
Though there are many examples of computer-vision systems that can auto-tag photos, this one takes things a step further by producing full descriptions. It is still not entirely accurate all of the time, as the snapshots above show, but this early-stage research project holds significant promise for the future of machine learning and artificial intelligence.
Google said in its blog post that recent research has improved object detection, classification, and labeling. Accurately describing a complex scene, however, requires a deeper representation of what is going on: capturing how the various objects relate to one another and translating it all into natural-sounding language.
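Systems of this kind typically pair an image encoder with a language decoder. The following is a minimal, purely illustrative sketch of that encoder-decoder pipeline in plain Python; the functions, vocabulary, and "image" here are hypothetical stand-ins, not Google's actual models (a real system would use a trained convolutional network as the encoder and a trained recurrent language model as the decoder).

```python
# Toy sketch of an encoder-decoder captioning pipeline.
# Both stages are fake stand-ins for trained neural networks.

def encode_image(image_pixels):
    """Toy 'encoder': reduce an image to a fixed-length feature vector.
    A real system would run a trained CNN over the pixels here."""
    total = sum(image_pixels)
    return [total % 7, total % 5, total % 3]  # fake 3-dim feature vector

def decode_caption(features, vocabulary):
    """Toy 'decoder': emit one word per feature value.
    A real system would run a trained RNN language model here."""
    return " ".join(vocabulary[f % len(vocabulary)] for f in features)

vocabulary = ["a", "dog", "rides", "on", "grass", "person", "skateboard"]
image = [10, 20, 16, 10]  # stand-in for raw pixel values

features = encode_image(image)
caption = decode_caption(features, vocabulary)
print(caption)  # → a dog rides
```

The point of the split is that the encoder compresses the scene into a compact representation, and the decoder turns that representation into ordered words, which is where the "natural-sounding language" part happens.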
NLP belongs to the area of human-computer interaction: it is a way to enable computers to derive meaning from human (natural) language input. It is a component of artificial intelligence, computer science, and linguistics concerned with processing text and making its information accessible to computer applications.
Natural-language-processing algorithms are designed to analyze and understand human language as it is spoken or written. Typical tasks include sentence segmentation, part-of-speech tagging and parsing, named-entity extraction, co-reference resolution, and deep analytics.
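To make two of these tasks concrete, here is a toy sketch of sentence segmentation and named-entity extraction using only simple heuristics in plain Python. The rules below (split after punctuation, treat non-initial capitalized words as entities) are deliberately naive illustrations; real NLP toolkits use trained statistical models for both tasks.

```python
import re

def segment_sentences(text):
    """Toy sentence segmentation: split after '.', '!' or '?'.
    Real segmenters handle abbreviations, quotes, and more."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def extract_named_entities(sentence):
    """Toy named-entity extraction: capitalized words that are not
    sentence-initial are treated as candidate entities."""
    words = sentence.split()
    return [w.strip(".,") for w in words[1:] if w[0].isupper()]

text = "Google revealed a captioning system. The work was done in California."
sentences = segment_sentences(text)
entities = [e for s in sentences for e in extract_named_entities(s)]
print(sentences)  # two sentences
print(entities)   # → ['California']
```

Even this crude version shows why the tasks are separate steps: entities are found per sentence, so segmentation errors would propagate into the entity list.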
Current approaches to NLP are based on machine learning: algorithms examine data for patterns and use those patterns to improve the program's own understanding.
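A minimal way to see "learning from data" is a bag-of-words text classifier that counts which words appear with which label during training, then scores new text against those counts. This is a hypothetical toy, far simpler than the statistical and neural models real systems use, but the principle of improving behavior by examining examples is the same.

```python
from collections import Counter

# Toy learning-from-data example: count word/label co-occurrences,
# then classify new text by which label's words it overlaps most.
training_data = [
    ("a dog plays outside", "animal"),
    ("the cat sleeps", "animal"),
    ("a car drives fast", "vehicle"),
    ("the truck hauls cargo", "vehicle"),
]

def train(examples):
    """Build one word-frequency table per label."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.split())
    return counts

def classify(model, text):
    """Score each label by how often it has seen the input's words."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in model.items()}
    return max(scores, key=scores.get)

model = train(training_data)
print(classify(model, "the dog sleeps"))  # → animal
print(classify(model, "a fast car"))      # → vehicle
```

Adding more labeled examples directly sharpens the word counts, which is the sense in which the program "improves with data" rather than following hand-written rules.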
For more details on this technology, see Google's blog post announcing the research.