10-15-2020, 05:22 AM
Latest AI breakthrough describes images as well as people do
<div style="margin: 5px 5% 10px 5%;"><img src="https://www.sickgaming.net/blog/wp-content/uploads/2020/10/latest-ai-breakthrough-describes-images-as-well-as-people-do.jpg" width="1024" height="538" title="" alt="" /></div><div><div><img src="https://www.sickgaming.net/blog/wp-content/uploads/2020/10/latest-ai-breakthrough-describes-images-as-well-as-people-do.jpg" class="ff-og-image-inserted"></div>
<h2><strong>Novel object captioning</strong></h2>
<p>Image captioning is a core challenge in the discipline of computer vision, one that requires an AI system to understand and describe the salient content, or action, in an image, explained <a href="https://www.microsoft.com/en-us/research/people/lijuanw/">Lijuan Wang</a>, a principal research manager in Microsoft’s research lab in Redmond.</p>
<p>“You really need to understand what is going on, you need to know the relationship between objects and actions and you need to summarize and describe it in a natural language sentence,” she said.</p>
<p>Wang led the research team that <a href="https://aka.ms/MSRBlogImageCap">achieved – and beat – human parity</a> on the novel object captioning at scale, or <a href="https://nocaps.org/">nocaps</a>, benchmark. The benchmark evaluates AI systems on how well they generate captions for objects in images that are not in the dataset used to train them.</p>
<p>Image captioning systems are typically trained with datasets that contain images paired with sentences that describe the images, essentially a dataset of captioned images.</p>
<p>“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” Wang said.</p>
<p>To meet the challenge, the Microsoft team pre-trained a large AI model with a rich dataset of images paired with word tags, with each tag mapped to a specific object in an image.</p>
<p>Datasets of images with word tags instead of full captions are more efficient to create, which allowed Wang’s team to feed lots of data into their model. The approach imbued the model with what the team calls a visual vocabulary.</p>
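<p>To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of this kind of visual-vocabulary pre-training: features of detected image regions and their word tags are projected into a shared embedding space and pulled together with a contrastive loss. All names and sizes (<code>VisualVocab</code>, <code>alignment_loss</code>, the feature dimensions) are illustrative assumptions, not Microsoft's implementation.</p>
<pre><code class="language-python">import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualVocab(nn.Module):
    def __init__(self, region_dim=2048, vocab_size=10000, embed_dim=256):
        super().__init__()
        self.region_proj = nn.Linear(region_dim, embed_dim)   # project detector features
        self.tag_embed = nn.Embedding(vocab_size, embed_dim)  # embed object word tags

    def forward(self, regions, tags):
        # regions: (batch, region_dim) features of detected objects
        # tags:    (batch,) integer ids of the paired word tags
        r = F.normalize(self.region_proj(regions), dim=-1)
        t = F.normalize(self.tag_embed(tags), dim=-1)
        return r, t

def alignment_loss(r, t, temperature=0.07):
    # Contrastive objective: each region embedding should be closest to its own tag.
    logits = r @ t.T / temperature
    targets = torch.arange(r.size(0))
    return F.cross_entropy(logits, targets)

# Toy training step on random stand-in data, just to show the shape of the procedure.
model = VisualVocab()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
regions = torch.randn(32, 2048)            # stand-in for object-detector features
tags = torch.randint(0, 10000, (32,))      # stand-in for tag ids ("apple", "cat", ...)
r, t = model(regions, tags)
loss = alignment_loss(r, t)
loss.backward()
optimizer.step()
</code></pre>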
<p>The visual vocabulary pre-training approach, explained Xuedong Huang, a Microsoft technical fellow and chief technology officer of Azure AI Cognitive Services, is similar to prepping children to read by first using a picture book that associates individual words with images, such as a picture of an apple with the word “apple” beneath it and a picture of a cat with the word “cat” beneath it.</p>
<p>“This visual vocabulary pre-training essentially is the education needed to train the system; we are trying to educate this motor memory,” Huang said.</p>
<p>The pre-trained model is then fine-tuned for captioning on the dataset of captioned images. In this stage of training, the model learns how to compose a sentence. When presented with an image containing novel objects, the AI system leverages the visual vocabulary to generate an accurate caption.</p>
<p>“It combines what is learned in both the pre-training and the fine-tuning to handle novel objects in the testing,” Wang said.</p>
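<p>A hypothetical continuation of the sketch above shows the fine-tuning stage: the visual embedding from the pre-trained encoder conditions a small caption decoder that is trained on fully captioned images with teacher forcing. The decoder architecture and token ids here are toy placeholders, not the model described in the paper.</p>
<pre><code class="language-python">import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        self.init_h = nn.Linear(embed_dim, hidden)    # seed the state from the visual embedding
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, visual_embed, caption_tokens):
        # visual_embed:   (batch, embed_dim) from the pre-trained visual vocabulary
        # caption_tokens: (batch, seq_len) reference caption ids (teacher forcing)
        h0 = self.init_h(visual_embed).unsqueeze(0)
        x = self.word_embed(caption_tokens)
        out, _ = self.rnn(x, h0)
        return self.out(out)                           # (batch, seq_len, vocab_size)

# Fine-tuning step: cross-entropy between predicted and reference next tokens.
decoder = CaptionDecoder()
visual_embed = torch.randn(8, 256)                     # stand-in for pre-trained encoder output
captions = torch.randint(0, 10000, (8, 12))            # stand-in for tokenized reference captions
logits = decoder(visual_embed, captions[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), captions[:, 1:].reshape(-1))
loss.backward()
</code></pre>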
<p>When evaluated on nocaps, the AI system generated captions that were more descriptive and accurate than the captions people wrote for the same images, according to results presented in a <a href="https://arxiv.org/abs/2009.13682">research paper</a>.</p>
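<p>For readers curious how such comparisons are scored, benchmarks like nocaps typically measure generated captions against human-written references with metrics such as CIDEr. Below is a small, hedged example using the open-source <code>pycocoevalcap</code> package; the image ids and captions are made up, and this is an assumption about tooling rather than the exact evaluation pipeline used in the paper.</p>
<pre><code class="language-python"># pip install pycocoevalcap
from pycocoevalcap.cider.cider import Cider

# Reference (human-written) captions and model-generated candidates, keyed by
# image id; each value is a list of plain strings (whitespace-tokenized by CIDEr).
references = {
    "img1": ["a red apple sitting on a wooden table",
             "an apple on a table"],
    "img2": ["a cat sleeping on a couch"],
}
candidates = {
    "img1": ["a red apple on a wooden table"],
    "img2": ["a cat lying on a sofa"],
}

corpus_score, per_image_scores = Cider().compute_score(references, candidates)
print(f"CIDEr: {corpus_score:.3f}")
</code></pre>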
</div>
https://www.sickgaming.net/blog/2020/10/...people-do/
<div style="margin: 5px 5% 10px 5%;"><img src="https://www.sickgaming.net/blog/wp-content/uploads/2020/10/latest-ai-breakthrough-describes-images-as-well-as-people-do.jpg" width="1024" height="538" title="" alt="" /></div><div><div><img src="https://www.sickgaming.net/blog/wp-content/uploads/2020/10/latest-ai-breakthrough-describes-images-as-well-as-people-do.jpg" class="ff-og-image-inserted"></div>
<h2><strong>Novel object captioning</strong></h2>
<p>Image captioning is a core challenge in the discipline of computer vision, one that requires an AI system to understand and describe the salient content, or action, in an image, explained <a href="https://www.microsoft.com/en-us/research/people/lijuanw/">Lijuan Wang</a>, a principal research manager in Microsoft’s research lab in Redmond.</p>
<p>“You really need to understand what is going on, you need to know the relationship between objects and actions and you need to summarize and describe it in a natural language sentence,” she said.</p>
<p>Wang led the research team that <a href="https://aka.ms/MSRBlogImageCap">achieved – and beat – human parity</a> on the novel object captioning at scale, or <a href="https://nocaps.org/">nocaps</a>, benchmark. The benchmark evaluates AI systems on how well they generate captions for objects in images that are not in the dataset used to train them.</p>
<p>Image captioning systems are typically trained with datasets that contain images paired with sentences that describe the images, essentially a dataset of captioned images.</p>
<p>“The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?” Wang said.</p>
<p>To meet the challenge, the Microsoft team pre-trained a large AI model with a rich dataset of images paired with word tags, with each tag mapped to a specific object in an image.</p>
<p>Datasets of images with word tags instead of full captions are more efficient to create, which allowed Wang’s team to feed lots of data into their model. The approach imbued the model with what the team calls a visual vocabulary.</p>
<p>The visual vocabulary pre-training approach, Huang explained, is similar to prepping children to read by first using a picture book that associates individual words with images, such as a picture of an apple with the word “apple” beneath it and a picture of a cat with the word “cat” beneath it.</p>
<p>“This visual vocabulary pre-training essentially is the education needed to train the system; we are trying to educate this motor memory,” Huang said.</p>
<p>The pre-trained model is then fine-tuned for captioning on the dataset of captioned images. In this stage of training, the model learns how to compose a sentence. When presented with an image containing novel objects, the AI system leverages the visual vocabulary to generate an accurate caption.</p>
<p>“It combines what is learned in both the pre-training and the fine-tuning to handle novel objects in the testing,” Wang said.</p>
<p>When evaluated on nocaps, the AI system created captions that were more descriptive and accurate than the captions for the same images that were written by people, according to results presented in a <a href="https://arxiv.org/abs/2009.13682">research paper</a>.</p>
</div>
https://www.sickgaming.net/blog/2020/10/...people-do/