Video Indexer detail page

In Microsoft Azure Video Indexer, the details page for a video asset contains two tabs: Insights and Timeline.

Insights

On the Insights tab, you can see all the metadata extracted from the video.

The following list describes the types of metadata that the video AI analysis can extract from a video.

People: A list of people identified in the video, a timeline of their appearances, and the percentage of their presence. Azure Video Indexer detects and groups faces; detects, groups, and identifies animated characters; detects and identifies celebrities, adding a brief biography; and automatically extracts the best images to use as thumbnails for the detected faces.

Keywords: Keywords mentioned in each segment of the video.

Topics: The main topics of the video, sorted by category.

Labels: Tags describing identified objects (for example, cat, table, car, or ball), with a timeline of when they appear in the video.

Language: The language identified in the video.

Emotions: A list of emotions detected in speech, vocal signals, and facial expressions.

Sentiments: A comparison of the levels of positive and negative sentiment throughout the audio and video, shown on a clickable timeline that indicates where each sentiment appears.

Scenes: The video, segmented into semantic scenes.

Keyframes: A list of detected stable keyframes.
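
These insights are also available programmatically: the Video Indexer REST API's Get Video Index operation returns a JSON document that contains every insight type listed above. The following Python sketch illustrates the call, assuming the requests package is installed; the location, account, video, and token values are placeholders you would substitute with your own.

import requests

# Placeholder values: substitute your own account details and access token
# (the token comes from the Get Account Access Token API).
LOCATION = "trial"                    # Azure region, or "trial" for trial accounts
ACCOUNT_ID = "<your-account-id>"
VIDEO_ID = "<your-video-id>"
ACCESS_TOKEN = "<your-access-token>"

# Fetch the full index for a video, which includes the extracted insights.
url = (
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}"
    f"/Videos/{VIDEO_ID}/Index"
)
response = requests.get(url, params={"accessToken": ACCESS_TOKEN})
response.raise_for_status()
index = response.json()

# The insight types listed above appear under videos[0].insights,
# e.g. "faces", "keywords", "topics", "labels", "emotions", "sentiments".
insights = index["videos"][0]["insights"]
for label in insights.get("labels", []):
    print(label["name"])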

Timeline

The Timeline tab splits the video into segments based on audio analysis. An automated transcription feature converts the speech to text on a timeline; it also identifies the individual speakers and maps each spoken line to its speaker on the timeline.
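
The transcript and speaker mapping shown on the Timeline tab are part of the same index JSON returned by the API call sketched above. The short example below assumes that index was saved to a local file named index.json (a hypothetical filename) and prints each transcript line with its speaker and start time; the field names follow the Video Indexer index schema.

import json

# Minimal sketch: read a previously saved video index and walk the transcript.
with open("index.json") as f:
    index = json.load(f)

# Each transcript entry carries the recognized text, a speaker ID, and
# one or more timed instances marking where it occurs in the video.
for line in index["videos"][0]["insights"].get("transcript", []):
    speaker = line.get("speakerId", "?")
    start = line["instances"][0]["start"]   # timestamp such as "0:00:05.2"
    print(f"[{start}] Speaker {speaker}: {line['text']}")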
