3. User Interface Reference

The user interface is divided into the following sections:

User Login
Category Browser
List of Videos
Personal Bookmarks
Text Search
Video Text Comparison
Loaded Video
Help

Figure 1.1
Use your username and password to log into the tool. While there may be publicly accessible videos that do not require logging in, most content is viewable only by selected users. Logging in also enables bookmarking and annotating videos.

Figure 1.2
The Category Browser can be used to navigate the hierarchy of video categories over time. A 3D cylinder represents time along the horizontal dimension, with one cylinder slice per year, and categories appear as labels around the perimeter. Categories that extend across several years are visually linked by curves.

To navigate through the cylinder, press the mouse button anywhere on the cylinder, then move the mouse vertically to rotate the 3D object and horizontally to pan the cylinder to the left/right. Upon clicking on a label, either a new cylinder with sub-categories appears, or, if the selected category has no sub-categories, the User Interface view changes to the list of videos for this category.
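The drag behavior described above can be sketched as a simple input mapping. The function name and scale factors below are hypothetical, not taken from the tool:

```python
def drag_to_motion(dx, dy):
    """Map a mouse drag on the category cylinder to 3D motion:
    vertical movement (dy) rotates the cylinder, horizontal movement
    (dx) pans it left or right. The scale factors are assumptions."""
    rotation_deg = dy * 0.5   # vertical drag -> rotation
    pan_units = dx * 0.1      # horizontal drag -> pan
    return rotation_deg, pan_units

rotation, pan = drag_to_motion(dx=30, dy=10)
```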

Figure 1.3
The Video List is a time-ordered list of videos. If a video category was selected in the Category Browser, the list shows only videos from that category; otherwise, it shows all available videos. If public annotations are available for a video, the video entry is marked accordingly.

Figure 1.4
Videos in the list are color-coded as follows: Light Blue represents visited videos; Dark and Light Brown represent the presently selected video; Light Grey represents all other videos that have not been viewed during this session.

Figure 1.5
Once a video is selected, its streaming content and its browsable indices are loaded in this view. Some indices, such as text, load instantly. Others, such as high-quality snapshots, load in the background while the interface is already usable. Snapshots are loaded in order of visual distinctiveness - the most distinct snapshots are loaded first. A loading bar in the lower-left corner of the tool shows the loading status.
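The distinctiveness-first loading described above amounts to a priority ordering. A minimal sketch, assuming each snapshot carries a numeric distinctiveness score (the actual scoring metric is not specified in this document):

```python
def loading_order(snapshots):
    """Sort snapshots so the most visually distinct load first.
    Each snapshot is a (timestamp_seconds, distinctiveness) pair;
    the scores here are made-up example values."""
    return sorted(snapshots, key=lambda s: s[1], reverse=True)

# The 12.0s snapshot has the highest score, so it loads first.
queue = loading_order([(3.0, 0.2), (12.0, 0.9), (7.5, 0.5)])
```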

Figure 1.6
The interface features a streaming video player, which embeds browsable snapshots. Whether or not the video is playing, the time slider can be used to skim the entire video - snapshots of distinct visual content are displayed in the video player window.
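Skimming with the time slider can be modeled as a lookup of the latest distinct snapshot at or before the slider position. A sketch under that assumption (the timestamps are invented):

```python
import bisect

def snapshot_at(snapshot_times, t):
    """Return the timestamp of the snapshot shown when the time
    slider sits at t seconds: the latest snapshot at or before t."""
    i = bisect.bisect_right(snapshot_times, t)
    return snapshot_times[max(i - 1, 0)]

times = [0.0, 14.2, 31.8, 60.5]   # hypothetical scene-change times
shown = snapshot_at(times, 40.0)
```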

Figure 1.7
Browsable indices include:

Thumb: snapshots as thumbnails

Time: a timeline with timestamps at points of visual change

Audio: an audio track (colored green) with audio activity

Video: a video track (colored red) with markers of varying intensity representing visual change

Content: a text track with keywords and phrases taken from the speech transcript - intensity of the text blips hints at descriptiveness and recurrence of terms

Bookmark: personal annotations that are only viewable by the logged-in user

Annotation: publicly viewable annotations, marked with the annotator's username

Figure 1.8
When moving the mouse over snapshot thumbnails, a high-quality version of the image appears in the interface.

Figure 1.9
The Video Content interface can be customized to better suit the needs of the user. Three sliders control how much content is displayed.

Figure 1.10
The Segmentation slider changes the number of visual scenes and, with it, the number of snapshot thumbnails. To the left extreme of the slider, the number of thumbnails increases. While this setting shows more of the video content, it also makes the screen more cluttered.

Figure 1.11
To the right extreme of the segmentation setting, fewer visual scenes are displayed, keeping only the most significant changes.
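One way to picture the segmentation slider is as a threshold on a per-boundary visual-change score: moving the slider right raises the threshold, leaving fewer, more significant scenes. The scores and threshold values below are illustrative assumptions:

```python
def segment_scenes(boundaries, threshold):
    """Keep only scene boundaries whose visual-change score meets
    the slider-controlled threshold. `boundaries` holds
    (timestamp_seconds, change_score) pairs with made-up scores."""
    return [t for t, score in boundaries if score >= threshold]

boundaries = [(10, 0.3), (55, 0.8), (90, 0.5), (130, 0.95)]
coarse = segment_scenes(boundaries, 0.7)  # slider right: fewer thumbnails
fine = segment_scenes(boundaries, 0.2)    # slider left: more thumbnails
```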

Figure 1.12
The Zoom slider increases or decreases the duration of video content displayed in the interface. To the right extreme of the slider, the content is zoomed out, revealing information for a long period of the video at once (about 40-60 minutes on one screen). Such a setting is useful to get a quick overview of the video.

Figure 1.13
To the left extreme of the zoom setting, a very short duration of video content is displayed (on the order of seconds). This setting is useful for more precise browsing and for digesting text, annotations, and bookmarks when they are plentiful.

Figure 1.14
The Text Context slider groups and ungroups keywords and phrases over a variable duration of time. Grouping similar terms is useful for viewing the most recurring terms in a video. To the left extreme of the slider, no grouping occurs, and each keyword/phrase is represented by an individual blip.

Figure 1.15
To the right extreme of the text context setting, recurring terms within 300 seconds of one another are grouped and are represented by one blip. Should a term be repeated throughout the entire video in short intervals, one blip for this term will span the text track.
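The 300-second grouping rule can be sketched as interval merging over a term's occurrence times; whether the tool chains overlapping occurrences exactly this way is an assumption:

```python
def group_term_blips(occurrences, window=300):
    """Merge occurrences of a term into blips: an occurrence within
    `window` seconds of the previous one extends the current blip,
    so a term repeated at short intervals yields one long span."""
    spans = []
    for t in sorted(occurrences):
        if spans and t - spans[-1][1] <= window:
            spans[-1] = (spans[-1][0], t)   # extend the current blip
        else:
            spans.append((t, t))            # start a new blip
    return spans

# Mentions at 10s, 200s, 450s, and 2000s collapse into two blips.
blips = group_term_blips([10, 200, 450, 2000])
```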

Figure 1.16
Not all of the content indices may be helpful for viewing or reviewing a video. To remove content tracks from the interface, click on their respective gray tabs on the left. Once closed, tracks can be reopened by clicking on their horizontal tabs in the same interface.

Figure 1.17
The bookmark view lists all personal bookmarks of the logged-in user.

Figure 1.18
The presently selected bookmark is colored with light/dark brown.

Figure 1.19
The Search interface can be used to query the video database using text terms. The availability of searchable text is highly dependent on the accuracy of Automatic Speech Recognition and Optical Character Recognition, and is far from perfect. If no results are found for your search query, try using fewer terms and alternative terms.

Figure 1.20
Search results are ordered by their match score and are accompanied by a distribution of each query term over all videos in the database, including videos that are not available to the logged-in user. Matched terms are marked in green and their individual recurrence over all videos is presented as a percentage. Terms that are not matched in any videos are marked in red.
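The per-term recurrence percentage shown next to results can be approximated as the share of videos whose transcript contains the term. The naive substring matching and example transcripts below are assumptions, not the tool's actual matching logic:

```python
def term_distribution(term, transcripts):
    """Percentage of videos whose transcript contains `term`; 0.0
    corresponds to a term marked red (unmatched in any video)."""
    if not transcripts:
        return 0.0
    hits = sum(1 for text in transcripts if term.lower() in text.lower())
    return 100.0 * hits / len(transcripts)

transcripts = ["budget review meeting", "quarterly budget talk", "safety training"]
pct = term_distribution("budget", transcripts)   # matched in 2 of 3 videos
```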

Figure 1.21
Search results are also marked in the video's content interface as a separate text track. This track can be used to locate the exact positions of the query terms.