Video examples of TAPe features
This page showcases examples of how the reverse video search technology works and what powers it. All of these examples are made possible by the TAPe methods we use. But TAPe's application scenarios go far beyond reverse video search, video comparison, and computer vision as such: TAPe can be applied to any type of information.
Movies
searching for movies by video fragments and automatically rating the most popular scenes
MOVIES swiftly recognizes which fragment of a movie, TV series, or show is being used. With that fragment, you can find the full version of the movie or series. MOVIES also lets you watch what comes before and after a particular video fragment: just find the fragment that interests you, upload it to the system, and let the system detect the source along with the fragment's exact time and place (TV series, season, and episode). From there, you can see what comes before or after the fragment, or watch the full version on an official resource (streaming service, online cinema, etc.).
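For illustration only, a fragment-to-source lookup of this kind could be called roughly as in the sketch below if it were exposed over HTTP. The endpoint URL, response fields, and the `find_source` helper are hypothetical assumptions, not part of the actual MOVIES or TAPe interface.

```python
# Hypothetical illustration: uploading a clip and reading back the detected
# source and timestamps. Endpoint and field names are assumptions.
import requests

def find_source(fragment_path: str) -> dict:
    """Upload a short clip and get back the detected source and time range."""
    with open(fragment_path, "rb") as clip:
        resp = requests.post(
            "https://api.example.com/v1/movies/identify",  # placeholder URL
            files={"fragment": clip},
            timeout=60,
        )
    resp.raise_for_status()
    # Assumed response shape: title, season, episode, start/end offsets in seconds
    return resp.json()

if __name__ == "__main__":
    match = find_source("clip.mp4")
    print(match["title"], match.get("season"), match.get("episode"),
          match["start_seconds"], match["end_seconds"])
```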
Search for a story broadcast on TV
Video search technology allows you to find out when and on which TV channels the episode you need was broadcast, whether it is news coverage of an event, an advertisement, a show, a film, etc. The feature suits a wide range of purposes, including PR, marketing, audit, and research needs, as well as government and regulatory use. In the demo version, the search depth is limited to 7 days and the 20 most popular world TV channels.
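As a sketch only, the demo limits map naturally onto query parameters: a 7-day time window and a fixed channel list. The endpoint, parameter names, and channel list below are hypothetical assumptions, not the actual service API.

```python
# Hypothetical illustration of the demo constraints as query parameters:
# a 7-day search window and a fixed set of monitored channels.
from datetime import datetime, timedelta, timezone
import requests

DEMO_CHANNELS = ["BBC World News", "CNN International", "Euronews"]  # placeholder subset of the 20 channels

def search_broadcasts(fragment_path: str) -> list[dict]:
    since = datetime.now(timezone.utc) - timedelta(days=7)  # demo search depth
    with open(fragment_path, "rb") as clip:
        resp = requests.post(
            "https://api.example.com/v1/tv/search",  # placeholder URL
            files={"fragment": clip},
            data={"since": since.isoformat(), "channels": ",".join(DEMO_CHANNELS)},
            timeout=120,
        )
    resp.raise_for_status()
    # Assumed response: one record per airing with channel name and air time
    return resp.json()["airings"]
```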
Technology to find similar rather than identical videos
This demo showcases examples of how TAPe-based reverse video search works and demonstrates which videos can be considered similar according to TAPe. It is similarity rather than identity we are talking about here, as if the decision about whether two scenes are similar (for example, two games of pool) had been made by a human. The system recognizes such similarity on its own, rather than by identifying or searching for specific properties such as objects. This is a distinctive feature of TAPe-based video comparison technology.
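TAPe's internal representation is not described here, so purely as a generic analogue, the difference between "identical" and "similar" can be pictured as a distance comparison over per-video feature vectors with two thresholds. The `fingerprint` function below is a stand-in, not TAPe's method.

```python
# Generic analogue of "similar rather than identical" matching: cosine distance
# over per-video feature vectors with two thresholds. fingerprint() is a
# placeholder for whatever representation the system actually builds.
import numpy as np

def fingerprint(video_path: str) -> np.ndarray:
    """Stand-in: return a fixed-length feature vector for a video."""
    raise NotImplementedError("replace with a real video descriptor")

def classify_pair(a: np.ndarray, b: np.ndarray,
                  identical_thr: float = 0.02, similar_thr: float = 0.25) -> str:
    # Cosine distance: 0 means identical direction, larger means less alike.
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    dist = 1.0 - cos
    if dist <= identical_thr:
        return "identical / near-duplicate"
    if dist <= similar_thr:
        return "similar (e.g. two different games of pool)"
    return "unrelated"
```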
Testing TAPe methods on NVIDIA GPUs
The video features 30 experiments indexing videos to TAPe format on two NVIDIA graphics cards. The experiments show that decoder load approaches 100%, GPU load decreases as the number of GPU cores increases, and the time required to index videos to TAPe format does not change when the number of cores grows. All of this is achieved by maximizing the informative value of TAPe attributes while keeping their number minimal, so that they require few computational resources.
breakthrough opportunities for computer vision technology
A more detailed technical description of this demo follows below.
In this case, video indexing is the process of building a TAPe index by obtaining TAPe attributes from a video file. It is the most resource-intensive task, accounting for 99.9% of the process, so we started looking for ways to speed it up. In search of an optimal solution, we compared the video indexing process on different devices and evaluated, among other things, the efficiency of CPUs and NVIDIA GPUs.

The experiments show that decoder load approaches 100%, GPU load decreases as the number of GPU cores increases, and the time required to index videos to TAPe format does not change when the number of cores grows. To load all those thousands of cores at 100%, we would need thousands of encoders/decoders to supply the cores with enough data to keep them running at full capacity, and such graphics cards do not exist. All of this is achieved by maximizing the informative value of TAPe attributes while keeping their number minimal, so that they require few computational resources.

Experiments with other types of graphics cards have shown that indexing videos to TAPe format can only be accelerated by increasing the number of hardware decoders on the board. Even then, GPU load remains low and drops further as the number of cores on the board increases. So the bottleneck of these boards is the number of decoders rather than the number of cores: indexing a file to TAPe format requires very few cores compared to the resources needed for video decoding.

A built-in, efficiently programmed decoder returns so little data (frames, pictures, bitmaps) that only a few GPU cores are needed to build a TAPe index from a video. At the same time, most video-related workloads (processing, editing, gaming) need as few as one decoder, which is why today's 'arms race' revolves around the number of cores. TAPe, with its 'innate' meaningful attributes, needs as many decoders as possible rather than as many cores. That means the architecture of any device tailored to TAPe and its capabilities must be fundamentally different.
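The decoder-bound behaviour described above can be observed with standard tools: a decode-only run through FFmpeg's NVDEC path keeps the hardware decoder busy while leaving the CUDA cores almost idle. The sketch below is a generic decode benchmark, not the TAPe indexing pipeline itself.

```python
# Decode-only benchmark using FFmpeg's NVDEC hardware decoding: frames are
# decoded on the GPU and discarded, isolating the decoder stage the text
# identifies as the bottleneck.
import subprocess
import time

def time_gpu_decode(video_path: str) -> float:
    start = time.monotonic()
    subprocess.run(
        [
            "ffmpeg", "-hide_banner", "-loglevel", "error",
            "-hwaccel", "cuda",          # decode on the GPU's hardware decoder
            "-i", video_path,
            "-f", "null", "-",           # discard decoded frames
        ],
        check=True,
    )
    return time.monotonic() - start

if __name__ == "__main__":
    print(f"decode time: {time_gpu_decode('fragment_1080p.mp4'):.1f} s")
```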

In tests, we used GeForce GTX 1660 and GeForce RTX 3090 graphics cards.

Experiments were conducted on 3 different hour-long video fragments in MP4 format at resolutions of 240p, 360p, 480p, 720p, and 1080p.
In the demo video, the experiments are grouped as follows: 10 experiments across the 5 resolutions for the 1st hour-long fragment, followed by the same for the 2nd and 3rd fragments, with 10 experiments for each.
Key metrics demonstrated in the demo (see the monitoring sketch after these lists)
  • GPU load
  • decoder load
  • total decoding time
Subsidiary metrics
  • GPU model
  • GPU temperature
  • memory load
  • resolution and container of the decoded video
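The key and subsidiary metrics listed above can be polled from NVML while an indexing run is in progress. The sketch below uses the pynvml bindings; it only monitors, and the indexing job it would run alongside is left out.

```python
# Polling GPU load, decoder (NVDEC) load, memory load, temperature, and GPU
# model via NVML. Requires the nvidia-ml-py / pynvml package.
import time
import pynvml

def monitor(seconds: int = 10, interval: float = 1.0, gpu_index: int = 0) -> None:
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
        print("GPU model:", pynvml.nvmlDeviceGetName(handle))
        for _ in range(int(seconds / interval)):
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)           # GPU / memory load, %
            dec, _period = pynvml.nvmlDeviceGetDecoderUtilization(handle)  # decoder load, %
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
            print(f"gpu {util.gpu:3d}%  decoder {dec:3d}%  "
                  f"mem {mem.used / mem.total:5.1%}  temp {temp} C")
            time.sleep(interval)
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    monitor()  # run alongside an indexing job to observe the pattern described above
```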
Extract video keyframes. Detect scenes


Automatically create shorter versions (summaries, overviews, previews) of long videos without losing significant elements.
Using TAPe, any video can be automatically converted into a shorter version without any loss of meaning, with all the key elements kept intact. This feature is especially handy for videos with many static scenes (for example, space rocket launch footage). The technology can process videos regardless of the number of cameras used, since all meaningful events are reflected in the shorter version.
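The building blocks this feature relies on, keyframe extraction and scene-change detection, can be illustrated with plain FFmpeg filters. This is only a generic illustration; TAPe's own shortening method is not described on this page.

```python
# Generic building blocks: extract keyframes (I-frames) and detect scene
# changes with FFmpeg filters. Illustrative only, not TAPe's method.
import subprocess

def extract_keyframes(video_path: str, out_pattern: str = "keyframe_%04d.jpg") -> None:
    # Keep only intra-coded frames and write one image per keyframe.
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-loglevel", "error", "-i", video_path,
         "-vf", "select='eq(pict_type,I)'", "-vsync", "vfr", out_pattern],
        check=True,
    )

def detect_scene_changes(video_path: str, threshold: float = 0.4) -> str:
    # showinfo prints a timestamped line for every frame that passes the
    # scene-change filter; those timestamps mark candidate cut points.
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", video_path,
         "-vf", f"select='gt(scene,{threshold})',showinfo", "-f", "null", "-"],
        capture_output=True, text=True, check=True,
    )
    return result.stderr  # showinfo output is written to stderr
```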

Making a compilation of videos based on the user’s request
Using TAPe, our app can automatically make a compilation of videos based on the user's request (semantic kernel): it removes all duplicates, groups fragments by how often they are reproduced, and provides links to all videos used in the compilation. You can make any type of request, for any number of videos, on any topic. In the future, we plan to add a feature that tracks when new videos matching your criteria are uploaded and adds them to the compilation.
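As a generic analogue of the grouping step only, matching fragments can be clustered and ranked by how many source videos reproduce them, keeping one representative per cluster plus the source links. The `fragments_match` comparison below is a placeholder, not TAPe's matching method.

```python
# Generic analogue of the compilation step: cluster matching fragments, drop
# duplicates, and rank clusters by how often a scene is reproduced.
def fragments_match(a: dict, b: dict) -> bool:
    """Stand-in: decide whether two fragments show the same scene."""
    raise NotImplementedError("replace with a real fragment comparison")

def build_compilation(fragments: list[dict]) -> list[dict]:
    clusters: list[list[dict]] = []
    for frag in fragments:
        for cluster in clusters:
            if fragments_match(frag, cluster[0]):
                cluster.append(frag)   # duplicate of an already-seen scene
                break
        else:
            clusters.append([frag])    # new scene
    # Most frequently reproduced scenes first; one representative per cluster,
    # with links to every source video the scene appears in.
    clusters.sort(key=len, reverse=True)
    return [
        {"representative": c[0], "source_links": [f["url"] for f in c]}
        for c in clusters
    ]
```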