Recently, we wrote a blog post about an algorithm called Scene Detection that takes a video and returns the timestamps of its scenes, along with subclips corresponding to those timestamps.

You can use this information to find appropriate scene lengths for creating video trailers, or use the timestamps to dictate where YouTube places advertisements so they don’t interrupt an important scene.

Sometimes, though, you want more than just the scene timestamps. With Python 3.4 and up you can use the statistics module to determine the average length of a scene, the variance of the data, and other information that helps you edit your videos or garner insights from the scene lengths. Although you could perform these calculations manually or with libraries like NumPy or pandas, with the standard library’s statistics module you can get detailed information about your subclip data without importing a bunch of heavy dependencies.
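As a quick standalone illustration (using made-up scene durations rather than real Scene Detection output), the statistics module covers the common descriptive measures out of the box:

```python
import statistics

# Hypothetical scene durations in seconds
durations = [2.2, 3.1, 2.2, 4.8, 10.5]

print(statistics.mean(durations))      # arithmetic average
print(statistics.median(durations))    # middle value when sorted
print(statistics.mode(durations))      # most frequent value
print(statistics.variance(durations))  # sample variance
```

No third-party installs needed; everything above ships with Python 3.4+.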

In this recipe we’ll look at how to use the Scene Detection algorithm’s timestamps to find useful statistical information, create a basic graph, and upload it to Algorithmia’s Hosted Data using the Data API.

## Step 1: Install the Algorithmia Client

This tutorial is in Python, but it could be built using any of the supported clients, such as Scala, Ruby, Rust, Java, or Node. See the Python client guide for more information on using the Algorithmia API.

Install the Algorithmia client from PyPI, along with the pytimeparse library and (only if you want to create a graph at the end) the matplotlib library:

```
pip install algorithmia pytimeparse matplotlib
```

You’ll also need a free Algorithmia account, which includes 5,000 free credits a month – more than enough to get started with scene detection in videos.

## Step 2: Call Scene Detection

Like most algorithms on the Algorithmia platform, Scene Detection takes input in the form of a JSON object. While Scene Detection accepts many parameters, this recipe requires only a few basic ones; be sure to check out next week’s post, where we’ll cover more on combining Scene Detection with other spotlighted algorithms.

```python
import statistics
from pytimeparse.timeparse import timeparse
import matplotlib.pyplot as plt
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

def scene_detection():
    """Extract scenes from videos and return timestamps."""
    input = {
        "input_file": "YOUR_VIDEO_URL",  # YouTube link or Data API URI
        "output_collection": "testSceneDetection"
    }
    algo = client.algo('media/SceneDetection/0.1.2').set_options(timeout=3000)
    scene_timestamps = algo.pipe(input).result
    print(scene_timestamps["scenes"])
    return scene_timestamps["scenes"]
```

This code imports the libraries you installed in the first step. You then create the Algorithmia client object by passing in your API key.

The function scene_detection has an input object with the video URL as the first key-value pair. Your video can be a YouTube video (make sure it’s either yours or a Creative Commons video) or a Data API URL pointing to your content in Dropbox, Amazon S3, or Algorithmia’s Hosted Data.

Next, notice the output_collection field, where you pass in the name of your data collection. Note that this differs from most algorithms, which ask for the full path.

Check out the algorithm description page for Scene Detection to see all the other parameters available, such as the minimum scene length and others that determine the minimum pixel density a scene must reach in order to trigger a fade in or fade out.

Now that we have our input sorted we can call our algorithm. Always make sure you include the version number (the latest is always on the algorithm’s description page).

Next, our data is piped into the algorithm and we get the result back, pulling out only the timestamp data under the scenes key.

## Step 3: Extract Subclip Metadata

Now let’s get some basic descriptive statistics using Python’s statistics module. We’ll pull the mean, the variance and other information from the timestamps to get an idea of how the scene durations are distributed.

```python
def get_stats():
    """Get statistics from scene timestamps and return scene durations."""
    scenes = scene_detection()
    # Turn each time string into a float (seconds)
    timestamps = [timeparse(timestr) for timestr in scenes]
    print(timestamps)
    # Find the difference between consecutive timestamps
    scene_duration = [round(abs(t - i), 1)
                      for i, t in zip(timestamps, timestamps[1:])]
    print(scene_duration)
    first_timestamp = min(scenes)
    last_timestamp = max(scenes)
    median = statistics.median(scene_duration)
    mode = statistics.mode(scene_duration)
    raw_mean = statistics.mean(scene_duration)
    mean = round(raw_mean, 1)
    variance = round(statistics.variance(scene_duration, raw_mean), 1)
    print("The mean is {0}, the median is {1}, the mode is {2}, and the variance is {3}. "
          "The first scene boundary is at {4} and the last is at {5}".format(
              mean, median, mode, variance, first_timestamp, last_timestamp))
    return scene_duration
```

Above you’ll notice that before we can compute anything from the timestamps, we have to turn the time strings into floats. That way we can derive all sorts of useful statistics from them.

Next we’ll find the differences between consecutive timestamps, because those differences give us the scene lengths, which are what we’re really after. Notice the abs function, which returns the absolute value so we avoid negative durations.
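If you’d like to see what the conversion and differencing do without calling the algorithm, here’s a minimal sketch using only the standard library (so it doesn’t need pytimeparse) and made-up timestamps:

```python
def to_seconds(timestr):
    """Convert an 'HH:MM:SS.mmm' string to seconds as a float."""
    hours, minutes, seconds = timestr.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# Hypothetical scene-boundary timestamps
scenes = ["00:00:03.753", "00:00:10.010", "00:00:12.212"]
timestamps = [to_seconds(t) for t in scenes]

# Differences between consecutive timestamps give the scene lengths
scene_duration = [round(abs(t - i), 1)
                  for i, t in zip(timestamps, timestamps[1:])]
print(scene_duration)  # two durations for three timestamps
```

pytimeparse’s timeparse does the same colon-format conversion (and handles many other duration formats besides), which is why the recipe uses it.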

Now we’ll grab the earliest and latest scene timestamps, which mark where the first and last scene boundaries fall in the video.

Then we’ll check out measures of central tendency: the mean, the median, and the mode. These help us understand the average length of the subclips (note that the mean is sensitive to outliers such as extremely long or short scenes), the median (the middle value once the lengths are sorted by magnitude), and the mode, which tells us the scene duration that occurs most frequently.

Finally we’ll check out the variance, which measures how far each data point falls from the mean. This tells us the spread of the scene lengths around their average.
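To see why these measures complement each other, here’s a small standalone example with made-up durations, where one long scene skews the mean and inflates the variance but barely moves the median:

```python
import statistics

# Hypothetical scene durations; the 30-second scene is an outlier
durations = [2.0, 2.0, 2.5, 3.0, 30.0]

print(statistics.mean(durations))      # pulled up by the outlier
print(statistics.median(durations))    # stays near the typical scene
print(statistics.variance(durations))  # large spread due to the outlier
```

When the mean and median disagree this much, a few extreme scenes are dominating the average, so the median is usually the better summary of a “typical” scene.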

This function prints the results and returns the scene durations; if you used the sample video, you’ll see something like:

The mean is 4.1, the median is 2.2, the mode is 2.2, and the variance is 13.0. The first scene boundary is at 00:00:03.753 and the last is at 00:01:21.998

## Step 4: Visualize the Data

If you want to visualize some of your data, install matplotlib if you haven’t already (pip install matplotlib). Then we’ll create a plot, save it to the local machine, and upload it to Algorithmia Hosted Data.

```python
def create_plot():
    """Create a plot using scene durations."""
    data = get_stats()
    fig = plt.figure()
    ax = plt.subplot(111)
    ax.plot(data)
    plt.xlabel('Subclip index')
    plt.ylabel('Scene length (seconds)')
    # Save the plot locally
    fig.savefig('your_local_file_path/plot.png')
    # Upload the plot to Algorithmia Hosted Data
    client.file("data://YOUR_USERNAME/testSceneDetection/plot.png").putFile(
        "your_local_file_path/plot.png")
```

```python
create_plot()
```

All we did here was make a super simple line graph showing how long our scenes were (on the y-axis) against their position in the film (on the x-axis). There are other plots you can make in Matplotlib, such as a box plot of the frequencies or a histogram of the probability density with a best-fit line. While Matplotlib is shown here for simplicity, there are plenty of other great graphing libraries in Python.
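For instance, a histogram of the scene durations takes only a couple of extra lines. This sketch uses made-up durations and the non-interactive Agg backend so it runs headless and writes the image next to the script:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical scene durations in seconds
durations = [2.2, 2.2, 3.1, 4.8, 2.5, 10.5, 2.0]

fig, ax = plt.subplots()
ax.hist(durations, bins=5)
ax.set_xlabel("Scene length (seconds)")
ax.set_ylabel("Frequency")
fig.savefig("scene_length_hist.png")
```

In the recipe you would pass the list returned by get_stats in place of the hypothetical durations.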

Notice after we create the file using Matplotlib, we send it to a hosted data collection. If you haven’t used our Data API to work with files saved in your Dropbox, S3, or Algorithmia Data Collections, check out the docs here.


So that wraps up our recipe on Scene Detection in Videos. Stay tuned for next week where we play more with Scene Detection, Video Transform, and Censorface.

For ease of use, check out the full code below or on GitHub:

```python
import statistics
from pytimeparse.timeparse import timeparse
import matplotlib.pyplot as plt
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

def scene_detection():
    """Extract scenes from videos and return timestamps."""
    input = {
        "input_file": "YOUR_VIDEO_URL",  # YouTube link or Data API URI
        "output_collection": "testSceneDetection"
    }
    algo = client.algo('media/SceneDetection/0.1.2').set_options(timeout=3000)
    scene_timestamps = algo.pipe(input).result
    print(scene_timestamps["scenes"])
    return scene_timestamps["scenes"]

def get_stats():
    """Get statistics from scene timestamps and return scene durations."""
    scenes = scene_detection()
    # Turn each time string into a float (seconds)
    timestamps = [timeparse(timestr) for timestr in scenes]
    print(timestamps)
    # Find the difference between consecutive timestamps
    scene_duration = [round(abs(t - i), 1)
                      for i, t in zip(timestamps, timestamps[1:])]
    print(scene_duration)
    first_timestamp = min(scenes)
    last_timestamp = max(scenes)
    median = statistics.median(scene_duration)
    mode = statistics.mode(scene_duration)
    raw_mean = statistics.mean(scene_duration)
    mean = round(raw_mean, 1)
    variance = round(statistics.variance(scene_duration, raw_mean), 1)
    print("The mean is {0}, the median is {1}, the mode is {2}, and the variance is {3}. "
          "The first scene boundary is at {4} and the last is at {5}".format(
              mean, median, mode, variance, first_timestamp, last_timestamp))
    return scene_duration

def create_plot():
    """Create a plot using scene durations."""
    data = get_stats()
    fig = plt.figure()
    ax = plt.subplot(111)
    ax.plot(data)
    plt.xlabel('Subclip index')
    plt.ylabel('Scene length (seconds)')
    # Save the plot locally
    fig.savefig('your_local_file_path/plot.png')
    # Upload the plot to Algorithmia Hosted Data
    client.file("data://YOUR_USERNAME/testSceneDetection/plot.png").putFile(
        "your_local_file_path/plot.png")

create_plot()
```