Video Metadata Extraction


This algorithm runs an image processing algorithm over each frame of a video and returns a JSON file containing each frame's timestamp and the resulting metadata.

Note: This algorithm takes longer than most other algorithms, as it calls an image algorithm on every frame of the video file. Make sure to add a large custom timeout to your algorithm call before using videos longer than 3 minutes.

Note: This algorithm is royalty-free, but the image processing algorithms it can call might not be. Check your account's credit balance before running this algorithm on a custom video.

Changelog

0.5.0 - May 23, 2017: Added Smart Video Downloader support to resolve a compressed gzip request problem. With Smart Video Downloader added, input_file may now point to any web-hosted URL, including YouTube, Vimeo, etc.

Table of Contents

- Input
- Output
- Default Options
- Examples
- Credits

Input

input_file: String, output_file: String, algorithm: String, advanced_input: Json, fps: Double

- input_file - (required) - The input URL of the video file. Can be any standard video container format (mp4, mkv, avi, etc.), and can be an http/https URL or a data connector URI (data://, s3://, dropbox://, etc.).
- output_file - (required) - The output URL of the JSON file; must be a data connector URI (data://, s3://, dropbox://, etc.).
- algorithm - (required) - The Algorithmia URI of the image processing algorithm. If no default mode is found for that algorithm, advanced_input must be defined.
- advanced_input - (optional) - If you have advanced requirements or want to use an algorithm with no default parameters, see Advanced Input.
- fps - (optional) - The desired fps sampling rate. Defaults to the input video's fps.
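The timeout note above can be made concrete: the algorithm makes one image-algorithm call per sampled frame, so the call count (and hence runtime) grows with video duration and the fps setting. A rough sketch in Python (the helper below is illustrative only, not part of this algorithm's API):

```python
def estimated_image_calls(duration_sec: float, fps: float) -> int:
    """Approximate number of image-algorithm calls: one per sampled frame."""
    return int(duration_sec * fps)

# A 3-minute video sampled at 12.5 fps yields 2250 frames, i.e. 2250
# downstream image-algorithm calls -- the reason long videos need a
# large custom timeout on the call.
calls = estimated_image_calls(180, 12.5)
```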
Advanced Input

If advanced_input is found, this algorithm parses it as a JSON object, finding and replacing keywords relating to uploading and downloading files (in both batch and single form) with the paths that the Video Metadata Extraction algorithm uses. The keywords are:

- $SINGLE_INPUT - works wherever an algorithm expects a single image input path.
- $BATCH_INPUT - similar to the single input keyword, but works for arrays of input files (NudityDetectioni2v, for example).

Example: IllustrationTagger algorithm default input

    {
      "image": "",
      "tags": ["1boy", "male", "sky", "water", "safe"]
    }

Video Metadata Extraction with advanced IllustrationTagger input

    {
      "input_file": "data://path/to/file.mp4",
      "output_file": "data://save/data.json",
      "algorithm": "algo://deeplearning/IllustrationTagger/0.2.3",
      "advanced_input": {
        "image": "$SINGLE_INPUT",
        "tags": ["1boy", "male", "sky", "water", "safe"]
      },
      "fps": 12.5
    }

Output

    {
      "output_file": String
    }

- output_file - the complete Algorithmia URI where the output JSON file has been uploaded; its format can be seen below.

Formatting

The resulting output JSON file has the following format:

    {
      "frame_data": [
        {
          "timestamp": Double,
          "data": Json
        }
        ...
      ]
    }

- frame_data - the main JSON key; contains an array of JSON objects holding timestamps and data.
- timestamp - the timestamp of the frame in seconds from the start (frame 0 is 0.0, etc.).
- data - the output JSON data from the image processing algorithm for that particular frame; can be any JSON-parsable format.
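The keyword replacement described under Advanced Input can be sketched as a recursive substitution over the JSON object. This is an illustrative approximation, not the algorithm's actual implementation, and the frame path shown is hypothetical:

```python
def substitute_keywords(node, single_path, batch_paths):
    """Recursively replace $SINGLE_INPUT / $BATCH_INPUT in an advanced_input
    object with the per-frame paths the algorithm manages internally."""
    if isinstance(node, dict):
        return {k: substitute_keywords(v, single_path, batch_paths)
                for k, v in node.items()}
    if isinstance(node, list):
        return [substitute_keywords(v, single_path, batch_paths) for v in node]
    if node == "$SINGLE_INPUT":
        return single_path
    if node == "$BATCH_INPUT":
        return batch_paths
    return node

# IllustrationTagger advanced_input from the example above; the frame path
# here is a made-up stand-in for whatever the algorithm generates per frame.
advanced = {"image": "$SINGLE_INPUT", "tags": ["1boy", "male", "sky", "water", "safe"]}
filled = substitute_keywords(advanced, "data://.my/temp/frame_0001.png", [])
```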
For example, output from NudityDetectioni2v:

    {
      "frame_data": [
        {"data": {"confidence": 1, "nude": false}, "timestamp": 0.0},
        {"data": {"confidence": 1, "nude": false}, "timestamp": 0.0666446023989338},
        {"data": {"confidence": 1, "nude": false}, "timestamp": 0.1332892047978676},
        {"data": {"confidence": 1, "nude": false}, "timestamp": 0.1999338071968014}
      ]
    }

Default Options

This algorithm has default parameters set up for a number of algorithms; this list will grow as new image altering algorithms are released:

- Nudity Detection - uses version 0.2.4, in batch mode (sfw/NudityDetectioni2v)
- Illustration Tagger - uses version 0.2.3, using default tags and threshold (deeplearning/IllustrationTagger)

Examples

NudityDetectioni2v default input

    {
      "input_file": "data://media/videos/kenny_test.mp4",
      "output_file": "data://.algo/temp/kenny_test_nudity.json",
      "algorithm": "algo://sfw/nuditydetectioni2v"
    }

Places365 Classifier advanced input

    {
      "input_file": "data://media/videos/kenny_test.mp4",
      "output_file": "data://.algo/temp/kenny_test_places.json",
      "algorithm": "algo://deeplearning/places365classifier/0.1.9",
      "advanced_input": {
        "image": "$SINGLE_INPUT",
        "numResults": 15
      }
    }

IllustrationTagger default input with fps

    {
      "input_file": "data://media/videos/kenny_test.mp4",
      "output_file": "data://.algo/temp/kenny_illustrated.json",
      "algorithm": "algo://deeplearning/IllustrationTagger",
      "fps": 14.24
    }

Credits

All video processing is handled by FFmpeg, a fantastic video/image manipulation tool.
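Once the file written to output_file has been downloaded, the frame_data array is straightforward to consume. A minimal sketch using the NudityDetectioni2v sample output shown above:

```python
import json

# The NudityDetectioni2v output shown above, as it appears in output_file.
sample = """
{
  "frame_data": [
    {"data": {"confidence": 1, "nude": false}, "timestamp": 0.0},
    {"data": {"confidence": 1, "nude": false}, "timestamp": 0.0666446023989338},
    {"data": {"confidence": 1, "nude": false}, "timestamp": 0.1332892047978676},
    {"data": {"confidence": 1, "nude": false}, "timestamp": 0.1999338071968014}
  ]
}
"""

result = json.loads(sample)
frames = result["frame_data"]

# Collect timestamps of frames flagged as nude; none in this clip.
nude_timestamps = [f["timestamp"] for f in frames if f["data"]["nude"]]
```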


Cost Breakdown

- 0 cr royalty per call
- 1 cr usage per second

This algorithm has permission to call other algorithms, which may incur separate royalty and usage costs.

For additional details on how pricing works, see Algorithmia pricing.

Internet access

This algorithm has Internet access. This is necessary for algorithms that rely on external services; however, it also means this algorithm is able to send your input data outside of the Algorithmia platform.

Calls other algorithms

This algorithm has permission to call other algorithms. This allows an algorithm to compose sophisticated functionality using other algorithms as building blocks; however, it also carries the potential of incurring additional royalty and usage costs from any algorithm that it calls.

To understand more about how algorithm permissions work, see the permissions documentation.

Use this algorithm

curl -X POST https://api.algorithmia.com/v1/algo/media/VideoMetadataExtraction/0.5.6 \
  -d '{"input_file":"data://media/videos/kenny_test.mp4","output_file":"data://.algo/temp/kenny_test_nudity.json","algorithm":"algo://sfw/nuditydetectioni2v"}' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Simple YOUR_API_KEY'
View cURL Docs
algo auth
algo run algo://media/VideoMetadataExtraction/0.5.6 -d '{"input_file":"data://media/videos/kenny_test.mp4","output_file":"data://.algo/temp/kenny_test_nudity.json","algorithm":"algo://sfw/nuditydetectioni2v"}'
View CLI Docs
import (
  algorithmia "github.com/algorithmiaio/algorithmia-go"
)

input := map[string]interface{}{
  "input_file":  "data://media/videos/kenny_test.mp4",
  "output_file": "data://.algo/temp/kenny_test_nudity.json",
  "algorithm":   "algo://sfw/nuditydetectioni2v",
}

var client = algorithmia.NewClient("YOUR_API_KEY", "")
algo, _ := client.Algo("algo://media/VideoMetadataExtraction/0.5.6")
resp, _ := algo.Pipe(input)
response := resp.(*algorithmia.AlgoResponse)
View Go Docs
import com.algorithmia.*;
import com.algorithmia.algo.*;

String input = "{\"input_file\":\"data://media/videos/kenny_test.mp4\",\"output_file\":\"data://.algo/temp/kenny_test_nudity.json\",\"algorithm\":\"algo://sfw/nuditydetectioni2v\"}";
AlgorithmiaClient client = Algorithmia.client("YOUR_API_KEY");
Algorithm algo = client.algo("algo://media/VideoMetadataExtraction/0.5.6");
AlgoResponse result = algo.pipeJson(input);
View Java Docs
import com.algorithmia._
import com.algorithmia.algo._

val input = """{"input_file":"data://media/videos/kenny_test.mp4","output_file":"data://.algo/temp/kenny_test_nudity.json","algorithm":"algo://sfw/nuditydetectioni2v"}"""
val client = Algorithmia.client("YOUR_API_KEY")
val algo = client.algo("algo://media/VideoMetadataExtraction/0.5.6")
val result = algo.pipeJson(input)
View Scala Docs
var input = {"input_file": "data://media/videos/kenny_test.mp4", "output_file": "data://.algo/temp/kenny_test_nudity.json", "algorithm": "algo://sfw/nuditydetectioni2v"};
Algorithmia.client("YOUR_API_KEY")
           .algo("algo://media/VideoMetadataExtraction/0.5.6")
           .pipe(input)
           .then(function(output) { console.log(output); });
View Javascript Docs
using Algorithmia;

var input = "{\"input_file\":\"data://media/videos/kenny_test.mp4\",\"output_file\":\"data://.algo/temp/kenny_test_nudity.json\",\"algorithm\":\"algo://sfw/nuditydetectioni2v\"}";
var client = new Client("YOUR_API_KEY");
var algorithm = client.algo("algo://media/VideoMetadataExtraction/0.5.6");
var response = algorithm.pipe<object>(input);
View .NET/C# Docs
var input = {"input_file": "data://media/videos/kenny_test.mp4", "output_file": "data://.algo/temp/kenny_test_nudity.json", "algorithm": "algo://sfw/nuditydetectioni2v"};
Algorithmia.client("YOUR_API_KEY")
           .algo("algo://media/VideoMetadataExtraction/0.5.6")
           .pipe(input)
           .then(function(response) { console.log(response.get()); });
View NodeJS Docs
import Algorithmia

input = {
    "input_file": "data://media/videos/kenny_test.mp4",
    "output_file": "data://.algo/temp/kenny_test_nudity.json",
    "algorithm": "algo://sfw/nuditydetectioni2v"
}
client = Algorithmia.client('YOUR_API_KEY')
algo = client.algo('media/VideoMetadataExtraction/0.5.6')
print(algo.pipe(input).result)
View Python Docs

input <- list(input_file="data://media/videos/kenny_test.mp4", output_file="data://.algo/temp/kenny_test_nudity.json", algorithm="algo://sfw/nuditydetectioni2v")
client <- getAlgorithmiaClient("YOUR_API_KEY")
algo <- client$algo("media/VideoMetadataExtraction/0.5.6")
result <- algo$pipe(input)$result
View R Docs
require 'algorithmia'

input = {"input_file" => "data://media/videos/kenny_test.mp4", "output_file" => "data://.algo/temp/kenny_test_nudity.json", "algorithm" => "algo://sfw/nuditydetectioni2v"}
client = Algorithmia.client('YOUR_API_KEY')
algo = client.algo('media/VideoMetadataExtraction/0.5.6')
puts algo.pipe(input).result
View Ruby Docs
use algorithmia::Algorithmia;

let input = r#"{"input_file":"data://media/videos/kenny_test.mp4","output_file":"data://.algo/temp/kenny_test_nudity.json","algorithm":"algo://sfw/nuditydetectioni2v"}"#;
let client = Algorithmia::client("YOUR_API_KEY");
let algo = client.algo("media/VideoMetadataExtraction/0.5.6");
let response = algo.pipe(input);
View Rust Docs
import Algorithmia

let input = "{\"input_file\":\"data://media/videos/kenny_test.mp4\",\"output_file\":\"data://.algo/temp/kenny_test_nudity.json\",\"algorithm\":\"algo://sfw/nuditydetectioni2v\"}"
let client = Algorithmia.client(simpleKey: "YOUR_API_KEY")
let algo = client.algo(algoUri: "media/VideoMetadataExtraction/0.5.6")
algo.pipe(input: input) { resp, error in
  // handle resp / error here
}
View Swift Docs