Video Metadata Extraction


This algorithm runs an image processing algorithm over each frame of a video and returns a JSON file containing each frame's timestamp and the resulting metadata.

Note: This algorithm takes longer than most other algorithms, as it calls an image algorithm on every frame of a video file. Make sure to set a large custom timeout on your algorithm call before using videos longer than 3 minutes.

Note: This algorithm is royalty free, but the image processing algorithms it can call might not be; check your account's credit balance before running this algorithm on a custom video.

Table of Contents

Input
Output
Default Options
Examples
Credits

Input

input_file: String,
output_file: String,
algorithm: String,
advanced_input: Json,
fps: Double
input_file - (required) - The input URL of the video file. This can be any standard video container format (mp4, mkv, avi, etc.), and can be an http/https URL or a data connector URI (data://, s3://, dropbox://, etc.).
output_file - (required) - The output URL of the JSON file. This must be a data connector URI (data://, s3://, dropbox://, etc.).
algorithm - (required) - The Algorithmia URI of the image processing algorithm. If no default mode is found for that algorithm, advanced_input must be defined.
advanced_input - (optional) - If you have advanced requirements or want to use an algorithm with no default parameters, see Advanced Input.
fps - (optional) - If you have a desired fps sampling rate, input it here. Defaults to the input video's fps.

Advanced Input

If advanced_input is found, this algorithm parses it as a JSON object, finding and replacing keywords that relate to uploading and downloading files (in both batch and single form) with the paths that the Video Metadata Extraction algorithm uses. The keywords are:

$SINGLE_INPUT - wherever an algorithm expects a single image input path, this keyword will work.
$BATCH_INPUT - similar to the single input keyword, but works for arrays of input files (NudityDetectioni2v, for example).

Example: IllustrationTagger algorithm default input

{
 "image": "https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Trudeaujpg.jpg/348px-Trudeaujpg.jpg",
 "tags": ["1boy", "male", "sky", "water", "safe"]
}
Video Metadata Extraction with advanced IllustrationTagger input

{ 
 "input_file": "data://path/to/file.mp4",
 "output_file": "data://save/data.json",
 "algorithm": "algo://deeplearning/IllustrationTagger/0.2.3",
 "advanced_input": {
 "image": "$SINGLE_INPUT",
 "tags": ["1boy", "male", "sky", "water", "safe"]
},
 "fps": 12.5
}
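To make the keyword substitution concrete, here is a minimal sketch of how an advanced_input template could be filled in for each extracted frame. This is a hypothetical re-implementation for illustration only, not the algorithm's actual code; the frame path is taken from the example output later on this page.

```python
import json

def fill_advanced_input(template, frame_path):
    # Walk the advanced_input template and swap the documented keywords
    # for the per-frame path(s) the video algorithm generates.
    if isinstance(template, dict):
        return {k: fill_advanced_input(v, frame_path) for k, v in template.items()}
    if isinstance(template, list):
        return [fill_advanced_input(v, frame_path) for v in template]
    if template == "$SINGLE_INPUT":
        return frame_path          # single image path
    if template == "$BATCH_INPUT":
        return [frame_path]        # array-of-files form
    return template

frame = "data://.my/ProcessVideo/formatted-0000001.png"
advanced = {"image": "$SINGLE_INPUT", "tags": ["1boy", "male", "sky", "water", "safe"]}
per_frame = fill_advanced_input(advanced, frame)          # image -> frame path
batch = fill_advanced_input({"images": "$BATCH_INPUT"}, frame)  # -> one-element list
print(json.dumps(per_frame))
```

Everything that is not one of the two keywords (the tags list here) passes through untouched, so the same template can be reused for every frame.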
Output

{
 "output_file": String
}
output_file - the complete Algorithmia URI where the output JSON file has been uploaded; its format can be seen below.

Formatting

The resulting output JSON file has the following format:

{
 "frame_data":[ 
 { 
 "timestamp":Double,
 "data":Json
 } ...
 ]
}
frame_data - the main JSON key; contains an array of JSON objects, each with a timestamp and data.
timestamp - the timestamp of the frame in seconds from the start (frame 0 is 0.0, etc.).
data - the output JSON from the image processing algorithm for that particular frame; can be any JSON-parsable format.

For example, output from NudityDetectioni2v:

{
 "frame_data":[ 
 { 
 "data":{ 
 "confidence":1,
 "nude":false,
 "url":"data://.my/ProcessVideo/formatted-0000001.png"
 },
 "timestamp":0.0
 },
 { 
 "data":{ 
 "confidence":1,
 "nude":false,
 "url":"data://.my/ProcessVideo/formatted-0000002.png"
 },
 "timestamp":0.0666446023989338
 },
 { 
 "data":{ 
 "confidence":1,
 "nude":false,
 "url":"data://.my/ProcessVideo/formatted-0000003.png"
 },
 "timestamp":0.1332892047978676
 },
 { 
 "data":{ 
 "confidence":1,
 "nude":false,
 "url":"data://.my/ProcessVideo/formatted-0000004.png"
 },
 "timestamp":0.1999338071968014
 }
 ]
}
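Since the output file is ordinary JSON, post-processing is straightforward. A short sketch that pulls out the timestamps of flagged frames; the sample payload mirrors the format above with invented values:

```python
import json

# Sample payload in the documented output format (values invented for illustration).
raw = """
{
  "frame_data": [
    {"timestamp": 0.0,    "data": {"confidence": 1, "nude": false}},
    {"timestamp": 0.0666, "data": {"confidence": 1, "nude": true}},
    {"timestamp": 0.1333, "data": {"confidence": 1, "nude": false}}
  ]
}
"""
doc = json.loads(raw)

# timestamp is seconds from the start of the video, so at a sampling rate of
# f frames per second, frame i lands at roughly i / f.
flagged = [frame["timestamp"] for frame in doc["frame_data"]
           if frame["data"].get("nude")]
print(flagged)
```

The shape of each frame's data key depends entirely on which image algorithm you chose, so adjust the inner lookup accordingly.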
Default Options

This algorithm has default parameters set up for a number of algorithms, and this list will grow as new image processing algorithms are released:

Nudity Detection - uses version 0.2.4, in batch mode (sfw/NudityDetectioni2v)
Illustration Tagger - uses version 0.2.3, using the default tags and threshold (deeplearning/IllustrationTagger)
Places365 Classifier - uses version 0.1.9, using the default number of predictions (5) (deeplearning/Places365Classifier)

Examples

NudityDetectioni2v default input

{
 "input_file":"data://media/videos/kenny_test.mp4",
 "output_file":"data://.algo/temp/kenny_test_nudity.json",
 "algorithm":"algo://sfw/nuditydetectioni2v"
}
Places365 Classifier advanced input

{
 "input_file":"data://media/videos/kenny_test.mp4",
 "output_file":"data://.algo/temp/kenny_test_places.json",
 "algorithm":"algo://deeplearning/places365classifier/0.1.9",
 "advanced_input":{ 
 "image":"$SINGLE_INPUT",
 "numResults": 15
 }
}
IllustrationTagger default input with fps

{
 "input_file":"data://media/videos/kenny_test.mp4",
 "output_file":"data://.algo/temp/kenny_illustrated.json",
 "algorithm":"algo://deeplearning/IllustrationTagger",
 "fps": 14.24
}
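As the note at the top of this page says, longer videos need a larger per-call timeout. One mechanism for that is the timeout query parameter on the Algorithmia HTTP API call; the sketch below just builds such a URL. The endpoint path is the one shown on this page, but the parameter name and the 3000-second value are assumptions from Algorithmia's API documentation rather than anything stated here.

```python
# Build a call URL with an enlarged timeout for long videos. `timeout` is
# assumed to be the HTTP API's query parameter for the per-call limit,
# in seconds.
ENDPOINT = "https://api.algorithmia.com/v1/algo/media/VideoMetadataExtraction/0.1.5"

def with_timeout(endpoint, seconds):
    # Append the timeout query parameter to an Algorithmia call URL.
    return "{}?timeout={}".format(endpoint, seconds)

url = with_timeout(ENDPOINT, 3000)
print(url)
```

The language clients below expose an equivalent per-call option, so check your client's documentation rather than hand-building URLs in real code.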
Credits

All video processing is handled by FFmpeg, a fantastic video/image manipulation tool that can be found at ffmpeg.org.


Cost Breakdown

0 cr
royalty per call
1 cr
usage per second
This algorithm has permission to call other algorithms, which may incur separate royalty and usage costs.

For additional details on how pricing works, see Algorithmia pricing.

Internet access

This algorithm has Internet access. This is necessary for algorithms that rely on external services, however it also implies that this algorithm is able to send your input data outside of the Algorithmia platform.


Calls other algorithms

This algorithm has permission to call other algorithms. This allows an algorithm to compose sophisticated functionality using other algorithms as building blocks, however it also carries the potential of incurring additional royalty and usage costs from any algorithm that it calls.


To understand more about how algorithm permissions work, see the permissions documentation.

Use this algorithm

curl -X POST -d '{"input_file": "data://media/videos/kenny_test.mp4", "output_file": "data://.algo/temp/kenny_test_nudity.json", "algorithm": "algo://sfw/nuditydetectioni2v"}' -H 'Content-Type: application/json' -H 'Authorization: Simple YOUR_API_KEY' https://api.algorithmia.com/v1/algo/media/VideoMetadataExtraction/0.1.5
View cURL Docs
algo auth
# Enter API Key: YOUR_API_KEY
algo run algo://media/VideoMetadataExtraction/0.1.5 -d '{"input_file": "data://media/videos/kenny_test.mp4", "output_file": "data://.algo/temp/kenny_test_nudity.json", "algorithm": "algo://sfw/nuditydetectioni2v"}'
View CLI Docs
import com.algorithmia.*;
import com.algorithmia.algo.*;

String input = "{\"input_file\": \"data://media/videos/kenny_test.mp4\", \"output_file\": \"data://.algo/temp/kenny_test_nudity.json\", \"algorithm\": \"algo://sfw/nuditydetectioni2v\"}";
AlgorithmiaClient client = Algorithmia.client("YOUR_API_KEY");
Algorithm algo = client.algo("algo://media/VideoMetadataExtraction/0.1.5");
AlgoResponse result = algo.pipeJson(input);
System.out.println(result.asJsonString());
View Java Docs
import com.algorithmia._
import com.algorithmia.algo._

val input = """{"input_file": "data://media/videos/kenny_test.mp4", "output_file": "data://.algo/temp/kenny_test_nudity.json", "algorithm": "algo://sfw/nuditydetectioni2v"}"""
val client = Algorithmia.client("YOUR_API_KEY")
val algo = client.algo("algo://media/VideoMetadataExtraction/0.1.5")
val result = algo.pipeJson(input)
System.out.println(result.asJsonString)
View Scala Docs
var input = {"input_file": "data://media/videos/kenny_test.mp4", "output_file": "data://.algo/temp/kenny_test_nudity.json", "algorithm": "algo://sfw/nuditydetectioni2v"};
Algorithmia.client("YOUR_API_KEY")
           .algo("algo://media/VideoMetadataExtraction/0.1.5")
           .pipe(input)
           .then(function(output) {
             console.log(output);
           });
View Javascript Docs
var input = {"input_file": "data://media/videos/kenny_test.mp4", "output_file": "data://.algo/temp/kenny_test_nudity.json", "algorithm": "algo://sfw/nuditydetectioni2v"};
Algorithmia.client("YOUR_API_KEY")
           .algo("algo://media/VideoMetadataExtraction/0.1.5")
           .pipe(input)
           .then(function(response) {
             console.log(response.get());
           });
View NodeJS Docs
import Algorithmia

input = {"input_file": "data://media/videos/kenny_test.mp4", "output_file": "data://.algo/temp/kenny_test_nudity.json", "algorithm": "algo://sfw/nuditydetectioni2v"}
client = Algorithmia.client('YOUR_API_KEY')
algo = client.algo('media/VideoMetadataExtraction/0.1.5')
print(algo.pipe(input).result)
View Python Docs
library(algorithmia)

input <- list(input_file = "data://media/videos/kenny_test.mp4", output_file = "data://.algo/temp/kenny_test_nudity.json", algorithm = "algo://sfw/nuditydetectioni2v")
client <- getAlgorithmiaClient("YOUR_API_KEY")
algo <- client$algo("media/VideoMetadataExtraction/0.1.5")
result <- algo$pipe(input)$result
print(result)
View R Docs
require 'algorithmia'

input = {"input_file" => "data://media/videos/kenny_test.mp4", "output_file" => "data://.algo/temp/kenny_test_nudity.json", "algorithm" => "algo://sfw/nuditydetectioni2v"}
client = Algorithmia.client('YOUR_API_KEY')
algo = client.algo('media/VideoMetadataExtraction/0.1.5')
puts algo.pipe(input).result
View Ruby Docs
use algorithmia::*;

let input = r#"{"input_file": "data://media/videos/kenny_test.mp4", "output_file": "data://.algo/temp/kenny_test_nudity.json", "algorithm": "algo://sfw/nuditydetectioni2v"}"#;
let client = Algorithmia::client("YOUR_API_KEY");
let algo = client.algo("media/VideoMetadataExtraction/0.1.5");
let response = algo.pipe(input);
View Rust Docs