Illustration Tagger


1. Introduction

Illustration Tagger is a classification algorithm that returns the best-matching tags for an image. It extracts features from the image and computes the similarity between the image and categories of tags.

Input:

(Required) Image: a Data API URL, a web (http/https) URL, a binary image, or a base64-encoded JPEG string.
(Optional) Confidence threshold* (default threshold=0.2)
(Optional) Desired tags*

* You cannot provide both the threshold and desired tags at the same time.
* As of version 0.2.0, base64-encoded strings are a valid image input format.

Output:

When desired tags are not given, tags in up to four categories:
rating
character
copyright
general

When desired tags are given, confidence values for those tags.

Note: The first call to this algorithm will take longer than subsequent calls due to algorithm initialization. All following calls will be significantly faster.

2. How Tagging Works

The algorithm examines the given image and calculates the similarity between the image and the available tags. It returns four categories of predicted tags, when available.

The rating category tells you how NSFW the image is, with confidence values for how safe, questionable, or explicit it is. The character category tries to identify which character appears in the image. The copyright category tries to predict who owns the copyright of the corresponding image; because the model was trained on a Japanese illustration dataset, it may not return results for other copyright owners. The general category contains several hundred tags and gives you a good picture of what your image depicts.

Alternatively, if you only want confidence values for specific tags, pass the tags parameter to the algorithm; it will then return confidence values for the specified tags only. A list of all available tags can be viewed here.

3. Examples

Example 1. Parameter 1: Data API URL

{
 "image": "data://deeplearning/example_data/trudeau.jpg"
}

Output:

{
 "rating": [
 {"safe": 0.9924461245536804},
 {"questionable": 0.006749290972948074},
 {"explicit": 0.0001874923618743196}
 ],
 "character": [],
 "copyright": [{"real life": 0.38196513056755066}],
 "general": [
 {"1boy": 0.93906170129776},
 {"solo": 0.9158311486244202},
 {"male": 0.6592674255371094},
 {"black hair": 0.42818406224250793},
 {"necktie": 0.23634621500968933},
 {"formal": 0.23075371980667114}
 ]
}

Example 2. Parameter 1: HTTP URL

{
 "image": "https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Trudeaujpg.jpg/348px-Trudeaujpg.jpg"
}

Output:

{
 "rating": [
 {"safe": 0.9924461245536804},
 {"questionable": 0.006749290972948074},
 {"explicit": 0.0001874923618743196}
 ],
 "character": [],
 "copyright": [{"real life": 0.38196513056755066}],
 "general": [
 {"1boy": 0.93906170129776},
 {"solo": 0.9158311486244202},
 {"male": 0.6592674255371094},
 {"black hair": 0.42818406224250793},
 {"necktie": 0.23634621500968933},
 {"formal": 0.23075371980667114}
 ]
}

Example 3. Parameter 1: Data API URL; Parameter 2: Threshold (value=0.1)

{
 "image": "data://deeplearning/example_data/trudeau.jpg",
 "threshold": 0.1
}

Output:

{
 "rating": [
 {"safe": 0.9924461245536804},
 {"questionable": 0.006749290972948074},
 {"explicit": 0.0001874923618743196}
 ],
 "character": [],
 "copyright": [{"real life": 0.38196513056755066}],
 "general": [
 {"1boy": 0.93906170129776},
 {"solo": 0.9158311486244202},
 {"male": 0.6592674255371094},
 {"black hair": 0.42818406224250793},
 {"necktie": 0.23634621500968933},
 {"formal": 0.23075371980667114},
 {"suit": 0.1762586236000061},
 {"black eyes": 0.1671484112739563},
 {"smile": 0.15806205570697784},
 {"facial hair": 0.15715354681015015},
 {"brown hair": 0.15507465600967407},
 {"short hair": 0.1326742023229599},
 {"cosplay": 0.12354501336812973},
 {"bust": 0.11704712361097336},
 {"photo": 0.10584855079650879}
 ]
}

Example 4. Parameter 1: HTTP URL; Parameter 2: Desired tags ("1boy", "male", "sky", "water", "safe")

{
 "image": "https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Trudeaujpg.jpg/348px-Trudeaujpg.jpg",
 "tags": ["1boy", "male", "sky", "water", "safe"]
}

Output:

{
 "all_tags": [
 {"water": 0.0006849411875009537},
 {"1boy": 0.93906170129776},
 {"safe": 0.9924461245536804},
 {"male": 0.6592674255371094},
 {"sky": 0.0015039071440696716}
 ]
}

4. Credits

For more information, please refer to http://illustration2vec.net/ or: Saito, Masaki and Matsui, Yusuke. (2015). Illustration2Vec: A Semantic Vector Representation of Illustrations. SIGGRAPH Asia Technical Briefs.

Demo image retrieved from http://flickr.com/photos/25480181@N06/4929681007 (CC BY license).
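Each category in the outputs above is a list of single-key objects. A minimal Python sketch, using the values from Example 1 (the helper names here are hypothetical, not part of the algorithm's API), of flattening a category into a plain dict and applying a client-side threshold:

```python
# Flatten a category (a list of single-key {tag: confidence} objects)
# into one dict, then keep only tags at or above a threshold.

def flatten_category(category):
    """Merge [{"tag": conf}, ...] into a single {"tag": conf, ...} dict."""
    merged = {}
    for entry in category:
        merged.update(entry)
    return merged

def filter_by_threshold(category, threshold=0.2):
    """Keep only tags whose confidence meets the threshold."""
    return {tag: conf
            for tag, conf in flatten_category(category).items()
            if conf >= threshold}

# "general" tags from Example 1:
general = [
    {"1boy": 0.93906170129776},
    {"solo": 0.9158311486244202},
    {"male": 0.6592674255371094},
    {"black hair": 0.42818406224250793},
    {"necktie": 0.23634621500968933},
    {"formal": 0.23075371980667114},
]

print(filter_by_threshold(general, threshold=0.5))
# "black hair", "necktie", and "formal" fall below 0.5 and are dropped
```

Note that the algorithm's own threshold parameter does the same filtering server-side; this sketch is only useful when you want to re-filter an already-returned response.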



Internet access

This algorithm has Internet access. This is necessary for algorithms that rely on external services; however, it also means that this algorithm is able to send your input data outside of the Algorithmia platform.


To understand more about how algorithm permissions work, see the permissions documentation.

5. Use This Algorithm

cURL:

curl -X POST -d '{"image": "data://deeplearning/example_data/trudeau.jpg"}' -H 'Content-Type: application/json' -H 'Authorization: Simple YOUR_API_KEY' https://api.algorithmia.com/v1/algo/deeplearning/IllustrationTagger/0.2.3
CLI:

algo auth
# Enter API Key: YOUR_API_KEY
algo run algo://deeplearning/IllustrationTagger/0.2.3 -d '{"image": "data://deeplearning/example_data/trudeau.jpg"}'
Java:

import com.algorithmia.*;
import com.algorithmia.algo.*;

String input = "{\"image\": \"data://deeplearning/example_data/trudeau.jpg\"}";
AlgorithmiaClient client = Algorithmia.client("YOUR_API_KEY");
Algorithm algo = client.algo("algo://deeplearning/IllustrationTagger/0.2.3");
AlgoResponse result = algo.pipeJson(input);
System.out.println(result.asJsonString());
Scala:

import com.algorithmia._
import com.algorithmia.algo._

val input = "{\"image\": \"data://deeplearning/example_data/trudeau.jpg\"}"
val client = Algorithmia.client("YOUR_API_KEY")
val algo = client.algo("algo://deeplearning/IllustrationTagger/0.2.3")
val result = algo.pipeJson(input)
System.out.println(result.asJsonString)
JavaScript:

var input = {"image": "data://deeplearning/example_data/trudeau.jpg"};
Algorithmia.client("YOUR_API_KEY")
           .algo("algo://deeplearning/IllustrationTagger/0.2.3")
           .pipe(input)
           .then(function(output) {
             console.log(output);
           });
NodeJS:

var input = {"image": "data://deeplearning/example_data/trudeau.jpg"};
Algorithmia.client("YOUR_API_KEY")
           .algo("algo://deeplearning/IllustrationTagger/0.2.3")
           .pipe(input)
           .then(function(response) {
             console.log(response.get());
           });
Python:

import Algorithmia

input = {"image": "data://deeplearning/example_data/trudeau.jpg"}
client = Algorithmia.client('YOUR_API_KEY')
algo = client.algo('deeplearning/IllustrationTagger/0.2.3')
print(algo.pipe(input).result)
R:

library(algorithmia)

input <- list(image = "data://deeplearning/example_data/trudeau.jpg")
client <- getAlgorithmiaClient("YOUR_API_KEY")
algo <- client$algo("deeplearning/IllustrationTagger/0.2.3")
result <- algo$pipe(input)$result
print(result)
Ruby:

require 'algorithmia'

input = {"image" => "data://deeplearning/example_data/trudeau.jpg"}
client = Algorithmia.client('YOUR_API_KEY')
algo = client.algo('deeplearning/IllustrationTagger/0.2.3')
puts algo.pipe(input).result
Rust:

use algorithmia::*;

let input = "{\"image\": \"data://deeplearning/example_data/trudeau.jpg\"}";
let client = Algorithmia::client("YOUR_API_KEY");
let algo = client.algo("deeplearning/IllustrationTagger/0.2.3");
let response = algo.pipe(input);
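As noted in the introduction, since version 0.2.0 the image input may also be a base64-encoded JPEG string passed in the same "image" field. A minimal Python sketch of building such a payload; the JPEG bytes here are a stand-in (in practice you would read them from a real file):

```python
import base64
import json

def base64_payload(jpeg_bytes):
    """Build the algorithm's input dict from raw JPEG bytes."""
    encoded = base64.b64encode(jpeg_bytes).decode("ascii")
    return {"image": encoded}

# Stand-in bytes, not a real photo; substitute open("photo.jpg", "rb").read().
jpeg_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 16
payload = base64_payload(jpeg_bytes)
print(json.dumps(payload))
```

The resulting dict can be passed to any of the clients above in place of the URL-based input.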