
StyleThief

This is an adaptation of Anish Athalye's Neural Style Transfer (https://github.com/anishathalye/neural-style) for Algorithmia.

This is the tweakable version of neural style, in which you can supply your own style image and adjust the weights, as opposed to fast (feed-forward) neural style.  The tradeoff is that it takes much longer to process images.  (See below for estimated run times.)

The in-browser client in Algorithmia has a timeout of 300 seconds, so you'll need to run this script in one of the other clients for larger images.  For instance, in the Algorithmia CLI, use "--timeout 1500" to set the timeout to 1500 seconds.  Note that there is a hard limit of 3000 seconds in Algorithmia.
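
For example, here is a minimal sketch of calling the algorithm from the Algorithmia Python client with a longer timeout.  The API key and image URLs are placeholders; see Input Parameters below for the full set of options.

    import Algorithmia

    # Placeholder API key; substitute your own Algorithmia API key.
    client = Algorithmia.client("YOUR_API_KEY")
    algo = client.algo("bkyan/StyleThief/0.2.13")

    # Raise the client-side timeout well above the 300-second browser limit.
    algo.set_options(timeout=1500)

    result = algo.pipe({
        "source": "http://example.com/content.jpg",  # placeholder URL
        "style": "http://example.com/style.jpg",     # placeholder URL
        "output": "result.jpg",
        "iterations": 800
    }).result
    print(result)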


Input Parameters

The required source parameter is the URL of a JPEG image that supplies the geometry (content) for the output image.  The required style parameter is the URL of a JPEG image that supplies the style for the output image.

The required output parameter sets the name of the output JPEG file.  It will go into the bkyan/StyleThief folder within the Algorithm Data section of the Data Portal, which you can reach by clicking the Data link in the main Algorithmia navigation.

The required iterations parameter is the number of iterations to run for.  It generally takes 800-2000 iterations to get decent results.  Here are some time estimates for 800 iterations:

  • 180x180 source image: 200-250 seconds
  • 512x512 source image: 600-700 seconds
  • 640x640 source image: 900-1000 seconds

The optional style_layer_weight_exp parameter (default: 1) can be used to tweak how abstract the style transfer is.  For instance, 0.2 favors finer-grained style features, whereas 2.0 favors coarser style features.

The optional content_weight_blend parameter (default: 1) specifies the coefficient of the content reconstruction layers.  The value should be between 0 and 1.  With the default value of 1, the style transfer tries to preserve finer-grained content details, whereas a value like 0.1 produces a more abstract picture.

The optional initial_image parameter (default: 0) specifies whether to start with the content image or with random noise.  Setting it to 0 (the default) means the process starts with random noise.  Setting it to 1 means the process starts with the content image.

The optional pooling parameter (default: max) selects whether to use avg or max pooling.   In general, max pooling tends to generate more artsy images whereas avg pooling stays truer to the original image.

The optional preserve_colors parameter (default: 0) tells the script whether or not to transfer colors from the style image.  Setting it to 1 makes the output keep the source image's original colors.

The optional style_weight parameter (default: 500) controls how heavily the style image's artistic style is weighted in the loss function used to compute the output image.  The optional content_weight parameter (default: 5) controls how heavily the source image's geometry is weighted in that loss function.  The optional tv_weight parameter (default: 100) controls how heavily denoising (total variation) is weighted.  The optional learning_rate parameter (default: 10) controls how much the image is adjusted on each iteration.
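
Roughly speaking, these weights are combined into a single objective that the optimizer minimizes on each iteration (following the underlying neural-style implementation):

    total_loss = content_weight * content_loss
               + style_weight * style_loss
               + tv_weight * tv_loss

So raising style_weight relative to content_weight pushes the output toward the style image, while raising tv_weight smooths out high-frequency noise.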

If you enter a non-empty value for the optional log_channel parameter, you can monitor script progress in your browser at the following status page URL, replacing random_string_token with the value you enter for the log_channel parameter:

  • https://upflow.space/channel/random_string_token

Please note that upflow.space exists outside of Algorithmia, so you'll want to choose a hard-to-guess random_string_token to keep your log data separate from other people's log data.

The optional log_interval parameter designates how often progress is reported to the above status page.  It is ignored if the log_channel parameter is empty.
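
Putting it all together, here is a sketch of a full input dictionary (as you would pass to algo.pipe in the Python example above) with every optional parameter included.  The image URLs and log_channel token are placeholders, and the log_interval value is only an example since no default is stated above.

    input = {
        "source": "http://example.com/content.jpg",   # placeholder URL
        "style": "http://example.com/style.jpg",       # placeholder URL
        "output": "result.jpg",
        "iterations": 800,
        "style_layer_weight_exp": 1,    # default
        "content_weight_blend": 1,      # default
        "initial_image": 0,             # default: start from random noise
        "pooling": "max",               # default
        "preserve_colors": 0,           # default
        "style_weight": 500,            # default
        "content_weight": 5,            # default
        "tv_weight": 100,               # default
        "learning_rate": 10,            # default
        "log_channel": "random_string_token",  # placeholder token
        "log_interval": 50              # example value; no default is stated
    }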


Example:

Default Source Image:

Default Style Image:

Time Estimate:

With these default images running for 800 iterations, the script takes 125-150 seconds to complete.


Citation


@misc{athalye2015neuralstyle,
  author = {Anish Athalye},
  title = {Neural Style},
  year = {2015},
  howpublished = {\url{https://github.com/anishathalye/neural-style}},
  note = {commit xxxxxxx}
}