Today’s blog post is brought to you by one of our community members, @sprague. Thanks for submitting!
The same simple Algorithmia API that works with Python, Java, Scala, and other languages is easy to use from iOS, too.
This short lesson assumes you have a basic knowledge of iOS programming: enough to write a simple single-view app using the Storyboard. (If not, start with Apple’s documentation here.) To run this example, all you need is a Mac and a copy of Apple’s free Xcode development environment.
The Algorithmia API works through simple HTTP POST commands. Fortunately, iOS already provides several powerful networking object classes that make that very easy:
- NSMutableURLRequest sets up the HTTP request with straightforward methods, such as setHTTPMethod:@“POST” to tell the server that you want to post some data.
- NSURLSession is a powerful class that lets you download the content via HTTP, including in the background or even while the application is suspended. Fortunately, the methods to control the download behavior are pretty straightforward.
- NSURLSessionDataTask is a related class specifically for getting data via HTTP. Create it from an NSURLSession, passing an NSMutableURLRequest and a handler that describes what to do when a response arrives from the server.
- Finally, NSJSONSerialization is a handy class that will convert between JSON and native iOS dictionary or array types. Read Apple’s class reference documentation to see how much work this will save you!
Example: POST a string to the Algorithmia API
The best way to understand is with a simple example. Here’s a short program that sends the string representation of an integer to the Algorithmia isPrime API to find out whether the number is prime.
Here’s the opening screen:
Just a normal text input box (UITextField), a button to submit the text to the server, and a few labels to show what’s happening. That’s all there is to the UI, and you should be able to whip it up quickly yourself in Storyboard.
As for the guts of the app, we need to specify three things:
- The algorithm path/URI, in this case ‘diego/isPrime’
- Your API key, which you can get from the Profile/Credentials page
- The right HTTP headers, as described in the API Docs – Getting Started.
That’s it! All the networking code you need to access the Algorithmia API is right there. Perhaps the most important thing to point out here is the URI format and the HTTP Headers. Whenever you’re in doubt about these details, you should browse to a random algorithm and inspect the cURL parameters given in the ‘API Usage Examples’ section.
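For readers who want to see the moving parts outside of Xcode, here is a rough sketch of the same request in Python (chosen for brevity); the endpoint format and the `Simple` authorization header follow the cURL examples in the API docs, and `YOUR_API_KEY` is a placeholder:

```python
import urllib.request

def build_isprime_request(number, api_key):
    """Build (but do not send) a POST request to the diego/isPrime algorithm."""
    url = "https://api.algorithmia.com/v1/algo/diego/isPrime"
    body = str(number).encode("utf-8")        # the input is a plain string
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "text/plain")
    req.add_header("Authorization", "Simple " + api_key)
    return req

req = build_isprime_request(17, "YOUR_API_KEY")
print(req.get_method(), req.full_url)
```

Calling `urllib.request.urlopen(req)` would actually send it; the response body is the JSON result described in the API docs.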
The world runs on data, but all too often the effort to acquire fresh data, analyze it, and deploy a live analysis or model can be a challenge. We here at Algorithmia spend most of our waking hours thinking about how to help you with the latter problem, and our friends at Socrata spend their time thinking about how to make data more available. So we combined forces to analyze municipal building permit data (using various time series algorithms) as a reasonable proxy for economic development. Thanks to Socrata, this information is now available (and automatically updated) for many cities, with no need to navigate any bureaucracy. We pulled in data for a range of cities, including:
- Santa Rosa, CA
- Fort Worth, TX
- Boston, MA
- Edmonton, Alberta, Canada
- Los Angeles, CA
- New York City, NY
We will show that it is fairly easy to relate building permit data to local (seasonal) weather patterns and economic conditions. When evaluating a given area, it is useful to know how much development has been going on, but also how much of it is explainable by other factors, such as seasonal variation, macroeconomic indicators, commodity prices, etc. This can help us answer questions like: “how is this city likely to be affected by a recession?” or “how will real estate development fare if oil prices drop?”
The analysis is composed from the following algorithms:
- Socrata Open Data Query – pulls the permit data from Socrata
- Simple Moving Average – uses local average to smooth data
- Linear Detrend – removes increasing or decreasing trends in time series
- Autocorrelate – used to analyze the seasonality of a time series
- Remove Seasonality – removes known seasonal effects from a time series
- Outlier Detection – flags unusual data points
- Forecast – projects a given time series into the future
Try it out here:
We pull the permit data directly from Socrata’s API using an algorithm, which can be found here.
Once we’ve retrieved the data, we aggregate it by total number of permits issued per month, and plot this data. To make the graph clearer, it sometimes helps to smooth or denoise the data. There are a number of ways to do this, with moving averages being the most popular. The most intuitive is the Simple Moving Average, which replaces each point by the average of itself and some number of the preceding points (default is 3).
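A minimal sketch of that smoothing step, assuming a plain Python list of monthly permit counts:

```python
# Replace each point by the average of itself and the preceding points,
# using a window of 3 (the default mentioned above). Early points use
# whatever history is available.
def simple_moving_average(series, window=3):
    smoothed = []
    for i in range(len(series)):
        start = max(0, i - window + 1)
        chunk = series[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

print(simple_moving_average([10, 20, 30, 40], window=3))
# [10.0, 15.0, 20.0, 30.0]
```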
A simple first inspection of the data reveals trends and large peaks and dips. Fort Worth, TX, lends itself well to this analysis: it shows steady growth from the beginning of the record up to a peak just before the subprime crisis, with a fairly rapid fall-off to a plateau.
As much as we can learn from simple inspection of the plot, there are factors making this difficult, especially when it comes to inferring economic activity. For instance, outdoor construction tends to take place during nicer weather, especially in cold places far from the equator. This often makes the data harder to interpret – a seasonal spike doesn’t say nearly as much about underlying economic activity as a non-seasonal one, so we need a way to take this into account.
Edmonton, Alberta, is a particularly clear example of this.
To address seasonality, we first need to remove the linear trends via Linear Detrend, which fits a line to the data and then subtracts it from the data, resulting in a time series of differentials from the removed trend. Without detrending, the following seasonality analysis will not work.
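What Linear Detrend does can be sketched like this, under the same plain-list assumption: fit a least-squares line to the series, then subtract it.

```python
# Fit y = slope * x + intercept by ordinary least squares, then return
# the residuals (the differentials from the removed trend).
def linear_detrend(series):
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return [y - (slope * x + intercept) for x, y in zip(xs, series)]

print(linear_detrend([1, 2, 3, 4]))  # a perfect line detrends to all zeros
```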
Permit issuance in Edmonton is clearly seasonal, even to the naked eye, but seasonality can be detected even in much noisier data using autocorrelation. Very roughly speaking, you can identify seasonality by the relative strength and definition of peaks in the autocorrelation plot of the time series, which you can calculate using our Autocorrelation algorithm. The autocorrelation of a time series is a rough measure of how similar a time series is to a lagged version of itself. If a series is highly seasonal according to some period of length k, the series will look similar to itself about every k steps between peaks and troughs.
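Roughly, the detection step can be sketched as computing the autocorrelation at each candidate lag and picking the strongest one (a simplified stand-in for the Autocorrelation algorithm, assuming a detrended list):

```python
# Autocorrelation at a given lag: covariance of the series with a lagged
# copy of itself, normalized by the series variance.
def autocorrelation(series, lag):
    n = len(series)
    mean = sum(series) / n
    var = sum((y - mean) ** 2 for y in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

# The candidate lag with the strongest peak is the seasonal period.
def strongest_period(series, max_lag):
    return max(range(1, max_lag + 1), key=lambda k: autocorrelation(series, k))

season = [0, 1, 0, -1] * 6                   # toy series with period 4
print(strongest_period(season, max_lag=8))   # 4
```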
We expect the seasonality to be strongest furthest from the equator. Sure enough, the seasonality is most clear in more northern cities such as NYC and Edmonton and least clear in cities like Los Angeles – you can check this yourself with the interactive panel above.
It can help to suppress this seasonal influence so that the effects of other factors become clearer. In Algorithmia this can be done using an algorithm called Remove Seasonality, which by default detects and removes the strongest seasonal period. Doing so smooths out seasonal variation, and what remains is, in a sense, what is unexpected in the data.
Note that the transformed time point should be interpreted as the difference of the actual value of a given point in time, minus the expected seasonal component of the data. It is a differential rather than an absolute value and thus can be negative.
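One simple way to sketch this step (a simplified stand-in for the Remove Seasonality algorithm, assuming the period is already known and the series is detrended): average the values at each phase of the cycle and subtract.

```python
# Subtract the mean value of each phase of the cycle, leaving the
# differential between each point and its expected seasonal component.
def remove_seasonality(series, period):
    phase_means = []
    for phase in range(period):
        vals = series[phase::period]
        phase_means.append(sum(vals) / len(vals))
    return [y - phase_means[i % period] for i, y in enumerate(series)]

print(remove_seasonality([5, 9, 5, 9, 5, 9], period=2))
# a purely seasonal series collapses to all zeros
```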
Once we’ve accounted for linear trends and seasonality, we’re left with the most mysterious part of the data, where the more subtle signals hide. In many cases visual inspection and some detective work will be useful here, but in other cases it’s useful to automate the process of detecting subtle behavior. For instance, if you have an automated process that ingests, analyzes, and reports on incoming data, you may want to be alerted if something changes drastically, and take action based on that information. Defining what constitutes “abnormal” can be a hard problem, but by removing the larger trends, we are left with a clearer picture of these abnormalities.
The city of Santa Rosa, CA, provides an interesting example. One can see by inspection the sudden spike in permits issued in 2013, but it can also be detected automatically using the Outlier Detection algorithm. This algorithm works by setting all non-outliers to zero and leaving outliers unchanged, where an outlier is defined as any data point that falls more than two standard deviations from the mean of the series. We don’t have an explanation for this particular outlier; it may just be chance or a change in reporting, but it does indicate that the period may be worth looking into more carefully.
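The two-standard-deviation rule described above can be sketched as:

```python
# Zero out every point within two standard deviations of the mean;
# keep the rest (the outliers) unchanged.
def flag_outliers(series, threshold=2.0):
    n = len(series)
    mean = sum(series) / n
    std = (sum((y - mean) ** 2 for y in series) / n) ** 0.5
    return [y if abs(y - mean) > threshold * std else 0 for y in series]

print(flag_outliers([1, 2, 1, 2, 1, 2, 50]))
# [0, 0, 0, 0, 0, 0, 50]
```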
At this point it would be nice to tell you about our shiny crystal ball that can predict the future (and return it via an API call). Unfortunately, we’re not there quite yet, but we CAN help you see what observed trends will look like extrapolated into the future. Specifically, the Forecast algorithm takes your time series and projects observed linear and seasonal trends out a given number of time steps into the future. It does this by calculating, for each future point, its value according to extrapolation of the detected linear trend, then adding to that the differential corresponding to the contribution of each seasonal component to that point. In the demo, Forecast is set to predict using data from the 3 strongest seasonal components.
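A simplified sketch of that idea, combining the linear fit with a single seasonal period (the real Forecast algorithm can combine several seasonal components):

```python
# Extrapolate the fitted linear trend, then add the average seasonal
# differential for each future point's phase.
def forecast(series, period, steps):
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, series)]
    phase_means = [sum(residuals[p::period]) / len(residuals[p::period])
                   for p in range(period)]
    return [slope * (n + i) + intercept + phase_means[(n + i) % period]
            for i in range(steps)]

# A toy series rising by 1 per step with a two-step up/down wobble:
history = [t + (1 if t % 2 else -1) for t in range(12)]
print(forecast(history, period=2, steps=4))
```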
Judging from the results of the Forecast algorithm, we expect a steady increase in construction activity for Edmonton, as well as Los Angeles, Boston, and New York. Santa Rosa looks to maintain the status quo, and Fort Worth looks to have a slight downward trend.
All together now!
The takeaway from this example is not the individual pieces – most of these techniques are available in a number of places. The secret is that the pipeline is simple, composable, and fully automated. Once one person writes a data connector in Algorithmia (which is likely to happen for interesting public data, like that provided by Socrata), you don’t have to burn precious time writing and working the kinks out of your own. On the other hand, if an existing piece gets done better, the compositional nature of Algorithmia lets you swap out pieces of a pipeline seamlessly. When someone comes up with a better outlier detector, upgrading a pipeline like the one above can be literally a few keystrokes away.
Finally, the analyses we’ve shown here are based on live data, changing all the time as new data flows in from Socrata’s pipeline to each individual city’s building permit system; Algorithmia offers an easy way to deploy and host that live analysis so the underlying data and interpretation are always up to date.
Today’s blog post is brought to you by one of our customers, DeansList Software. Thank you, Matt, for sharing your experience with our community.
At DeansList, one of the things that we do really well is build custom report cards that students take home (or get e-mailed) to their parents. We build them from the ground up with every school – taking instructions on everything from design to data points to the placement of charts. They often go home every week and, in addition to keeping parents up-to-speed, they include them in the messaging, programs, and structures that engage their children every day.
Ugly, but functional.
Not surprisingly, with the promise of customization come some unique requests. Lots of our schools include fake paychecks linked to a school’s token economy – a tool to teach financial literacy. Additionally, many schools we work with serve large populations of families whose primary language is not English (often Spanish). Whenever possible we translate our reports into a second language on the back, so non-English-speaking parents aren’t missing any important details. The challenge we faced was translating integers into Spanish words for the second line on the check (e.g. turning $430 into cuatrocientos treinta dólares). We wrote a script to translate numbers to English words, but no one on our team had the language expertise to do the same in Spanish. It admittedly wasn’t a huge priority, so we kept the words in English – ugly but functional.
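For a flavor of what was needed, here is a hypothetical, much-simplified sketch of integer-to-Spanish-words conversion for 0-999 (the real Cardinales algorithm handles far more, including gender and negative numbers):

```python
# Simplified Spanish cardinal numbers, 0-999 only.
UNITS = ["cero", "uno", "dos", "tres", "cuatro", "cinco", "seis",
         "siete", "ocho", "nueve", "diez", "once", "doce", "trece",
         "catorce", "quince", "dieciséis", "diecisiete", "dieciocho",
         "diecinueve", "veinte", "veintiuno", "veintidós", "veintitrés",
         "veinticuatro", "veinticinco", "veintiséis", "veintisiete",
         "veintiocho", "veintinueve"]
TENS = {30: "treinta", 40: "cuarenta", 50: "cincuenta", 60: "sesenta",
        70: "setenta", 80: "ochenta", 90: "noventa"}
HUNDREDS = {100: "ciento", 200: "doscientos", 300: "trescientos",
            400: "cuatrocientos", 500: "quinientos", 600: "seiscientos",
            700: "setecientos", 800: "ochocientos", 900: "novecientos"}

def spanish_words(n):
    if n < 30:
        return UNITS[n]
    if n < 100:
        tens, unit = divmod(n, 10)
        word = TENS[tens * 10]
        return word + " y " + UNITS[unit] if unit else word
    if n == 100:
        return "cien"                     # exactly 100 is "cien", not "ciento"
    hundreds, rest = divmod(n, 100)
    word = HUNDREDS[hundreds * 100]
    return word + " " + spanish_words(rest) if rest else word

print(spanish_words(430))  # cuatrocientos treinta
print(spanish_words(354))  # trescientos cincuenta y cuatro
```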
Around the time we were tackling this, we came across Algorithmia and its Bounty feature. It seemed like a long shot – but what’s the harm in asking? So we posted a request for someone to write an algorithm to Translate Integers to Spanish Words.
A Bounty Fulfilled
To be honest, after I created the bounty, I forgot about it until March 2nd, when I got the “Your bounty has been completed” e-mail. A user named Javier had uploaded Cardinales. I logged right in, threw some tests into the web console and, within diez minutos, was hitting the API successfully via Postman.
The Algorithmia-driven solution!
Algorithmia’s libraries plug right into any platform, and we considered using their JS on the front-end. However, for a few reasons, wrapping an endpoint of our internal API around a call to Cardinales made the most sense. First, this feature will likely get deployed in many more schools, so abstracting it gives us more flexibility to change things as needed. Second, school firewalls can be incredibly restrictive, so having the browser request the data via our server keeps all our calls in the whitelist and eliminates troubleshooting hard-to-pinpoint issues.
Safe and Secure
What’s awesome about Algorithmia is that it’s not inside our system, and we don’t need to provide Cardinales with any kind of student data. We just send it a number, like 354, and it comes back with the translation: trescientos cincuenta y cuatro. Everything happens via cURL, and there’s no trace of Algorithmia or Cardinales code on our servers.
Considerations for Next Time
Or, How to Write a Better Bounty Spec…
Javier went above and beyond and included things in the solution that we didn’t ask for, like translations in both the masculine and the feminine, and proper translations for negative numbers. Still, now that it’s up and running, I realize there are a bunch of things we could have thought more about when we wrote the spec.
- Multiple items in a single request – Right now we send one number at a time and receive one translation per request. This means there’s a lot of overhead if we print 100 student reports at once. Next time around, I’ll include the ability to send an array of inputs and receive key-value pairs in return.
- Error codes – Cardinales handles bad inputs gracefully: sending “abcd” returns an empty string. However, more elaborate algorithms might require more detailed error reporting. It’s definitely something we’ll keep in mind going forward.
As more schools build savvy data teams, we’re always looking for ways to help them integrate their efforts with our own. We have a public API, but I could also see Algorithmia as an easy, cost-effective way for them to contribute their own highly specific code without having to think about setting up their own infrastructure.
So far, leveraging Algorithmia for a non-core feature like this has been an easy, awesome experience. Our engineers were stoked to plug it in, and it works perfectly. I’ve never met Javier, but I can read his code (it’s elegant), he’s contributed meaningfully to our platform, and I appreciate his work. Gracias!
Matt Robins is the cofounder of DeansList, a platform that manages non-academic student data. DeansList’s platform puts behavior data to work, driving actionable reporting for students, teachers, administrators, and parents. For more information on DeansList, or to ask Matt questions about his Algorithmia implementation, e-mail him at email@example.com.
For us here at Algorithmia, protecting the privacy and security of our users’ information is a top priority. After some time in development, we are happy to announce that, starting today, we will be recognizing security researchers for their efforts through a bug bounty program.
A bug bounty program is common practice amongst leading companies to improve the security and experience of their products. This type of program provides an incentive for security researchers to responsibly disclose vulnerabilities and bugs, and allows for internal security teams to respond adequately in the best interest of their users.
All vulnerabilities should be reported via firstname.lastname@example.org. GPG key available below.
We require that all researchers:
- Make every effort to avoid privacy violations, degradation of user experience, disruption to production systems, and destruction of data during security testing;
- Use the designated communication channels to report vulnerability information to us; and
- Keep information about any vulnerabilities you’ve discovered confidential between yourself and Algorithmia until we’ve had 90 days to resolve the issue.
If you follow these guidelines when reporting an issue to us we commit to:
- Not institute a civil legal action against you and not support a criminal investigation;
- Work with you to understand and resolve the issue quickly (confirming the report within 72 hours of submission);
- Recognize your contribution on our site, if you are the first to report the issue and we make a code or configuration change based on the issue.
Any component developed by us under Algorithmia.com is fair game for this bounty program, except individual algorithms created by our users.
Out of Scope:
Any services hosted by third-party providers are excluded from scope.
In the interest of the safety of our users, staff, the Internet at large, and you as the security researcher, the following test types are excluded from scope and not eligible for a reward:
- Findings from physical testing such as office access (e.g. open doors, tailgating)
- Findings derived primarily from social engineering (e.g. phishing)
- Findings from applications or systems not listed in the ‘Targets’ section
- Functional, UI and UX bugs and spelling mistakes
- Network level Denial of Service (DoS/DDoS) vulnerabilities
Things we do not want to see:
Personally identifiable information of users (PII) that you may have found during your research.
In 1996 Larry Page and Sergey Brin published the Backrub paper, a research project about a new kind of search engine. The concept was a link analysis algorithm that measured relative importance within a set. Based on this one algorithm, the company Google was created and the PageRank index became one of the most famous algorithmic concepts in history.
Knowing how important it is to be indexed for the right thing, we here at Algorithmia were inspired by one of our users when he added an implementation of PageRank to the marketplace (note that Google moved beyond its original algorithm more than a decade ago). We realized that by combining various algorithms available in the Algorithmia API with the PageRank algorithm, we could get an idea of how search engines view a site or domain. So we’ve essentially integrated every intermediate step a crawler goes through when examining a site (using the algorithms already available in our marketplace), which lets us understand how machines see the web.
Understanding how pages are linked to each other on a site gives us a snapshot of the connectedness of a domain, and a glimpse into how important a search engine might consider each individual page. Additionally, a machine generated summary and topic tags give us an even better picture of how a web crawler might classify your domain.
Modern web crawlers use sophisticated algorithms to link across the entire internet but even limiting our application to pages on just one domain gives us significant insights into how any site might look to one of these crawlers.
Building a site explorer
We broke up the task of exploring a site into three steps:
- Crawl the site and retrieve as a graph
- Analyze the connectivity of the graph
- Analyze each node for its content
We found algorithms for each part already in the Algorithmia API, which allowed us to build out the pipeline quickly:
- GetLinks (retrieves all the URLs at the given URL)
- PageRank (simple implementation of the Backrub algorithm)
- Url2text (converts documents and strips tags to return the text from any URL)
- AutoTag (uses Latent Dirichlet Allocation from MALLET to produce topic tags)
Here is the result (click on a node for more info):
The individual steps
We built this app using three core technologies: AngularJS, D3.js, and the Algorithmia API.
The first thing we needed to do was crawl the supplied domain. We let the user choose the number of pages to crawl (limited in our demo to a max of 120 – at that point your browser will really start hurting). For each page crawled, we retrieve every link on that page and add it to the graph. Then, once we have reached the maximum number of pages, we apply PageRank to the result.
Get the links:
Iterate over site:
Apply Page Rank:
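The original post showed these steps as short code snippets; here is a rough Python sketch of the whole loop, with a hypothetical `get_links` stub standing in for the GetLinks algorithm and a plain power-iteration PageRank:

```python
def get_links(url, link_graph):
    """Stand-in for the GetLinks algorithm: look the page up in a
    pre-fetched link graph instead of fetching it over HTTP."""
    return link_graph.get(url, [])

def crawl(start, link_graph, max_pages=120):
    """Iterate over the site breadth-first, up to max_pages pages."""
    seen, queue, graph = set(), [start], {}
    while queue and len(seen) < max_pages:
        page = queue.pop(0)
        if page in seen:
            continue
        seen.add(page)
        graph[page] = get_links(page, link_graph)
        queue.extend(graph[page])
    return graph

def pagerank(graph, damping=0.85, iterations=50):
    """Plain power iteration over the crawled link graph."""
    pages = list(graph)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in graph.items():
            # Dangling or uncrawled targets: spread rank evenly instead.
            targets = [t for t in outlinks if t in rank] or pages
            for t in targets:
                new[t] += damping * rank[page] / len(targets)
        rank = new
    return rank

site = {"/": ["/blog", "/about"], "/blog": ["/"], "/about": ["/"]}
ranks = pagerank(crawl("/", site))
print(max(ranks, key=ranks.get))  # "/" — the most linked-to page ranks highest
```

In the real app, `get_links` is replaced by a call to the GetLinks algorithm over the Algorithmia API, and the PageRank step by the PageRank algorithm listed above.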
Once the graph is built, we render it using a D3 force-layout graph. Clicking on any individual node retrieves the content from that page, cleans up the HTML so we are left with just the text, and processes the text through both the summarizer and topic-tagger algorithms.
Really, the hardest part about building this was figuring out the quirks of D3, since the Algorithmia API just allowed us to stitch together all the algorithms we wanted for the process and start using them, without worrying about a single download or dependency.
Don’t take our word for it: try it yourself. We have made the AngularJS code available here; feel free to fork it, modify it, and use it in your own applications.
It’s really easy to expand the capabilities of the site mapper; here are some ideas:
- Sentiment by Term (understand how a term is seen across the site e.g.: Apple Watch on GeekWire)
- Count Social Shares (understand how popular any link-node is on a number of social media sites)
- many more that can be found in Algorithmia…
Made it this far?
If you tweet “I am [username] on @algorithmia” we will add another free 5k credits to your account.
– Diego (@doppene)