Wow! This morning we were selected as this year’s Startup Showcase winner at Strata + Hadoop World NY 2015. The panel of judges chose Algorithmia as the winner based on our team, technology, and innovations.
It’s an honor to take part in this competition, where 12 of the top big data startups demonstrated their technologies to a room packed full of investors, entrepreneurs, and researchers.
Three other companies also stood out: second-place sense.io, third-place timbr.io, and the audience favorite, Blue Talon.
Build brilliant apps with Algorithmia, the largest marketplace for algorithms in the world. We help application developers solve complex problems with ease and efficiency by making algorithmic intelligence approachable and accessible in fewer than five lines of code.
Algorithmia unlocks the building blocks of human understanding, helping you make every app a smart app. Use Algorithmia to recognize patterns in your data, extract visual knowledge, understand audio, classify unstructured data, and derive meaning from language. Focus on what matters most, and let Algorithmia take care of the rest.
- Infrastructureless Deployment
- REST API
- Production Ready
Natural Language Processing Summary:
Natural Language Processing, or NLP for short, is a field of study focused on the interactions between human language and computers. It sits at the intersection of computer science, artificial intelligence, and computational linguistics (Wikipedia).
“Natural Language Processing is a field that covers computer understanding and manipulation of human language, and it’s ripe with possibilities for newsgathering,” Anthony Pesce said in Natural Language Processing in the kitchen. “You usually hear about it in the context of analyzing large pools of legislation or other document sets, attempting to discover patterns or root out corruption.”
NLP is a way for computers to analyze, understand, and derive meaning from human language in a smart and useful way. By utilizing NLP, developers can organize and structure knowledge to perform tasks such as automatic summarization, translation, named entity recognition, relationship extraction, sentiment analysis, speech recognition, and topic segmentation.
“Apart from common word processor operations that treat text like a mere sequence of symbols, NLP considers the hierarchical structure of language: several words make a phrase, several phrases make a sentence and, ultimately, sentences convey ideas,” John Rehling, an NLP expert at Meltwater Group, said in How Natural Language Processing Helps Uncover Social Media Sentiment. “By analyzing language for its meaning, NLP systems have long filled useful roles, such as correcting grammar, converting speech to text and automatically translating between languages.”
What Can I Use Natural Language Processing For?
- Summarize blocks of text using Summarizer to extract the most important and central ideas while ignoring irrelevant information.
- Automatically generate keyword tags from content using AutoTag, which leverages LDA, a technique that discovers topics contained within a body of text.
- Identify the type of entity extracted, such as it being a person, place, or organization using Named Entity Recognition.
- Use Sentiment Analysis to identify the sentiment of a string of text, from very negative to neutral to very positive.
- Reduce words to their root, or stem, using PorterStemmer, or break up text into tokens using Tokenizer.
These are just some of the natural language processing algorithms web developers can use.
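To make the last of these concrete, here is a toy sketch of tokenization and suffix stripping in plain Python. The suffix list and length check are illustrative only; the real PorterStemmer algorithm is considerably more sophisticated.

```python
import re

def tokenize(text):
    """Split text into lowercase word tokens (a rough stand-in for Tokenizer)."""
    return re.findall(r"[a-z']+", text.lower())

# A tiny, illustrative suffix list; the real Porter algorithm has many rules.
SUFFIXES = ("ing", "ly", "ed", "es", "s")

def stem(word):
    """Naive suffix stripping, a toy sketch of what a stemmer does."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

tokens = [stem(t) for t in tokenize("Parsing sentences reveals structured ideas")]
# tokens -> ['pars', 'sentenc', 'reveal', 'structur', 'idea']
```

Even this crude version shows why stemming helps: "parse", "parsed", and "parsing" all collapse to one root, so they count as the same concept downstream.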
What Are Some Real World Examples of Natural Language Processing?
Social media analysis is a great example of NLP use. Brands track conversations online to understand what customers are saying, and glean insight into user behavior.
“One of the most compelling ways NLP offers valuable intelligence is by tracking sentiment — the tone of a written message (tweet, Facebook update, etc.) — and tag that text as positive, negative or neutral,” Rehling said.
Build your own social media monitoring tool
- Start by using the algorithm Retrieve Tweets With Keyword to capture all mentions of your brand name on Twitter. In our case, we search for mentions of Algorithmia.
- Then, pipe the results into the Sentiment Analysis algorithm, which will assign a sentiment rating from 0-4 for each string (Tweet).
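The second step can be sketched with a minimal lexicon-based scorer that maps text onto the same 0-4 scale (0 = very negative, 2 = neutral, 4 = very positive). The word lists and tweets below are made up for illustration; the hosted Sentiment Analysis algorithm is far richer.

```python
# Toy lexicon-based sentiment, clamped to the 0-4 range used above.
POSITIVE = {"love", "great", "awesome", "fast"}
NEGATIVE = {"hate", "slow", "broken", "awful"}

def sentiment(tweet):
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(0, min(4, 2 + score))  # 2 is neutral; clamp to 0..4

mentions = ["I love Algorithmia", "the API felt slow today"]
ratings = [sentiment(m) for m in mentions]
# ratings -> [3, 1]
```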
Similarly, Facebook uses NLP to track trending topics and popular hashtags.
“Hashtags and topics are two different ways of grouping and participating in conversations,” Chris Struhar, a software engineer on News Feed, said in How Facebook Built Trending Topics With Natural Language Processing. “So don’t think Facebook won’t recognize a string as a topic without a hashtag in front of it. Rather, it’s all about NLP: natural language processing. Ain’t nothing natural about a hashtag, so Facebook instead parses strings and figures out which strings are referring to nodes — objects in the network. We look at the text, and we try to understand what that was about.”
It’s not just social media that can use NLP to its benefit. Publishers are hoping to use NLP to improve the quality of their online communities by leveraging technology to “auto-filter the offensive comments on news sites to save moderators from what can be an ‘exhausting process’,” Francis Tseng said in Prototype winner using ‘natural language processing’ to solve journalism’s commenting problem.
Use NLP to build your own RSS reader
You can build a machine learning RSS reader in under 30 minutes using the following algorithms:
- ScrapeRSS to grab the title and content from an RSS feed.
- Html2Text to keep the important text, but strip all the HTML from the document.
- AutoTag uses Latent Dirichlet Allocation to identify relevant keywords from the text.
- Sentiment Analysis is then used to identify if the article is positive, negative, or neutral.
- Summarizer is finally used to identify the key sentences.
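The HTML-stripping and tagging steps of that pipeline can be mimicked with a few lines of standard-library Python. This is a toy sketch only: a tiny stopword list and a word-frequency count stand in for Html2Text and the LDA-based AutoTag, and the article text is invented.

```python
import re
from collections import Counter
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Strip tags and keep only text, a stand-in for Html2Text."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def html_to_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(p.strip() for p in parser.parts if p.strip())

STOPWORDS = {"the", "a", "is", "and", "of", "to"}  # tiny illustrative list

def auto_tags(text, n=2):
    """Most frequent non-stopwords, a crude stand-in for LDA-based AutoTag."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(n)]

article = "<h1>Tracking NLP</h1><p>NLP helps apps understand language. NLP is everywhere.</p>"
text = html_to_text(article)
tags = auto_tags(text)
```

The hosted algorithms do the same job at much higher quality, but the shape of the pipeline (fetch, strip, tag, score, summarize) is exactly what the list above describes.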
Recommended NLP Books for Beginners
- Speech and Language Processing: “The first of its kind to thoroughly cover language technology – at all levels and with all modern technologies – this book takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora.”
- An Introduction to Information Retrieval: “Class-tested and coherent, this groundbreaking new textbook teaches web-era information retrieval, including web search and the related areas of text classification and text clustering from basic concepts.”
- Foundations of Statistical Natural Language Processing: “This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear. The book contains all the theory and algorithms needed for building NLP tools. It provides broad but rigorous coverage of mathematical and linguistic foundations, as well as detailed discussion of statistical methods, allowing students and researchers to construct their own implementations. The book covers collocation finding, word sense disambiguation, probabilistic parsing, information retrieval, and other applications.”
- Handbook of Natural Language Processing: “The Second Edition presents practical tools and techniques for implementing natural language processing in computer systems. Along with removing outdated material, this edition updates every chapter and expands the content to include emerging areas, such as sentiment analysis.”
- Statistical Language Learning (Language, Speech, and Communication): “Eugene Charniak breaks new ground in artificial intelligence research by presenting statistical language processing from an artificial intelligence point of view in a text for researchers and scientists with a traditional computer science background.”
- Natural Language Understanding: “This long-awaited revision offers a comprehensive introduction to natural language understanding with developments and research in the field today. Building on the effective framework of the first edition, the new edition gives the same balanced coverage of syntax, semantics, and discourse, and offers a uniform framework based on feature-based context-free grammars and chart parsers used for syntactic and semantic processing.”
- Natural Language Processing Tutorial: “We will go from tokenization to feature extraction to creating a model using a machine learning algorithm. You can get the source of the post from github.”
- Basic Natural Language Processing: “In this tutorial competition, we dig a little “deeper” into sentiment analysis. People express their emotions in language that is often obscured by sarcasm, ambiguity, and plays on words, all of which could be very misleading for both humans and computers.”
- An NLP tutorial with Roger Ebert: “Natural Language Processing is the process of extracting information from text and speech. In this post, we walk through different approaches for automatically extracting information from text—keyword-based, statistical, machine learning—to explain why many organizations are now moving towards the more sophisticated machine-learning approaches to managing text data.”
If you’re interested in learning more, this free introductory course from Stanford University will help you learn the fundamentals of natural language processing and how you can use it to solve practical problems.
Once you’ve gotten the fundamentals down, apply what you’ve learned using Python and NLTK, the most popular framework for Python NLP.
- Natural language processing (Wikipedia): “Natural language processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages. In 1950, Alan Turing published an article titled ‘Computing Machinery and Intelligence’ which proposed what is now called the Turing test as a criterion of intelligence. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing.”
- Outline of natural language processing (Wikipedia): “The following outline is provided as an overview of and topical guide to natural language processing: Natural language processing – computer activity in which computers are entailed to analyze, understand, alter, or generate natural language.”
- Apache OpenNLP: “The Apache OpenNLP library is a machine learning based toolkit for the processing of natural language text.”
- Natural Language Toolkit: “NLTK is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, wrappers for industrial-strength NLP libraries, and an active discussion forum. Natural Language Processing with Python provides a practical introduction to programming for language processing.”
We’re thrilled to be one of the twelve leading big data startups selected to present at Strata + Hadoop World 2015 at 6:30pm on Tuesday, September 29th at the Javits Center in New York City. Diego and Kenny will be showing off our intelligent algorithms to a packed room of developers, entrepreneurs, and researchers as part of Startup Showcase. Winners will be announced during Wednesday’s keynote – fingers crossed!
Check out this video of winners from Strata + Hadoop in San Jose to get a taste of the competition:
Are you in New York? Startup Showcase is free to attend, and open to the public as part of NYC DataWeek. Register here for your free ticket, and be sure to come say hi to Diego and Kenny, who will be handing out Algorithmia credits. Or reach out to us @Algorithmia.
We’re proud to be sponsoring DubHacks, the largest collegiate hackathon in the Pacific Northwest. Over 600 student developers and designers will gather on the University of Washington campus in Seattle on October 17th and 18th to form teams, build projects, and create solutions to real-world problems.
Algorithmia will be there with swag, snacks, and free API access. Come say hi!
In the meantime, check out our collection of algorithms available for use. How might you use our content recommendation, summarization, and sentiment analysis algorithms? Oh, and what might you do with our nudity detection algorithm?
LEARN MORE ABOUT DUBHACKS:
This blog post is Part II of a series exploring the Product Hunt API. Apart from the Product Hunt and Algorithmia APIs, we use Aerobatic to quickly create the front end of our apps.
Our last post discussed how we acquired the Product Hunt dataset, how we designed a vote-ring detection algorithm, and ended with a live demo that analyzes new Product Hunt submissions for vote-rings. In this post, we will briefly explain how collaborative recommendation systems work (through the FP-Growth algorithm) and will apply that on the Product Hunt dataset using a live demo. As a cherry on top, we made a Chrome extension that will augment your Product Hunt browsing with a list of recommendations using our approach. Read on for the results, demo, and code.
Note About Recommendation Engines
There are two broad approaches to recommendation engines: those based on collaborative filtering and those that are content-based. A hybrid approach marrying the two is also very common.
Collaborative Filtering recommenders depend on the way that users interact with the data – the number of overlapping decisions between users A and B (such as liking a post or buying an item) is used as an indicator to predict the likelihood that user A will make the same decision as user B on future topics. This is similar to Amazon’s “Other users also bought…”.
Content-based recommenders depend on the inherent attributes of an item – if user A likes item X, then the description/keywords/color of item X is used to predict the next item to recommend. This analysis can be extended to build a profile of what user A likes and dislikes. This is similar to Pandora’s Music Genome Project.
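The collaborative idea can be sketched in a few lines: score each candidate item for user A by weighting every other user's likes by how many decisions they share with A. The users and posts below are invented for illustration; real systems normalize these scores and handle far larger data.

```python
# Minimal collaborative-filtering sketch over made-up like data.
likes = {
    "A": {"post1", "post2"},
    "B": {"post1", "post2", "post3"},
    "C": {"post4"},
}

def recommend(user, likes):
    scores = {}
    for other, items in likes.items():
        if other == user:
            continue
        overlap = len(likes[user] & items)  # number of shared decisions
        for item in items - likes[user]:    # items the user hasn't seen
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

suggestions = recommend("A", likes)
# "post3" ranks first because user B shares two likes with user A.
```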
Here we are going to build a Collaborative Filtering recommendation engine that is specific to Product Hunt. There are many approaches to achieve this task and we will use the most straightforward one: Affinity Analysis (a variant of Association Rule Learning).
Understanding Association Rules
Businesses realized the power of Association Rules a long time ago, especially for up-selling and cross-selling. For example, a grocery shop might notice that people who buy diapers are also very likely to buy beer (i.e. a strong association), and therefore the grocery shop manager might decide to place the diapers close to the beer fridge to fuel impulse buying.
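The strength of a rule like {diapers} → {beer} is usually measured by two numbers: support (how often the pair co-occurs across all transactions) and confidence (how often beer appears given diapers). A quick worked example over a made-up transaction set:

```python
# Support and confidence for the rule {diapers} -> {beer}.
# The transactions are invented for illustration.
transactions = [
    {"diapers", "beer", "milk"},
    {"diapers", "beer"},
    {"diapers", "bread"},
    {"milk", "bread"},
]

both = sum(1 for t in transactions if {"diapers", "beer"} <= t)   # 2
antecedent = sum(1 for t in transactions if "diapers" in t)       # 3

support = both / len(transactions)  # 2/4 = 0.5
confidence = both / antecedent      # 2/3, i.e. P(beer | diapers)
```

FP-Growth is essentially an efficient way of finding all itemsets whose support clears a threshold without enumerating every candidate pair.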
We looked into Algorithmia’s catalogue and found two algorithms providing exactly this function: a Weka-based implementation by Aluxian, and a direct implementation by Paranoia. Both implement the Frequent Pattern Growth algorithm (FP-Growth), a popular method for building association rules through a divide-and-conquer strategy. We ran them side by side and decided to go with Paranoia’s implementation.
The algorithm takes a set of transactions as an input and produces weighted associations as an output. Each transaction is represented as a single line with comma-delimited items. Instead of customers and groceries, our transaction set was the upvotes that Product Hunt users made on all the 16,000+ posts. Each user was represented as a line, and each line contained the posts’ ids that received an upvote from that user.
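Producing that input format is straightforward: one comma-delimited line per user, listing the post ids that user upvoted. The upvote data below is invented for illustration.

```python
# Build the comma-delimited transaction lines described above.
upvotes = {
    "alice": [101, 205, 309],
    "bob": [205, 309],
}

lines = [",".join(str(pid) for pid in posts) for posts in upvotes.values()]
transactions_input = "\n".join(lines)
# transactions_input:
#   101,205,309
#   205,309
```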
We Have a Demo
We created a Product Hunt-specific wrapper around the FP-Growth algorithm, which we called Product Hunt Recommender. This algorithm has a copy of the Product Hunt dataset and updates it every 24 hours (therefore it works better on posts more than a few days old). It takes a single input (post id) and returns up to five recommended posts.
If you already have an Algorithmia account, you should be able to experiment directly with the algorithm through the web console (only visible if you’re signed in).
It was extremely satisfying to see how well the algorithm worked – for example, notice the result of applying the algorithm on the post ‘The Hard Thing About Hard Things’ (a book about entrepreneurship) gives a recommendation of four other books, all in entrepreneurship as well.
In a real-world scenario, a developer would run the FP-Growth algorithm on their dataset every so often and save the association rules somewhere permanent in the backend. Whenever an item is pulled out, the app would also look for strong associations as recommendations. Keep in mind that there are other routes towards a recommendation engine, such as content-based, clustering, or even hybrid solutions.
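The serve-time half of that pattern is just a lookup: mine the rules offline, persist the strongest ones, and consult them whenever an item page is rendered. A minimal sketch, with illustrative post ids and rules:

```python
# Precomputed offline by the mining job; ids and rules are made up.
rules = {
    "hard-things": ["lean-startup", "zero-to-one"],
}

def recommendations_for(post_id, rules, limit=5):
    """Return up to `limit` precomputed recommendations, or none if
    the item never appeared in a strong association."""
    return rules.get(post_id, [])[:limit]

recs = recommendations_for("hard-things", rules)
```

Keeping the expensive mining step out of the request path is what makes this approach practical at web scale.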
Product Genius, the Chrome Extension
We thought it would be awesome to make a Chrome extension out of this that adds a new section, titled “Other People Also Liked”, to the sidebar of Product Hunt posts. There’s already a “Similar Products” section built into Product Hunt, and we can’t definitively determine what method they used to implement it. One thing’s for sure: from our limited testing, we found numerous posts where Product Genius returned better results than the built-in version.
Install the extension from here and check it out for yourself:
We also found instances where Product Hunt’s internal recommender performed better: BitBound.
What Else Can We/You Do?
You can access the code for the Chrome extension, web demo, and the dataset itself from here. You can easily experiment with other approaches using the hundreds of algorithms within the Algorithmia catalogue, such as using Keyword-Set Similarity instead of FP-Growth or a hybrid approach using the two. Let us know if you have any ideas for other algorithms you want us to demonstrate on the dataset by tweeting us at @algorithmia.
Want to create your own analysis? Sign up here and get 100,000 credits on us – a special Product Hunt promotion.