Artificial intelligence is set to hit the mainstream, thanks to improved machine learning tools, cheaper processing power, and a steep decline in the cost of cloud storage. As a result, firms are piling into the AI market, and pushing the pace of AI development across a range of fields.
“Given sufficiently large datasets, powerful computers, and the interest of subject-area experts, the deep learning tsunami looks set to wash over an ever-larger number of disciplines.”
When it does, Nvidia will be there to capitalize. The company announced a new chip design built specifically for deep learning, with 15 billion transistors (a 3x increase) and the ability to process data 12x faster than its previous chips.
Improved hardware contributes to Facebook’s ability to use artificial intelligence to describe photos to blind users, and it’s why Microsoft can now build a JARVIS-like personal digital assistant for smartphones.
Google, meanwhile, has its sights set on “solving intelligence, and then using that to solve everything else,” thanks to better processors and its cloud platform.
This kind of machine intelligence wouldn’t be possible without improved algorithms.
“I consider machine intelligence to be the entire world of learning algorithms, the class of algorithms that provide more intelligence to a system as more data is added to the system,” Shivon Zilis, creator of the machine intelligence framework, told Fast Forward Labs. “These are algorithms that create products that seem human and smart.”
If you want to go deeper on the subject, O’Reilly has a free ebook out on The Future of Machine Intelligence, which unpacks the “concepts and innovations that represent the frontiers of ever-smarter machines” through ten interviews, spanning NLP, deep learning, autonomous cars, and more.
So, you might be wondering: Is the singularity near? Not at all, the NY Times says.
Last week, SpaceX, Elon Musk’s space company, successfully landed its Falcon 9 rocket on a drone ship for the first time.
The rocket landed vertically on a barge floating in choppy waters out in the Atlantic Ocean.
But that’s not all: Tesla, Musk’s other company, received more than 325,000 preorders for its Model 3 last week, making it the “biggest one-week launch of any product.”
The preorders represent $14 billion in potential sales for the $35,000 electric car, which is expected to ship in late 2017. The unprecedented demand is a “watershed moment” for electric vehicles, the WSJ says.
As electric vehicles go mainstream, speculation mounts that the petrol car could be dead as early as 2025. We are now witnessing the slow-motion disruption of the global auto industry, Quartz adds. At least one car maker has taken notice: GM has invested $500 million in Lyft, with plans to build self-driving electric cars.
You might have heard: Google unveiled its new machine learning platform to “pour ML all over the cloud.” The move makes TensorFlow available to developers to do machine learning in the cloud with their own data.
Machine learning is developing fast, and “what will distinguish the good companies from the rest are things like domain expertise, quality of the dataset, and the ability to find the right problems to solve.” The Economist adds, “the firms that develop an early edge in artificial intelligence may reap the greatest rewards and erect barriers to entry.”
“At its highest level, machine learning is about understanding the world through data,” said Geoffrey Gordon, acting chair of Carnegie Mellon University’s Machine Learning Department. “Anything you can think of — public policy, finance, automobiles and robotics, for example — there’s a role for machine learning.”
While algorithm development is still broken, Coursera’s new Data Science Masters program is a step in the right direction. And, here are the top machine learning books for data scientists and machine learning engineers.
Confused about ML? Here’s how to approach machine learning as a non-technical person.
This is an excerpt from the post On AlphaGo, Intuition, and the Master Objective Function, by Jason Toy, founder of Somatic.io. For more, visit his blog.
It is another great milestone in artificial intelligence that computers are now able to beat humans at the game of Go. The DeepMind/Google team came up with a combination of new algorithms that effectively reduced the search space of possible positions (roughly 10^170 of them) to a much smaller number that a computer could work with.
What I’m more excited about is that DeepMind/Google are publicly talking about intuition and trying to build systems that emulate it. In their blog post, they mention that “the game is played primarily through intuition and feel.” They then go into detail on how they constructed a machine learning system that reduces the number of positions the system needs to evaluate at each turn. Their system makes estimates of expected outcomes instead of exhaustively calculating exact outcomes, thereby loosely simulating intuition.
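The trick can be sketched in a few lines. To be clear, this is a toy illustration of the general idea, not DeepMind’s actual method: the `value_estimate` function below is a hypothetical stand-in for a trained value network, and the “search” simply keeps the top-scoring candidate moves instead of expanding every continuation.

```python
def value_estimate(position, move):
    """Hypothetical stand-in for a trained value network: returns an
    estimated win probability in [0, 1]. A real system would run a
    neural network over the board state here."""
    return (sum(ord(c) for c in position + move) % 100) / 100.0

def best_move(position, candidate_moves, top_k=3):
    """Score candidates with the estimator and keep only the top few,
    instead of exhaustively expanding every continuation."""
    scored = sorted(candidate_moves,
                    key=lambda m: value_estimate(position, m),
                    reverse=True)
    pruned = scored[:top_k]   # the "reduced" search space
    return pruned[0]          # play the highest-valued move

best_move("empty-board", ["A1", "B2", "C3", "D4", "E5"])
```

The point of the sketch is the shape of the computation: an estimate replaces a calculation, which is exactly the move from brute force toward something intuition-like.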
What is Intuition?
So what is the definition of intuition? According to Wikipedia, it is “the ability to acquire knowledge without inference or the use of reason.”
Talking about this subject in relation to artificial intelligence has long been a sore spot. Can intuition exist inside of a computer? The truth of the matter is we have no idea; people have been debating this since the beginning of computing, and much earlier with Descartes and others.
Another core problem is the name and definition of intuition itself. Should we call it intuition, emotion, gut reaction, lizard brain, or feelings? And what exactly does each of those mean? We have been trying to objectively define this for a long time.
The specific definition I have been using for a while is based on one of my favorite books, Thinking, Fast and Slow.
The author, Daniel Kahneman, has studied the mind for decades, and came to the conclusion that the mind runs two different algorithms:
- System 1: Fast, automatic, frequent, emotional, stereotypic, subconscious
- System 2: Slow, effortful, infrequent, logical, calculating, conscious
System 1 is a machine for jumping to conclusions. It is a heuristic-based system that reacts to incomplete information and stimuli fast enough to navigate our world in real time, given all its uncertainty. It gives answers that are typically good enough, but it can make mistakes because of the small amount of information it processes. Escaping an animal trying to attack us, swerving away from a car that suddenly stops, looking toward a bomb explosion, adding 1+1: those are all examples of System 1 processing.
System 2, on the other hand, is a completely different system. It is slow and deliberate, and it handles things like complicated math, answering tests, and writing a novel. System 2 can override System 1 when it senses that System 1’s answer is wrong. It is presumed to reside in our neocortex, and as far as we know, only humans have System 2.
It seems that humans and all living organisms have System 1. If we assume the System 1/System 2 combination is how our brains work, then I would argue that we have already built an awesome System 2 in computers. All the current machine learning algorithms we build are based on logic and calculation, which is what System 2 is all about. But System 2 cannot really exist by itself; it needs guidance.
All of these machine learning systems are optimizing for a specific objective function. Depending on the problem you are trying to solve, you use different algorithms and objective functions. With classical machine learning applications like regression and classification, the algorithm is trained on training data, and the objective is to have the machine get as many answers correct as possible on a previously unseen validation set.
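To make the fixed-objective idea concrete, here is a minimal, self-contained sketch. The data and the one-parameter threshold model are both invented for illustration: “training” just searches for the parameter that maximizes accuracy on the training set, and the objective is then measured on a held-out validation set.

```python
# Toy supervised learning with a single fixed objective function:
# classification accuracy. (feature, label) pairs, invented data.
train = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]
valid = [(1.5, 0), (3.5, 1)]   # previously unseen examples

def accuracy(threshold, data):
    """The objective function: fraction of labels predicted correctly
    by the rule 'predict 1 when feature >= threshold'."""
    hits = sum(1 for x, y in data if (x >= threshold) == bool(y))
    return hits / len(data)

# "Training" = pick the threshold that maximizes the objective
# on the training data.
candidates = [x for x, _ in train]
best = max(candidates, key=lambda t: accuracy(t, train))

print(accuracy(best, valid))  # how well the learned rule generalizes
```

Notice that the system has exactly one notion of success, baked in from the start; nothing in the loop can decide that accuracy is the wrong thing to care about.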
With DeepMind’s video game system, they hook the algorithm directly up to the score of the video game, so the system is optimizing for winning the game.
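That loop can be sketched abstractly. The toy agent below (a simple epsilon-greedy bandit, my own stand-in, not DeepMind’s actual algorithm) is wired directly to a numeric score and does nothing but favor whichever action has paid off best so far:

```python
import random

random.seed(0)  # reproducible toy run

true_payoff = {"left": 0.2, "right": 0.8}  # hidden from the agent
totals = {"left": 0.0, "right": 0.0}       # score earned per action
counts = {"left": 0, "right": 0}           # times each action taken

def average(action):
    """Observed average score for an action (0 if never tried)."""
    return totals[action] / counts[action] if counts[action] else 0.0

def choose(eps=0.1):
    """Mostly exploit the best-scoring action; occasionally explore."""
    if random.random() < eps:
        return random.choice(list(true_payoff))
    return max(true_payoff, key=average)

for _ in range(1000):
    action = choose()
    # The "game" emits a score; the agent optimizes for it and
    # for nothing else.
    reward = 1.0 if random.random() < true_payoff[action] else 0.0
    totals[action] += reward
    counts[action] += 1
```

After enough plays, the agent concentrates on the higher-scoring action. The objective never changes; score is the whole world.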
AlphaGo’s objective function is to win at Go, and that is it; it can’t do anything else.
Notice how each of these machine learning models has a single fixed objective function?
And what is the objective function of living organisms in general? Is it to eat? To reproduce? To live as long as possible? To be “happy”? To achieve homeostasis? How about for a human? Is it the same as for other living organisms? Even if each living organism has a single objective function, they all seem to have different ones.
And what is my objective function as I write this blog post? I have the overarching goal of wanting to publish my thoughts on AlphaGo, but as I write, my intuition about the direction the post should take keeps changing. How do we put this kind of intuition into a computer? Think about what you do in your daily life: how do you go about deciding what to do next?
If we want to get to general artificial intelligence, I believe that a heuristics-based reaction system, or System 1, should replace the objective function of a machine learning algorithm. In fact, they are the same thing.
For more, check out the full post On AlphaGo, Intuition, and the Master Objective Function.
“An entirely new way of interacting with our world is emerging,” writes Matt Turck from FirstMark Capital. “The Internet of Things is about the transformation of any physical object into a digital data product.” In 2016, the Internet of Things is taking shape in five areas: the connected home, wearables, healthcare, robotics/drones, and transportation.
Meanwhile, the security of IoT devices looms as a concern, with Linux creator Linus Torvalds reminding us that “security plays second fiddle.” For instance, an Amazon Echo got confused and hijacked a thermostat after NPR was left on.
If you think that’s amusing, remember that with Alexa, Cortana, Siri, Google Now, and others, “you have a personal digital assistant that knows you,” Microsoft’s Satya Nadella says, “knows your preferences, has the ability, in a privacy-protecting way, to go and look at your information and your organization’s information and help you with your tasks.”
Plus: Learn how to build a motion sensing security camera with a Raspberry Pi. That reminds us of our own hack: a front desk AI inspired by Mark Zuckerberg’s New Year’s resolution.
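The core of such a camera can be sketched without any camera hardware at all. This toy version (frame data invented for illustration) flags motion when more than a small fraction of pixels change between consecutive grayscale frames; a real Raspberry Pi build would capture the frames with the `picamera` module or OpenCV rather than hand-built lists.

```python
# Frame differencing: the core idea behind a motion-sensing camera.
# Frames are flat lists of grayscale pixel values (0-255).
def motion_detected(prev, curr, pixel_delta=25, changed_fraction=0.01):
    """Return True when enough pixels differ between two frames."""
    changed = sum(
        1 for p, c in zip(prev, curr) if abs(p - c) > pixel_delta
    )
    return changed / len(prev) >= changed_fraction

frame_a = [10] * 100              # static scene
frame_b = [10] * 95 + [200] * 5   # 5% of pixels suddenly brightened

print(motion_detected(frame_a, frame_b))  # True: 5% > 1% changed
```

The thresholds are tuning knobs: `pixel_delta` filters out sensor noise, while `changed_fraction` sets how large a disturbance counts as motion.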
Infographic courtesy of Mobile Future.