This week we look at how Tesla is upgrading Autopilot to use radar, an algorithm that mimics human speech, and how physics may explain why neural nets work.
Plus, we compile our top projects to try at home, and favorite articles from the past week.
Not a subscriber? Join the Emergent // Future newsletter here.
Tesla Upgrades Autopilot
You Might Have Heard: Tesla is updating Autopilot, the software that powers their self-driving car option, to use radar as the primary control sensor for navigation.
Cars will no longer need the camera to confirm visual image recognition while driving. The change is geared toward preventing accidents, like the fatal Model S crash.
The new software will give the cars access to six times as many radar objects using the same hardware, which is available on all cars shipped after October 2014.
Tesla’s taking advantage of the fleet of cars already on the road to dynamically learn about the positions and locations of road signs, bridges, and other stationary objects to better map roadways and hazards, and all but eliminate false-positives.
This real-time learning system is always running, whether or not Autopilot is on. Meaning, the more you drive, the more Tesla learns.
But Did You Know Tesla’s Autopilot has driven more than 47M miles in the past six months? They’re adding more than a million miles of driving data every day.
By comparison, Google’s self-driving car, while taking a very different approach to autonomous vehicles, has travelled just 1.5M miles in SIX YEARS.
There’s concern that Google’s car project is losing out to rivals, like Tesla and Uber.
Meanwhile, Apple has shuttered parts of its self-driving car project, laid off dozens of employees, and is rethinking its strategy.
DeepMind Mimics Human Speech
Google’s DeepMind can now generate raw sound waves that mimic human voices – cutting the quality gap between existing text-to-speech systems and real human speech by over 50%.
They’re calling it WaveNet, which uses neural nets to generate convincing speech and music.
Researchers trained it on recorded speech, using a convolutional neural network to learn which audio samples tend to follow others in the context of speech, so it can generate new audio one sample at a time.
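The core idea is autoregressive generation: predict the next audio sample from the samples before it, then feed the prediction back in as context. Here's a toy sketch of that loop (the hand-written predictor below is an assumption for illustration only; WaveNet learns its predictive distribution with a deep convolutional network):

```python
import numpy as np

# Toy illustration of autoregressive generation: each new sample is
# predicted from the samples that came before it. The "model" here is a
# made-up linear rule, NOT DeepMind's actual network.

rng = np.random.default_rng(0)

def predict_next(history):
    """Fake predictor: a simple linear rule over the last two samples,
    plus a little noise. A real model would output a learned distribution
    over the next sample instead."""
    return 0.9 * history[-1] - 0.5 * history[-2] + rng.normal(scale=0.01)

# Seed with two samples, then generate one sample at a time,
# feeding each prediction back in as context for the next step.
samples = [0.0, 0.1]
for _ in range(100):
    samples.append(predict_next(samples))

print(len(samples))  # 102: two seed samples plus 100 generated ones
```

The same loop structure applies whether the predictor is this two-line rule or a deep network — only the quality of the predictions changes.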
DeepMind famously beat the world’s best Go player in March, and is also being used to more efficiently manage power usage in Google’s data centers. They’ve cut their electricity bill by 15%.
Read the full announcement, and listen to examples.
How Neural Nets Work
Researchers from Harvard and MIT say they’ve discovered the secret to neural networks buried in the laws of physics, where a small subset of mathematical functions describes the way the universe operates.
This is great news, because nobody quite understands why deep neural networks are so good at solving complex problems.
In physics, complex structures are formed through a sequence of simple steps: particles form atoms, atoms form molecules, and so on up through cells, organisms, planets, solar systems, and galaxies.
Neural nets are arranged in layers, where each layer deals with a higher and higher level of abstraction and complexity.
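That layered structure can be sketched in a few lines — each layer applies a simple transformation, and stacking them composes simple steps into something more abstract, loosely mirroring the particles-to-galaxies hierarchy (the sizes and weights below are arbitrary, untrained placeholders):

```python
import numpy as np

# Minimal sketch of a layered network: each layer is a linear map plus a
# nonlinearity, and stacking layers builds higher and higher levels of
# abstraction from the raw input.

rng = np.random.default_rng(42)

def layer(x, n_out):
    """One fully connected layer with random (untrained) weights."""
    w = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(0.0, w.T @ x)  # ReLU nonlinearity

x = rng.normal(size=8)   # raw input (the "particles")
h1 = layer(x, 6)         # low-level features
h2 = layer(h1, 4)        # mid-level features
out = layer(h2, 2)       # high-level abstraction

print(out.shape)  # (2,)
```

Training would tune each layer's weights, but the hierarchical composition — simple step after simple step — is the part the Harvard/MIT researchers connect to physics.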
Read the research to learn more about why deep learning works so well.
What We’re Reading
- Stanford’s 2016 Report: One Hundred Year Study on Artificial Intelligence. A long-term investigation of the field of AI and its influences on people, their communities, and society, considering the science, engineering, and deployment of AI-enabled computing systems. (Stanford.edu)
- Attention and Augmented Recurrent Neural Networks. RNNs are one of the staples of deep learning, allowing neural networks to work with sequences of data like text, audio and video. (Distill.pub)
- Richard Feynman and The Connection Machine. For Richard a crazy idea was an opportunity to either prove it wrong or prove it right. (The Long Now Foundation)
- How a Japanese cucumber farmer is using deep learning and TensorFlow. It’s not hyperbole to say that use cases for machine learning and deep learning are only limited by our imaginations. (Google)
- Is Artificial Intelligence Permanently Inscrutable? Despite new biology-like tools, some insist interpretation is impossible. (Nautilus)
- On Generative Algorithms. A developer and generative artist living in Oslo, Norway starts messing with mathematics, algorithms, and solving problems. (Inconvergent)
Try This At Home
- Build a Sentiment Analysis Slack Chatbot in Python
- Detecting Music BPM using Neural Networks
- Find 404s: a simple Python script for creating a nicely formatted broken link report email
- Machine Learning in a Year – From being a total ML noob to using it at work
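To give a flavor of the first project above, here's a bare-bones sentiment scorer — the kind of logic a Slack chatbot could call on each message. The word lists are made up for illustration; the linked tutorial may use a different library or approach entirely:

```python
# Tiny lexicon-based sentiment scorer: count positive and negative words.
# The word lists here are illustrative placeholders, not a real lexicon.

POSITIVE = {"great", "good", "love", "awesome", "nice"}
NEGATIVE = {"bad", "hate", "terrible", "awful", "broken"}

def sentiment(message: str) -> str:
    """Classify a message as positive, negative, or neutral."""
    words = message.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this awesome bot"))  # positive
print(sentiment("this build is broken"))     # negative
```

A real chatbot would swap this for a trained model or a sentiment library, but the message-in, label-out interface stays the same.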
Emergent Future is a weekly, hand-curated dispatch exploring technology through the lens of artificial intelligence, data science, and the shape of things to come. Subscribe here.