One of the biggest hurdles for developers of machine vision applications is getting enough computing power for artificial intelligence at the edge. Intel Corp.’s new Neural Compute Stick 2, or NCS 2, is designed to address this challenge.
The company announced the Intel NCS 2 today at its AI developer conference in Beijing. It uses the Intel Movidius Myriad X vision processing unit (VPU) and is supported by the Intel distribution of the OpenVINO toolkit.
Analyst firm IDC predicts that by 2019, 45% of IoT-generated data will be stored, analyzed, and acted on at or near the edge.
According to Intel, the NCS 2 makes it easier to test, tune, and prototype deep neural networks, with 8x greater performance than its predecessor. Intel said its vision accelerators enable developers to more quickly bring smart cameras, industrial robots, and Internet of Things (IoT) devices to production.
The NCS 2 looks like a USB thumb drive, but it includes a dedicated neural network inference accelerator. In addition, developers can use the Intel AI: In Production ecosystem to port prototypes to other form factors.
Living on the edge with AI
“We’re driving transformation in trusted use cases like retail, healthcare, and aerospace,” said Steen Graham, general manager of IoT Channels & Ecosystem at Intel. “We’re at the advent of an edge computing renaissance.”
“For example, in manufacturing, we want to see if humans are operating within safety limits,” he said during a conference call. “There are lots of opportunities for low-latency connectivity and fault tolerance.”
“The economics of edge computing is tied to the tremendous amount of data that’s being generated,” Graham noted. “You want to affordably and securely send data to the cloud but discard irrelevant or private data.”
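The pattern Graham describes — infer locally, forward only what matters, discard the rest — can be sketched in a few lines of Python. This is an illustrative sketch, not Intel code; the classifier, the confidence threshold, and the choice of which labels count as private are all hypothetical.

```python
# Illustrative sketch of the edge-filtering pattern: run inference on-device,
# forward only relevant detections to the cloud, and discard irrelevant or
# privacy-sensitive frames. Threshold and labels are placeholders.

RELEVANCE_THRESHOLD = 0.8  # hypothetical confidence cutoff
PRIVATE_LABELS = {"person"}  # hypothetical privacy-sensitive classes

def classify(frame):
    """Stand-in for on-device inference; returns (label, confidence).

    A real deployment would call a model compiled for the edge device here.
    """
    return frame["label"], frame["confidence"]

def filter_for_upload(frames):
    """Keep only frames confident enough, and non-private, to send upstream."""
    upload = []
    for frame in frames:
        label, confidence = classify(frame)
        if confidence >= RELEVANCE_THRESHOLD and label not in PRIVATE_LABELS:
            upload.append({"label": label, "confidence": confidence})
    return upload

frames = [
    {"label": "vehicle", "confidence": 0.95},
    {"label": "person", "confidence": 0.99},   # private: discarded at the edge
    {"label": "vehicle", "confidence": 0.30},  # low confidence: discarded
]
print(filter_for_upload(frames))
```

Only the first detection survives the filter; the camera never has to ship raw frames, low-confidence noise, or private detections to the cloud, which is the cost argument Graham is making.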
According to Graham, people lined up for Intel’s first Neural Compute Stick, which he called the first deep-learning AI device for the edge.
“With NCS 2, they can start prototyping immediately,” he said. “In addition, we’ve integrated the Neural Compute Stick with the OpenVINO toolkit for deep-learning inference optimization. It supports four types of processing technologies, such as FPGAs and VPUs, as well as frameworks like TensorFlow and Caffe.”
“The NCS 2 is still optimized for computer vision, inference at the edge for images of water, vehicles, or products,” said Graham. “A ton of use cases will emerge, and this allows you to solve throughput and affordability challenges.”
“There are tens of thousands of developers, doing everything from counting bees to detecting cancer,” he said. “This is how to unleash the next wave of innovation.”
Checking water safety with NCS
The first version of Intel’s NCS has already been a boon to developers. For instance, Peter Ma, an independent developer, has used it to win hackathons and develop AI systems to help people.
In March, the government of Dubai sponsored a competition around safe drinking water. Ma co-founded CleanWater AI, which uses pattern recognition and machine learning to detect bacteria in real time. The project is also a participant in an Intel incubator.
“I’ve traveled to 30 countries in three years, and one-third of the planet is affected by infections,” said Ma. “We had the idea of using AI and a camera for water safety.”
“We’ve developed a small device that can detect and classify harmful bacteria with a 90%-plus effectiveness,” he explained. “It can be deployed at the edge, so you don’t need the cloud or unreliable infrastructure [in the developing world].”
“We’re trying to combine the digital microscope, a laptop running Ubuntu, and the Neural Compute Stick into one form factor,” Ma added. “With the OpenVINO toolkit, to upgrade to the NCS 2, we just plug it in.”
Other beneficial applications
In September, Ma co-created “Doctor Hazel,” which uses AI to diagnose skin cancer.
“These applications got lots of press and investment attention,” he said. “I’m writing a guide so that other developers can do these things.”
“I’ve made $270,000 off of hackathons and computer challenges, which is a good rate of return from the likes of Amazon, Helium, and Microsoft,” said Ma. “There are lots of markets for AI at the edge, which was previously not possible.”
Ma, who has been named an Intel “innovator of tomorrow,” also described AI “that can unlock deadbolts, sort garbage, and conduct inventory for small businesses.”
“It can run with a camera on Raspberry Pi. This is the most economical way of running inference in real time,” he noted. “Nothing out there is close to what this can do.”
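Ma’s claim about real-time inference comes down to a throughput budget: a camera producing N frames per second leaves 1000/N milliseconds for each inference. The quick check below uses illustrative latency numbers, not measured NCS or Raspberry Pi figures.

```python
# Back-of-the-envelope check of a real-time inference budget.
# Latency figures below are illustrative, not measured NCS numbers.

def sustains_realtime(fps, inference_ms):
    """True if per-frame inference latency fits the camera's frame budget."""
    frame_budget_ms = 1000.0 / fps
    return inference_ms <= frame_budget_ms

# A 30 fps camera allows roughly 33.3 ms per frame.
print(sustains_realtime(30, 25.0))  # a 25 ms model keeps up
print(sustains_realtime(30, 50.0))  # a 50 ms model drops frames
```

Anything that pushes per-frame latency past the budget forces frame drops or a lower capture rate, which is why offloading inference to a dedicated accelerator matters on hardware as modest as a Raspberry Pi.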
Intel’s NCS has also been used to build systems that scan the Internet for illegal images of children, translate American Sign Language to text, recognize expressions of emotion, and monitor the posture of desktop users.
NCS 2 time to market
“The NCS 2 performance improvements allow developers to take code and transition it across processing technologies,” said Graham. “We’ve greatly expanded the paths from prototype to production for the developer community.”
Also at Intel AI DevCon Beijing, the company announced Cascade Lake, the next version of its Xeon Scalable processor, with enhancements for image recognition.
The new Vision Accelerator Design Products include an array of Movidius VPUs and the Intel Arria 10 FPGA. Both build on the OpenVINO toolkit to provide enough performance for real-time image analysis in IoT devices.
In addition, Intel’s Spring Crest, which will be available in 2019, is a Nervana Neural Network Processor (NNP) that uses dense matrix multiplication and custom interconnects for parallelism.
“Previously, only large customers could get access to such deep-learning tools,” Graham said. “We can’t wait to see what developer teams will do with a more scalable and affordable toolkit.”
The NCS 2 is available for $99 from Amazon in the U.S.; JD.com in China; RS Components in Europe, the Middle East, and Africa; Switch Science in Japan; and Mouser worldwide.
“We expect the quality of prototyping solutions to improve, and there’s a lot of excitement about solving challenges for developers with time to market,” said Graham.
The post Intel NCS 2 Designed to Accelerate AI Applications at the Edge appeared first on Robotics Business Review.