Syntiant, a startup building tiny artificial intelligence chips that bring voice commands out of the cloud and onto devices, rolled out its latest generation of chips with a power budget of one-thousandth of a watt (1 mW). It is promoting the chip as ideal for always-on, battery-powered consumer devices, such as smartphones, earbuds, wearables, smart speakers, smart-home locks and laptops.
The NDP120 is a system-on-a-chip that integrates multiple computing modules on a single die. At its heart is the second generation of the company's tensor processing core, the Syntiant Core 2, which is custom-designed for a form of artificial intelligence called deep learning within a 1 mW power budget. The chip also contains an audio digital signal processor (DSP) and an ultra-low-power CPU.
The custom-designed core packs more performance than its predecessor, allowing it to run neural networks — the building block of deep learning — without burning through the battery.
Syntiant’s SoCs are based on what the company calls a near-memory architecture, where the processing cores are placed close to large caches of memory to reduce the distance data travels, promising major leaps in performance. The startup said the AI chips consume 100 times less power than other chips in the category, reducing the heat that saps even more energy efficiency. Syntiant said its in-house architecture also puts out up to 10 times more throughput.
The Southern California-based startup said it had shipped more than 10 million units of its first-generation chips as of the end of last year. Those chips, the NDP100 and NDP101, are both designed to bring artificial intelligence onto edge devices, including earphones and thermostats, instead of running it in the cloud. That opens the door to devices that can understand “wake words” or carry out a small set of voice commands within a severely limited power budget.
But its next-generation core can supply up to 25 times more throughput, according to Syntiant. That gives it the computing horsepower to handle far larger machine learning models, or many smaller workloads at the same time: for example, canceling out echoes and other ambient noise while processing sound and responding to voice commands, all on a single chip. The performance gains also give Syntiant’s SoCs a larger vocabulary of voice commands.
According to the startup, the new NDP120 can run trained neural networks with more than 7 million connections, or weights, up from 0.5 million in its predecessor. That makes it a fit for use cases that require audio filtering and echo cancellation for on-device, far-field voice processing. The chip is also suited to sensor fusion, including data from acceleration, tilt and pressure sensors.
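For a rough sense of scale, a network's weight count translates directly into on-chip storage needs. The sketch below is a generic back-of-the-envelope calculation, not Syntiant's published specification: it shows the footprint of a 7-million-weight network at a few common quantization levels (the precisions the NDP120 actually uses are an assumption here).

```python
def weight_footprint_bytes(num_weights: int, bits_per_weight: int) -> int:
    """Storage needed to hold a network's weights at a given precision."""
    return num_weights * bits_per_weight // 8

WEIGHTS = 7_000_000  # the NDP120's stated weight capacity

# Footprint at a few edge-inference precisions (illustrative values only).
for bits in (32, 8, 4):
    mb = weight_footprint_bytes(WEIGHTS, bits) / 1e6
    print(f"{bits}-bit weights: {mb:.1f} MB")
```

The arithmetic shows why aggressive quantization matters at the edge: the same 7-million-weight model shrinks from 28 MB at 32-bit floating point to 3.5 MB at 4-bit precision.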
The US-based company, led by chief executive Kurt Busch, has amassed more than $65 million in total funding since it was founded back in 2016. The startup, which currently has more than 60 employees, landed $35 million in its latest round of funding last year, led by the investment arms of Applied Materials and Microsoft. Other big-name backers include Intel and Amazon.
To differentiate itself from its rivals in the semiconductor industry, the company has also rolled out a set of development tools that make the Syntiant Core 2 easily programmable for potential customers. The company said that models built in the most popular software stacks in the artificial intelligence space, including TensorFlow, PyTorch and Caffe, among others, can be ported to the new core.
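To illustrate the scale such models tend to have, the sketch below defines a small, entirely hypothetical wake-word classifier in PyTorch (one of the frameworks named above) and counts its weights. The architecture and dimensions are illustrative assumptions, not Syntiant's reference design or toolchain; the point is that a typical keyword-spotting network sits comfortably under the NDP120's stated 7-million-weight ceiling.

```python
import torch.nn as nn

# Hypothetical wake-word classifier: 40 MFCC features x 49 audio frames in,
# 12 command classes out. All dimensions are illustrative assumptions.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(40 * 49, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 12),
)

total_weights = sum(p.numel() for p in model.parameters())
print(f"Total weights: {total_weights:,}")  # well under 7 million
```

A trained model like this would still need to pass through a vendor-specific conversion step to run on the chip; the article does not describe that pipeline, so none is shown here.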
The NDP120, packed in a tiny 3.1-mm-by-2.5-mm-by-0.4-mm package, is supplemented with a programmable Tensilica HiFi 3 audio DSP and Arm Cortex-M0 CPU coupled with 64 kB of RAM. The chip is also loaded with standard I/O, including PDM, SPI, I2S, and I2C interfaces, and it incorporates up to 26 general-purpose I/O (GPIO) pins and supports up to 7 audio streams.
Syntiant said that it has started sampling the chip to early customers and plans to begin mass production by mid-2021. The chip costs $6 each in orders of more than 10,000 units.