
Adding The Edge In The Embedded Edge Market With SiMa.ai

SiMa in Hindi means edge, which is exactly what SiMa.ai has brought to the embedded edge market by integrating machine learning on an SoC.

The edge computing market has witnessed an exponential increase in demand since the Covid-19 pandemic. A recent study by Markets and Markets Research reported that the embedded edge market is expected to reach $101.3 billion by 2027 at a CAGR of 17.8%, with the hardware segment expected to be the largest contributor to this growth.

Founder Krishna Rangasayee (in blue) with India’s core team at SiMa.ai

The challenge accompanying this exponential growth is the complexity of integrating edge computing applications and platforms with existing architectures. Classic system-on-chip (SoC) companies need to adopt machine learning (ML) to meet rising demands for computing performance at acceptable cost.

This is where San Jose based startup SiMa.ai enters the picture with its purpose-built platform, which enables effortless ML deployment and scaling at the edge for computer vision applications at very low power. Born in India, founder Krishna Rangasayee led a semiconductor business worth more than $25 billion at Xilinx for 18 years before deciding to propel the world into a new era of disruptive semiconductor technologies.

The machine learning company has come up with the industry’s first software-centric purpose-built Machine Learning System-on-a-Chip (MLSoC) platform focused on computer vision applications.

The MLSoC is built on 16nm technology featuring low operating power and high ML processing capacity. Its processing system consists of computer vision processors for image pre- and post-processing. It has a dedicated machine learning accelerator (MLA) that offers 50 tera-operations per second (50 TOPS) for neural network computation at 10 TOPS/W. It has a cluster of four Arm Cortex-A65 dual-threaded processors forming its application processing unit (APU), operating at 1.15GHz to deliver up to 15,000 Dhrystone million instructions per second (DMIPS) for high-performance application processes.

Rangasayee says that the MLA delivers 50 TOPS at 5W. “If people only want to dissipate 3W, we cap the performance, so that it fits the 3W envelope. We can optimise for performance or power. There are trade-offs in terms of how much utilisation and performance you get, and we give customers that choice,” he explains.
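The trade-off Rangasayee describes follows from the stated efficiency figure: at roughly 10 TOPS/W, a tighter power envelope linearly caps achievable throughput. The sketch below is illustrative arithmetic only, not a SiMa.ai API; the helper function and its name are hypothetical.

```python
# Illustrative arithmetic only -- not SiMa.ai's actual toolchain.
# Shows how a fixed efficiency (TOPS per watt) implies a performance
# cap for a given power budget, as described in the article.

PEAK_TOPS = 50.0           # MLA peak throughput (from the article)
EFFICIENCY_TOPS_PER_W = 10.0  # stated efficiency: 10 TOPS/W

def capped_performance(power_budget_w: float) -> float:
    """Achievable TOPS within a power budget, limited by the peak."""
    return min(PEAK_TOPS, EFFICIENCY_TOPS_PER_W * power_budget_w)

print(capped_performance(5.0))  # full 5W envelope -> 50.0 TOPS
print(capped_performance(3.0))  # capped to 3W     -> 30.0 TOPS
```

Under these assumptions, a customer dissipating only 3W would see roughly 30 TOPS, which matches the article's description of capping performance to fit the envelope.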

The chip’s video processing unit comprises a video encoder and decoder that support the H.264 and HEVC (high-efficiency video coding) compression standards, with baseline/main/high profiles, 4:2:0 chroma subsampling, and 8-bit precision for real-time intelligent video processing. Surrounding the video processing unit are memory interfaces, communication interfaces, and system management—all connected via an Arteris network on chip (NoC).

With a team of 133 innovators spread across three locations—San Jose, Bengaluru, and Europe—SiMa.ai is backed by top investors such as Fidelity and Dell Technologies Capital. In addition, the company has hired skilled innovators in Ukraine on a contractual basis.

The company has partnered with Taiwan Semiconductor Manufacturing Company (TSMC) and has Moshe N. Gavrielov, who is on TSMC’s board of directors, as its chairman. The four-year-old company has navigated its way through the pandemic and geopolitical tensions to come up with a solution to address any computer vision problem at the lowest power.

The MLSoC can function as a standalone edge-based system controller or can be added as a machine learning offload accelerator for processors, ASICs, and other devices. “Today we can compile more than 120 networks in robotics, automotive, unmanned aircraft, etc. We can work with any machine learning framework such as TensorFlow, Caffe, PyTorch, or ONNX. We could take any neural net or ML model, any frame rate, or any resolution,” Rangasayee elaborates.
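A framework-agnostic compiler typically dispatches on the serialized model format to pick a parsing front end. The sketch below is a hypothetical illustration of that pattern—it is not SiMa.ai's actual toolchain, and the function and table names are invented for this example.

```python
# Hypothetical sketch of multi-framework model intake -- NOT SiMa.ai's
# real compiler API. Maps a serialized model file to the framework
# front end that would parse it, mirroring the framework support
# described in the article.

FRONTENDS = {
    ".pb": "tensorflow",       # TensorFlow frozen graph
    ".caffemodel": "caffe",    # Caffe weights
    ".pt": "pytorch",          # PyTorch checkpoint
    ".onnx": "onnx",           # ONNX interchange format
}

def pick_frontend(model_path: str) -> str:
    """Return the front end matching the model file's extension."""
    for suffix, frontend in FRONTENDS.items():
        if model_path.endswith(suffix):
            return frontend
    raise ValueError(f"unsupported model format: {model_path}")

print(pick_frontend("resnet50.onnx"))   # -> onnx
print(pick_frontend("detector.pt"))     # -> pytorch
```

In practice, ONNX is a common interchange point: models authored in one framework are often exported to ONNX before being handed to a vendor compiler.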

The company is shipping its MLSoC platform, while simultaneously working on Generation 2 of the chip. As Rangasayee frames it, “I think, in our industry, the gift of doing a good job is you have to do it again, better. So, we are working on Gen 2 and we are building an amazing Gen 2 as well.”

With bandwidth and computing needs far outpacing the capacity of traditional computing, SiMa.ai has come up with an excellent solution to catapult the embedded edge industry from classic computer vision computing into a machine learning mode.
