When smart homes began growing in popularity at the turn of the century, living smart meant using internet-connected devices with clever features. Two decades later, smart living has entered a new era of intelligent devices on the edge that are increasingly designed to make decisions. That’s according to the experts who spoke during a November panel talk titled “How Smart Can We Live?” as part of the Embedded Forum at electronica 2022 in Munich. Nitin Dahad, editor-in-chief of embedded.com, moderated the session.
“Smart devices started off with a lot of what now would be categorized as clever features,” said Ali Osman Ors, director of artificial intelligence and machine-learning strategy and technologies for edge processing at NXP Semiconductors. “Now we’re moving into the new definition where ‘smart’ means you have a lot of info coming from a very wide range of inputs coming to these devices, and with all of these inputs and data, the devices make a decision by themselves.”
With the addition of robots and AI, the internet of things has become the artificial intelligence of things (AIoT) and edge devices are often intelligent even when not connected, said Pier Paolo Porta, marketing director for Ambarella.
The emerging AIoT expands capabilities by providing distributed intelligence with endpoints capable of decision-making, said Suad Jusuf, senior manager at Renesas Electronics.
“We are not talking anymore about smart being [only] capable of providing data to the cloud, being always connected, but also a system that is capable of making its own decisions — for example, within the predictive maintenance arena,” he added.
As an example of this edge intelligence, advanced home-security cameras now can be trained to learn the faces of frequent visitors and alert residents when one of those familiar people approaches the front door or any door with a security camera.
At Renesas, “we have managed to establish multi-modal, vision-based, camera-based face recognition on a relatively simple M4 core Renesas device,” Jusuf said.
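The panel did not detail how such on-device recognition works; a common pattern on small MCUs is to compare a compact face embedding (produced by a tiny neural network) against stored templates for enrolled visitors. A minimal sketch of that matching step — the embeddings, names, and similarity threshold below are illustrative, not Renesas specifics:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(embedding, known_faces, threshold=0.9):
    """Return the best-matching enrolled name, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, template in known_faces.items():
        score = cosine_similarity(embedding, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Enrolled visitors: name -> stored embedding (in practice, output of an on-device model)
known = {"alice": [0.9, 0.1, 0.4], "bob": [0.1, 0.95, 0.2]}
print(identify([0.88, 0.12, 0.41], known))  # 'alice' — close to the enrolled template
print(identify([0.5, 0.5, 0.5], known))     # None — no confident match, treat as unknown
```

On a resource-constrained core, this comparison step is cheap; the expensive part is the embedding network itself, which is why quantized, purpose-built models are used at the edge.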
Mario Heid, vice president of OmniVision Europe, chimed in: “I think the smart-home security camera is a good example, because the innovation is not only being done on the processing side, more efficient processing, but also on the whole architecture of this system. In this space, we always wanted only outputs of very small amounts of data to detect this trigger and then basically react in some cases to provide more data. At the moment, most vision systems are reading the sensor all the time and all the data. There is certainly something that can be done smarter, just reading smart regions of interest, [which means] that you don’t need the full resolution. The smart device is pushing it [intelligence] all the way to the edge, which is the sensor, basically. I think there is still a lot more work that we can do, but the smart-home cam is quite impressive, is battery-operated, and can operate for one or two years. It’s a very impressive system.”
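The two-stage readout Heid describes — watch a small, low-resolution slice of the sensor for a trigger, and only then wake the full pipeline — can be sketched in a few lines. This is an illustrative simulation (frame sizes, downsampling factor, and threshold are made up), not any vendor's sensor API:

```python
def downsample(frame, factor):
    """Keep every `factor`-th pixel in each dimension (crude low-res readout)."""
    return [row[::factor] for row in frame[::factor]]

def motion_score(prev, curr):
    """Sum of absolute pixel differences between two low-res frames."""
    return sum(abs(a - b) for pr, cr in zip(prev, curr) for a, b in zip(pr, cr))

def process_stream(frames, factor=4, threshold=10):
    """Yield 'full' only for frames whose low-res preview shows motion."""
    prev = None
    for frame in frames:
        small = downsample(frame, factor)
        if prev is not None and motion_score(prev, small) > threshold:
            yield "full"   # trigger fired: read full resolution, run heavy processing
        else:
            yield "skip"   # stay in the low-power path
        prev = small

zero = [[0] * 8 for _ in range(8)]
flash = [[0] * 8 for _ in range(8)]
flash[0][0] = 100  # a bright pixel appears in the sampled region
print(list(process_stream([zero, zero, flash])))  # ['skip', 'skip', 'full']
```

Because the trigger path touches only a fraction of the pixels, a camera built this way can idle at microwatt-level power — which is what makes the year-plus battery life Heid mentions plausible.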
The exact image-sensor specifications needed for the best performance are not yet known, Heid said. Another challenge for AI is that gathering training data and training the system take a long time, he added.
More recently, there’s been a move toward improving the datasets, resulting in “more dedicated, higher-quality datasets specific to the use case,” Ors said. “This is becoming more prevalent beyond vision as well with non-vision sensors like audio use cases, vibration-/temperature-/pressure-type sensors where it’s hard to get hold of the necessary data for use cases like predictive maintenance, where the number of events is very sparse and distributed in time. Creating models based on a very limited dataset is quite a challenge.”
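When failure events are as sparse as Ors describes, a common workaround is to skip classifiers that need labeled failures and instead model only normal operation, flagging large deviations. A minimal sketch of that idea — the class name, sensor values, and 3-sigma threshold are illustrative assumptions, not from the panel:

```python
import statistics

class AnomalyDetector:
    """Threshold detector fit on normal-operation data only.

    Models 'normal' as the mean and standard deviation of a sensor
    reading, then flags readings that deviate by more than n_sigma
    standard deviations — no labeled failure examples required.
    """
    def fit(self, normal_readings):
        self.mean = statistics.mean(normal_readings)
        self.std = statistics.stdev(normal_readings)
        return self

    def is_anomalous(self, reading, n_sigma=3.0):
        return abs(reading - self.mean) > n_sigma * self.std

# Vibration amplitudes from a healthy machine (illustrative numbers)
healthy = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98]
det = AnomalyDetector().fit(healthy)
print(det.is_anomalous(1.04))  # False — within normal variation
print(det.is_anomalous(2.5))   # True — flagged for maintenance
```

The same shape of solution scales up from a two-parameter Gaussian model to autoencoders trained on normal data, but the principle — learn normal, flag the rest — is the standard answer to sparse-event datasets.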
“For me, a smart device is a device which itself can sense something, detect something, and then act,” Heid said. “Smart security, a home-security camera … is basically detecting something and taking an action.”
A home-security camera can provide dense information, Porta said. The challenge is processing all that information to extract the data necessary for the task at hand.
“You have a lot of data and among those data, there is all that you need,” he said. “The drawback is you need to process that information. That is why at Ambarella, we are focusing our development and our energy to ease the processing of the data and the informational structure.”
In the future, sensors will combine vision with other perspectives, such as audio, Ors said. “Now, smart speakers are adding more on-board edge language understanding, natural-language understanding. They’re not leveraging the cloud. They’re adding vision to detect presence, to detect specific users, and customize the experience to that user.”
Moving forward, device designers must weigh the balance between the cloud and the edge when building smartness into devices.
“At Ambarella, we tend to say, ‘Put in the edge as much as you can,’” Porta said. The advantages are speed, cost savings, better performance, and higher resolution — as long as the device can handle the complexity, he said.
“That doesn’t mean the device has to be absolutely isolated,” he added. “You can still add in the cloud. Use the cloud when it’s needed.”
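Porta's "edge first, cloud when needed" policy amounts to a simple dispatch rule: trust the on-device result when it is confident, and escalate only the hard cases. A hedged sketch — the confidence floor, stub models, and labels are invented for illustration:

```python
EDGE_CONFIDENCE_FLOOR = 0.85  # illustrative threshold, not a vendor value

def classify(frame, edge_model, cloud_classify=None):
    """Run the on-device model first; fall back to the cloud only
    when the edge result is low-confidence and a link is available."""
    label, confidence = edge_model(frame)
    if confidence >= EDGE_CONFIDENCE_FLOOR:
        return label, "edge"                   # fast path: no network, low latency
    if cloud_classify is not None:
        return cloud_classify(frame), "cloud"  # escalate hard cases to bigger models
    return label, "edge"                       # offline: best local answer wins

# Stubs standing in for a real edge model and cloud service
edge = lambda f: ("cat", 0.95) if f == "clear" else ("cat?", 0.4)
cloud = lambda f: "raccoon"
print(classify("clear", edge, cloud))   # ('cat', 'edge')
print(classify("blurry", edge, cloud))  # ('raccoon', 'cloud')
print(classify("blurry", edge))         # ('cat?', 'edge')
```

Note that the device still works with no connection at all, which matches the panelists' point that intelligent edge devices should not depend on being online.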
NXP also advocates pushing as much to the edge as possible, Ors said.
“It’s critical,” he said. “It allows you to have real-time response so your latency is reduced. Your communication channel cost and needs are reduced, as well as that added security benefit of maintaining the privacy of your critical data on the edge as much as possible.”
But the cloud can also enhance the edge experience, Ors added. “You have the option of leveraging the cloud for your life-cycle management. There’s more data. There’s new data available.”
One challenge will be lifetime management, focusing on adaptive approaches over the lifetime of an application, Porta said.
“It will have to be able to adapt the models and solutions, because the lifetime of an application changes the parameters,” Ors said.
Finally, one key interest is refining system architecture to focus more on performance instead of just applying the brute force of more processing power, Heid said.