
Experts: Data Can Both Facilitate and Impede AI Scaling



A panel of experts at EE Times’ recent AI Everywhere Forum was asked to name the bigger holdup to AI’s rollout: software or hardware?

The answer isn’t that simple, they agreed. Software and hardware, while not perfect, are already well developed. The bigger obstruction, and the one still most in need of work, is the data used to train the AI.

Vijay Janapa Reddi, associate professor at the John A. Paulson School of Engineering, Harvard University, and VP at MLCommons.

“I think the [AI] models, we kind of know,” said Vijay Janapa Reddi. “We have the academic pipeline set. Industry people know how to do it. We have good, mature infrastructure around it.” 

Reddi speaks from experience both as an associate professor at Harvard University’s John A. Paulson School of Engineering and as a VP at MLCommons, a consortium working to grow ML from a research field into a mature industry.

What the industry’s code writers and chip designers can’t quite work around are the questions surrounding the data itself and the lack of experts who understand both data and AI systems, a group Reddi refers to as “data engineers.”

“I think that data engineering piece is a critical nugget, and we don’t really know how to operationalize that thing at scale,” he said. “What’s the GitHub of data? How do you version data? How do you incrementally add data? How do you share data? How do you as a community do it so you’re not doing everything in-house? Because today we’re all doing it in-house, and that’s why we can’t scale out.”
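Reddi’s “GitHub of data” question has no settled answer, but a minimal sketch of one common ingredient, content-addressed storage, hints at what such tooling has to do. Everything below (paths, function names) is hypothetical and illustrative, not a description of any MLCommons tool:

```python
# Minimal sketch of content-addressed dataset versioning (hypothetical,
# not any real tool): each file is stored once under its content hash,
# and a "version" is just a manifest mapping file names to hashes.
import hashlib
import json
from pathlib import Path

STORE = Path("datastore/objects")     # de-duplicated blob store
VERSIONS = Path("datastore/versions")

def add_file(path: Path) -> str:
    """Copy a file into the store under its SHA-256 hash; return the hash."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    blob = STORE / digest
    if not blob.exists():             # unchanged files are stored only once
        STORE.mkdir(parents=True, exist_ok=True)
        blob.write_bytes(path.read_bytes())
    return digest

def commit(files: list[Path], tag: str) -> None:
    """Snapshot a set of files as a named, shareable dataset version."""
    manifest = {str(f): add_file(f) for f in files}
    VERSIONS.mkdir(parents=True, exist_ok=True)
    (VERSIONS / f"{tag}.json").write_text(json.dumps(manifest, indent=2))

# Incrementally adding data is just another commit: files whose hashes
# already exist cost nothing, so "v2" shares storage with "v1".
commit(list(Path("raw").glob("*.csv")), tag="v1")
```

Tools such as DVC and lakeFS work roughly along these lines, layering Git-style versioning semantics over de-duplicated storage.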

Marshall Choy, senior VP of AI platform company SambaNova Systems. (Source: SambaNova Systems)

Specifically, the holdup can be traced to the transition from model-centric to data-centric computing—and the fact that not every company has access to a healthy stream of Ph.D.s to do the highly complex work Reddi and his fellow panelists do, said Marshall Choy, senior VP of AI platform company SambaNova Systems.

“And so people are leveling-up and looking to the vendors to provide more of that expertise and automate that into the solution to enable [the customer], especially given the times we’re in now, to do more with less in terms of resources,” he said.

The solution is vertical

Choy isn’t ready to let hardware and software off the hook as obstructions, though, because his customers aren’t.

“What we’re being told that we need to do is to innovate at every layer of the AI implementation stack,” he said. “And we do that in an integrated way, right? So at the silicon level, we all love to talk about chips, and we’ve developed the native data flow execution to deliver great levels of flexibility and performance. But how do you really unleash that all? It really comes down to the software, right? And so at the software layer, a lot of people are focusing on compilers and runtimes that can optimize for the computational graphs that are the underpinnings for every AI model out there. But really, to train and serve these models with scalability and performance, you’ve got to have integrated systems that are optimized as such.

“Really, it’s all these innovations that aggregate into a vertically integrated solution.”
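To make the compiler point concrete, the toy sketch below (hypothetical, not SambaNova’s software) performs constant folding, one of the rewrites compilers apply to the computational graphs underpinning AI models before any silicon sees them:

```python
# Toy computational-graph optimization (illustrative only): constant
# folding collapses all-constant subgraphs into single nodes, so the
# work disappears at compile time instead of running on hardware.
from dataclasses import dataclass

@dataclass
class Node:
    op: str          # "const", "input", "add", or "mul"
    args: tuple = ()
    value: float = 0.0

def fold(node: Node) -> Node:
    """Recursively replace all-constant subgraphs with one const node."""
    if node.op in ("const", "input"):
        return node
    args = tuple(fold(a) for a in node.args)
    if all(a.op == "const" for a in args):
        fn = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}[node.op]
        return Node("const", value=fn(args[0].value, args[1].value))
    return Node(node.op, args)

# (2 * 3) + x  becomes  6 + x : the multiply never reaches the chip.
x = Node("input")
graph = Node("add", (Node("mul", (Node("const", value=2), Node("const", value=3))), x))
print(fold(graph).args[0].value)   # 6.0
```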

Gold in the data

Two issues that add further complexity to the industry’s work are data privacy and ownership.

“Who owns the data when you upload it to be trained?” Choy asked. “You know, we’ve made decisions to maintain that ownership and custody to the end user. And we have no visibility or access to that. And the model that gets built, you know, based on that data is the IP of the customer, the end user, not the vendor.”

The privacy-related request SambaNova hears most often from customers is to bring AI capability to their data, a requirement that makes building a platform more intricate.

“That’s put an onus on [us] as a vendor to enable solutions that can be deployed anywhere, whether that be in a public cloud, in an on-premise situation … what have you,” Choy said. “Corporate managers have even decided on where the data is going to sit, and so they’re not going to break that. So we’ve got to bring compute capabilities to data, rather than encourage somebody to move the data to the compute, right?”
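What bringing compute to the data can look like structurally is sketched below; the class and file layout are hypothetical, not SambaNova’s API. The model artifact travels to where the data sits, and only outputs cross back:

```python
# Hypothetical sketch of compute-to-data: the vendor ships a model to
# the site where the data lives; only predictions (never raw records)
# cross the boundary back to the requester.
import json
from pathlib import Path

class OnPremRunner:
    """Runs inside the customer's environment, next to the data."""
    def __init__(self, data_dir: Path):
        self.data_dir = data_dir      # records never leave this machine

    def run(self, model) -> list:
        rows = [json.loads(line)
                for f in self.data_dir.glob("*.jsonl")
                for line in f.read_text().splitlines()]
        return [model(row) for row in rows]   # only outputs are returned

# A trivial "model" stands in for whatever artifact the vendor deploys.
flag_large = lambda row: row.get("amount", 0) > 1_000
results = OnPremRunner(Path("/secure/records")).run(flag_large)
print(results)   # predictions leave; the records themselves never do
```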

Rod Metcalfe, group director, AI & digital implementation products at Cadence, said his customers are becoming savvy about controlling and protecting the AI used to automate system-on-chip design.

Rod Metcalfe, group director, AI & digital implementation products at Cadence.

“Part of that automation using AI is the creation of a machine-learning (ML) model that stores all the training data that allows us to do transfer training between different projects and other things,” he said. “There’s a lot of valuable design data in that machine learning model. 

“So we find our customers now thinking, ‘Hang on a minute. This is really our intellectual property. In addition to the design itself, we now have this intellectual property that tells us how the design was created and how it should be optimized’.”

Security within the ML models is a very relevant topic at the moment, Metcalfe said. Questions arise about which parts of the ML model should be shared, which parts are proprietary, and how that can be controlled.

“This is going to be a topic very much going on into the future,” he added.
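Mechanically, the sharing question comes down to partitioning a trained model’s checkpoint. A minimal sketch, with plain dictionaries standing in for weight tensors and all names hypothetical:

```python
# Hypothetical sketch: a trained model's checkpoint split into a
# shareable backbone and a private part that encodes design know-how.
checkpoint = {
    "backbone.layer1": [0.1, 0.2],    # generic features: safe to share
    "backbone.layer2": [0.3, 0.4],
    "head.optimizer_hints": [0.9],    # encodes how *our* design was tuned
}

def split(ckpt: dict, private_prefix: str = "head.") -> tuple[dict, dict]:
    """Partition weights into (shareable, proprietary) by name prefix."""
    shared = {k: v for k, v in ckpt.items() if not k.startswith(private_prefix)}
    private = {k: v for k, v in ckpt.items() if k.startswith(private_prefix)}
    return shared, private

shared, private = split(checkpoint)
# Ship `shared` to a partner; keep `private` in-house as IP.
```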

Choy agreed that companies see the gold in their data.

“Many organizations, especially on the enterprise side, have moved forward and really have realized that their most valuable IP and asset is the data set, and the data they possess, and how they can transform that data,” he said.

Beyond seeing the value in their data, Choy’s customers don’t want to have to become AI engineers.

“Our customers want to just leave all that complexity and all that model-centric engineering work to us to automate into the product, and allow them to focus on their area of value in IP, which is the data and the data set,” he said.

Let AI do the tedious work

Moderator Nitin Dahad, editor-in-chief of embedded.com and an EE Times correspondent, asked, “Are AI models for EDA (electronic design automation) and chip design beginning to show real usefulness? How much potential is there for future algorithms to automate more parts of the chip design process?”

Metcalfe said that “there is now very good proof that AI is going to be a very transformational technology from a chip design perspective. What we see today is small parts of the chip design flow adopting AI technology. Moving forward, there’s going to be a huge amount more work done in this area. So we can expand this to system-level optimization. Today, we’re looking very much more at the hardware implementation, but one layer above that is the whole system optimization. How do you meet the latency targets you need? How do you meet the processing targets? There are lots of different architectural decisions you can make very early on in the process that AI can certainly, certainly help with.”
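Those early architectural decisions are often framed as design-space exploration. The toy sketch below uses a made-up cost model (not Cadence tooling) to show the brute-force baseline that AI-guided flows aim to improve on: enumerate candidates and keep the smallest design that meets the latency target.

```python
# Toy design-space exploration (hypothetical cost model, not any EDA
# tool): enumerate architectural choices and keep the cheapest design
# that still meets the latency target. Real flows replace brute force
# with learned models that prune this space.
from itertools import product

def estimate(cores: int, cache_kb: int, freq_mhz: int) -> tuple[float, float]:
    """Stand-in latency (ms) and area (mm^2) model for one candidate."""
    latency = 100.0 / (cores * freq_mhz / 100) + 10.0 / cache_kb
    area = cores * 1.2 + cache_kb * 0.01
    return latency, area

TARGET_MS = 2.0
best = None
for cores, cache, freq in product([2, 4, 8], [64, 256], [500, 1000]):
    latency, area = estimate(cores, cache, freq)
    if latency <= TARGET_MS and (best is None or area < best[0]):
        best = (area, (cores, cache, freq), latency)

print(best)   # smallest design meeting the latency target, if any
```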

That AI assistance includes implementation, partitioning and optimization in 3DIC design, which is notoriously complicated, Metcalfe said. It also applies to designing a system-on-chip.

“You have to do a floor plan for a block,” he said. “Well, if you’ve got 20 blocks, you’ve got 20 floor plans to do. That’s not a very interesting task for engineers to do. You can leave AI to do some of this more repetitive, tedious type work that needs to be done that is not very rewarding from an engineering perspective.”
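That repetitive work maps naturally onto batch automation: one floorplanning routine dispatched per block, with engineers reviewing results rather than drawing 20 floor plans by hand. In the hypothetical sketch below, auto_floorplan is a stub standing in for a real AI-driven floorplanner:

```python
# Hypothetical sketch of farming out repetitive floorplans: the same
# automated routine runs once per block, in parallel, and engineers
# review the summary instead of doing 20 floor plans themselves.
from concurrent.futures import ProcessPoolExecutor

def auto_floorplan(block: str) -> dict:
    """Stub: a real tool would place macros/cells and return metrics."""
    return {"block": block, "utilization": 0.72, "timing_met": True}

blocks = [f"block_{i:02d}" for i in range(20)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        plans = list(pool.map(auto_floorplan, blocks))
    redo = [p["block"] for p in plans if not p["timing_met"]]
    print(f"{len(plans)} floor plans generated; {len(redo)} need review")
```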

A recording of this panel discussion, and other content from AI Everywhere, is now available on demand at aieverywhere.eetimes.com (free registration required). 




