
For the last 40 years, the evolutionary path of chipsets for mobile electronics has headed inexorably toward increasing levels of functional integration, culminating in the system-on-chip (SoC) architecture. Sometimes monolithically integrated and at other times achieved through advanced packaging, mobile SoCs incorporate multiple functions — including, but not limited to, the baseband, applications processor, RF transceiver, WLAN (i.e., Wi-Fi) and WPAN (i.e., Bluetooth) communications — into a single chip or package. Such an architecture is ideal for the thin but powerful form factors of modern cellphones. However, as we move beyond the smartphone to a whole new augmented-reality (AR) world — some call it the metaverse — through the portal of AR glasses, the SoC architecture is proving to be more of a hindrance than an enabler.
To turn the vision (pun intended) of AR glasses into reality, designs must not only achieve mass-market adoption but also be wearable on an everyday basis — as part of the wearer’s daily attire. To achieve this, glasses need to be more streamlined than smartphones and enable fashionable form factors. Additionally, to minimize wearer fatigue, they must be as light as possible and balanced in weight distribution, so that one side of the glasses is not heavier than the other.
Past and current designs have typically located the electronics in the temples, or arms, of the glasses, and this continues to be a logical place. However, mobile SoCs do not lend themselves to addressing the challenges above, given their die and package size and the fact that a single chip allows no real weight distribution beyond placing bulky components, such as batteries, on the other arm.
At this year’s Snapdragon Summit, Qualcomm announced a departure from the SoC evolutionary path that could potentially bring AR glasses to reality. Reversing the trend, Qualcomm proposed a distributed architecture not only for the electronics inside the glasses but also between the glasses and a host device, such as a smartphone or PC, which takes on some of the heavy lifting for both cellular communications and graphics processing.
On the second day of the Snapdragon Summit, Qualcomm announced the new Snapdragon AR2 Gen 1 platform. The platform breaks up the various silicon blocks into three modules that are spaced around the glasses, which enables designs that are more streamlined and balanced. The modules include an AR processor, an AR co-processor and a Wi-Fi connectivity module.
As announced, the AR processor will be responsible for typical GPU-type functions like image/video capture, computer vision and display driving, performing them in a hardware-accelerated fashion by incorporating ISP, Adreno Video, Adreno Display and visual analytics engine IP blocks.
Meanwhile, the AR co-processor will be focused on providing AI acceleration, as well as aggregating sensor and camera data for tasks like eye tracking, object detection and biometric authentication.
Last but not least, the connectivity module will of course be responsible for the high-speed, low-latency communications needed to make the distributed architecture viable. What might not be as obvious is the module’s use of Qualcomm’s FastConnect XR 2.0 software suite, which the company is touting as enabling a 40% reduction in power compared with previous versions while delivering the required performance.

The AR processor and co-processor work in conjunction with a host processor in a smartphone, PC or even a network to provide a distributed computing architecture complete with the heterogeneous processing, sensor fusion and AI processing capabilities native to the Snapdragon platform. The connectivity module uses Wi-Fi 7 high-band simultaneous multi-link operation to provide up to 5.8 Gbps of high-speed, high-bandwidth connectivity between the glasses and the processing host.
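To put the 5.8-Gbps figure in perspective, a quick back-of-the-envelope check shows why a multi-gigabit link matters for streaming rendered frames to the glasses. The display parameters and compression ratio below are illustrative assumptions, not Qualcomm specifications:

```python
# Back-of-the-envelope check of the 5.8-Gbps Wi-Fi 7 link against a
# hypothetical dual-display AR video stream. All display parameters
# below are illustrative assumptions, not Qualcomm specifications.

LINK_GBPS = 5.8  # peak Wi-Fi 7 high-band simultaneous multi-link rate

# Assumed per-eye display: 1920 x 1080 pixels, 90-Hz refresh, 24 bits/pixel
width, height, refresh_hz, bits_per_pixel, eyes = 1920, 1080, 90, 24, 2

raw_gbps = width * height * refresh_hz * bits_per_pixel * eyes / 1e9
compressed_gbps = raw_gbps / 10  # assume a ~10:1 low-latency codec

print(f"Raw dual-display video: {raw_gbps:.2f} Gbps")   # ~8.96 Gbps, above the link rate
print(f"~10:1 compressed:       {compressed_gbps:.2f} Gbps")
print(f"Link headroom:          {LINK_GBPS - compressed_gbps:.2f} Gbps")
```

Under these assumptions, uncompressed video would exceed even a 5.8-Gbps link, but light, low-latency compression brings the stream comfortably within it, which is exactly the regime a split-rendering design needs.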
Not only does breaking the platform into multiple components provide better weight distribution and balance for AR glasses, but the Snapdragon AR2 Gen 1 platform also reduces wiring requirements by 45%, printed-circuit–board (PCB) area by 40%, processor power consumption by 50% and Wi-Fi power consumption by 40% compared with the SoC-architected Snapdragon XR2 platform. The result is less space consumed, which, again, allows for sleeker, more comfortable glasses.
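As a rough illustration of what those percentages imply, the sketch below applies Qualcomm's quoted reductions to an assumed baseline power budget. The baseline watt figures are invented placeholders, as neither platform's absolute power budget is public; only the percentages come from the announcement:

```python
# Applies Qualcomm's quoted reductions (50% processor, 40% Wi-Fi) to an
# assumed XR2-class baseline. The baseline watt figures are invented
# placeholders; only the percentages come from the announcement.

baseline_w = {"processor": 1.5, "wifi": 0.4}   # assumed baseline, watts
reduction = {"processor": 0.50, "wifi": 0.40}  # quoted cuts vs. XR2

ar2_w = {k: baseline_w[k] * (1 - reduction[k]) for k in baseline_w}

print(f"Assumed XR2 total: {sum(baseline_w.values()):.2f} W")
print(f"Implied AR2 total: {sum(ar2_w.values()):.2f} W")
```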
Some of these reductions might be counterintuitive when comparing this new distributed architecture with the existing SoC solution.
At first glance, wiring seems like it would increase with the need to interconnect the different modules in a distributed architecture, whereas in an SoC, it would all be on-chip interconnects. This would be true if the main driver of wiring in these applications were the connections between the different IP blocks on the chips.
However, in an application such as AR glasses, the vast majority of the wiring requirements are actually for the input/output (I/O) interfaces to components that would be external to the chip regardless of whether it was an SoC or the multiple modules of a distributed architecture.
Examples of these external components are the sensors and cameras connected to the processor. In a distributed architecture, the wiring runs between these external (to the chip) components are minimized, because each component can be placed close to the appropriate module instead of every run having to reach wherever the single SoC is located.
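To make that geometry argument concrete, here is a toy model comparing total harness length when every sensor must be wired to a single SoC versus the nearest of three distributed modules. All coordinates, the sensor layout and the module placements are invented for illustration and are not based on any actual design:

```python
# Toy geometric model of why distributed silicon can cut harness length.
# Coordinates (in cm) loosely sketch a glasses frame viewed from the
# front; every position below is invented for illustration.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Assumed sensor/camera positions: front cameras, eye trackers, an IMU
sensors = [(1, 0), (13, 0), (4, -1), (10, -1), (7, 0)]

soc = (0, -9)  # single SoC tucked into the left temple
modules = [(0, -9), (14, -9), (7, 0.5)]  # left temple, right temple, bridge

# Centralized: every sensor is wired back to the one SoC location
centralized = sum(dist(s, soc) for s in sensors)
# Distributed: each sensor connects to its nearest module
distributed = sum(min(dist(s, m) for m in modules) for s in sensors)

print(f"Centralized SoC harness: {centralized:.1f} cm")
print(f"Distributed modules:     {distributed:.1f} cm")
```

In this toy layout, the distributed placement cuts the sensor harness to roughly a third. The inter-module links add some length back, but far fewer long runs have to cross the frame than in the centralized case.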
The power reduction might also be counterintuitive when thinking about potentially having to power three modules instead of one. But Qualcomm contends that the extensive use of hardware accelerators, along with advanced process node technologies, indeed delivers these power savings.
Lastly, it can be argued that weight is not a factor, as even SoCs of this class are relatively negligible in terms of weight and mass. While that may be true when comparing only the SoC against the distributed modules, factoring in the reductions in PCB area and wiring does materially affect the impact that weight, and more specifically weight distribution, has on the wearer of the glasses, especially in all-day use cases.
Qualcomm claims that the Snapdragon AR2 Gen 1 platform, whose primary AR processor is built on TSMC’s 4-nm process and optimized for AR workloads and requirements, provides a 2.5× increase in artificial-intelligence processing while operating at less than 1 W. The platform also provides sub-2-ms latency over Wi-Fi and 9-ms motion-to-photon latency (the lag between the user’s motion and the corresponding update appearing on the display). This high-speed, low-latency link is what makes the glasses-to-host distributed architecture viable. Without the fast Wi-Fi 7 connection, all processing would need to be done on the glasses, which would be prohibitive in terms of size, weight, aesthetics and wearability. Other solutions might use a wired connection between the host and AR glasses, but a wireless connection is superior in both aesthetics and wearability.
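For a sense of how a split glasses-plus-host pipeline might meet the 9-ms figure, here is a hypothetical motion-to-photon budget. Only the sub-2-ms wireless-link latency and the 9-ms total come from Qualcomm's announcement; every per-stage figure below is an assumption:

```python
# Hypothetical motion-to-photon budget for a split glasses/host
# pipeline. Only the sub-2-ms wireless link and the 9-ms total come
# from Qualcomm; every per-stage figure is an assumption.

budget_ms = {
    "sensor sampling + fusion (glasses)": 1.0,
    "pose uplink over Wi-Fi 7":           1.0,
    "render + encode (host)":             3.0,
    "frame downlink over Wi-Fi 7":        1.0,
    "decode + late-stage reprojection":   2.0,
    "display scan-out":                   1.0,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:36s} {ms:4.1f} ms")
print(f"{'total (target: 9 ms)':36s} {total:4.1f} ms")
```

Note that under this assumed split, the two wireless hops together stay within the sub-2-ms link figure, leaving the bulk of the budget for rendering on the host.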
A distributed device architecture is not a new concept; it’s one that I posited about a decade ago while working on a project that contemplated the future of mobile devices and the semiconductors and networks that support them.
However, with Qualcomm’s latest Snapdragon AR2 Gen 1 platform announcement, this concept also gets extended into the actual chip architecture and has the potential to turn what was once only theoretical into reality — augmented and otherwise.