
A tech look back at 2022: We can’t go back (and why would we want to?)


As any of you who’ve already seen my precursor “2023 Look Ahead” piece may remember, we’ve intentionally flipped the ordering of these two end-of-year writeups, beginning this year. This time, I’ll be looking back over 2022; for any history buffs out there, here are my prior retrospectives for 2019 and 2021 (we skipped 2020).

I thought I’d start by scoring what I wrote a year ago, at that time forecasting the year to come:

  • The continued COVID-19 question mark
  • Deep learning’s Cambrian moment
  • The ongoing importance of architecture
  • Open-source processors’ time in the sun
  • The normalization of remote work (and the “Great Resignation’s” aftershocks)
  • The metaverse starts to stir
  • Autonomy slowly accelerates
  • Batteries get ever denser, ever more plentiful, and ever cheaper
  • Space travel becomes commonplace

Maybe I’m just biased, but I think I’ve nailed ‘em all; let me know your opinions in the comments. In the sections that follow, I’m going to elaborate further on a few of these themes, as well as pontificate on other topics that didn’t make my year-ago forecast but ended up being particularly impactful (IMHO, of course).

Supply chain relief (to a degree)

Those of you who’ve read my late-November-published forecast for 2023 know that I’ve already touched on the supply chain subject from a looking-forward standpoint. But what about the rear-view perspective? My conclusions are mixed. To at least some degree, we’re better off than we were a year ago, particularly if you’re not a low-volume hobbyist, and in areas where bolstered supply has been assisted by demand dissipation. In the latter case I’m speaking of, for example, graphics cards, which through the early portion of 2022 remained largely unavailable.

Everything changed later in the year, however, when the cryptocurrency market went through several notable gyrations. First, Ethereum switched from proof-of-work to proof-of-stake, eliminating the need for predominantly GPU-based “miners” and in the process blessedly reducing energy consumption by (an Ethereum Foundation-estimated, mind you) 99.95%. Ethereum isn’t the only cryptocurrency, but it’s one of the largest (not even counting “forks” and variants). The resultant impact on ongoing GPU demand, not to mention the near-immediate increase in used graphics cards available for sale on eBay and elsewhere, was notable. More recently, cryptocurrencies have also dramatically decreased in value, one of many indicators of increasing doubt about their future viability, fueled by factors such as the bankruptcy filing and subsequent executive indictments at exchange firm FTX. These downturns, along with energy cost increases, have largely eliminated any lingering “profit motive” for “miners” of the remaining proof-of-work cryptocurrencies.
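
To give a rough sense of scale for that 99.95% figure, here’s a back-of-the-envelope sketch; the pre-Merge consumption baseline is an assumed round number for illustration, not an official estimate.

```python
# Rough scale of the cited 99.95% reduction. The pre-Merge annual consumption
# figure is an assumed round number for illustration only, not an official estimate.
pre_merge_twh_per_year = 75.0   # assumed baseline, TWh/year
reduction = 0.9995              # Ethereum Foundation-estimated reduction

post_merge_twh = pre_merge_twh_per_year * (1 - reduction)
print(f"~{post_merge_twh * 1000:.0f} GWh/year remaining")  # prints "~38 GWh/year remaining"
```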

Other PC building block components, commonly used in other system designs of course, have also experienced notable downward price corrections this past year. CPUs, for example, are now generally priced at, if not below, MSRP…when their suppliers are willing to sell them at all, that is. And I’d also estimate that semiconductor memories, both volatile (DRAM) and nonvolatile (flash memory), are currently selling for no more than half the cost per bit they commanded a year back, as these and other commodity ICs transition from a period of constraint to one of oversupply. That all said, few- to sole-sourced ICs, especially those fabricated on specialty semiconductor processes, remain in short supply, leading to (for example) ongoing automobile production line shutdowns throughout 2022 (forecast to linger for at least the next year).

Clearly, we’re not “out of the woods” as we exit 2022; in many respects we’re barely treading water, if not once again being pushed in reverse by the current. I’m speaking here of two primary factors: politics and pandemics. To the first point, a notable percentage of chips are fabricated, packaged, and tested in Asia, specifically Taiwan and South Korea. China’s saber-rattling throughout 2022 has unnerved both Taiwan, just across the Strait, and others around the world who are dependent on Taiwanese manufacturing. Meanwhile, South Korea’s belligerent northern neighbor launched at least 90 ballistic and other missiles in 2022…and I’m writing these words in mid-December, so the year’s not yet over. Finally, speaking of China, the country’s leadership recently has seemingly given up on longstanding “zero-COVID” aspirations, leading to inevitable widespread infection outbreaks which, along with limited vaccine availability and effectiveness, threaten to hinder China’s ongoing ability to manufacture systems products in high volumes.

Ongoing (albeit slowing) lithography advancements

Speaking of Taiwan (headquarters of leading foundry TSMC, etc.) and South Korea (home base for also-large foundry Samsung, etc.)…within my 2021 retrospective piece, I wrote:

Were the historical lithography improvement pace to have remained steady this past year-plus, it would have to at least some degree bolstered the semiconductor wafer fabrication link in the supply chain mentioned previously, since it’d be possible to squeeze an ever-increasing number of transistors onto each wafer (therefore also die built of those transistors, counterbalanced to a degree by ever-increasing per-die design complexity).
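
To put rough numbers behind that per-wafer arithmetic, here’s a minimal Python sketch using the common first-order dies-per-wafer approximation; the wafer diameter, die areas and the “a shrink roughly halves die area” assumption are illustrative only, not figures for any specific process or product.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order approximation: gross wafer area divided by die area,
    minus an edge-loss term proportional to the wafer circumference."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius**2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Hypothetical "old node" vs. "new node" die implementing the same design,
# assuming (for illustration only) that a full shrink roughly halves die area.
for die_area_mm2 in (120, 60):
    count = dies_per_wafer(300, die_area_mm2)
    print(f"{die_area_mm2} mm^2 die -> ~{count} candidate die per 300 mm wafer")
```

The same transistor count in half the area yields roughly twice the die per wafer, which is exactly the supply-side benefit that a slowing shrink cadence forfeits.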

I continued that year-ago coverage by providing examples (from Intel, Samsung and TSMC) where the historical lithography advancement cadence was seemingly not remaining steady but instead slowing. More recently, within my coverage of Apple’s September 2022 release of its various iPhone 14 products, I postulated (in noting that atypically only the “Pro” smartphones got the newest SoC built on the newest TSMC lithography this time), “Does this iPhone generation’s more limited processor evolution reflect constrained latest-generation TSMC process capacity (and/or yield)?” And even more recently, what I suspect are at least somewhat credible rumors suggest that, as with the 5 nm-fabricated M1 SoC generation, encompassing:

  • The foundation M1, containing 16 billion transistors, four “performance” and four “efficiency” CPU cores, and up to eight GPU cores
  • The M1 Pro, containing 33.7 billion transistors, eight “performance” and two “efficiency” CPU cores, and up to 16 GPU cores
  • The M1 Max, containing 57 billion transistors, eight “performance” and two “efficiency” CPU cores, and up to 32 GPU cores, and
  • The M1 Ultra, an interposer-based approach that mates two M1 Max die in one package

the M2 SoC generation will reportedly also eventually include only four members. Missing (supposedly motivated by Apple and foundry partner TSMC’s shared desire to maximize available fab capacity) will be the claimed originally planned quad-die M2 “Extreme”, targeting the “Apple Silicon” (i.e., Arm)-based successor to today’s Intel-powered high-end Mac Pro. Moreover, to date only the foundation M2 family member, containing 20 billion transistors, fabbed on a “2nd-gen 5 nm” process and dating from the June 2022 launch event, is available. And more generally, even the most generous interpretation of Apple’s original “product line transition to Apple Silicon complete within two years” forecast is specious at this point.

I don’t bring all of this up to pick on Apple in particular; instead, my point is to showcase Apple and TSMC as a case study of the increasing cost and yield challenges that the entire industry is facing at ever-smaller lithographies. While claims by some, such as NVIDIA CEO Jensen Huang’s recent statement that “Moore’s Law is dead”, are (IMHO) both self-serving and overly dramatic, they’re not completely off-base. Further improvement will likely require technology innovations (already well underway this past year in at least some cases) beyond the transistor dimension shrinkage we’ve largely relied on to date, such as:

By the way, shortly before I wrote these words, IEEE Spectrum published a series of articles related to 2022’s 75th anniversary of the transistor, which I commend to your attention.

Coprocessors come to the fore

Speaking of the M1 SoC series, the “ever more intelligent utilization of whatever transistor budget is available” angle of the prior section begs for a more in-depth discussion. Take a look, for example, at the M1 Pro die shot shown at Apple’s October 2021 launch event:

The functions of the “efficiency” and “performance” CPU cores, along with their associated L2 cache blocks, are likely obvious, as are the roles of the GPU cores. That said, keep in mind that GPU integration is a relatively recent phenomenon; AMD unveiled its first APU (accelerated processing unit) combining a CPU and GPU on the same die only a decade (and a year) ago, for example. The blocks labeled “SLC” refer to L3 “system level cache” shared by the CPU and GPU subsystems; alongside them are the functionally related system DRAM controllers (again, until relatively recently, in standalone core logic chips versus being integrated alongside the CPU).

But what of that “16-core NPU”? That’s the neural network inference processing core, and its presence isn’t solely restricted to Apple’s own SoCs. Google, for example, embeds its own deep learning-focused TPU (tensor processing unit) within the application processors found in its Pixel 6 and 7 smartphone lines, and MediaTek and Qualcomm include inference processing subsystems in the SoCs they more broadly license to smartphone and other system OEMs. And of course there are plenty of other coprocessors integrated in modern SoCs, which Apple didn’t bother labeling on the M1 Pro die shot because they’re too small and/or too proprietary: the ISP (image signal processor), acceleration blocks for encoding and decoding audio and video (Intel’s latest GPUs’ hardware AV1 video encoding was a notable advancement, for example), a sensor fusion coprocessor, the display controller, etc.

Equally interesting to me are not only which coprocessors are included, but how they’re implemented. The fundamental tradeoff here is between highly function-flexible silicon, which unfortunately tends to also be comparatively high in both power and silicon area consumption, and more efficient but also less flexible dedicated-function silicon. Take the encoding and decoding of a new video codec, for example: in the early days of its evolution and industry standardization (and certainly if a particular codec remains proprietary long-term), it’s commonly implemented either in FPGA gates (AMD’s recent, belatedly concluded acquisition of Xilinx, following in the footsteps of Intel’s past purchase of Altera, is notable in this regard) or as code running on a CPU and/or DSP core. As evolution of and consolidation among available codec-implementation options occurs, tailored-function processors emerge to tackle the task. Eventually, once the codec is mature, highest-efficiency “hardwired” implementations reign.
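
Here’s a minimal Python sketch of that flexibility-versus-efficiency progression: a hypothetical media framework that, for a requested codec, prefers the most efficient block capable of handling it and falls back to progressively more flexible (but more power-hungry) options. The backend names, supported-codec sets and relative power figures are all illustrative assumptions, not any real SoC’s configuration.

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class EncoderBackend:
    name: str
    supported_codecs: Set[str]   # codecs this block can handle
    relative_power: float        # lower is better (arbitrary units)

# Ordered from least flexible/most efficient to most flexible/least efficient.
# All entries are hypothetical, purely to illustrate the tradeoff.
BACKENDS = [
    EncoderBackend("fixed-function encoder block", {"h264", "hevc"}, 1.0),
    EncoderBackend("tailored video DSP", {"h264", "hevc", "av1"}, 3.0),
    EncoderBackend("general-purpose CPU (software)", {"h264", "hevc", "av1", "new-codec"}, 10.0),
]

def pick_encoder(codec: str) -> Optional[EncoderBackend]:
    """Return the most efficient backend that supports the requested codec."""
    for backend in BACKENDS:  # list is already sorted by efficiency
        if codec in backend.supported_codecs:
            return backend
    return None

print(pick_encoder("hevc").name)       # mature codec -> hardwired block
print(pick_encoder("new-codec").name)  # brand-new codec -> CPU software fallback
```

As a codec matures and volumes justify the silicon, it effectively migrates toward the top of this list: from the CPU/DSP fallback toward the hardwired block.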

Reiterating my earlier point, this reliance on coprocessors surrounding the main system CPU cluster will only become more critical going forward, as the cost of implementing each transistor in the SoC further skyrockets. That said, these coprocessors won’t necessarily be integrated on the same die as the CPU, especially if they’d significantly bloat the transistor count and/or if they require a different fabrication process than the one used for the CPU. Instead, they may take the form of an MCM (multi-chip module) or other multi-die package.

Embracing EVs (and other battery-powered devices)

More than two decades after EDN published my analysis of General Motors’ before-its-time EV1, electric vehicles thankfully (and finally) are definitely “hot” (and no, I’m not referring to their propensity to catch fire after accidents and in certain other circumstances). I make this observation based not only on current events but also (arguably more importantly) on manufacturers’ long-term investment bets. Just two days ago (as I write these words in late December), for example, Audi stated that it would convert all existing global production factories to manufacture EVs by the end of this decade. That same day, the US Postal Service announced that beginning in 2026 it would purchase only battery-operated mail delivery vehicles, with an estimated 66,000 to be acquired by 2028. And at the beginning of December, Tesla belatedly began shipping its electric Semi, with Pepsi as the first recipient and Tesla estimating that it’ll ramp up quickly to produce 50,000 vehicles in 2024 alone.

Electrification isn’t solely the future of four-wheeled vehicles. The road on which my wife and I live branches off from another with steep hills on both sides of the intersection. This topographical reality means that anything other than a really short (or alternatively really repetitive back-and-forth) bike ride involves some long, strenuous climbing. So, for our anniversary this year, I decided to gift my wife an electric bike. I was bracing myself to drop several thousand dollars on the present, but while you certainly can still do so if you want (or alternatively tackle a DIY conversion of an existing bicycle), I happily spent only a bit more than $500 and still ended up with a perfectly adequate result:

Again, however, mind the fire potential (more coverage) if you follow in my footsteps (or, if you prefer, tire tracks).

This case study exemplifies a trend that I touched on a year ago and first noted in mid-2018:

Vehicle-driven battery [editor note: and motor] advancements will have ancillary benefits to a whole host of other technology applications, some of which haven’t even been thought of (far from implemented) yet.

And it’s only one of numerous past-year examples of the trend, also including increasingly rich-featured, lighter, smaller and lower-cost drones touting longer operating life between charges, and even boats. That all said, at least two notable factors bear consideration as counterweights to any “irrational exuberance” you might be feeling right now (thereby also explaining why at least some vehicle manufacturers are hedging their EV bets, for example).

The first, common to roadway and (eventually) air and water vehicles, involves both the coverage and compatibility of charging stations, which, along with increasingly speedy-but-safe recharging times (augmented by quick-swap capabilities), address the worries of those with “range anxiety”. Right now, at least, the comparative pervasiveness of Tesla’s Supercharger network is an infrequently discussed but still-notable strength for the company. Although Tesla indicated earlier this year, in conjunction with the signing of the U.S. Infrastructure Investment and Jobs Act, that it would open-source its charging system and unlock its network (via adapters) to other manufacturers’ vehicles, it’s not clear, at least to this editor, how many of Tesla’s competitors will take it up on the offer versus going it alone with their own approaches in pursuit of competitive leadership fueled by isolation.
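
The recharging-time half of that anxiety is easy to quantify with back-of-the-envelope arithmetic, as in the sketch below; the pack capacity, vehicle efficiency and charger power figures are assumed round numbers for illustration, not the specs of any particular vehicle or charging network.

```python
# Back-of-the-envelope charging-time arithmetic behind "range anxiety".
# All figures are assumed round numbers for illustration only.
pack_usable_kwh = 75.0              # assumed usable battery capacity
efficiency_mi_per_kwh = 3.5         # assumed vehicle efficiency
chargers_kw = {"Level 2 (home)": 11.0, "DC fast charger": 150.0}

energy_for_10_to_80_pct = 0.7 * pack_usable_kwh
range_added_mi = energy_for_10_to_80_pct * efficiency_mi_per_kwh

for name, power_kw in chargers_kw.items():
    hours = energy_for_10_to_80_pct / power_kw  # ignores charge-rate taper
    print(f"{name}: ~{hours:.1f} h for a 10-80% charge (~{range_added_mi:.0f} miles)")
```

Even with generous assumptions, the hours-versus-minutes gap relative to a gasoline fill-up is what keeps charger coverage, charging speed and quick-swap schemes front and center.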

The other notable caution involves the rarity, therefore the increasing cost, of the raw materials needed to manufacture high-density rechargeable batteries. I recently experienced an ancillary outcome of this phenomenon; I bought on sale two dozen Eveready Energizer Ultimate AA batteries for $54.99 (right now they’re normally ~50% higher-priced than that). Unfortunately, my Blink outdoor security cameras won’t run on conventional AAs (although they thankfully run for a long time on these more costly alternatives). And double-unfortunately, these same batteries cost me about 50% less last time I bought a bunch of ‘em. Why? Some of it, I suspect, is that I unwisely-in-retrospect purchased them right before Christmas, when lots of parents around the world are also buying power cells for use in their kids’ toys (supply and demand, don’cha know). And part of it is likely because although these batteries are one-time-use, they’re still based on the same increasingly scarce lithium as are rechargeable Li-ion cells.

Barring the discovery of vast, cost-effectively mineable new lithium deposits somewhere(s) in the world, we’re going to need to count on two additional demand-mitigating variables:

Let’s see where we are in a year (or maybe, more realistically, a decade), shall we? For more on the topic, I’ll again reference a (different) recent series of articles at IEEE Spectrum.

In conclusion…

As usual, I originally planned to cover a number of additional topics in this piece, such as:

But (also) as usual I ended up with more things that I wanted to write about than I had a reasonable wordcount budget to cover. Nearing 3,000 words, I’m going to restrain myself and wrap up, saving the additional topics (as well as updates on the ones I’ve explored here) for dedicated blog posts in the coming year(s). Let me know your thoughts on my top-topic selections, as well as what your list would have looked like, in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
