
In a recent guest editorial here on EE Times, legendary professor David Patterson wrote about busting the five myths around the RISC-V instruction set architecture (ISA). At the recent RISC-V Summit organized by RISC-V International, the consortium that manages and promotes the RISC-V ISA, its president, Calista Redmond, had a far blunter message: RISC-V is inevitable.
In fact, she said, RISC-V will eventually have the best CPUs, the best software running on them and the best ecosystem of any microprocessor core family. Those are mighty strong words for a nascent ISA that is only about 10 years old and competes with the far more established Arm and x86 ISAs. It sounded almost like the Borg from Star Trek declaring, "Resistance is futile."

Redmond's reason for saying that RISC-V is inevitable is that its growth and success are built on the shared investments of many companies, universities and contributors. RISC-V International has more than 3,180 members, and billions of dollars have been invested in the architecture, including national programs from countries and regions such as India and the E.U. With so many contributors pooling ideas and collective knowledge, the community can develop the "best" processor in multiple price and performance categories. And because RISC-V is scalable, customizable and modular, it can easily be optimized for different workloads and applications.
The software ecosystem is growing, and efforts are underway to make software development more efficient through profiles and standards, such as a single hypervisor standard.
RISC-V origins
RISC-V is an open specification like Ethernet. It was developed at the University of California, Berkeley (UC Berkeley) with a clean-slate approach to RISC (reduced instruction set computer) design. There have been many RISC ISAs over the years: 29K, Alpha, Arm, i960, MIPS, PowerPC and SPARC, to name a few. All of those other RISC architectures were tied to a corporate owner, and most have become outdated.
The researchers at UC Berkeley felt it was time for a clean slate with no corporate owner. The ISA was initially intended for educational use, but they soon recognized it was useful for more than instructional purposes.
With this approach, multiple companies can build CPUs using the open standard. This means there are many different options for getting RISC-V CPUs, and there are more every year. You can download the specifications and design your own CPU. You can download open-source versions of RISC-V CPUs. You can buy a CPU core from multiple IP vendors. You can get a customized CPU core from other vendors. You can buy chiplets with RISC-V cores. You can buy a chip with a RISC-V processor. Or you can buy a full AI chip built around RISC-V cores.
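Part of the appeal of an open specification is that anyone can implement it straight from the published documents. As a purely illustrative sketch (a toy example, not any vendor's product), the short C program below decodes a single RV32I instruction, ADDI, from its 32-bit encoding:

#include <stdint.h>
#include <stdio.h>

/* Toy RV32I decoder for one I-type instruction (ADDI).
 * The field layout comes straight from the open base-ISA spec:
 * opcode[6:0], rd[11:7], funct3[14:12], rs1[19:15], imm[31:20]. */
int main(void) {
    uint32_t instr = 0x02A08293u;            /* encodes: addi x5, x1, 42 */

    uint32_t opcode = instr & 0x7Fu;         /* bits  6:0  */
    uint32_t rd     = (instr >> 7)  & 0x1Fu; /* bits 11:7  */
    uint32_t funct3 = (instr >> 12) & 0x07u; /* bits 14:12 */
    uint32_t rs1    = (instr >> 15) & 0x1Fu; /* bits 19:15 */
    int32_t  imm    = (int32_t)instr >> 20;  /* bits 31:20, sign-extended */

    if (opcode == 0x13u && funct3 == 0x0u)
        printf("addi x%u, x%u, %d\n", rd, rs1, imm);
    else
        printf("not an ADDI instruction\n");
    return 0;
}

Compiled with any C compiler, it prints "addi x5, x1, 42." That same publicly documented bit layout is what open-source cores, simulators and toolchains all build against, because the specification itself is freely available.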

The instruction set itself is scalable from 32 bits to 128 bits, is modular, and is extensible (customizable). As Patterson points out, there is a concern that this flexibility could lead to architecture fragmentation. To combat that, RISC-V International is setting up standardized profiles for applications processors, where software and system compatibility is important. Each year, it will release a new profile with the essential components. For example, one of the most talked-about extensions over the last year is the vector extension, which boosts performance on compute and AI workloads. Even without the extensions and customization, RISC-V offers a unique business model and potentially the most efficient RISC CPU core.
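To give a feel for what the vector extension buys, here is a minimal sketch: an ordinary C loop that an RVV-aware compiler can auto-vectorize when told to target the "V" extension. The toolchain name and the -march=rv64gcv flag in the comment are illustrative assumptions; exact support varies by compiler version.

/* saxpy.c -- a scalar loop that an RVV-capable compiler can auto-vectorize.
 * Illustrative build command (toolchain names and flags vary):
 *   riscv64-unknown-linux-gnu-gcc -O3 -march=rv64gcv saxpy.c -o saxpy */
#include <stdio.h>

#define N 1024

/* y[i] = a*x[i] + y[i].  When targeting the vector extension, a
 * vectorizing compiler can turn this loop into vector loads, a
 * fused multiply-add and vector stores, processing a hardware-
 * dependent number of elements per iteration. */
static void saxpy(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) {
        x[i] = (float)i;
        y[i] = 1.0f;
    }
    saxpy(2.0f, x, y, N);
    printf("y[10] = %.1f\n", y[10]);  /* expect 21.0 */
    return 0;
}

Because the vector length is a hardware parameter rather than being baked into the binary, the same code can run on small embedded vector units and wide data-center implementations alike, one of the design points often cited for RVV.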
Markets pulling for RISC-V
One of the most talked-about markets was automotive, with auto-grade cores from Andes, MIPS, NSI-TEXE (Denso) and others. One estimate that Redmond cited in her keynote is that RISC-V will be in 10% of new automobiles by 2025.
In my conversation with European IP provider Codasip, the company contended that automotive OEMs also like RISC-V because they can verify the RTL code and apply formal verification methods to the design without having to trust the IP supplier. And with more vertical integration of designs, OEMs like to use customization to optimize cost, performance and power.
With its customization options, it's no surprise that RISC-V has been growing in popularity in embedded designs. Imperas is one company offering design-validation and virtual-platform tools that support custom instructions across multiple vendors' IP.
The first market for RISC-V cores was in deeply embedded designs at Nvidia and Western Digital. In a keynote address at this year’s event, Qualcomm’s Manju Varma revealed that the company has been using RISC-V CPU cores in its chips since the Snapdragon 865 and has shipped over 650 million RISC-V cores to date.
One of the keynote presentations came from a Google executive on porting the Android Open Source Project to RISC-V. While there have been earlier ports by Alibaba, this is an official Google project. There is good progress running Android on RISC-V for evaluation and early development, but Google made it clear it will require certain architectural features before a more mainstream product. This could really open up the market for RISC-V in consumer devices running Android, including smartphones.
For data center applications, there are products from Alibaba, Esperanto.ai and Ventana.
Ventana Veyron V1
In what was probably the biggest hardware news at the RISC-V Summit, Ventana revealed details of its new Veyron V1 data-center-class chiplet processor. This 8-wide superscalar, out-of-order CPU design with RAS (reliability, availability and serviceability) features runs at 3.6 GHz and is designed to go head-to-head with the latest server processors from AMD, Arm and Intel.
The chiplet is fabricated in TSMC's 5-nm process and carries 48 MB of L3 cache per 16-CPU cluster. By combining multiple Veyron V1 chiplets with a central memory and I/O chip, a silicon vendor or systems company can build a server processor with 128 CPU cores (eight 16-core chiplets) in a socket.
The V1 chiplet architecture is similar to that of AMD's EPYC processors, but Ventana differs in some significant ways. The chiplet connects to the memory and I/O hub through a very low-latency interface called Bunch of Wires (BoW), developed by the Open Compute Project's Open Domain-Specific Architecture (ODSA) sub-project. Because BoW is a parallel interconnect, it avoids the latency that SerDes-based links such as AMD's Infinity Fabric introduce by converting parallel interfaces to serial. Although the company is using BoW today, it plans to move to UCIe in the future.
The company will offer three business models: standard chiplets paired with a standard third-party memory and I/O hub, V1 chiplets paired with a custom hub, or an IP license for the V1 cores. The V1 looks to be the RISC-V core that will deliver impressive instructions-per-cycle (IPC) performance at competitive clock speeds.
MIPS
While we learned last year that the restructured MIPS was adopting RISC-V for future CPU development, at the Summit the company announced that Mobileye has adopted its eVocore P8700 for the next-generation EyeQ SoC for autonomous driving and advanced driver assistance systems (ADAS). Mobileye had been using the MIPS architecture for its existing products. The P8700 is a multi-threaded, multi-core, multi-cluster design that can scale to 64 clusters, 512 cores and 1,024 threads. With its internal fault detection and isolation and check-architecture options, the company believes it can reach an ASIL-D safety and reliability rating for automotive applications.
SiFive
SiFive CEO Patrick Little gave an update on the company's progress over the year. One milestone of note was the collaboration with Microchip in winning the Jet Propulsion Laboratory (JPL)/NASA design for the next generation of space-capable computers, called the HPSC (High-Performance Spaceflight Computing) processor. (There was also a talk about the HPSC from a JPL representative at the conference.)
The goal of the HPSC project is to define a computer with 100 times the performance of previous space computers. The HPSC needs to be based on a long-lived ISA that NASA can depend on for the next 10 to 20 years, and RISC-V is deemed to be such an instruction set. Previous space computers have used the PowerPC ISA.
Another milestone for SiFive was its partnership with Intel Foundry Services (IFS) and the development of the HiFive Pro P550 chip (code-named "Horse Creek" by Intel) in the Intel 4 process. The chip will be used in a RISC-V development platform available next year, but the companies did show off a validation board at the conference.
Andes
One of the earliest CPU IP providers to embrace RISC-V was Andes. The company has been steadily building a selection of CPU cores from the low end up and is now adding vector extensions. Andes announced a new high-end CPU core called the AX65, with a 13-stage pipeline and out-of-order execution. The smaller NX45V and AX45MPV cores offer vector and scalar operation. A big win for the company is the Renesas RZ/Five MPU for automotive. Andes already offers cores that are compliant with ISO 26262 and ASIL-B safety standards. While the company can't reveal all its customer projects, it did say it has a 5-nm project in production, with a 3-nm design due in 2024.
Summary
While RISC-V continues to build its ecosystem, several speakers made clear that there's a lot of work ahead. Famed investor Lip-Bu Tan of Walden International spoke of the need for additional platform- and system-level features, more development boards and improved software tools. But he also said there's strong interest in the architecture from both industry and government. He noted that Walden has investments in Ventana, Akena, SiFive and Rivos. The CTO of RISC-V International, Mark Himelstein, acknowledged the challenges and said that software ecosystem development is his No. 1 priority.
TIRIAS Research thinks this year will be a turning point for the RISC-V instruction set, with significant silicon and software scheduled to reach the market. Established companies like Imagination Technologies and XMOS, as well as many start-ups, are embracing RISC-V. Its progress toward mainstream adoption looks unstoppable. Some might even say inevitable.