We had the privilege of talking with Andrea Gallo, Vice President of Technology at RISC-V International. Andrea oversees the technological advancement of RISC-V, collaborating with vendors and institutions to overcome challenges and grow its global presence. Andrea's career in technology spans several influential roles at major companies. Before joining RISC-V International, he worked at Linaro, where he pioneered Arm data center engineering initiatives, later overseeing diverse technological sectors as Vice President of Segment Groups, and eventually managing key business development activities as Executive Vice President. During his earlier tenure as a Fellow at ST-Ericsson, he focused on smartphone and application processor technology, and at STMicroelectronics he optimized hardware-software architectures and established global development teams.
Unlike Arm and x86, RISC-V allows anyone to implement the ISA in their processor cores without licensing fees, and encourages community contributions to the standard. It offers the flexibility to add custom extensions to the base ISA, enabling companies to build specialized accelerators for specific applications. However, it's crucial to note that the standard maintains rigorous criteria for accepting contributions, involving an extended review process. This approach helps mitigate the risk of ecosystem fragmentation that can occur when many companies create their own extensions, potentially leading to compatibility issues. We asked the hard questions about ecosystem fragmentation, the HPC sector, the mobile industry, AI, and the future of RISC-V. Below is our in-depth interview with Andrea Gallo.
TechPowerUp: RISC-V in the data center: How is the RISC-V foundation supporting companies in the high-performance computing sector?
Andrea: There are two streams of activity to grow our presence in HPC: performance and security. We have ratified the vector extension and are working on defining a matrix extension, both aimed at improving performance.
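As a rough illustration of what the ratified vector extension looks like from the software side, the sketch below strip-mines a float addition loop with the RVV C intrinsics. It assumes a toolchain that ships <riscv_vector.h> and a target built with the V extension (for example -march=rv64gcv); the prefixed intrinsic names follow the v1.0 intrinsics convention and may differ on older compilers.

```c
/* Minimal sketch of a float vector add using the RVV 1.0 C intrinsics.
 * Assumes a toolchain with <riscv_vector.h> support and a target built
 * with the V extension, e.g. -march=rv64gcv. */
#include <stddef.h>
#include <riscv_vector.h>

void vec_add(const float *a, const float *b, float *out, size_t n) {
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m1(n);                   /* elements this iteration */
        vfloat32m1_t va = __riscv_vle32_v_f32m1(a, vl);        /* load from a */
        vfloat32m1_t vb = __riscv_vle32_v_f32m1(b, vl);        /* load from b */
        vfloat32m1_t vc = __riscv_vfadd_vv_f32m1(va, vb, vl);  /* va + vb */
        __riscv_vse32_v_f32m1(out, vc, vl);                    /* store result */
        a += vl; b += vl; out += vl; n -= vl;
    }
}
```

The __riscv_vsetvl_e32m1 call lets the hardware decide how many elements to process per iteration, which is what allows the same binary to scale across implementations with different vector lengths.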
On the security side, we recently ratified important extensions related to control-flow integrity, such as “Shadow Stack and Landing Pads.” These ensure that when you have function calls, the return address remains intact and uncompromised. We have also ratified pointer masking, an important first step towards memory tagging: masked address bits within a process address space can later be used to support memory tagging. Additionally, we are working on supervisor domain access protection (Smmtt).
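To make the pointer-masking idea concrete, here is a purely conceptual C sketch that stashes a tag in the unused upper bits of a 64-bit pointer. The tag width and bit positions are illustrative only; with the ratified extension enabled, the hardware ignores the masked bits, so the explicit untagging step becomes unnecessary.

```c
/* Conceptual sketch of tagging a pointer in its unused upper bits.
 * The tag width and bit positions are illustrative; with pointer
 * masking enabled, hardware ignores the masked bits on access. */
#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT 56                              /* illustrative: tag in bits 63..56 */
#define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

static void *tag_ptr(void *p, uint8_t tag) {
    return (void *)(((uintptr_t)p & ADDR_MASK) | ((uintptr_t)tag << TAG_SHIFT));
}

static void *untag_ptr(void *p) {
    return (void *)((uintptr_t)p & ADDR_MASK);    /* strip the metadata bits */
}

int main(void) {
    int value = 42;
    int *tagged = tag_ptr(&value, 0x5A);          /* metadata rides in the upper bits */
    int *plain  = untag_ptr(tagged);              /* software untag before dereference */
    printf("%d\n", *plain);
    return 0;
}
```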
Altogether, these efforts strengthen both performance and security for high-performance computing and data centers.
TechPowerUp: What about mobile devices? We are seeing a huge uptick in mobile computing power. How would the RISC-V foundation like to fit into that? Would that be pushing higher-performance designs, more efficient designs, or something else?
Andrea: There is an Android special interest group (SIG) and an Android RISC-V 64 project on GitHub where all the communication and documentation are stored. There's a lot of ongoing activity around Android on RISC-V. New chips on the market now support the RVV 1.0 vector extension. We are also starting to see development boards that use these vector extensions, such as the Banana Pi and the Deep Computing DC-Roma II laptop. This is very important from the developers' perspective, since native development on the target platform is essential.
There are also performance initiatives similar to those in the HPC space, focusing on vector extensions and providing commercial development platforms. Additionally, we have a dev board program to review new development boards with the latest chips and extensions, ensuring they have optimal performance and security extensions. We stock these boards and provide them to key maintainers and developers in the ecosystem, making sure that operating system distributions are ported and tested.
Just this year, we have shipped more than 200 boards to individuals. If any key maintainer needs a board that they have not been able to get, they can contact us at help@riscv.org for evaluation and support.
TechPowerUp: So RISC-V International is actually helping out developers with development boards?
Andrea: Yeah.
TechPowerUp: So our next question is that the current boom in technology is AI, and there are accelerators being developed specifically to accelerate AI. That includes matrix multiplication, accumulation, and all those specific operations. And there are companies like Esperanto AI and Tenstorrent doing accelerators based on RISC-V. Is there any possibility that we will see AI-specific instruction extensions in the future?
Andrea:It’s not just Esperanto and Tenstorrent—Axelera, NVIDIA and Meta have all publically shared that they’re utilizing RISC-V. NVIDIA integrates RISC-V into their GPUs and Meta uses it in their AI accelerators. So, yeah, RISC-V is everywhere AI is.
When it comes to custom instructions, we have an AI/ML SIG. The role of a SIG is to analyze a specific area, identify gaps, highlight product opportunities, and justify new development efforts.
When we ratify a new extension, we know that there's a need in real-world products. For example, think of open source development in Linux. A subsystem maintainer or one of the higher-level architecture maintainers will not accept new code, a new subsystem, or a contribution unless there's a demonstrated need. Every addition increases your cost of ownership and the baggage that you carry from one release to the next. The same principle applies to the RISC-V ISA. SIGs analyze gaps and identify solutions. In this case, for AI/ML, the companies that we have listed are all in a position to propose specific new instructions.
The specification process that leads to the ratification of extensions is a rigorous one. The recent ratification of BFloat16 reflects the need for floating-point formats suited to AI/ML. And the ongoing work around matrix extensions is really driven by machine learning algorithms.
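For context, BFloat16 keeps the sign bit and 8-bit exponent of IEEE 754 binary32 but truncates the mantissa to 7 bits, preserving dynamic range at reduced precision. A minimal, illustrative conversion (ignoring rounding modes and NaN handling) is just a 16-bit shift, as sketched below.

```c
/* Illustrative BFloat16 conversion: keep the top 16 bits of an IEEE 754
 * binary32 value (sign, 8-bit exponent, 7 mantissa bits). Real
 * implementations typically add round-to-nearest-even and NaN handling. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

static uint16_t f32_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);    /* reinterpret the float's bit pattern */
    return (uint16_t)(bits >> 16);     /* truncate: round toward zero */
}

static float bf16_to_f32(uint16_t h) {
    uint32_t bits = (uint32_t)h << 16; /* restore a binary32 bit pattern */
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void) {
    float x = 3.14159f;
    uint16_t b = f32_to_bf16(x);
    printf("%f -> 0x%04x -> %f\n", x, b, bf16_to_f32(b));
    return 0;
}
```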
TechPowerUp: Our follow-up on that would be: how fast are those special interest groups? How fast are they ratifying these specifications for ISA extensions?
Andrea: The speed depends on the complexity of the proposal. If something is very minor, we can go for a fast track, and it can take a few months. If it is a major specification, then it has to go through the whole process, with specific review windows, and that can take six months or more. It really depends on the complexity, because it's really important that we have a rigorous review.
There’s a misconception with RISC-V is that everyone adds fresh customized instructions, and there’s immense fragmentation. As I said, I joined just end of June. My first day was the summit in Munich, the European Summit. I’ve been impressed by the rigor and the thoroughness of the process of the review process. The specifications are reviewed by the task group that prepares them. There’s an architecture review committee, then there’s 1 period of public review. There’s the review by the method steering committee. There’s a review by the all the committee chairs, by the board of directors. So there’s an attention, towards a rigorous process avoiding unnecessary fragmentation.
TechPowerUp: We briefly touched upon when everyone is doing their own custom instructions. So for example, if we wanted to build a RISC-V accelerator, we would use the base ISA and add our application-specific instruction sets that accelerate the AI program. We know that it's a feature to allow these custom extensions, but it is creating huge fragmentation in the ecosystem. What is RISC-V International doing to solve that issue?
Andrea: I mentioned the rigor of the process to write, ratify, and extend a new specification. If you want to claim that you are RISC-V compatible, then there's an architecture compatibility test suite that verifies that you are complying with the ISA. We run the same tests on a golden reference model and compare the signatures of the tests to ensure alignment with the specification.
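Conceptually, that compatibility flow boils down to running each test on the device under test and on the golden reference model, then diffing the memory signatures both produce. The toy sketch below shows only that final comparison step, with hypothetical file names and a one-value-per-line signature format; the real test suite's tooling and formats differ.

```c
/* Toy sketch of the final comparison step: diff a device-under-test
 * signature file against a golden reference model's signature file.
 * File names and format (one value per line) are hypothetical. */
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *dut = fopen("dut.signature", "r");
    FILE *ref = fopen("ref.signature", "r");
    if (!dut || !ref) { fprintf(stderr, "missing signature file\n"); return 2; }

    char a[128], b[128];
    int line = 0, mismatches = 0;
    while (fgets(a, sizeof a, dut) && fgets(b, sizeof b, ref)) {
        ++line;
        if (strcmp(a, b) != 0) {               /* signatures must match line for line */
            printf("mismatch at line %d: dut=%s ref=%s", line, a, b);
            ++mismatches;
        }
    }
    fclose(dut);
    fclose(ref);
    if (mismatches)
        printf("FAIL (%d mismatches)\n", mismatches);
    else
        printf("PASS\n");
    return mismatches ? 1 : 0;
}
```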
The next step in preventing fragmentation is at the software porting level. In embedded, you may have a vertically integrated software approach, with vendors or device makers who control the full vertical software stack with the famous “spaghetti code” way of working. However, modern application processors need to run a binary OS distribution without changes. So here, if an OS vendor targets just the minimal compatibility across products, it would be the very basic RV64I or possibly RV64G, which is a very small subset. To address this, we are working on profiles.
We have a significant number of extensions grouped into profiles. Specifically, we have an application-processor profile, and over time we upgrade these profile specifications. Each profile is a set of mandatory extensions plus some optional extensions.
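One practical effect of profiles shows up at build time: RISC-V compilers predefine feature macros for whatever extensions the chosen -march string enables, so a distribution built for a bare RV64I baseline simply never sees the richer extensions. The hedged sketch below reports a few of those standard macros; the exact set available depends on the toolchain.

```c
/* Report which RISC-V extensions the toolchain was asked to target,
 * using the compiler's predefined macros. Which macros are defined
 * depends on the -march string, e.g. rv64i vs. rv64gcv. */
#include <stdio.h>

int main(void) {
#if defined(__riscv)
    printf("XLEN: %d\n", __riscv_xlen);
#  if defined(__riscv_mul)
    printf("M (integer multiply/divide) enabled\n");
#  endif
#  if defined(__riscv_atomic)
    printf("A (atomics) enabled\n");
#  endif
#  if defined(__riscv_flen)
    printf("F/D (floating point, FLEN=%d) enabled\n", __riscv_flen);
#  endif
#  if defined(__riscv_compressed)
    printf("C (compressed instructions) enabled\n");
#  endif
#  if defined(__riscv_vector)
    printf("V (vector) enabled\n");
#  endif
#else
    printf("not a RISC-V target\n");
#endif
    return 0;
}
```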
We just ratified the RVA23 profile. It is a major release for the RISC-V software ecosystem and will help accelerate widespread implementation among toolchains and operating systems. You can learn more about it in our latest announcement.
The next step is platforms. To further improve and accelerate the reuse of software across verticals, or within the same vertical across products, we are, as an ecosystem, agreeing on a set of hardware and software interfaces that will be the same and part of a platform specification. There's a team working on a server SoC and a server platform specification. This includes things like having the same interfaces for timers, clocks, the IOMMU, RAS and the related error-reporting mechanisms. We all agree that we should use the same interfaces for specific peripherals that are part, for example, of a server platform.
TechPowerUp: So, what is the need for yet another commercial instruction set? What is RISC-V International doing better than competitors like Arm and, now, the joined forces of x86?
Andrea: I would like to answer this question from two different perspectives: innovation and freedom from lock-in.
The rate, energy, and pace of innovation in the RISC-V ecosystem is incredible. The fact that anyone can start from a training course on the RISC-V website and learn how to create a RISC-V core and add custom extensions is unleashing imagination. From a developer's perspective, being able to create a RISC-V core from day zero is a huge value. I took one of these courses as part of my ramp-up, and it blew my mind. That was incredible. And at the same time, as custodians of the RISC-V ISA, we're able to funnel this energy towards new standards and compliance. All this is something that the other architectures that you mentioned cannot achieve. Companies that are market competitors collaborate within RISC-V International meetings towards common goals. We have more than 4,500 members. You cannot see this anywhere else.
Another very important aspect is freedom from lock-in. It's not just about the licensing model or royalties; it's about the ability to control your destiny without depending on another entity that may suddenly stop supporting you. Nowadays, this can be a national security issue. There are many countries and governments today investing in RISC-V from a digital sovereignty perspective. You were correctly pointing to AI. Today, AI is becoming critical in our lives, and countries are investing towards digital sovereignty to make sure that they are building the competence to create their own AI solutions in house, in terms of competence and expertise, but also IP.
We see this momentum globally. The EU is backing collaborative projects to create software-defined vehicles based on RISC-V. China has the famous “One Student One Chip” program, led by the Beijing Open Source Chip Research Institute and the University of the Chinese Academy of Sciences. They have thousands of students who propose and design chips based on RISC-V, and more than ten are taped out and working. A few months ago, Brazil joined RISC-V International as a member because they want to grow and accelerate programs based on RISC-V in Brazil. And of course, UC Berkeley continues to play a role in academic research. Universities, governments, and multinational companies around the world are taking control of their own destiny, investing in RISC-V to solve local problems while engaging globally with the RISC-V ecosystem.
TechPowerUp: You’re actually saying that the 2 paths to RISC-V success are: First, supply hardware to developers to train them on RISC-V, which will aid them become skilled engineers who may yet work in companies that make RISC-V software and hardware. And the second is taking matters into your own hands, basically.
Andrea: Yeah. It’s students, academia, startups, multinational companies, and countries.
TechPowerUp: So we have one final question that should be pretty interesting to our readers. Where do you see RISC-V in about 10 years or so?
Andrea: Looking back, the growth from an academic project at UC Berkeley to where we are today is an incredible journey. Just in 2023, there was 2.5x growth in the overall business in the ecosystem over 2022. The SHD Group predicts that by 2030 RISC-V will capture around 30% market share across industry verticals, from consumer, computer, and automotive to datacenter and industrial, with over 20 billion RISC-V-based SoCs shipping annually. We're not counting cores anymore; we're counting chips, and each chip includes many, many cores. Ten years from now, I want to see RISC-V as the de facto ISA of choice for every new product design.