SV Overview

SV is in DRAFT STATUS. SV has not yet been submitted to the OpenPOWER Foundation ISA WG for review.

This document provides an overview and introduction as to why SV (a Cray-style Vector augmentation to OpenPOWER) exists, and how it works.

Sponsored by NLnet under the Privacy and Enhanced Trust Programme


Introduction: SIMD and Cray Vectors

SIMD, the primary method for easy parallelism of the past 30 years in Computer Architectures, is known to be harmful. SIMD provides a seductive simplicity that is easy to implement in hardware. With each doubling in width it promises increases in raw performance without the complexity of either multi-issue or out-of-order execution.

Unfortunately, even with predication added, SIMD only becomes more and more problematic with each power-of-two width increase introduced through an ISA revision. The opcode proliferation, at O(N^6), inexorably spirals out of control in the ISA, detrimentally impacting the hardware, the software, the compilers, and testing and compliance. Here are the typical dimensions that result in such massive proliferation, based on mass-volume DSPs and Micro-Processors:

  • Operation (add, mul)
  • bitwidth (8, 16, 32, 64, 128)
  • Conversion between bitwidths (FP16-FP32-64)
  • Signed/unsigned
  • HI/LO swizzle (Audio L/R channels)
    • HI/LO selection on src 1
    • selection on src 2
    • selection on dest
    • Example: AndesSTAR Audio DSP
  • Saturation (Clamping at max range)

These typically are multiplied up to produce explicit opcodes numbering in the thousands on, for example, the ARC Video/DSP cores.

Cray-style variable-length Vectors, on the other hand, result in stunningly elegant and small loops, and exceptionally high data throughput per instruction (one or more orders of magnitude above SIMD), with no alarmingly high setup and cleanup code. At the hardware level the microarchitecture may execute anywhere from one element right the way through to tens of thousands at a time, yet the executable remains exactly the same and the ISA remains clear, true to the RISC paradigm, and clean. Unlike in SIMD, power-of-two limitations are not baked into the ISA or the assembly code.

SimpleV takes the Cray style Vector principle and applies it in the abstract to a Scalar ISA in the same way that x86 used to do its "REP" instruction. In the process, "context" is applied, allowing amongst other things a register file size increase using "tagging" (similar to how x86 originally extended registers from 32 to 64 bit).

[Figure: Single-Issue concept]


The fundamentals are (just like x86 "REP"):

  • The Program Counter (PC) gains a "Sub Counter" context (Sub-PC)
  • Vectorization pauses the PC and runs a Sub-PC loop from 0 to VL-1 (where VL is Vector Length)
  • The Program Order of "Sub-PC" instructions must be preserved, just as is expected of instructions ordered by the PC.
  • Some registers may be "tagged" as Vectors
  • During the loop, "Vector"-tagged register numbers are incremented by one with each iteration, executing the same instruction but with different registers
  • Once the loop is completed only then is the Program Counter allowed to move to the next instruction.

[Figure: Multi-Issue with Predicated SIMD back-end ALUs]

Hardware (and simulator) implementors are free and clear to implement this as literally a for-loop, sitting in between instruction decode and issue. Higher performance systems may deploy SIMD backends, multi-issue and out-of-order execution, although it is strongly recommended to add predication capability directly into SIMD backend units.

A typical Cray-style Scalable Vector ISA (where a SIMD one has a fixed non-negotiable static parameter instead of a runtime-dynamic VL) performs its arithmetic as:

for i = 0 to VL-1:
     VPR(RT)[i] = VPR(RA)[i] + VPR(RB)[i]

In Power ISA v3.0B pseudo-code form, an ADD operation in Simple-V, assuming both source and destination have been "tagged" as Vectors, is simply:

for i = 0 to VL-1:
     GPR(RT+i) = GPR(RA+i) + GPR(RB+i)

At its heart, SimpleV really is this simple. On top of this fundamental basis further refinements can be added which build up towards an extremely powerful Vector augmentation system, with very little in the way of additional opcodes required: simply external "context".
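
The loop above can be simulated in a few lines of Python (a sketch: the names GPR and VL come from the text, while modelling the register file as a plain list is an assumption for illustration):

```python
# Sketch of the Simple-V vectorised ADD: both sources and the
# destination are "tagged" as Vectors, so the hardware loop walks
# consecutive registers starting at RT, RA and RB.
GPR = [0] * 32          # model the scalar register file as a list
VL = 4                  # runtime-dynamic Vector Length

RA, RB, RT = 4, 8, 12   # arbitrary register numbers for this example
GPR[4:8] = [10, 20, 30, 40]
GPR[8:12] = [1, 2, 3, 4]

for i in range(VL):     # "for i = 0 to VL-1"
    GPR[RT + i] = GPR[RA + i] + GPR[RB + i]

print(GPR[12:16])       # four element results land in r12..r15
```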

x86 was originally only around 80 instructions: even prior to AVX512, over 1,300 additional instructions had been added, almost all of them SIMD.

RISC-V RVV as of version 0.9 comprises over 188 instructions (more than the rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that functionality is added to Power v3.0B by SimpleV augmentation, with around 5 to 8 instructions.

Even in Power ISA v3.0B, the Scalar Integer ISA is around 150 instructions, with IEEE754 FP adding approximately 80 more. VSX, being based on SIMD design principles, adds somewhere in the region of 600 more. SimpleV again provides over 95% of VSX functionality, simply by augmenting the Scalar Power ISA, and in the process providing features such as predication, which VSX is entirely missing.

AVX512, SVE2, VSX, RVV: all of these systems have to provide multiple register files, Scalar and Vector being the minimum. AVX512 even provides a mini mask regfile, followed by explicit instructions that handle operations on each of them and map between all of them. SV not only uses the existing scalar regfiles (including CRs), but because operations already exist within the Power ISA to cover interactions between the scalar regfiles (mfcr, fcvt) there is very little that needs to be added.

In fairness to both VSX and RVV, there are things that are not provided by SimpleV:

  • 128 bit or above arithmetic and other operations (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
  • register files above 128 entries
  • Vector lengths over 64
  • 32-bit instruction lengths. svp64 had to be added as 64 bit.

These limitations, which stem inherently from the adaptation process of starting from a Scalar ISA, are not insurmountable. Over time, they may well be addressed in future revisions of SV.

The rest of this document builds on the above simple loop to add:

  • Vector-Scalar, Scalar-Vector and Scalar-Scalar operation (of all register files: Integer, FP and CRs)
  • Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
  • Predication masks (essential for parallel if/else constructs)
  • 8, 16 and 32 bit integer operations, and both FP16 and BF16.
  • Compacted operations into registers (normally only provided by SIMD)
  • Fail-on-first (introduced in ARM SVE2)
  • A new concept: Data-dependent fail-first
  • A completely new concept: "Twin Predication"
  • vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)

All of this is without modifying the Power v3.0B ISA, except to add "wrapping context", similar to how v3.1B 64-bit Prefixes work.

Adding Scalar / Vector

The first augmentation to the simple loop is to add the option for each source and destination to be either scalar or vector. As an FSM this is where our "simple" loop gets its first complexity.

function op_add(RT, RA, RB) # add not VADD!
  int id=0, irs1=0, irs2=0;
  for i = 0 to VL-1:
    ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
    if (!RT.isvec) break;
    if (RT.isvec)  { id += 1; }
    if (RA.isvec)  { irs1 += 1; }
    if (RB.isvec)  { irs2 += 1; }

This could have been written out as eight separate cases: one for each combination of RA, RB and RT being scalar or vector. Those eight cases, when optimally combined, result in the pseudocode above.

With some walkthroughs it is clear that the loop exits immediately after the first scalar destination result is written, and that when the destination is a Vector the loop proceeds to fill up the register file, sequentially, starting at RT and ending at RT+VL-1. The two source registers will, independently, either remain pointing at RB or RA respectively, or, if marked as Vectors, will march incrementally in lockstep, producing element results along the way, as the destination also progresses through elements.

In this way all the eight permutations of Scalar and Vector behaviour are covered, although without predication the scalar-destination ones are reduced in usefulness. It does however clearly illustrate the principle.

Note in particular: there is no separate Scalar add instruction and separate Vector instruction and separate Scalar-Vector instruction, and there is no separate Vector register file: it's all the same instruction, on the standard register file, just with a loop. Scalar happens to set that loop size to one.
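
The permutations can be exercised with a direct Python transcription of the pseudocode (a sketch: the boolean isvec flags passed as arguments and the ireg list are modelling assumptions):

```python
def op_add(ireg, VL, RT, RA, RB, RT_vec, RA_vec, RB_vec):
    # direct transcription of the op_add pseudocode: each register
    # index advances only if that register is tagged as a Vector
    id = irs1 = irs2 = 0
    for i in range(VL):
        ireg[RT + id] = ireg[RA + irs1] + ireg[RB + irs2]
        if not RT_vec:
            break       # scalar destination: first result ends the loop
        if RT_vec: id += 1
        if RA_vec: irs1 += 1
        if RB_vec: irs2 += 1

ireg = list(range(32))  # r0=0, r1=1, ... for easy checking
# vector dest, scalar sources: VSPLAT-like broadcast of r4+r8
op_add(ireg, 4, 12, 4, 8, True, False, False)
print(ireg[12:16])      # all four destination elements hold the same sum
```

Swapping the flags reproduces the other permutations, right down to pure scalar behaviour with all three set to False.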

The important insight from the above is that, strictly speaking, Simple-V is not really a Vectorization scheme at all: it is more of a hardware ISA "Compression scheme", allowing as it does for what would normally require multiple sequential instructions to be replaced with just one. This is where the rule that Program Order must be preserved in Sub-PC execution derives from. However in other ways, which will emerge below, the "tagging" concept presents an opportunity to include features definitely not common outside of Vector ISAs, and in that regard it's definitely a class of Vectorization.

Register "tagging"

As an aside: in svp64 the encoding which allows SV both to extend the range beyond r0-r31 and to determine whether the register is a scalar or vector takes two to three bits, depending on the instruction.

The reason for using so few bits is because there are up to four registers to mark in this way (fma, isel) which starts to be of concern when there are only 24 available bits to specify the entire SV Vectorization Context. In fact, for a small subset of instructions it is just not possible to tag every single register. Under these rare circumstances a tag has to be shared between two registers.

Below is the pseudocode which expresses the relationship which is usually applied to every register:

if extra3_mode:
    spec = EXTRA3      # bit 2: s/v, bits 0-1 extend range
else:
    spec = EXTRA2 << 1 # same as EXTRA3, shifted up by one
if spec[2]: # vector
     RA.isvec = True
     return (RA << 2) | spec[0:1]
else:         # scalar
     RA.isvec = False
     return (spec[0:1] << 5) | RA

Here we can see that the scalar registers are extended in the top bits, whilst vectors are shifted up by 2 bits, and then extended in the LSBs. Condition Registers have a slightly different scheme, along the same principle, which takes into account the fact that each CR may be bit-level addressed by Condition Register operations.
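
The relationship can be sketched as an executable Python function. LSB0 bit numbering and the integer packing below are illustrative assumptions; svp64 itself defines the exact bit positions:

```python
def decode_reg(RA, spec, extra3_mode):
    """Sketch of the EXTRA2/EXTRA3 register-tag decode described
    above. Bit 2 of spec selects scalar/vector, bits 0-1 extend
    the register range (LSB0 numbering assumed for illustration)."""
    if not extra3_mode:
        spec = spec << 1          # EXTRA2: same as EXTRA3, shifted
    if spec & 0b100:              # bit 2 set: vector
        # vector: register shifted up by 2, extended in the LSBs
        return True, (RA << 2) | (spec & 0b11)
    else:                         # bit 2 clear: scalar
        # scalar: register extended in the top bits
        return False, ((spec & 0b11) << 5) | RA

# scalar r1 with extension bits 0b01 -> register 33
print(decode_reg(1, 0b001, True))
# vector tag on r1 -> register number shifted up by 2
print(decode_reg(1, 0b100, True))
```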

Readers familiar with the Power ISA will know of Rc=1 operations that create an associated post-result "test", placing this test into an implicit Condition Register. The original researchers who created the POWER ISA chose CR0 for Integer, and CR1 for Floating Point. These also become Vectorized - implicitly - if the associated destination register is also Vectorized. This allows for some very interesting savings on instruction count due to the very same CR Vectors being predication masks.

Adding single predication

The next step is to add a single predicate mask. This is where it gets interesting. Predicate masks are a bitvector, each bit specifying, in order, whether the element operation is to be skipped ("masked out") or allowed. If no predicate is given, the mask defaults to all 1s: every element operation proceeds.

function op_add(RT, RA, RB) # add not VADD!
  int id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, RT); # dest mask
  for i = 0 to VL-1:
    if (predval & 1<<i) # predication bit test
       ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
       if (!RT.isvec) break;
    if (RT.isvec)  { id += 1; }
    if (RA.isvec)  { irs1 += 1; }
    if (RB.isvec)  { irs2 += 1; }

The key modification is to skip the creation and storage of the result if the relevant predicate mask bit is clear, but not the progression through the registers.

A particularly interesting case is if the destination is scalar, and the first few bits of the predicate are zero. The loop proceeds to increment the Vector source registers until the first nonzero predicate bit is found, whereupon a single Scalar result is computed, and then the loop exits. This in effect uses the predicate to perform Vector source indexing. This case was not possible without the predicate mask. Also, interestingly, the predicate mode 1<<r3 is specifically provided as a way to select one single entry from a Vector.

If all three registers are marked as Vector then the "traditional" predicated Vector behaviour is provided. Yet, just as before, all other options are still provided, right the way back to the pure-scalar case, as if this were a straight Power ISA v3.0B non-augmented instruction.
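
The "Vector source indexing" effect can be seen directly in an executable Python model of the predicated loop (a sketch: the isvec flags as arguments and the list-based regfile are modelling assumptions):

```python
def op_add(ireg, VL, predval, RT, RA, RB, RT_vec, RA_vec, RB_vec):
    # predicated version: masked-out elements are skipped, but the
    # register indices keep marching forward regardless
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            ireg[RT + id] = ireg[RA + irs1] + ireg[RB + irs2]
            if not RT_vec:
                break   # scalar destination: stop at first result
        if RT_vec: id += 1
        if RA_vec: irs1 += 1
        if RB_vec: irs2 += 1

ireg = list(range(32))
# scalar dest r0, vector sources at r4 and r8, predicate 0b0100:
# the first two elements are masked out, so element 2 is selected
op_add(ireg, 4, 0b0100, 0, 4, 8, False, True, True)
print(ireg[0])  # contents of r6 plus r10: 6 + 10 = 16
```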

Single Predication therefore provides several modes traditionally seen in Vector ISAs:

  • VINSERT: the predicate may be set as a single bit, the sources are scalar and the destination a vector.
  • VSPLAT (result broadcasting) is provided by making the sources scalar and the destination a vector, and having no predicate set or having multiple bits set.
  • VSELECT is provided by setting up (at least one of) the sources as a vector, using a single bit in the predicate, and the destination as a scalar.

All of this capability and coverage without even adding one single actual Vector opcode, let alone 180, 600 or 1,300!

Predicate "zeroing" mode

Sometimes with predication it is acceptable to leave the masked-out element alone (not modify the result); sometimes it is better to zero the masked-out elements. Zeroing can be combined with bit-wise ORing to build up vectors from multiple predicate patterns: achieving the same result with non-zeroing involves more mv operations and predicate mask operations. Our pseudocode therefore ends up as follows, to take the enhancement into account:

function op_add(RT, RA, RB) # add not VADD!
  int id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, RT); # dest pred
  for i = 0 to VL-1:
    if (predval & 1<<i) # predication bit test
       ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
       if (!RT.isvec) break;
    else if zeroing:   # predicate failed
       ireg[RT+id] = 0 # set element  to zero
    if (RT.isvec)  { id += 1; }
    if (RA.isvec)  { irs1 += 1; }
    if (RB.isvec)  { irs2 += 1; }

Many Vector systems either have zeroing or they have nonzeroing, they do not have both. This is because they usually have separate Vector register files. However SV sits on top of standard register files and consequently there are advantages to both, so both are provided.
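
The difference between the two modes is easy to demonstrate with a small Python model of the all-vector case (a sketch; the zeroing flag as an argument and the list regfile are assumptions):

```python
def pred_add(ireg, VL, predval, RT, RA, RB, zeroing):
    # all-vector predicated add, with optional zeroing of the
    # masked-out destination elements
    for i in range(VL):
        if predval & (1 << i):
            ireg[RT + i] = ireg[RA + i] + ireg[RB + i]
        elif zeroing:
            ireg[RT + i] = 0    # predicate failed: zero the element

a = list(range(32))
b = list(range(32))
pred_add(a, 4, 0b0101, 12, 4, 8, zeroing=False)
pred_add(b, 4, 0b0101, 12, 4, 8, zeroing=True)
print(a[12:16])   # masked-out elements keep their old contents
print(b[12:16])   # masked-out elements are zeroed
```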

Element Width overrides

All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64 bit integer operations, and IEEE754 FP32 and 64. Often also included is FP16 and more recently BF16. The really good Vector ISAs have variable-width vectors right down to bitlevel, and as high as 1024 bit arithmetic per element, as well as IEEE754 FP128.

SV has an "override" system that changes the bitwidth of operations that were intended by the original scalar ISA designers to have (for example) 64 bit operations (only). The override widths are 8, 16 and 32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in the future).

This presents a particularly intriguing conundrum given that the Power Scalar ISA was never designed with, for example, 8-bit operations in mind, let alone Vectors of 8-bit elements.

The solution comes in terms of rethinking the definition of a Register File. The typical regfile may be considered to be a multi-ported SRAM block, 64 bits wide and usually 32 entries deep, giving 32 64-bit registers. In C this would be:

typedef uint64_t reg_t;
reg_t int_regfile[32]; // standard scalar 32x 64bit

Conceptually, to get our variable-element-width vectors, we may think of the regfile as instead being the following C-based data structure, where all types uint16_t etc. are stored in little-endian order:

typedef union {
    uint8_t  actual_bytes[8];
    uint8_t  b[0]; // array of type uint8_t
    uint16_t s[0]; // array of LE ordered uint16_t
    uint32_t i[0];
    uint64_t l[0]; // default Power ISA uses this
} reg_t;

reg_t int_regfile[128]; // SV extends to 128 regs

This means that Vector elements start from locations specified by 64 bit "register" but that from that location onwards the elements overlap subsequent registers.


Here is another way to view the same concept, bearing in mind that a LE memory order is assumed:

uint8_t reg_sram[8*128];
uint8_t *actual_bytes = &reg_sram[RA*8];
if elwidth == 8:
    uint8_t *b = (uint8_t*)actual_bytes;
    b[idx] = result;
if elwidth == 16:
    uint16_t *s = (uint16_t*)actual_bytes;
    s[idx] = result;
if elwidth == 32:
    uint32_t *i = (uint32_t*)actual_bytes;
    i[idx] = result;
if elwidth == default:
    uint64_t *l = (uint64_t*)actual_bytes;
    l[idx] = result;

Starting with all zeros, setting actual_bytes[3] in any given reg_t to 0x01 would mean that:

  • b[0..2] = 0x00 and b[3] = 0x01
  • s[0] = 0x0000 and s[1] = 0x0100
  • i[0] = 0x01000000
  • l[0] = 0x0000000001000000
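
Python's struct module can confirm the little-endian overlap (a sketch; the 8-byte bytearray stands in for one reg_t):

```python
import struct

raw = bytearray(8)      # one reg_t, starting at all zeros
raw[3] = 0x01           # actual_bytes[3] = 0x01

s = struct.unpack("<4H", raw)       # LE uint16_t view
i = struct.unpack("<2I", raw)       # LE uint32_t view
l = struct.unpack("<Q", raw)[0]     # LE uint64_t view

print([hex(x) for x in s])  # byte 3 is the high byte of s[1]
print(hex(i[0]), hex(l))
```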

In tabular form, starting an elwidth=8 loop from r0 and extending for 16 elements would begin at r0 and extend over the entirety of r1:

   | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
   | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
r0 | b[0]  | b[1]  | b[2]  | b[3]  | b[4]  | b[5]  | b[6]  | b[7]  |
r1 | b[8]  | b[9]  | b[10] | b[11] | b[12] | b[13] | b[14] | b[15] |

Starting an elwidth=16 loop from r0 and extending for 7 elements would begin at r0 and extend partly over r1. Note that b0 indicates the low byte (lowest 8 bits) of each 16-bit word, and b1 represents the top byte:

   | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
   | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
r0 | s[0].b0  b1   | s[1].b0  b1   | s[2].b0  b1   |  s[3].b0  b1  |
r1 | s[4].b0  b1   | s[5].b0  b1   | s[6].b0  b1   |  unmodified   |

Likewise for elwidth=32, and a loop extending for 3 elements. b0 through b3 represent the bytes (numbered lowest for LSB and highest for MSB) within each element word:

   | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
   | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
r0 | w[0].b0  b1      b2      b3   | w[1].b0  b1      b2      b3   |
r1 | w[2].b0  b1      b2      b3   | unmodified    unmodified      |

64-bit (default) elements access the full registers. In each case the register number (RT, RA) indicates the starting point for the storage and retrieval of the elements.

Our simple loop, instead of accessing the array of regfile entries with a computed index iregs[RT+i], would access the appropriate element of the appropriate width, such as iregs[RT].s[i] in order to access 16 bit elements starting from RT. Thus we have a series of overlapping conceptual arrays that each start at what is traditionally thought of as "a register". It then helps if we have a couple of routines:

get_polymorphed_reg(reg, bitwidth, offset):
    reg_t res = 0;
    if (!reg.isvec): # scalar: always element 0
        offset = 0
    if bitwidth == 8:
        res = int_regfile[reg].b[offset]
    elif bitwidth == 16:
        res = int_regfile[reg].s[offset]
    elif bitwidth == 32:
        res = int_regfile[reg].i[offset]
    elif bitwidth == default: # 64
        res = int_regfile[reg].l[offset]
    return res

set_polymorphed_reg(reg, bitwidth, offset, val):
    if (!reg.isvec): # scalar
        offset = 0
    if bitwidth == 8:
        int_regfile[reg].b[offset] = val
    elif bitwidth == 16:
        int_regfile[reg].s[offset] = val
    elif bitwidth == 32:
        int_regfile[reg].i[offset] = val
    elif bitwidth == default: # 64
        int_regfile[reg].l[offset] = val

These basically provide a convenient parameterised way to access the register file, at an arbitrary vector element offset and an arbitrary element width. Our first simple loop thus becomes:

for i = 0 to VL-1:
   src1 = get_polymorphed_reg(RA, srcwid, i)
   src2 = get_polymorphed_reg(RB, srcwid, i)
   result = src1 + src2 # actual add here
   set_polymorphed_reg(RT, destwid, i, result)

With this loop, if elwidth=16 and VL=3 the first 48 bits of the target register will contain three 16 bit addition results, and the upper 16 bits will be unaltered.
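
That claim can be checked with an executable byte-level model of the regfile (a sketch: the flat bytearray, Python-int values and the helper names set_poly/get_poly are modelling assumptions):

```python
# model the regfile as a flat little-endian byte store: 32 regs x 8 bytes
regfile = bytearray(8 * 32)

def set_poly(reg, wid, off, val):
    base = reg * 8 + off * (wid // 8)
    regfile[base:base + wid // 8] = val.to_bytes(wid // 8, "little")

def get_poly(reg, wid, off):
    base = reg * 8 + off * (wid // 8)
    return int.from_bytes(regfile[base:base + wid // 8], "little")

RT, RA, RB, VL = 12, 4, 8, 3
set_poly(RT, 64, 0, 0xFFFF_FFFF_FFFF_FFFF)   # prefill the destination
for i in range(VL):                          # elwidth=16 add loop
    set_poly(RA, 16, i, 0x1000 + i)
    set_poly(RB, 16, i, 0x0234)
    set_poly(RT, 16, i, get_poly(RA, 16, i) + get_poly(RB, 16, i))

# three 16-bit results in the low 48 bits; the top 16 bits are untouched
print(hex(get_poly(RT, 64, 0)))
```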

Note that things such as zero/sign-extension (and predication) have been left out, to illustrate the elwidth concept. Also note that it turns out to be important to perform the operation internally at effectively infinite bitwidth, such that any truncation, rounding errors or other artefacts may all be ironed out. This matters when applying Saturation for Audio DSP workloads, particularly for multiply and IEEE754 FP rounding. "Infinite" is conceptual only: in reality, the application of the different truncations and width-extensions sets a fixed, deterministic, practical limit on the internal precision needed, on a per-operation basis.

Other than that, element width overrides, which can be applied to either source or destination or both, are pretty straightforward, conceptually. The details, for hardware engineers, involve byte-level write-enable lines, which is exactly what is used on SRAMs anyway. Compiler writers have to alter Register Allocation Tables to byte-level granularity.

One critical thing to note: upper parts of the underlying 64-bit register are not zeroed out by a write involving a non-aligned Vector Length. An 8-bit operation with VL=7 will not overwrite the 8th byte of the destination. The only situation where a full overwrite occurs is with "default" (64-bit) behaviour. It is therefore extremely important to consider the register file as a byte-level store, not a 64-bit-level store.

Why a LE regfile?

The concept of a regfile where the byte ordering of the underlying SRAM matters seems utter nonsense. Surely, a hardware implementation gets to choose the order, right? It's only in memory that LE/BE matters, right? The bytes come in, all registers are 64 bit, and it's just wiring, right?

Ordinarily this would be 100% correct, in both a scalar ISA and in a Cray style Vector one. The assumption in that last question was, however, "all registers are 64 bit". SV allows SIMD-style packing of vectors into the 64 bit registers, where one instruction and the next may interpret that very same register as containing elements of completely different widths.

Consequently it becomes critically important to decide on a byte order. That decision was LE mode, and it was not at all arbitrary: it was such hell to implement BE-supported interpretations of CRs and LD/ST in LibreSOC, based on a terse spec that provides insufficient clarity, assumes significant working knowledge of the Power ISA, and arbitrarily inserts a 7-index here and a 3-bit index there, that the decision to pick LE was extremely easy.

Without such a decision, if two words are packed as elements into a 64-bit register, what does this mean? Should they be inverted so that the lower-indexed element goes into the HI or the LO word? Should the 8 bytes of each register be inverted? Should the bytes in each element be inverted? Should the element indexing loop order be broken into discontiguous chunks such as 32107654 rather than 01234567, and if so at what granularity of discontinuity? These are all equally valid and legitimate interpretations of what constitutes "BE", and they all cause merry mayhem.

The decision was therefore made: the C typedef union is the canonical definition, and its members are defined as being in LE order. From there, implementations may choose whatever internal HDL wire order they like, as long as the results produced conform to the elwidth pseudocode.

Note: it turns out that both x86 SIMD and NEON SIMD follow this convention: both are implicitly LE, even though their ISA Manuals may not explicitly spell this out.

Source and Destination overrides

A minor fly in the ointment: what happens if the source and destination are overridden to different widths? For example, performing the arithmetic at FP16 is not accurate enough, and may introduce rounding errors, when the result is up-converted to FP32 output. The rule is therefore set:

The operation MUST take place effectively at infinite precision:
actual precision determined by the operation and the operand widths

In pseudocode this is:

for i = 0 to VL-1:
   src1 = get_polymorphed_reg(RA, srcwid, i)
   src2 = get_polymorphed_reg(RB, srcwid, i)
   opwidth = max(srcwid, destwid) # usually
   result = op_add(src1, src2, opwidth) # at max width
   set_polymorphed_reg(rd, destwid, i, result)

In reality the source and destination widths determine the actual required precision in a given ALU. The reason for specifying "effectively" infinite precision is illustrated, for example, by saturated multiply, where if the internal precision were insufficient it would not be possible to correctly determine that the maximum clip range had been exceeded.

Thus it turns out that, under some conditions, the combination of extending the source registers followed by truncating the result gets rid of bits that never mattered, so the operation might as well have taken place at the narrower width and saved resources. An example is logical OR: the source extension places zeros in the upper bits, and the result truncation throws those zeros away.

Counterexamples include the previously mentioned FP16 arithmetic, where for operations such as division of large numbers by very small ones it should be clear that internal accuracy will play a major role in influencing the result. Hence the rule that the calculation takes place at the maximum bitwidth, and truncation follows afterwards.

Signed arithmetic

What happens when the operation involves signed arithmetic? Here the implementor has to use common sense, and make sure behaviour is accurately documented. If the result of the unmodified operation is sign-extended because one of the inputs is signed, then the input source operands must be first read at their overridden bitwidth and then sign-extended:

  for i = 0 to VL-1:
   src1 = get_polymorphed_reg(RA, srcwid, i)
   src2 = get_polymorphed_reg(RB, srcwid, i)
   opwidth = max(srcwid, destwid)
   # srces known to be less than result width
   src1 = sign_extend(src1, srcwid, opwidth)
   src2 = sign_extend(src2, srcwid, opwidth)
   result = op_signed(src1, src2, opwidth) # at max width
   set_polymorphed_reg(rd, destwid, i, result)

The key here is that the cues are taken from the underlying operation.


Saturation

Audio DSPs need to be able to clip sound when the "volume" is adjusted, but if it is too loud and the signal wraps, distortion occurs. The solution is to clip (saturate) the audio and allow the clipping to be detected. In practical terms this is a post-result analysis; however it needs to take place at the largest bitwidth, i.e. before the result is element-width truncated. Only then can the arithmetic saturation condition be detected:

for i = 0 to VL-1:
   src1 = get_polymorphed_reg(RA, srcwid, i)
   src2 = get_polymorphed_reg(RB, srcwid, i)
   opwidth = max(srcwid, destwid)
   # unsigned add
   result = op_add(src1, src2, opwidth) # at max width
   # now saturate (unsigned)
   sat = min(result, (1<<destwid)-1)
   set_polymorphed_reg(rd, destwid, i, sat)
   # set sat overflow
   if Rc=1:
      CR[i].ov = (sat != result)

So the actual computation took place at the larger width, but was post-analysed as an unsigned operation. If however "signed" saturation is requested then the actual arithmetic operation has to be carefully analysed to see what that actually means.
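
An executable sketch of the unsigned saturating add (the function name and the returned overflow flag, standing in for the CR.ov bit, are illustrative assumptions):

```python
def sat_add_unsigned(src1, src2, destwid):
    # perform the add at full precision, then clamp to the
    # destination width and record whether clipping occurred
    result = src1 + src2
    sat = min(result, (1 << destwid) - 1)
    return sat, sat != result      # (value, overflow flag)

print(sat_add_unsigned(200, 100, 8))   # clipped to the 8-bit maximum
print(sat_add_unsigned(20, 30, 8))     # in range, no overflow
```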

In terms of FP arithmetic, which by definition has a sign bit (and so always takes place as a signed operation anyway), the request to saturate to signed min/max is pretty clear. However for integer arithmetic such as shift (plain shift, not arithmetic shift), or logical operations such as XOR, which were never designed with the assumption that their inputs be considered signed numbers, common sense has to kick in, and follow what CR0 does.

CR0 for Logical operations still applies: the test is still applied to the result, producing CR0.eq and so on. Following this lead we may do the same thing: although the inputs of an OR or XOR can in no way be thought of as "signed", we may at least consider the result to be signed, and thus apply min/max range detection of -128 to +127 when truncating down to 8 bit, for example.

for i = 0 to VL-1:
   src1 = get_polymorphed_reg(RA, srcwid, i)
   src2 = get_polymorphed_reg(RB, srcwid, i)
   opwidth = max(srcwid, destwid)
   # logical op, signed has no meaning
   result = op_xor(src1, src2, opwidth)
   # now saturate (signed)
   sat = min(result, (1<<(destwid-1))-1)
   sat = max(sat, -(1<<(destwid-1)))
   set_polymorphed_reg(rd, destwid, i, sat)

Overall here the rule is: apply common sense then document the behaviour really clearly, for each and every operation.
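
The signed clamp from the pseudocode, as a small self-checking Python function (a sketch; the function name is an illustrative assumption):

```python
def saturate_signed(result, destwid):
    # clamp to the signed range of the destination width,
    # e.g. -128..+127 for destwid=8
    sat = min(result, (1 << (destwid - 1)) - 1)
    sat = max(sat, -(1 << (destwid - 1)))
    return sat

print(saturate_signed(300, 8))    # clipped to the signed maximum
print(saturate_signed(-300, 8))   # clipped to the signed minimum
print(saturate_signed(42, 8))     # in range, unchanged
```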

Quick recap so far

The above functionality pretty much covers around 85% of Vector ISA needs. Predication is provided so that parallel if/then/else constructs can be performed: critical given that sequential if/then statements and branches simply do not translate successfully to Vector workloads. VSPLAT capability is provided, which accounts for approximately 20% of all GPU workload operations. Also covered, with elwidth overriding, are the smaller arithmetic operations that caused ISAs developed from the late 80s onwards to get themselves into a tizzy when adding "Multimedia" acceleration aka "SIMD" instructions.

Experienced Vector ISA readers will however have noted that VCOMPRESS and VEXPAND are missing, as is Vector "reduce" (mapreduce) capability and VGATHER and VSCATTER. Compress and Expand are covered by Twin Predication, and yet to also be covered is fail-on-first, CR-based result predication, and Subvectors and Swizzle.


Subvectors

Adding in support for SUBVL is a matter of adding an extra inner for-loop, where the register src and dest offsets are still incremented inside the inner part. Predication is still indexed by the outer VL loop; however it is applied to the whole subvector:

function op_add(RT, RA, RB) # add not VADD!
  int id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, RT);
  for i = 0 to VL-1:
    if (predval & 1<<i) # predication uses intregs
      for (s = 0; s < SUBVL; s++)
        sd = id*SUBVL + s
        srs1 = irs1*SUBVL + s
        srs2 = irs2*SUBVL + s
        ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
      if (!RT.isvec) break;
    if (RT.isvec)  { id += 1; }
    if (RA.isvec)  { irs1 += 1; }
    if (RB.isvec)  { irs2 += 1; }

The primary reason for this is because Shader Compilers treat vec2/3/4 as "single units". Recognising this in hardware is just sensible.
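
A Python sketch of the SUBVL inner loop, treating each vec2 as a single predicated unit (the all-vector simplification and the flat list regfile are modelling assumptions):

```python
def subvl_add(ireg, VL, SUBVL, predval, RT, RA, RB):
    # all-vector case: predication is indexed by the outer VL loop,
    # but applies to the whole subvector at once
    for i in range(VL):
        if predval & (1 << i):
            for s in range(SUBVL):
                idx = i * SUBVL + s
                ireg[RT + idx] = ireg[RA + idx] + ireg[RB + idx]

ireg = list(range(32))
subvl_add(ireg, 2, 2, 0b10, 12, 0, 4)  # outer element 0 masked out
print(ireg[12:16])  # first vec2 untouched, second vec2 written
```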


Swizzle

Swizzle is particularly important for 3D work. It allows in-place reordering of XYZW, ARGB etc., and access of sub-portions of the same in arbitrary order, without requiring time-consuming scalar mv instructions (scalar due to the convoluted offsets).

Swizzling does not just do permutations: it allows arbitrary selection and multiple copying of vec2/3/4 elements. For example, with XXXZ as the source operand, three copies of the vec4 first element (vec4[0]) are placed at positions vec4[0], vec4[1] and vec4[2], whilst the "Z" element (vec4[2]) is copied into vec4[3].

With somewhere between 10% and 30% of operations in 3D Shaders involving swizzle this is a huge saving and reduces pressure on register files due to having to use significant numbers of mv operations to get vector elements to "line up".

In SV, given the percentage of 3D operations that also involve initialisation of subvector elements to 0.0 or 1.0, the decision was made to include those as well:

swizzle = get_swizzle_immed() # 12 bits
for (s = 0; s < SUBVL; s++)
    remap = (swizzle >> 3*s) & 0b111
    if remap == 0b000: continue            # skip: leave element untouched
    if remap == 0b001: break               # end marker
    if remap == 0b010: ireg[rd+s] <= 0.0   # constant 0
    elif remap == 0b011: ireg[rd+s] <= 1.0 # constant 1
    else:                                  # XYZW
        sm = id*SUBVL + (remap-4)
        ireg[rd+s] <= ireg[RA+sm]
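A runnable Python sketch of the pseudocode above, for SUBVL=4. The bit-packing order (element 0's selector in the low 3 bits) is assumed from the `3*s` shift, and plain lists stand in for the register file.

```python
def mv_swizzle(dest, src, swizzle, subvl=4):
    # selectors: 0b000 skip, 0b001 end, 0b010 -> 0.0, 0b011 -> 1.0,
    # 0b100..0b111 -> X, Y, Z, W from the source subvector
    for s in range(subvl):
        remap = (swizzle >> (3 * s)) & 0b111
        if remap == 0b000:
            continue                  # leave dest element untouched
        if remap == 0b001:
            break                     # end marker
        if remap == 0b010:
            dest[s] = 0.0             # constant 0
        elif remap == 0b011:
            dest[s] = 1.0             # constant 1
        else:
            dest[s] = src[remap - 4]  # select X/Y/Z/W

vec = [1.0, 2.0, 3.0, 4.0]
out = [9.0] * 4
# "XXXZ": selectors X,X,X,Z = 0b100,0b100,0b100,0b110, element 0 in low bits
mv_swizzle(out, vec, 0b110_100_100_100)
```

Applied to [1.0, 2.0, 3.0, 4.0], the "XXXZ" swizzle yields [1.0, 1.0, 1.0, 3.0]: three copies of X and one Z, exactly as described above.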

Note that a value of 0b000 will leave the target subvector element untouched. This is equivalent to a built-in predicate mask, in immediate form, within the mv.swizzle operation. mv.swizzle is rare in that it is one of the few instructions needing to be added that are never going to be part of a Scalar ISA. Even in High Performance Compute workloads it is unusual: it is only because SV is targeted at 3D and Video that it is being considered.

Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are sufficiently unusual, and the immediate opcode space required so large (12 bits per vec4 source), that the tradeoff balance was decided in SV to only add mv.swizzle.

Twin Predication

Twin Predication is cool. Essentially it is a back-to-back VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress part is covered by the source predicate and the expand part by the destination predicate. Of course, if either of those is all 1s then the operation degenerates to VCOMPRESS or VEXPAND, respectively.

function op(RT, RS):
  ps = get_pred_val(FALSE, RS); # predication on src
  pd = get_pred_val(FALSE, RT); # ... AND on dest
  for (int i = 0, int j = 0; i < VL && j < VL;):
    if (RS.isvec) while (!(ps & 1<<i)) i++;
    if (RT.isvec) while (!(pd & 1<<j)) j++;
    reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
    if (RS.isvec) i++;
    if (RT.isvec) j++; else break
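The pseudocode can be modelled as a small Python function. This is an illustrative sketch for the vector-to-vector register-move case only (the "scalar operation" is identity), with ps and pd as integer bitmasks:

```python
def twin_pred_mv(regs, RT, RS, VL, ps, pd):
    i = j = 0
    while i < VL and j < VL:
        while i < VL and not (ps & (1 << i)):  # skip masked-out sources
            i += 1
        while j < VL and not (pd & (1 << j)):  # skip masked-out dests
            j += 1
        if i < VL and j < VL:
            regs[RT + j] = regs[RS + i]        # the "scalar operation"
        i += 1
        j += 1

regs = [10, 20, 30, 40, 0, 0, 0, 0]
# src predicate 0b1010 compresses elements 1 and 3; dest predicate 0b0011
# expands them into dest elements 0 and 1: back-to-back VCOMPRESS-VEXPAND
twin_pred_mv(regs, RT=4, RS=0, VL=4, ps=0b1010, pd=0b0011)
```

With ps all 1s this degenerates to VEXPAND; with pd all 1s, to VCOMPRESS, matching the degenerate cases described above.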

Here's the interesting part: given the fact that SV is a "context" extension, the above pattern can be applied to a lot more than just MV, which is normally the only thing that VCOMPRESS and VEXPAND do in traditional Vector ISAs: move registers. Twin Predication can be applied to extsw or fcvt, to LD/ST operations, and even to rlwinm and other operations taking a single source and immediate(s), such as addi. All of these are termed single-source, single-destination.

LDST Address-generation, or AGEN, is a special case of single source, because elwidth overriding does not make sense to apply to the computation of the 64 bit address itself, but it does make sense to apply elwidth overrides to the data being accessed at that memory address.

It also turns out that by using a single bit set in the source or destination, all the sequential ordered standard patterns of Vector ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.

The only one missing from the list here, because it is non-sequential, is VGATHER (and VSCATTER): moving registers by specifying a vector of register indices (regs[rd] = regs[regs[rs]] in a loop). This one is tricky because it typically does not exist in standard scalar ISAs. If it did it would be called mv.x. Once Vectorized, it's a VGATHER/VSCATTER.
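As a sketch of what a Vectorized mv.x (VGATHER) would do, in illustrative Python: each destination element is fetched via a register index held in another register. As the text notes, mv.x is hypothetical; the flat-list register file is likewise illustrative.

```python
def vgather(regs, rd, rs, VL):
    # regs[rd+i] = regs[regs[rs+i]]: indices are themselves register values
    for i in range(VL):
        regs[rd + i] = regs[regs[rs + i]]

regs = [100, 200, 300, 400,   # regs 0-3: data
        3, 0, 2, 1,           # regs 4-7: indices into the register file
        0, 0, 0, 0]           # regs 8-11: destination
vgather(regs, rd=8, rs=4, VL=4)
```

The destination receives the data registers permuted by the index vector: regs 8-11 become 400, 100, 300, 200.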

Exception-based Fail-on-first

One of the major issues with Vectorized LD/ST operations is when a batch of LDs cross a page-fault boundary. With considerable resources being taken up with in-flight data, a large Vector LD being cancelled or unable to roll back is either a detriment to performance or can cause data corruption.

What if, then, rather than cancel an entire Vector LD because the last operation would cause a page fault, instead truncate the Vector to the last successful element?

This is called "fail-on-first". Here is strncpy, illustrated from RVV:

strncpy:
    c.mv a3, a0             # Copy dst
loop:
    setvli x0, a2, vint8    # Vectors of bytes.
    vlbff.v v1, (a1)        # Get src bytes
    vseq.vi v0, v1, 0       # Flag zero bytes
    vmfirst a4, v0          # Zero found?
    vmsif.v v0, v0          # Set mask up to and including zero byte.
    vsb.v v1, (a3), v0.t    # Write out bytes
    c.bgez a4, exit         # Done
    csrr t1, vl             # Get number of bytes fetched
    c.add a1, a1, t1        # Bump src pointer
    c.sub a2, a2, t1        # Decrement count.
    c.add a3, a3, t1        # Bump dst pointer
    c.bnez a2, loop         # Anymore?
exit:
    ret

Vector Length VL is truncated inherently at the first page faulting byte-level LD. Otherwise, with more powerful hardware the number of elements LOADed from memory could be dozens to hundreds or greater (memory bandwidth permitting).

With VL truncated the analysis looking for the zero byte and the subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the knowledge that every byte loaded in the Vector is valid. Implementors are even permitted to "adapt" VL, truncating it early so that, for example, subsequent iterations of loops will have LD/STs on aligned boundaries.
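The truncation behaviour can be shown with a toy Python model. Everything here is illustrative: memory is a plain list, and a single PAGE_END constant stands in for the MMU's page-fault check.

```python
PAGE_END = 6   # pretend any address >= 6 would page-fault

def ld_ffirst(mem, base, VL):
    loaded = []
    for i in range(VL):
        if base + i >= PAGE_END:
            break              # fault on element i > 0: truncate, don't trap
        loaded.append(mem[base + i])
    # (a fault on element 0 would trap, exactly like a scalar LD)
    return loaded, len(loaded)  # the data, and the truncated VL

mem = [104, 101, 108, 108, 111, 0, 99, 99]  # "hello\0" then junk
data, new_VL = ld_ffirst(mem, base=2, VL=8)
```

The caller then scans only new_VL known-valid bytes for the zero terminator, exactly as the strncpy example does by reading vl after vlbff.v.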

SIMD strncpy hand-written assembly routines are, to be blunt about it, a total nightmare. 240 instructions is not uncommon, and the worst thing about them is that they are unable to cope with detection of a page fault condition.

Note: see

Data-dependent fail-first

Data-dependent fail-first stops at the first failure:

if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
for i in range(VL):
    # predication test, skip all masked out elements.
    if predicate_masked_out(i): continue # skip
    result = op(iregs[RA+i], iregs[RB+i])
    CRnew = analyse(result) # calculates eq/lt/gt
    # now test CR, similar to branch
    if CRnew[BO[0:1]] != BO[2]:
        VL = i + VLi # truncate: only successes allowed
                     # (VLi flag: include the failing element in VL)
        break        # stop at the first failure
    # test passed: store result (and CR?)
    if not RC1: iregs[RT+i] = result
    if RC1 or Rc=1: crregs[offs+i] = CRnew
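An illustrative Python reduction of that loop, with the CR test simplified to "result is non-zero" (standing in for the eq/lt/gt analysis) and predication omitted; VLi=0 here, i.e. the failing element is excluded from the truncated VL.

```python
def ddffirst(op, a, b, VL):
    results = []
    for i in range(VL):
        result = op(a[i], b[i])
        if result == 0:         # CR test failed: truncate VL, stop
            return results, i   # new VL = i (VLi=0: exclude the failure)
        results.append(result)  # test passed: store result
    return results, VL

# element 3 sums to zero, so VL is truncated to 3
res, new_VL = ddffirst(lambda x, y: x + y, [1, 2, 3, -4, 5], [1, 2, 3, 4, 5], 5)
```

Only the three successful results are committed; the failing element and everything after it are never written.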

This is particularly useful, again, for FP operations that might overflow: it is desirable to end the loop early, yet also to complete at least those operations that passed the test, without slowing down execution by adding extra instructions that test in advance for the possibility of failure.

The only minor downside here though is the change to VL, which in some implementations may cause pipeline stalls.

Vertical-First Mode


This is a relatively new addition to SVP64, under development as of July 2021. Where Horizontal-First is the standard Cray-style for-loop, Vertical-First typically executes just the one scalar element in each Vectorized operation. That element is selected by srcstep and dststep, neither of which is changed as a side-effect of execution. To create loops, a new instruction, svstep, must be called explicitly with Rc=1, as illustrated in this pseudocode with a branch back to the top of the loop:

loop:
  sv.addi r0.v, r8.v, 5 # GPR(0+dststep) = GPR(8+srcstep) + 5
  sv.addi r0.v, r8, 5   # GPR(0+dststep) = GPR(8        ) + 5
  sv.addi r0, r8.v, 5   # GPR(0        ) = GPR(8+srcstep) + 5
  svstep.               # srcstep++, dststep++, CR0.eq = srcstep==VL
  bne loop              # repeat until srcstep reaches VL
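A sketch of the same loop in Python (illustrative only: GPR is a plain list, and a boolean stands in for CR0.eq), showing the first of the three sv.addi variants:

```python
VL = 4
GPR = list(range(32))
srcstep = dststep = 0

done = False
while not done:
    # sv.addi r0.v, r8.v, 5: exactly ONE element per iteration
    GPR[0 + dststep] = GPR[8 + srcstep] + 5
    # svstep.: advance the steps, set "CR0.eq" when srcstep reaches VL
    srcstep += 1; dststep += 1
    done = (srcstep == VL)
```

After four iterations GPR 0-3 hold GPR 8-11 plus 5; a Horizontal-First loop would have produced the identical result in a single instruction.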

Three examples are illustrated of different types of Scalar-Vector operations. Note that in its simplest form only one element is executed per instruction, not multiple elements per instruction. (The more advanced version of Vertical-First mode may execute multiple elements per instruction; however, the number executed must remain a fixed quantity.)

Now that such explicit loops can increment inexorably towards VL, a way is needed to test whether srcstep or dststep have reached VL. This is achieved in one of two ways. Firstly, svstep has an Rc=1 mode in which CR0 will be updated when VL is reached, and a standard v3.0B Branch Conditional may rely on that. Alternatively, the number of elements may be transferred into CTR, as is standard practice in Power ISA: SVP64 branches have a mode which allows CTR to be decremented by the number of vertical elements executed.

Instruction format

Whilst this overview shows the internals, it does not go into detail on the actual instruction format itself. There are a couple of reasons for this: firstly, it's under development, and secondly, it needs to be proposed to the OpenPOWER Foundation ISA WG for consideration and review.

That said: draft pages for setvl and svp64 are written up. The setvl instruction is pretty much as would be expected from a Cray style VL instruction: the only differences being that, firstly, the MAXVL (Maximum Vector Length) has to be specified, because that determines - precisely - how many of the scalar registers are to be used for a given Vector. Secondly: within the limit of MAXVL, VL is required to be set to the requested value. By contrast, RVV systems permit the hardware to set arbitrary values of VL.

The other key question is of course: what's the actual instruction format, and what's in it? Bearing in mind that this requires OPF review, the current draft is at the svp64 page, and includes space for all the different modes, the predicates, element width overrides, SUBVL and the register extensions, in 24 bits. This just about fits into a Power v3.1B 64-bit Prefix by borrowing some of the Reserved Encoding space. The v3.1B suffix - containing as it does a 32-bit Power instruction - aligns perfectly with SV.

Further reading is at the main SV page.


Summary

Starting from a scalar ISA - Power v3.0B - it was shown above that, with conceptual sub-loops, a Scalar ISA can be turned into a Vector one, by embedding Scalar instructions - unmodified - into a Vector "context" using "Prefixing". With careful thought, this technique reaches 90% par with good Vector ISAs, increasing to 95% with the addition of a mere handful of additional context-vectorizeable scalar instructions (mv.x amongst them).

What is particularly cool about the SV concept is that custom extensions and research need not be concerned about inventing new Vector instructions and how to get them to interact with the Scalar ISA: they are effectively one and the same. Any new instruction added at the Scalar level is inherently and automatically Vectorized, following some simple rules.