This is the appendix to svp64, providing explanations of modes etc., leaving the main svp64 page's primary purpose as outlining the instruction format.

Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal instruction traps are always raised on unimplemented features, so that soft-emulation is possible, even for future revisions of SVP64. With SVP64 being partly controlled through contextual SPRs, a little care has to be taken.

All SPRs not implemented, including those reserved for future use, must raise an illegal instruction trap if read or written. This allows software the opportunity to emulate the context created by the given SPR.

Embedded Scalar Scenario

In this scenario an implementation does not wish to implement the Vectorisation but simply wishes to take advantage of predication or other features of SVP64, such as instructions that might only be available if prefixed. Such an implementation would be entirely free to do so with the proviso that:

  • any attempts to call setvl shall either raise an illegal instruction or be partially implemented to set SVSTATE correctly.
  • if SVSTATE contains any value in any bit that is not supported in hardware, an illegal instruction shall be raised when an SVP64 prefixed instruction is executed.
  • if SVSTATE contains values requesting supported features at the time that the prefixed instruction is executed then it is executed in hardware as per specification, with no illegal exception trap raised.

Example, assuming that hardware implements scalar operations only, and implements predication but not elwidth overrides:

setvli r0, 4            # sets VL equal to 4
sv.addi r5, r0, 1       # raises a 0x700 trap
setvli r0, 1            # sets VL equal to 1
sv.addi r5, r0, 1       # gets executed by hardware
sv.addi/ew=8 r5, r0, 1  # raises a 0x700 trap
sv.ori/sm=EQ r5, r0, 1  # executed by hardware

The first sv.addi raises an illegal instruction trap because VL has been set to 4, and this is not supported. Likewise elwidth overrides if requested always raise illegal instruction traps.
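The gating rules above can be sketched in Python. This is a hand-written illustration only: the exception class, function and capability flags are invented for this example and are not part of the specification.

```python
class IllegalInstruction(Exception):
    """Stand-in for the 0x700 trap: gives software a chance to soft-emulate."""

def execute_prefixed(vl, elwidth_override, hw_vectors=False, hw_elwidth=False):
    # Hypothetical capability check performed when an SVP64-prefixed
    # instruction is executed, following the provisos listed above.
    if vl > 1 and not hw_vectors:
        raise IllegalInstruction("VL > 1 not implemented in hardware")
    if elwidth_override and not hw_elwidth:
        raise IllegalInstruction("elwidth override not implemented")
    return "executed in hardware"
```

With the scalar-only, predication-capable hardware described above, `execute_prefixed(1, False)` proceeds in hardware while `execute_prefixed(4, False)` traps for emulation.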

Full implementation (current revision) scenario

In this scenario, SVP64 is implemented as it stands in its entirety. However a future revision or a competitor processor decides to also implement portions of Quad-Precision VSX as SVP64-Vectorised. Compatibility is only achieved if the earlier implementor raises illegal instruction exceptions on all unimplemented opcodes within the SVP64-Prefixed space, even those marked by the Scalar Power ISA as not needing to raise illegal instructions.

Additionally a future version of the specification adds a new feature, requiring an additional SPR. This SPR was, at the time of implementation, marked as "Reserved". The early implementation raises an illegal instruction trap when this SPR is read or written, and consequently has an opportunity to trap-and-emulate the full capability of the revised version of the SVP64 Specification.

XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved through parallelism, which requires that elements in the vector be independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause Read-Write Hazards on single-bit global resources, having a significant detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including in cmp instructions). XER.SO is not read, but XER.OV may be written, breaking the Read-Modify-Write Hazard Chain that complicates microarchitectural implementations. This includes when scalar identity behaviour occurs. If precise OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1 instructions should be used without an SV Prefix.

Of note here is that XER.SO and OV may already be disregarded in the Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset. SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets, but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented according to standard Power ISA Scalar behaviour. Interestingly, due to SVP64 being in effect a hardware for-loop around Scalar instructions executing in precise Program Order, a little thought shows that a Vectorised Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In and producing, at the end, a single bit Carry out. High performance implementations may exploit this observation to deploy efficient Parallel Carry Lookahead.

# assume VL=4, this results in 4 sequential ops (below)
sv.adde r0.v, r4.v, r8.v

# instructions that get executed in backend hardware:
adde r0, r4, r8  # takes carry-in, produces carry-out
adde r1, r5, r9  # takes carry from previous
adde r2, r6, r10 # likewise
adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one 64 bit add to the next, the end result being that a 256-bit "Big Integer Add" has been performed, and that CA contains the 257th bit. A one-instruction 512-bit Add may be performed by setting VL=8, and a one-instruction 1024-bit add by setting VL=16, and so on.
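The carry chain can be modelled in Python. This is an illustrative sketch of the behaviour described above, not the specification pseudocode:

```python
def vector_adde(ra, rb, ca, width=64):
    """Model sv.adde: a sequential carry chain over VL elements,
    which is in effect one VL*width-bit Big Integer Add."""
    mask = (1 << width) - 1
    out = []
    for a, b in zip(ra, rb):
        s = a + b + ca
        out.append(s & mask)
        ca = s >> width  # carry-out of this element feeds the next
    return out, ca       # ca holds the (VL*width + 1)th bit

# VL=4: adding 1 to an all-ones 256-bit value ripples the carry
# through all four 64-bit elements, leaving CA as the 257th bit
result, ca = vector_adde([2**64 - 1] * 4, [0] * 4, ca=1)
```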

v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU / CPU ISA.

Vectorisation of the VSX Packed SIMD system makes no sense whatsoever; the sole exceptions are potentially any operations with 128-bit operands such as vrlq (Rotate Quad Word) and xsaddqp (Scalar Quad-precision Add). SV effectively replaces the majority of VSX, requiring far fewer instructions, and provides, at the very minimum, predication (which VSX was designed without).

Likewise, the Load/Store Multiple instructions make no sense to keep: not only is their functionality provided by SV, the SV alternatives may be predicated as well, making them far better suited to use in function calls and context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a Vector context: rfid falls into this category, as well as sc and scv. Here there is simply no point trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV to fit alternative future instructions. In a 3D context this means Vector Product, Vector Normalise, mv.swizzle, Texture LD/ST operations, and others critical to an efficient, effective 3D GPU and VPU ISA. With such instructions being included as standard in other commercially-successful GPU ISAs it is likewise critical that a 3D GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way critically dependent on the existence or provision of 3D GPU or VPU instructions. These should be considered extensions, and their discussion and specification is out of scope for this document.

Note, again: this is only under svp64 prefixing. Standard v3.0B / v3.1B is not altered by svp64 in any way.

Major opcode map (v3.0B)

This table is taken from v3.0B. Table 9: Primary Opcode Map (opcode bits 0:5)

    |  000   |   001 |  010  | 011   |  100  |    101 |  110  |  111
000 |        |       |  tdi  | twi   | EXT04 |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 |  ori   | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 |  lwz   | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 |  lhz   | lhzu  | lha   | lhau  | sth   | sthu   | lmw   | stmw  | 101
110 |  lfs   | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 |  lq    | EXT57 | EXT58 | EXT59 | EXT60 | EXT61  | EXT62 | EXT63 | 111
    |  000   |   001 |   010 |  011  |   100 |   101  | 110   |  111

Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those that make no sense in a Vectorisation Context have been removed. These removed POs can, in the SV Vector Context only, be assigned to alternative (Vectorised-only) instructions, including future extensions. EXT04 retains the scalar madd* operations but would have all PackedSIMD (aka VSX) operations removed.

Note, again, to emphasise: outside of svp64 these opcodes do not change. When not prefixed with svp64 these opcodes specifically retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

    |  000   |   001 |  010  | 011   |  100  |    101 |  110  |  111
000 |        |       |       |       | EXT04 |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a |       |       | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 |  ori   | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 |  lwz   | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 |  lhz   | lhzu  | lha   | lhau  | sth   | sthu   |       |       | 101
110 |  lfs   | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 |        |       | EXT58 | EXT59 |       | EXT61  |       | EXT63 | 111
    |  000   |   001 |   010 |  011  |   100 |   101  | 110   |  111

It is important to note that having a v3.0B Scalar opcode that differs from its SVP64 counterpart is highly undesirable: it greatly increases the complexity of the decoder.

EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual registers (RT, RA, BFA) as either scalar or vector, and to extend their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64. Three of the 9 bits may also be used up for a 2nd Predicate (Twin Predication) leaving a mere 6 bits for qualifying registers. As can be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify the prefix in a fashion that is independent of the suffix: MLSS for example. For SVP64 there is insufficient space to make the SVP64 Prefix "self-describing", and consequently every single Scalar instruction had to be individually analysed, by rote, to craft an EXTRA Field Mapping. This process was semi-automated and is described in this section. The final results form part of the SVP64 Specification.

Firstly, every instruction's mnemonic (add RT, RA, RB) was analysed from reading the markdown formatted version of the Scalar pseudocode which is machine-readable and found in isatables. The analysis gives, by instruction, a "Register Profile". add RT, RA, RB for example is given a designation RM-2R-1W because it requires two GPR reads and one GPR write.

Secondly, the total number of registers was added up (2R-1W is 3 registers) and if less than or equal to three then that instruction could be given an EXTRA3 designation. Four or more is given an EXTRA2 designation because there are only 9 bits available.

Thirdly, the instruction was analysed to see if Twin or Single Predication was suitable. As a general rule this was the case if there was only a single operand and a single result (exts* and LD/ST); however it was found that some 2- or 3-operand instructions also qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use in Twin Predication, some compromises were made here. LDST is Twin but also has 3 operands in some operations, so only EXTRA2 can be used.

Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing could have been decided that RA would be indexed 0 (EXTRA bits 0-2), RB indexed 1 (EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST with update) RA-as-a-source is given a different EXTRA index from RA-as-a-result (because it is possible to do, and perceived to be useful). Rc=1 co-results (CR0, CR1) are always given the same EXTRA index as their main result (RT, FRT).
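As an illustration, the 2R-1W EXTRA3 packing described above can be sketched as follows. This is a hypothetical layout for this example only; the ratified CSV files are the authoritative source for per-instruction packings.

```python
def pack_extra3_2r1w(ra_spec, rb_spec, rt_spec):
    # RA indexed 0 (EXTRA bits 0-2), RB indexed 1 (bits 3-5),
    # RT indexed 2 (bits 6-8), as in the example above
    assert all(0 <= s <= 7 for s in (ra_spec, rb_spec, rt_spec))
    return ra_spec | (rb_spec << 3) | (rt_spec << 6)

def unpack_extra3_2r1w(extra):
    # recover the three 3-bit EXTRA3 specifiers from the 9-bit field
    return extra & 0b111, (extra >> 3) & 0b111, (extra >> 6) & 0b111
```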

Fifthly, in an automated process the results of the analysis were output in CSV Format for use in machine-readable form; the generated files reside under src/openpower/sv/ in the source repository.

This process was laborious but logical, and, crucially, once a decision is made (and ratified) it cannot be reversed. Those qualifying future Power ISA Scalar instructions for SVP64 are strongly advised to utilise this same process and the same program as a canonical method of maintaining the relationships. Alterations to that program which change a Designation are prohibited once finalised (ratified through the Power ISA WG Process): it would be akin to deciding that add should be changed from X-Form to D-Form.

Single Predication

This is a standard mode normally found in Vector ISAs: every element in every source Vector and in the destination uses the same bit of one single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both srcstep and dststep: unlike Twin-Predication the two must be equal at all times.
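A minimal sketch of Single Predication, in illustrative Python (element width and the srcstep/dststep SPR mechanics are simplified away; one mask bit governs each element):

```python
def single_predicated_add(regs, RT, RA, RB, VL, mask):
    # srcstep and dststep advance in lockstep: the same predicate bit
    # governs both the sources and the destination of element i
    for i in range(VL):
        if (mask >> i) & 1:
            regs[RT + i] = regs[RA + i] + regs[RB + i]

# elements 0 and 2 are enabled; elements 1 and 3 are skipped
regs = [1, 2, 3, 4, 0, 0, 0, 0, 10, 20, 30, 40]
single_predicated_add(regs, RT=4, RA=0, RB=8, VL=4, mask=0b0101)
```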

Twin Predication

This is a novel concept that allows predication to be applied to a single source and a single destination register. The following types of traditional Vector operations may be encoded with it, without requiring explicit opcodes to do so.

Those patterns (and more) may be applied to:

  • mv (the usual way that V* ISA operations are created)
  • exts* sign-extension
  • rlwinm and other RS-RA shift operations (note: excluding those that take RA as both a src and dest. These are not 1-src 1-dest; they are 2-src, 1-dest)
  • LD and ST (treating AGEN as one source)
  • FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
  • Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations, particularly given that one of the predicate options is (1<<r3).

Additional unusual capabilities of Twin Predication include a back-to-back version of VCOMPRESS-VEXPAND which is effectively the ability to do sequentially ordered multiple VINSERTs. The source predicate selects a sequentially ordered subset of elements to be inserted; the destination predicate specifies the sequentially ordered recipient locations. This is equivalent to llvm.masked.compressstore.* followed by llvm.masked.expandload.* with a single instruction.
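A sketch of the Twin-Predicated mv described above, showing the combined compress-and-expand behaviour (illustrative Python only; the real element-stepping involves SVSTATE srcstep/dststep):

```python
def twin_predicated_mv(regs, RT, RA, VL, srcmask, dstmask):
    # the source predicate compresses: it selects a sequentially
    # ordered subset of source elements. the destination predicate
    # expands: it selects the sequentially ordered recipient slots.
    srcstep = dststep = 0
    while True:
        while srcstep < VL and not ((srcmask >> srcstep) & 1):
            srcstep += 1   # skip masked-out source elements
        while dststep < VL and not ((dstmask >> dststep) & 1):
            dststep += 1   # skip masked-out destination elements
        if srcstep == VL or dststep == VL:
            break
        regs[RT + dststep] = regs[RA + srcstep]
        srcstep += 1
        dststep += 1

# source elements 1 and 3 land in destination slots 0 and 2
regs = list(range(8)) + [0] * 8
twin_predicated_mv(regs, RT=8, RA=0, VL=8,
                   srcmask=0b00001010, dstmask=0b00000101)
```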

This extreme power and flexibility comes down to the fact that SVP64 is not actually a Vector ISA: it is a loop-abstraction-concept that is applied in general to Scalar operations, just like the x86 REP instruction (if put on steroids).

Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal Vector ISA would have explicit Reduce opcodes with defined characteristics per operation: in SX Aurora there is even an additional scalar argument containing the initial reduction value, and the default is either 0 or 1 depending on the specifics of the explicit opcode. SVP64 fundamentally has to utilise existing Scalar Power ISA v3.0B operations, which presents some unique challenges.

The solution turns out to be to simply define reduction as permitting deterministic element-based schedules to be issued using the base Scalar operations, and to rely on the underlying microarchitecture to resolve Register Hazards at the element level. This goes back to the fundamental principle that SV is nothing more than a Sub-Program-Counter sitting between Decode and Issue phases.

Microarchitectures may take opportunities to parallelise the reduction but only if in doing so they preserve Program Order at the Element Level. Opportunities where this is possible include an OR operation or a MIN/MAX operation: it may be possible to parallelise the reduction, but for Floating Point it is not permitted due to different results being obtained if the reduction is not executed in strict Program-Sequential Order.

In essence it becomes the programmer's responsibility to leverage the pre-determined schedules to desired effect.

Scalar result reduction and iteration

Scalar Reduction per se does not exist; instead it is implemented in SVP64 as a simple and natural relaxation of the usual restriction on Vector Looping, which would ordinarily terminate if the destination was marked as a Scalar. Scalar Reduction by contrast keeps issuing Vector Element Operations even though the destination register is marked as scalar. Thus it is up to the programmer to be aware of this, observe some conventions, and thereby achieve the desired outcome of scalar reduction.

It is also important to appreciate that there is no actual imposition or restriction on how this mode is utilised: there will therefore be several valuable uses (including Vector Iteration and "Reverse-Gear") and it is up to the programmer to make best use of the (strictly deterministic) capability provided.

In this mode, which is suited to operations involving carry or overflow, one register must be assigned by the programmer, by convention, to be the "accumulator". Scalar reduction is thus characterised by:

  • One of the sources is a Vector
  • the destination is a scalar
  • optionally but most usefully when one source scalar register is also the scalar destination (which may be informally termed the "accumulator")
  • That the source register type is the same as the destination register type identified as the "accumulator". Scalar reduction on cmp, setb or isel makes no sense for example because of the mixture between CRs and GPRs.

Note that issuing instructions in Scalar reduce mode such as setb are neither UNDEFINED nor prohibited, despite them not making much sense at first glance. Scalar reduce is strictly defined behaviour, and the cost in hardware terms of prohibition of seemingly non-sensical operations is too great.
Therefore it is permitted and required to be executed successfully. Implementors MAY choose to optimise such instructions in instances where their use results in "extraneous execution", i.e. where it is clear that the sequence of operations, comprising multiple overwrites to a scalar destination without cumulative, iterative, or reductive behaviour (no "accumulator"), may discard all but the last element operation. Identification of such is trivial to do for setb and cmp: the source register type is a completely different register file from the destination. Likewise Scalar reduction when the destination is a Vector is as if the Reduction Mode was not requested.

Typical applications include simple operations such as ADD r3, r10.v, r3 where, clearly, r3 is being used to accumulate the addition of all elements of the vector starting at r10.

 # add RT, RA, RB but with RT==RA
 for i in range(VL):
     iregs[RA] += iregs[RB+i] # RT==RA

However, unless the operation is marked as "mapreduce" (sv.add/mr) SV ordinarily terminates at the first scalar operation. Only by marking the operation as "mapreduce" will it continue to issue multiple sub-looped (element) instructions in Program Order.

To perform the loop in reverse order, the RG (reverse gear) bit must be set. This may be useful in situations where the results may be different (floating-point) if executed in a different order. Given that there is no actual prohibition on Reduce Mode being applied when the destination is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative or Cumulative Vector operations in reverse. sv.add/rg r3.v, r4.v, r4.v for example will start at the opposite end of the Vector and push a cumulative series of overlapping add operations into the Execution units of the underlying hardware.
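The mapreduce loop with and without Reverse Gear can be sketched as follows (illustrative Python; for integer add the element order does not change the result, but for floating-point it can, which is the motivation for RG):

```python
def scalar_mapreduce_add(iregs, RT, RB, VL, reverse_gear=False):
    # RT is marked scalar yet element operations continue to be
    # issued, in reverse element order when RG is set
    order = range(VL - 1, -1, -1) if reverse_gear else range(VL)
    for i in order:
        iregs[RT] = iregs[RT] + iregs[RB + i]

iregs = [0, 0, 0, 0, 1, 2, 3, 4]
scalar_mapreduce_add(iregs, RT=0, RB=4, VL=4, reverse_gear=True)
```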

Other examples include shift-mask operations where a Vector of inserts into a single destination register is required (see bitmanip, bmset), as a way to construct a value quickly from multiple arbitrary bit-ranges and bit-offsets. Using the same register as both the source and destination, with Vectors of different offsets masks and values to be inserted has multiple applications including Video, cryptography and JIT compilation.

# assume VL=4:
# * Vector of shift-offsets contained in RC (r12.v)
# * Vector of masks contained in RB (r8.v)
# * Vector of values to be masked-in in RA (r4.v)
# * Scalar destination RT (r0) to receive all mask-offset values
sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling, Subtract and Divide are still permitted to be executed in this mode, although from an algorithmic perspective it is strongly discouraged. It would be better to use addition followed by one final subtract, or in the case of divide, to get better accuracy, to perform a multiply cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly well permitted: the programmer may still declare one register, used as both a Vector source and Scalar destination, to be utilised as the "accumulator". In the case of sv.fmadds and sv.maddhw etc this naturally fits well with the normal expected usage of these operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce, the scalar destination register MUST be updated with the current (intermediate) result, because this is how Program Order is preserved (Vector Loops are to be considered to be just another way of issuing instructions in Program Order). In this way, after return from interrupt, the scalar mapreduce may continue where it left off. This provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue parallel optimisation of the scalar reduce operation: it's just that as far as the user is concerned, all exceptions and interrupts MUST be precise.

Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to the underlying micro-architecture. Like Scalar reduction, the "Scalar Base" (Power ISA v3.0B) operation is leveraged, unmodified, to give the appearance and effect of Reduction.

Given that the tree-reduction schedule is deterministic, Interrupts and exceptions can therefore also be precise. The final result will be in the first non-predicate-masked-out destination element, but due again to the deterministic schedule programmers may find uses for the intermediate results.
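For illustration only, one possible deterministic tree-reduction schedule is a pairwise-halving walk over the elements. The ratified specification pseudocode, not this sketch, defines the actual schedule:

```python
def tree_reduce_schedule(vl):
    """Return (dst, src) element pairs for a pairwise-halving
    reduction; the final result lands in element 0, the first
    destination element."""
    steps = []
    stride = 1
    while stride < vl:
        for i in range(0, vl, stride * 2):
            if i + stride < vl:
                steps.append((i, i + stride))  # dst op= src
        stride *= 2
    return steps
```

Because the pair list is fixed for a given VL, interrupts may record progress through it and resume, giving the precise behaviour described above.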

When Rc=1 a corresponding Vector of co-resultant CRs is also created. No special action is taken: the result and its CR Field are stored "as usual" exactly as all other SVP64 Rc=1 operations.

Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are independent, i.e. they are mapreduced per sub-element. Illustration with a vec2, assuming RA==RT, e.g. sv.add/mr/vec2 r4, r4, r16:

for i in range(0, VL):
    # RA==RT in the instruction. does not have to be
    iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
    iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about SVM=0: it is expected behaviour according to standard SVP64 Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal Subvector mode is enabled, which behaves very much more like a traditional Vector Processor Reduction instruction. Example for a vec3:

for i in range(VL):
    result = iregs[RA+i].x
    result = op(result, iregs[RA+i].y)
    result = op(result, iregs[RA+i].z)
    iregs[RT+i] = result

In this mode, when Rc=1 the Vector of CRs is as normal: each result element creates a corresponding CR element (for the final, reduced, result).


Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST (see ldst), the other for arithmetic operations (normal, which are actually CR-driven) and for CR operations (cr ops). Note that in each case the assumption is that vector elements are required to appear to be executed in sequential Program Order, element 0 being the first.

  • LD/ST ffirst treats the first LD/ST in a vector (element 0) as an ordinary one. Exceptions occur "as normal". However for elements 1 and above, if an exception would occur, then VL is truncated to the previous element.
  • Data-driven (CR-driven) fail-on-first activates when Rc=1 or other CR-creating operation produces a result (including cmp). Similar to branch, an analysis of the CR is performed and if the test fails, the vector operation terminates and discards all element operations above the current one (and the current one if VLi is not set), and VL is truncated to either the previous element or the current one, depending on whether VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results, all of which pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not found in ARM SVE or RVV. It is extremely useful for reducing instruction count; however it requires speculative execution involving modifications of VL to achieve high performance implementations. An additional mode (RC1=1) effectively turns what would otherwise be an arithmetic operation into a type of cmp. The CR is stored (and the CR.eq bit tested against the inv field). If the CR.eq bit is equal to inv then the Vector is truncated and the loop ends. Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when Rc=0 (or for instructions which do not have Rc). When set, the current element is always also included in the count (the new length that VL will be set to). This may be useful in combination with "inv" to truncate the Vector to exclude elements that fail a test, or, in the case of implementations of strncpy, to include the terminating zero.
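The VL-truncation rules can be sketched as follows (illustrative Python; `test` stands in for the selected CR bit-test combined with the inv field):

```python
def ffirst_new_vl(values, test, VLi=False):
    # CR-driven fail-on-first: scan elements in Program Order and
    # truncate VL at the first element whose CR bit-test fails
    for i, v in enumerate(values):
        if not test(v):
            return i + 1 if VLi else i  # VLi includes the failing element
    return len(values)  # all elements passed: VL is unchanged
```

Note that a failure on element 0 with VLi clear yields VL=0, matching the strncpy-style uses described below: with VLi set the terminating element is included in the new VL.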

In CR-based data-driven fail-on-first there is only the option to select and test one bit of each CR (just as with branch BO). For more complex tests this may be insufficient. If that is the case, a vectorised crop (crand, cror) may be used, and ffirst applied to the crop instead of to the arithmetic vector.

One extremely important aspect of ffirst is:

  • LDST ffirst may never set VL equal to zero. This is because on the first element an exception must be raised "as normal".
  • CR-based data-dependent ffirst on the other hand can set VL equal to zero. This is the only means in the entirety of SV that VL may be set to zero (with the exception of via the SVSTATE SPR). When VL is set to zero due to the first element failing the CR bit-test, all subsequent vectorised operations are effectively nops, which is precisely the desired and intended behaviour.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily to a nonzero value for any implementation-specific reason. For example: it is perfectly reasonable for implementations to alter VL when ffirst LD or ST operations are initiated on a nonaligned boundary, such that within a loop the subsequent iteration of that loop begins subsequent ffirst LD/ST operations on an aligned boundary. Likewise, to reduce workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL arbitrarily to a length decided by the hardware: VL MUST only be truncated based explicitly on whether a test fails. This is because it is a precise test on which algorithms will rely.

Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result do not also in turn have an Rc=1 mode. However it makes no sense to try to test the 4 bits of a CR Field for being equal or not equal to zero. Moreover, the result is already in the form that is desired: it is a CR Field. Therefore, CR-based operations have their own SVP64 Mode, described in cr ops.

There are two primary different types of CR operations:

  • Those which have a 3-bit operand field (referring to a CR Field)
  • Those which have a 5-bit operand (referring to a bit within the whole 32-bit CR)

More details can be found in cr ops.

pred-result mode

Pred-result mode may not be applied on CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and predicated, pred-result mode applies to operations that have an Rc=1 mode, or for which it makes sense to add an RC1 option.

Predicate-result merges common CR testing with predication, saving on instruction count. In essence, a Condition Register Field test is performed, and if it fails it is treated as if the destination predicate bit was zero. Given that there are no CR-based operations that produce Rc=1 co-results, there can be no pred-result mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which RC1 Mode makes sense, is covered in normal.

CR Operations

CRs are slightly more involved than INT or FP registers due to the possibility for indexing individual bits (crops BA/BB/BT). Again however the access pattern needs to be understandable in relation to v3.0B / v3.1B numbering, with a clear linear relationship and mapping existing when SV is applied.

CR EXTRA mapping table and algorithm

Numbering relationships for CR fields are already complex due to being in BE format (the relationship is not clearly explained in the v3.0B or v3.1 specification). However with some care and consideration the exact same mapping used for INT and FP regfiles may be applied, just to the upper bits, as explained below. The notation CR{field number} is used to indicate access to a particular Condition Register Field (as opposed to the notation CR[bit] which accesses one bit of the 32 bit Power ISA v3.0B Condition Register)

CR{n} refers to CR0 when n=0 and consequently, for CR0-7, is defined, in v3.0B pseudocode, as:

 CR{7-n} = CR[32+n*4:35+n*4]

For SVP64 the relationship for the sequential numbering of elements is to the CR fields within the CR Register, not to individual bits within the CR register.

In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits. The top 3 bits (0:2) select one of the 8 CR Fields; the bottom 2 bits (3:4) select one of the 4 bits in that Field (LT/GT/EQ/SO). The numbering was determined (after 4 months of analysis and research) to be as follows:

CR_index = 7-(BA>>2)      # top 3 bits but BE
bit_index = 3-(BA & 0b11) # low 2 bits but BE
CR_reg = CR{CR_index}     # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR_reg number to which SV EXTRA2/3 applies, not the CR_bit portion (bits 3-4):

if extra3_mode:
    spec = EXTRA3
else:
    spec = EXTRA2<<1 | 0b0
if spec[0]:
    # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
    return ((BA >> 2)<<6) | # hi 3 bits shifted up
           (spec[1:2]<<4) | # to make room for these
           (BA & 0b11)      # CR_bit on the end
else:
    # scalar constructs "00 spec[1:2] BA[0:4]"
    return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B algorithm to determine CR_reg is modified to as follows:

CR_index = 7-(BA>>2)      # top 3 bits but BE
if spec[0]:
    # vector mode, 0-124 increments of 4
    CR_index = (CR_index<<4) | (spec[1:2] << 2)
else:
    # scalar mode, 0-32 increments of 1
    CR_index = (spec[1:2]<<3) | CR_index
# same as for v3.0/v3.1 from this point onwards
bit_index = 3-(BA & 0b11) # low 2 bits but BE
CR_reg = CR{CR_index}     # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR_bit does not change.

Note: high-performance implementations may read/write Vectors of CRs in batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly simplify internal design. If instructions are issued where CR Vectors do not start on a 32-bit aligned boundary, performance may be affected.

CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them) may be marked as Vectorised or Scalar. When Rc=1 in arithmetic operations that have no explicit EXTRA to cover the CR, the CR is Vectorised if the destination is Vectorised. Likewise if the destination is scalar then so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written to 4-bit CR Fields. Vectorised Integer results, when Rc=1, will begin writing to CR8 (TBD evaluate) and increase sequentially from there. This is so that:

  • implementations may rely on the Vector CRs being aligned to 8. This means that CRs may be read or written in aligned batches of 32 bits (8 CRs per batch), for high performance implementations.
  • scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not overwritten by vector Rc=1 operations except for very large VL
  • CR-based predication, from CR32, is also not interfered with (except by large VL).

However when the SV result (destination) is marked as a scalar by the EXTRA field the standard v3.0B behaviour applies: the accompanying CR when Rc=1 is written to. This is CR0 for integer operations and CR1 for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER v3.0B scalar operations produce a tuple of element results: the result of the operation as one part of that element and a corresponding CR element. Greatly simplified pseudocode:

for i in range(VL):
     # calculate the vector result of an add
     iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
     # now calculate CR bits
     CRs{8+i}.eq = iregs[RT+i] == 0
     CRs{8+i}.gt = iregs[RT+i] > 0
     ... etc

If a "cumulated" CR based analysis of results is desired (a la VSX CR6) then a followup instruction must be performed, setting "reduce" mode on the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far more flexibility in analysing vectors than standard Vector ISAs. Normal Vector ISAs are typically restricted to "were all results nonzero" and "were some results nonzero". The application of mapreduce to Vectorised cr operations allows far more sophisticated analysis, particularly in conjunction with the new crweird operations see cr int predication.
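As an illustration only (CR fields modelled as Python dicts rather than real crweird/cr operations, and helper names hypothetical), a crand-style mapreduce over the EQ bits of a Vector of CRs recovers the VSX-CR6-style "were all results zero" test, and a cror-style one recovers "were some results zero":

```python
def cr_all_eq(crs, start, vl):
    """crand-style mapreduce: AND together the EQ bits of CR[start..start+vl-1]."""
    acc = True
    for i in range(vl):
        acc = acc and crs[start + i]['eq']
    return acc

def cr_any_eq(crs, start, vl):
    """cror-style mapreduce: OR together the EQ bits."""
    return any(crs[start + i]['eq'] for i in range(vl))
```

Any other boolean combination of any CR bits (lt, gt, so) may be reduced the same way, which is the flexibility being claimed over fixed all/some tests in conventional Vector ISAs.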

Note in particular that the use of a separate instruction in this way ensures that high performance multi-issue OoO implementations do not have the computation of the cumulative analysis CR as a bottleneck and hindrance, regardless of the length of VL.

Additionally, SVP64 branches may be used, even when the branch itself is to the following instruction. The combined side-effects of CTR reduction and VL truncation provide several benefits.

(see discussion. some alternative schemes are described there)

Rc=1 when SUBVL!=1

sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only 1 bit of predicate is allocated per subvector; likewise only one CR is allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector, when normally Rc=1 is exclusively applied to scalar elements. A solution is to perform a bitwise OR or AND of the subvector tests. Given that OE is ignored in SVP64, this field may (when available) be used to select OR or AND behaviour.
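A sketch of that proposal (helper name hypothetical; the choice of AND when the selector is set is purely illustrative, as the OE mapping is not yet fixed):

```python
def subvector_cr_test(subresults, use_and):
    """Combine per-subelement Rc=1 'nonzero' tests into one bit per subvector.

    subresults: the 2-4 element results of one subvector (SUBVL=2/3/4).
    use_and: hypothetical OE-selected behaviour: AND the tests, else OR them.
    """
    tests = [r != 0 for r in subresults]
    return all(tests) if use_and else any(tests)
```

One such combined bit then feeds the single CR field (and the single predicate bit) allocated to that subvector.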

Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n, so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right. Therefore context-switching the full set of CRs involves a Vectorised mfcr or mtcr, using VL=8 to do so. This is exactly as how scalar OpenPOWER context-switches CRs: it is just that there are now more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers are arranged. TODO a python program that auto-generates a CSV file which can be included in a table, which is in a new page (so as not to overwhelm this one). cr names

Register Profiles


Instructions are broken down by Register Profiles as listed in the following auto-generated page: opcode regs deduped. "Non-SV" indicates that the operations with this Register Profile cannot be Vectorised (mtspr, bc, dcbz, twi)

TODO generate table which will be here reg profiles

SV pseudocode illustration

Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included, elwidth overrides not included. If there is no predicate, it is set to all 1s.

function op_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, rd);
  for (i = 0; i < VL; i++)
    STATE.srcoffs = i # save context
    if (predval & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (!rd.isvec) break;
    if (rd.isvec)  { id += 1; }
    if (rs1.isvec) { irs1 += 1; }
    if (rs2.isvec) { irs2 += 1; }
    if (id == VL or irs1 == VL or irs2 == VL)
      # end VL hardware loop
      STATE.srcoffs = 0; # reset

This has several modes:

  • RT.v = RA.v RB.v
  • RT.v = RA.v RB.s (and RA.s RB.v)
  • RT.v = RA.s RB.s
  • RT.s = RA.v RB.v
  • RT.s = RA.v RB.s (and RA.s RB.v)
  • RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward. When one of the sources is a Vector and the other a Scalar, it is clear that each element of the Vector source should be added to the Scalar source, with each result placed into the Vector (or, if the destination is a scalar, only the first non-predicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar. Here this acts as a "splat scalar result", copying the same result into all nonpredicated result elements. If a fixed destination scalar was intended, then an all-Scalar operation should be used.
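A minimal sketch of the splat case (RT a Vector, RA and RB both Scalar), with the register file modelled as a plain Python list and the function name purely illustrative:

```python
def sv_add_splat(VL, regs, RT, RA, RB, predval):
    """RT.v = RA.s + RB.s: one scalar result copied to all unmasked elements."""
    result = regs[RA] + regs[RB]   # computed once: both sources are scalar
    for i in range(VL):
        if predval & (1 << i):     # only non-masked destination elements written
            regs[RT + i] = result
```

Masked-out destination elements are left untouched (non-zeroing predication), consistent with the single-predicated loop above.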


Assembly Annotation

Assembly code annotation is required for SV to be able to successfully mark instructions as "prefixed".

A reasonable (prototype) starting point:

svp64 [field=value]*


  • ew=8/16/32 - element width
  • sew=8/16/32 - source element width
  • vec=2/3/4 - SUBVL
  • mode=mr/satu/sats/crpred
  • pred=1<<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

Similar to the x86 "REX" prefix.

For actual assembler:

sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s


  • m={pred}: predicate mask mode
  • sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
  • vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
  • ew={N}: ew=8/16/32 - sets elwidth override
  • sw={N}: sw=8/16/32 - sets source elwidth override
  • ff={xx}: see fail-first mode
  • pr={xx}: see predicate-result mode
  • sat{x}: satu / sats - see saturation mode
  • mr: see map-reduce mode
  • mr.svm: see map-reduce with sub-vector mode
  • crm: see map-reduce CR mode
  • crm.svm: see map-reduce CR with sub-vector mode
  • sz: predication with source-zeroing
  • dz: predication with dest-zeroing

For modes:

  • pred-result:
    • pm=lt/gt/le/ge/eq/ne/so/ns
    • RC1 mode
  • fail-first
    • ff=lt/gt/le/ge/eq/ne/so/ns
    • RC1 mode
  • saturation:
    • sats
    • satu
  • map-reduce:
    • mr OR crm: "normal" map-reduce mode or CR-mode.
    • mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

Proposed Parallel-reduction algorithm

This algorithm contains a MV operation and may NOT be used. Removal of the MV operation may be achieved by using index-redirection as was achieved in DCT and FFT REMAP

/// reference implementation of proposed SimpleV reduction semantics.
/// note: this algorithm is used even if the reduction operation
/// isn't associative or commutative.
/// XXX `pred` is a user-visible Vector Condition register XXXX
/// all input arrays have length `vl`
def reduce(vl, vec, pred):
    pred = copy(pred) # must not damage predicate
    step = 1;
    while step < vl
        step *= 2;
        for i in (0..vl).step_by(step)
            other = i + step / 2;
            other_pred = other < vl && pred[other];
            if pred[i] && other_pred
                vec[i] += vec[other];
            else if other_pred
                XXX vec[i] = vec[other];      XXX
            pred[i] |= other_pred;

The first principle in SVP64 being violated is that SVP64 is a fully-independent Abstraction of hardware-looping in between issue and execute phases that has no relation to the operation it issues. The above pseudocode conditionally changes not only the type of element operation issued (a MV in some cases) but also the number of arguments (2 for a MV). At the very least, for Vertical-First Mode this will result in unanticipated and unexpected behaviour (maximise "surprises" for programmers) in the middle of loops, that will be far too hard to explain.

The second principle being violated by the above algorithm is the expectation that temporary storage is available for a modified predicate: there is no such space, and predicates are read-only to reduce complexity at the micro-architectural level. SVP64 is founded on the principle that all operations are "re-entrant" with respect to interrupts and exceptions: SVSTATE must be saved and restored alongside PC and MSR, but nothing more. It is perfectly fine to have context-switching back to the operation be somewhat slower, through "reconstruction" of temporary internal state based on what SVSTATE contains, but nothing more.

An alternative algorithm is therefore required that does not perform MVs, and does not require additional state to be saved on context-switching.

def reduce(  vl,  vec, pred ):
    pred = copy(pred) # must not damage predicate
    j = 0
    vi = [] # array of lookup indices to skip nonpredicated
    for i, pbit in enumerate(pred):
       if pbit:
           vi[j] = i
           j += 1
    step = 2
    while step <= vl
        halfstep = step // 2
        for i in (0..vl).step_by(step)
            other = vi[i + halfstep]
            ir = vi[i]
            other_pred = other < vl && pred[other]
            if pred[i] && other_pred
                vec[ir] += vec[other]
            else if other_pred:
               vi[ir] = vi[other] # index redirection, no MV
            pred[ir] |= other_pred # reconstructed on context-switch
        step *= 2

In this version the need for an explicit MV is made unnecessary by instead leaving elements in situ. The internal modifications to the predicate may, due to the reduction being entirely deterministic, be "reconstructed" on a context-switch. This may make some implementations slower.
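The in-situ principle can be demonstrated with a simplified runnable sketch: compact the predicated element indices into a lookup table first, then tree-reduce through that table. This illustrates the index-redirection idea only; it is not the specification pseudocode above, and the function name is illustrative:

```python
def tree_reduce(vl, vec, pred):
    """In-situ tree reduction over predicated elements, no MV operations.

    The result accumulates into vec[vi[0]], the first predicated element;
    returns that element's index (or None if the predicate is all-zero).
    """
    vi = [i for i in range(vl) if pred[i]]  # index redirection table
    n = len(vi)
    step = 1
    while step < n:
        # pair up elements step apart in the compacted index space
        for i in range(0, n, step * 2):
            other = i + step
            if other < n:
                vec[vi[i]] += vec[vi[other]]
        step *= 2
    return vi[0] if vi else None
```

Non-predicated elements are never read or written, and partial sums land in existing element positions rather than being moved.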

Implementor's Note: many SIMD-based Parallel Reduction Algorithms are implemented in hardware with MVs that ensure lane-crossing is minimised. The mistake which would be catastrophic to SVP64 to make is to then limit the Reduction Sequence for all implementors based solely and exclusively on what one specific internal microarchitecture does. In SIMD ISAs the internal SIMD Architectural design is exposed and imposed on the programmer. Cray-style Vector ISAs on the other hand provide convenient, compact and efficient encodings of abstract concepts. It is the Implementor's responsibility to produce a design that complies with the above algorithm, utilising internal Micro-coding and other techniques to transparently insert MV operations if necessary or desired, to give the level of efficiency or performance required.

Element-width overrides

Element-width overrides are best illustrated with a packed structure union in the C programming language. The following should be taken literally, and assumes a little-endian layout throughout:

typedef union {
    uint8_t  b[];
    uint16_t s[];
    uint32_t i[];
    uint64_t l[];
    uint8_t actual_bytes[8];
} el_reg_t;

el_reg_t int_regfile[128];

get_polymorphed_reg(reg, bitwidth, offset):
    el_reg_t res;
    res.l = 0; // TODO: going to need sign-extending / zero-extending
    if bitwidth == 8:
        res.b = int_regfile[reg].b[offset]
    elif bitwidth == 16:
        res.s = int_regfile[reg].s[offset]
    elif bitwidth == 32:
        res.i = int_regfile[reg].i[offset]
    elif bitwidth == 64:
        res.l = int_regfile[reg].l[offset]
    return res

set_polymorphed_reg(reg, bitwidth, offset, val):
    if (!reg.isvec):
        # not a vector: first element only, overwrites high bits
        int_regfile[reg].l[0] = val
    elif bitwidth == 8:
        int_regfile[reg].b[offset] = val
    elif bitwidth == 16:
        int_regfile[reg].s[offset] = val
    elif bitwidth == 32:
        int_regfile[reg].i[offset] = val
    elif bitwidth == 64:
        int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0 to fp127) are reinterpreted to be "starting points" in a byte-addressable memory. Vectors - which become just a virtual naming construct - effectively overlap.

It is extremely important for implementors to note that the only circumstance where upper portions of an underlying 64-bit register are zero'd out is when the destination is a scalar. The ideal register file has byte-level write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

  for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
       src1 = get_polymorphed_reg(RA, srcwid, irs1)
       src2 = get_polymorphed_reg(RB, srcwid, irs2)
       result = src1 + src2 # actual add here
       set_polymorphed_reg(RT, destwid, ird, result)
       if (!RT.isvec) break
    if (RT.isvec)  { ird += 1; }
    if (RA.isvec)  { irs1 += 1; }
    if (RB.isvec)  { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their element width, and the packing starts from the source (or destination) specified by the instruction.

Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar registers: lq for example, and LD with update. Some mathematical algorithms are more efficient when there are two outputs rather than one, providing feedback loops between elements (the most well-known being add with carry). 64-bit multiply for example actually internally produces a 128 bit result, which clearly cannot be stored in a single 64 bit register. Some ISAs recommend "macro op fusion": the practice of setting a convention whereby if two commonly used instructions (mullo, mulhi) use the same ALU but one selects the low part of an identical operation and the other selects the high part, then optimised micro-architectures may "fuse" those two instructions together, using Micro-coding techniques, internally.

The practice and convention of macro-op fusion however is not compatible with SVP64 Horizontal-First, because Horizontal Mode may only be applied to a single instruction at a time, and SVP64 is based on the principle of strict Program Order even at the element level. Thus it becomes necessary to add explicit more complex single instructions with more operands than would normally be seen in the average RISC ISA (3-in, 2-out, in some cases). If it was not for Power ISA already having LD/ST with update as well as Condition Codes and lq this would be hard to justify.

With limited space in the EXTRA Field, and Power ISA opcodes being only 32 bit, 5 operands is quite an ask. lq however sets a precedent: RTp stands for "RT pair". In other words the result is stored in RT and RT+1. For Scalar operations, following this precedent is perfectly reasonable. In Scalar mode, madded therefore stores the two halves of the 128-bit multiply into RT and RT+1.

What, then, of sv.madded? If the destination is hard-coded to RT and RT+1 the instruction is not useful when Vectorised because the output will be overwritten on the next element. To solve this is easy: define the destination registers as RT and RT+MAXVL respectively. This makes it easy for compilers to statically allocate registers even when VL changes dynamically.

Bear in mind that both RT and RT+MAXVL are starting points for Vectors, and bear in mind that element-width overrides still have to be taken into consideration, the starting point for the implicit destination is best illustrated in pseudocode:

 # demo of madded
 for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
       src1 = get_polymorphed_reg(RA, srcwid, irs1)
       src2 = get_polymorphed_reg(RB, srcwid, irs2)
       src3 = get_polymorphed_reg(RC, srcwid, irs3)
       result = src1*src2 + src3
       destmask = (1<<destwid)-1
       # store two halves of result, both start from RT.
       set_polymorphed_reg(RT, destwid, ird      , result&destmask)
       set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
       if (!RT.isvec) break
    if (RT.isvec)  { ird += 1; }
    if (RA.isvec)  { irs1 += 1; }
    if (RB.isvec)  { irs2 += 1; }
    if (RC.isvec)  { irs3 += 1; }

The significant part here is that the second half is stored starting not from RT+MAXVL at all: it is the element index that is offset by MAXVL, both halves actually starting from RT. If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements RT0 to RT2 are stored:

      0..31     32..63
 r0  unchanged unchanged
 r1  RT0.lo    RT1.lo
 r2  RT2.lo    unchanged
 r3  unchanged RT0.hi
 r4  RT1.hi    RT2.hi
 r5  unchanged unchanged

Note that all of the LO halves start from r1, but that the HI halves start from half-way into r3. The reason is that with MAXVL being 5 and elwidth being 32, this is the 5th element offset (in 32 bit quantities) counting from r1.

Programmer's note: accessing registers that have been placed starting on a non-contiguous boundary (half-way along a scalar register) can be inconvenient: REMAP can provide an offset but it requires extra instructions to set up. A simple solution is to ensure that MAXVL is rounded up such that the Vector ends cleanly on a contiguous register boundary. MAXVL=6 in the above example would achieve that.
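The placement shown in the table above can be checked with a small sketch that maps an element index to its (register, word-offset) position for a given element width (helper name illustrative):

```python
def element_position(RT, elwidth, element):
    """Return (reg, subelement) for element index `element` of a vector at RT."""
    per_reg = 64 // elwidth          # elements packed into each 64-bit register
    return RT + element // per_reg, element % per_reg

# madded example from the text: VL=3, MAXVL=5, RT=1, dest elwidth=32.
# LO halves are elements 0..2; HI halves are elements MAXVL..MAXVL+2.
MAXVL, RT, EW = 5, 1, 32
lo = [element_position(RT, EW, i) for i in range(3)]
hi = [element_position(RT, EW, i + MAXVL) for i in range(3)]
```

Subelement 0 is bits 0..31 and subelement 1 is bits 32..63, so the HI halves indeed begin half-way into r3, exactly as in the table; with MAXVL=6 instead, the HI halves would start cleanly at r4.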

Additional DRAFT Scalar instructions in 3-in 2-out form with an implicit 2nd destination: