TODO ideas

https://bugs.libre-soc.org/show_bug.cgi?id=527

  • idea 1: modify cmp (and other CR-generating operations?) with qualifiers that create a single-bit prefix vector in an int reg
  • idea 2: override the CR SO field in vector form to be a predicate bit per element
  • idea 3: predicates are read from bits of an int reg
  • idea 4: the SO CR field no longer holds overflow; instead it contains a copy of the int reg predicate element bit (passed through). Only when OE is set?

Requirements

  • must be easily implementable in any microarchitecture including:
    • small and large out-of-order
    • in-order
    • FSM (0.3 IPC or below)
    • single or multi-issue
  • must not compromise or penalise performance on any microarchitecture
  • must cover up to 64 elements
  • must still work with elwidth overrides

Additional Capabilities

  • two modes, "zeroing" and "non-zeroing": zeroing mode places a zero in each masked-out element result, whereas non-zeroing leaves the destination (result) element unmodified
  • the predicate must be invertible via an opcode bit (to avoid the need for an instruction which inverts all bits of the predicate mask). Both modes and the inversion are illustrated in the sketch after this list.
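
As a reference for the two modes and for predicate inversion, here is a minimal Python sketch (purely illustrative, not part of any specification; the names predicated_vector_op, pred_mask, zeroing and inv are invented for this example):

    def predicated_vector_op(op, src_a, src_b, dest, vl, pred_mask,
                             zeroing=False, inv=False):
        """Apply scalar operation 'op' element-wise under a predicate mask.

        pred_mask: integer whose bit i enables element i
        zeroing:   masked-out destination elements are set to zero
                   (non-zeroing leaves them unmodified)
        inv:       invert the predicate, avoiding a separate mask-invert op
        """
        for i in range(vl):
            bit = (pred_mask >> i) & 1
            if inv:
                bit ^= 1
            if bit:
                dest[i] = op(src_a[i], src_b[i])
            elif zeroing:
                dest[i] = 0
            # non-zeroing: dest[i] is left untouched

    # example: add elements 0 and 2 only, zeroing the rest
    a = [1, 2, 3, 4]
    b = [10, 20, 30, 40]
    d = [99, 99, 99, 99]
    predicated_vector_op(lambda x, y: x + y, a, b, d, vl=4,
                         pred_mask=0b0101, zeroing=True)
    assert d == [11, 0, 33, 0]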

Implementation note: even in in-order microarchitectures it is strongly advisable to use byte-level write-enable lines on the register file. In combination with 8-bit SIMD element overrides, this allows, in "non-zeroing" mode, the predicate mask to be ANDed directly with the regfile write-enable lines, achieving the required behaviour of leaving masked-out elements unmodified right down to the 8-bit element level. The alternative is a READ-MODIFY-MASK-WRITE cycle, which is costly and compromises performance, and which is avoided very simply with byte-level write-enables.
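
To illustrate why: with 8-bit elements, predicate bit i corresponds directly to write-enable line i of a 64-bit regfile row, so masked-out bytes are simply never written. A hedged behavioural sketch (Python, not actual HDL; write_with_byte_enables is an invented name) of the non-zeroing case:

    def write_with_byte_enables(regfile_row, result, pred_mask):
        """regfile_row, result: 64-bit ints; pred_mask: 8 bits, one per byte."""
        for byte in range(8):
            if (pred_mask >> byte) & 1:           # write-enable asserted
                mask = 0xff << (byte * 8)
                regfile_row = (regfile_row & ~mask) | (result & mask)
            # write-enable deasserted: byte left unmodified (non-zeroing)
        return regfile_row

    # example: only bytes 0 and 7 are enabled
    old = 0x1111111111111111
    new = write_with_byte_enables(old, 0xffffffffffffffff, 0b10000001)
    assert new == 0xff111111111111ff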

General implications and considerations

OE=1 and SO

XER.SO (sticky overflow) is known to cause massive slowdown in pretty much every microarchitecture and it definitely compromises the performance of out-of-order systems. The reason is that it introduces a READ-MODIFY-WRITE cycle between XER.SO and CR0 (which contains a copy of the SO field after inclusion of the overflow). The result and source registers branch off as RaW and WaR hazards from this RMW chain.
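
For illustration only, a much-simplified Python model of that chain (the CR0 bit ordering and the overflow rule shown here are assumptions for the example, not the full Power ISA semantics):

    MASK64 = (1 << 64) - 1

    def signed64(x):
        return x - (1 << 64) if x & (1 << 63) else x

    def addo_rc(ra, rb, xer_so):
        """Very rough model of 'addo.' (OE=1 Rc=1): note the RMW on xer_so."""
        result = (ra + rb) & MASK64
        ov = int(signed64(result) != signed64(ra) + signed64(rb))
        xer_so |= ov                  # READ old SO, OR in overflow, WRITE back
        s = signed64(result)
        # illustrative packing: LT, GT, EQ, SO with SO in the LSB
        cr0 = ((s < 0) << 3) | ((s > 0) << 2) | ((s == 0) << 1) | xer_so
        return result, cr0, xer_so

    _, cr0, so = addo_rc((1 << 63) - 1, 1, xer_so=0)   # signed overflow
    assert so == 1 and (cr0 & 1) == 1                  # SO sticky, copied to CR0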

This is even before predication or vectorization is added on top: these are existing weaknesses in OpenPOWER as a scalar ISA.

Given these well-known weaknesses that compromise performance, and the fact that very little use of OE=1 is actually made outside of unit tests and Conformance Tests, it makes very little sense to continue to propagate OE=1 in the Vectorization context of SV.

Vector Chaining

(see masked vector chaining)

One of the design principles of SV is that the use of VL should be as closely equivalent to a direct substitution of the scalar operations of the hardware for-loop as possible, as if those looped operations were actually in the instruction stream (as scalar operations) rather than being issued from the Vector loop.

The implications here are that register dependency hazards still have to be respected inter-element even when (conceptually) pushed into the instruction stream from a hardware for-loop.
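
A conceptual Python sketch of that principle (invented names, element-at-a-time model, no overlap or reordering shown):

    def sv_issue(scalar_op, rd, ra, rb, vl):
        """Expand one vector instruction into VL scalar element operations.

        Each yielded tuple is, conceptually, an ordinary scalar instruction,
        so hazards are tracked per element register (rd+i, ra+i, rb+i)."""
        for i in range(vl):
            yield (scalar_op, rd + i, ra + i, rb + i)

    # example: an "sv.add" with RT=4, RA=8, RB=12 and VL=3 behaves as if
    # three scalar adds had appeared in the instruction stream
    assert list(sv_issue("add", 4, 8, 12, 3)) == [
        ("add", 4, 8, 12), ("add", 5, 9, 13), ("add", 6, 10, 14)]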

Using a multi-issue out-of-order engine as the underlying microarchitectural basis, this is not as difficult to achieve as it first seems (the hard work having been done by the Dependency Matrices). In addition, a multi-issue out-of-order engine should also be able to cope with Vector Chaining, as long as false (unnecessary) Dependency Hazards are not introduced between Vectors when the dependencies actually only exist between elements within a Vector.

The concept of recognising that it is the elements within the Vector that have Dependency Hazards rather than the Vectors themselves is what permits Cray-style "chaining".

This "false/unnecessary hazard" condition eliminates and/or compromises the performance or drives up resource utilisation in at least two of the proposals below.

Proposals

Adding new predicate register file type and associated opcodes

This idea, adding new predicate manipulation opcodes, violates the fundamental SV design principle of not adding new vector-related instructions unless essential or compelling.

All other proposals utilise existing scalar opcodes which already happen to have bitmanipulation, arithmetic, and inter-file transfer capability (mfcr, mfspr etc). They also involve adding extra scalar bitmanip opcodes, such that, by utilising scalar registers as predicate masks, SV achieves "par" with other Cray-style (variable-length) Vector ISAs without having to add any actual Vector opcodes.

In addition those scalar 64-bit bitmanip operations, although some of them are obscure and unusual in the scalar world, do actually have practical applications outside of a vector context.

(Hilariously and confusingly, those very same scalar bitmanip opcodes may themselves be SV-vectorized; however, with VL only going up to 64 elements, it is not anticipated that SV-bitmanip would be used to generate 64-bit predicate masks when a single 64-bit scalar operation will suffice.)

The summary is that adding a full set of special vector opcodes just for manipulating predicate masks, and for transferring them to other regfiles (a la mfcr), is anomalous, costly, and unnecessary.

CR-based predication proposal

This involves treating each CR as providing one bit of predicate. If there is limited space in SVPrefix it will be a fixed bit (bit 0); otherwise the bit (0 to 3 of the CR) may be selected through a field in the opcode.
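
A hedged sketch of how one predicate bit per element would be obtained (Python, illustrative only; the list-of-4-bit-CRs model and the sel field are assumptions for this example):

    def cr_predicate_bits(crs, vl, sel=0):
        """crs: list of 4-bit CR values, one per element.
        sel: which of the 4 CR bits supplies the predicate (fixed to 0 if
        SVPrefix has no room to encode a selector).
        Returns VL predicate bits packed into an integer."""
        mask = 0
        for i in range(vl):
            mask |= ((crs[i] >> sel) & 1) << i
        return mask

    # example: take bit 0 of each of 4 CRs as the predicate
    assert cr_predicate_bits([0b0001, 0b0000, 0b0011, 0b1000], vl=4) == 0b0101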

The crucial advantage of this proposal is that the Function Units can simply have one more register (a CR) added to their Read Dependency Hazards, just like all the other incoming source registers, and there is no need for a special "Predicate Shadow Function Unit".

Another big advantage is that unpredicated operations just set the predicate to an immediate of all 1s, so the actual ALUs require very little modification.

A disadvantage is that supporting the selection of 8 bits of predicate from 8 CRs (via the "full" 8x CR port) would require allocating a 32-bit datapath to the relevant FUs. This could be reduced by adding yet another type of special virtual register port or datapath that masks out the required predicate bits closer to the regfile.

Another disadvantage is that the CR regfile needs to be expanded from 8x 4-bit CRs to a minimum of 64x, or preferably 128x, 4-bit CRs (beyond that, predicates can be transferred into INT regs using vectorized mfcr and mtcrf). This is a huge number of CR regs, each of which will need a DM column in the FU-REGs Matrix; however this cost can be mitigated through regfile caching, bringing FU-REGs column numbers back down to "sane" levels.

Predicated SIMD HI32-LO32 FUs

An analysis of changing the element widths (for SIMD) gives the following potential arrangements, for which it is assumed that 2x 32-bit FUs "pair up", HI32 and LO32 style, for single 64-bit arithmetic (summarised in the sketch after the list below).

  • 64-bit operations. 2 FUs and their DM rows "collaborate"
    • 2x 32-bit source registers gang together for 64 bit input
    • 2x 32-bit output registers likewise for output
    • 1x CR (from the LO32 FU DM side) for a predicate bit
  • 32-bit operations. 2 FUs collaborate 2x32 SIMD style
    • 2x 32-bit source registers go into separate input halves of the SIMD ALU
    • 2x 32-bit outputs likewise for output
    • 2x CRs (one for HI32, one for LO32) for a predicate bit for each of the 2x32bit SIMD pair
  • 16-bit operations. 2 FUs collaborate 4x16 SIMD style
    • 2x 2x16-bit source registers group together to provide 4x16 inputs
    • likewise for outputs
    • EITHER 2x 2xCRs (2 for HI32, 2 for LO32) provide 4 predicate bits
    • OR 1x 8xCR "full" port is utilised (on LO32 FU) followed by masking at the ALU behind the FU pair, extracting the required 4 predicate bits
  • 8-bit operations. 2 FUs collaborate 8x8 SIMD style
    • 2x 4x8-bit source registers
    • likewise for outputs
    • 1x 8xCR "full" port is utilised (on LO32 FU) and all 8 bits are passed through to the underlying 64-bit ALU to perform 8x 8-bit predicated operations

Predicated SIMD straight 64-bit FUs

  • 64-bit operations. 1 FU, 1 64 bit operation
    • 1x 64-bit source register
    • 1x 64-bit output register
    • 1x CR for a predicate bit
  • 32-bit operations. 1 FUs 2x32 SIMD style
    • 1x 64-bit source register dynamically splits to 2x 32-bit
    • 1x 64-bit output likewise
    • 2x CRs for a predicate bit for each of the 2x32bit SIMD pair
  • 16-bit operations. 1 FUs 4x16 SIMD style
    • 1x 4x16-bit source registers
    • likewise for outputs
    • 1x 8xCR "full" port is utilised followed by masking at the ALU behind the FU pair, extracting the required 4 predicate bits
  • 8-bit operations. 1 FU 8x8 SIMD style
    • 1x 8x8-bit source registers
    • likewise for outputs
    • 1x 8xCR "full" port is utilised LO32 and all 8 bits used to perform 8x 8-bit predicated operations

Here again the underlying 64-bit ALU requires the 8x predicate bits to cover the 8x 8-bit SIMD operations (7 of which are dormant/unused in 64-bit predicated operations but still have to be there to cover 8x 8-bit SIMD).

Given that the initial idea of using the "full" (virtual) 32-bit CR read port (which reads all 8 CRs, CR0-CR7, simultaneously) would require a 32-bit broadcast bus to every predication-capable Function Unit, the bus bandwidth can again be reduced by performing the selection of the mask bits (bit 0 through bit 3 of each CR) closer to the regfile, i.e. before hitting the broadcast bus.
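
A hedged sketch of that selection-near-the-regfile step (Python, illustrative; the packing of CR0..CR7 into the 32-bit port value is an assumption made for this example):

    def select_predicate_near_regfile(cr_port_32bit, sel=0):
        """cr_port_32bit: CR0..CR7 packed 4 bits each (CR0 in the low nibble,
        purely for illustration). sel: which CR bit (0-3) is used.
        Returns the 8 predicate bits that actually go onto the broadcast bus."""
        bits = 0
        for i in range(8):
            cr = (cr_port_32bit >> (4 * i)) & 0xf
            bits |= ((cr >> sel) & 1) << i
        return bits

    # example: only CR0 and CR2 have their bit 0 set
    assert select_predicate_near_regfile(0x00000101) == 0b00000101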

One scalar int per predicate element

Similar to RVV, and to the one-CR-per-element concept above, the idea here is to use the LSB of each element in a vector of predicates. This idea has quite a lot of merit.

Implementation-wise just like in the CR-based case a special regfile port could be added that gets the LSB of each scalar integer register and routes them through to the broadcast bus.
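
A sketch of what that port would gather (Python, illustrative; the dict-of-registers model and names are assumptions):

    def lsb_predicate(int_regs, p, vl):
        """Element i's predicate bit is the LSB of scalar int register p+i."""
        mask = 0
        for i in range(vl):
            mask |= (int_regs[p + i] & 1) << i
        return mask

    # example: registers 8..11 hold 5, 2, 7, 0 -> predicate 0b0101
    assert lsb_predicate({8: 5, 9: 2, 10: 7, 11: 0}, p=8, vl=4) == 0b0101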

The disadvantages appear on closer analysis:

  • Unlike the "full" CR port (which reads 8x CRs CR0-7 in one hit) trying the same trick on the scalar integer regfile, to obtain just 8 predicate bits (each being an LSB of a given 64 bit scalar int), would require a whopping 8x64bit set of reads to the INT regfile instead of a scant 1x32bit read. Resource-wise, then, this idea is expensive.
  • With predicate bits being distributed out amongst 64 bit scalar registers, scalar bitmanipulation operations that can be performed after transferring Vectors of CMP operations from CRs to INTs (vectorized-mfcr) are more challenging and costly. Rather than use vectorized mfcr, complex transfers of the LSBs into a single scalar int are required.

In a "normal" Vector ISA this would be solved by adding opcodes that perform the kinds of bitmanipulation operations normally needed for predicate masks, as specialist operations on those masks. However for SV the rule has been set: "no unnecessary additional Vector Instructions" because it is possible to use existing PowerISA scalar bitmanip opcodes to cover the same job.

The problem is that vectors of LSBs need to be transferred to scalar int regs, bitmanip operations carried out, and then transferred back, which is exceptionally costly.

On balance this is a less favourable option than vectorizing CRs.

Scalar (single) integer as predicate, with one DM row

This idea has merit in that, to perform predicate bitmanip operations, the predicate is already in scalar INT reg form, so standard scalar INT bitmanip operations can be applied straight away. Vectorized mfcr can easily be used to get CMP results or Vectorized Rc=1 CRs into the scalar INT.
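
For example (plain Python standing in for scalar PowerISA bitmanip; the mask values shown are invented):

    vl = 8
    p_gt  = 0b10110010   # e.g. a vectorised cmp result moved to an INT reg
    p_odd = 0b01010101   # some other per-element condition

    p_both    = p_gt & p_odd                        # scalar AND combines masks
    p_neither = ~(p_gt | p_odd) & ((1 << vl) - 1)   # scalar OR/NOT, trimmed to VL
    count     = bin(p_both).count("1")              # popcount: elements passing
    assert p_both == 0b00010000 and count == 1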

This idea has several disadvantages.

  • the single DM entry for the entire 64 bits creates a read hazard that has to be resolved through the addition of a special Shadowing Function Unit. Only when the entire predicate is available can the die-cancel/ok be pulled on the FUs whose elements each bit covers.
  • this situation is exacerbated if one vector creates a predicate mask that is then used to mask immediately following instructions. Ordinarily (i.e. without the predicate involved), Cray-style "chaining" would be possible. The single DM entry for the entire predicate mask prohibits this because the subsequent operations can only proceed when the entire mask has been computed and placed in full into the scalar integer register.
  • Allocation of bits to FUs gets particularly complex for SIMD (elwidth overrides) requiring shift and mask logic that is simply not needed compared to "one-for-one" schemes (above)

Overall there is very little in favour of this concept.

Scalar (single) integer as predicate with one DM row per bit

The Dependency Matrix logic from the CR proposal applies equally favourably to this proposal. However there are additional caveats that weigh against it:

  • Like the single-scalar-DM-entry proposal, the integer scalar register also has to be covered by a single DM entry (for when it is used as an ordinary integer register).
  • Unlike that proposal, it must additionally be covered by a 64-wide suite of bit-level Dependency Matrix Rows. These numbers are so massive as to cause some concern.
  • A solution is to introduce a virtual register naming scheme; however this also introduces huge complexity, as the register cache has to be capable of swapping reservations between 64 bit-level entries and the full 64-bit scalar level whilst keeping the Dependency Matrices synchronised.

It is enormously complex and likely to result in debugging, verification and ongoing maintenance difficulties.

Schemes which split (a scalar) integer reg into mask "chunks"

These ideas are based on the principle that each chunk of 8 (or 16) bits of a scalar integer register may be covered by its own DM column in FU-REGs. For example, 8 chunks of a scalar 64-bit integer register used as a bit-level predicate mask over 64 vector elements would require 8 DM entries.

This would, for vector sizes of 8, solve the "chaining" problem reasonably well, even when two FUs (or two clock cycles) were required to deal with 4 elements at a time: the first "chunk" of predicate bits, produced by the first compare, would be ready for use whilst the second compare was still being issued.
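
A minimal sketch of the chunking itself (Python, illustrative; chunk sizes other than 8 would follow the same pattern):

    def mask_chunks(mask64, chunk_bits=8):
        """Split a 64-bit mask register into independent chunks, each of which
        would get its own FU-REGs DM column and become available for chaining
        as soon as the compares producing its elements complete."""
        return [(mask64 >> (i * chunk_bits)) & ((1 << chunk_bits) - 1)
                for i in range(64 // chunk_bits)]

    chunks = mask_chunks(0x8000_0000_0000_00ff)
    assert chunks[0] == 0xff      # chunk 0 usable while later compares run
    assert chunks[7] == 0x80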

It would also require much smaller DMs than the single-bit-per-element ideas.

The problems start when trying to allocate bits of predicate to units. Just like the single-DM-row-per-entire-scalar-reg case, a shadow-capable Predicate Function Unit is now required (already determined to be costly), except that with 8 chunks requiring 8 Predicate FUs the problem is now made 8x worse.

Not only that, but it is even more complex when trying to bring in virtual register caching in order to bring down the overall FU-REGs DM row count. Although the numbers are much lower (8x 8-bit chunks of a scalar int only require 8 DM Rows and 8 virtual subdivisions), this is per in-flight register.

The additional complexity of the cross-over point between use as a chunked predicate mask and when the same underlying register is used as an actual scalar (or even vector) integer register is also carried over from the bit-level DM subdivision case.

Out-of-order systems, to be effective, require several operations to be "in-flight" (POWER10 has up to 1,000 in-flight instructions), and if every predicated vector operation needs its own 8-chunked scalar register the scheme becomes exceedingly complex very quickly.

Even more than that, in a predicated chaining scenario, when computing the mask from a vector "compare", it is troublesome even to think through how the chunk groupings would be implemented, which is itself a bad sign. It is suspected that chaining will be complex, or adversely affected by certain combinations of element width.

Overall, this idea, which initially seems to save resources, in fact brings together and requires all of the least favourable implementation aspects of the other proposals.