LLVM 17.0.0git
RISCVISelLowering.h
1//===-- RISCVISelLowering.h - RISC-V DAG Lowering Interface -----*- C++ -*-===//
2//
3// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4// See https://llvm.org/LICENSE.txt for license information.
5// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6//
7//===----------------------------------------------------------------------===//
8//
9// This file defines the interfaces that RISC-V uses to lower LLVM code into a
10// selection DAG.
11//
12//===----------------------------------------------------------------------===//
13
14#ifndef LLVM_LIB_TARGET_RISCV_RISCVISELLOWERING_H
15#define LLVM_LIB_TARGET_RISCV_RISCVISELLOWERING_H
16
17#include "RISCV.h"
22#include <optional>
23
24namespace llvm {
25class RISCVSubtarget;
26struct RISCVRegisterInfo;
27namespace RISCVISD {
28enum NodeType : unsigned {
35 /// Select with condition operator - This selects between a true value and
36 /// a false value (ops #3 and #4) based on the boolean result of comparing
37 /// the lhs and rhs (ops #0 and #1) of a conditional expression with the
38 /// condition code in op #2, an XLenVT constant from the ISD::CondCode enum.
39 /// The lhs and rhs are XLenVT integers. The true and false values can be
40 /// integer or floating point.
46
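The operand layout described above can be modelled in standalone C++. This is only an illustrative sketch of the node's semantics (the enum value names are elided on this page, and `selectCC` plus the two-value `CondCode` enum are hypothetical helpers, not LLVM API):

```cpp
#include <cstdint>

// Hypothetical scalar model of the select-with-condition node described
// above: ops #0/#1 are the XLenVT lhs/rhs, op #2 is the condition code,
// and ops #3/#4 are the true/false values.
enum class CondCode { SETEQ, SETLT };

int64_t selectCC(int64_t LHS, int64_t RHS, CondCode CC, int64_t TrueV,
                 int64_t FalseV) {
  bool Cond = (CC == CondCode::SETEQ) ? (LHS == RHS) : (LHS < RHS);
  return Cond ? TrueV : FalseV;
}
```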
47 // Add the Lo 12 bits from an address. Selected to ADDI.
49 // Get the Hi 20 bits from an address. Selected to LUI.
51
52 // Represents an AUIPC+ADDI pair. Selected to PseudoLLA.
54
55 // Selected as PseudoAddTPRel. Used to emit a TP-relative relocation.
57
58 // Load address.
60
 61 // Multiply high for signed x unsigned.
63 // RV64I shifts, directly matching the semantics of the named RISC-V
64 // instructions.
68 // 32-bit operations from RV64M that can't be simply matched with a pattern
69 // at instruction selection time. These have undefined behavior for division
70 // by 0 or overflow (divw) like their target independent counterparts.
74 // RV64IB rotates, directly matching the semantics of the named RISC-V
75 // instructions.
78 // RV64IZbb bit counting instructions directly matching the semantics of the
79 // named RISC-V instructions.
82
83 // RV64IZbb absolute value for i32. Expanded to (max (negw X), X) during isel.
85
86 // FPR<->GPR transfer operations when the FPR is smaller than XLEN, needed as
87 // XLEN is the only legal integer width.
88 //
 89 // FMV_H_X matches the semantics of the FMV.H.X instruction.
 90 // FMV_X_ANYEXTH is similar to FMV.X.H but has an any-extended result.
 91 // FMV_X_SIGNEXTH is similar to FMV.X.H and has a sign-extended result.
 92 // FMV_W_X_RV64 matches the semantics of the FMV.W.X instruction.
 93 // FMV_X_ANYEXTW_RV64 is similar to FMV.X.W but has an any-extended result.
 94 //
 95 // These semantics make it more convenient to write DAG combines that
 96 // remove unnecessary GPR->FPR->GPR moves.
102 // FP to XLen int conversions. Corresponds to fcvt.l(u).s/d/h on RV64 and
103 // fcvt.w(u).s/d/h on RV32. Unlike FP_TO_S/UINT these saturate out of
104 // range inputs. These are used for FP_TO_S/UINT_SAT lowering. Rounding mode
105 // is passed as a TargetConstant operand using the RISCVFPRndMode enum.
108 // FP to 32 bit int conversions for RV64. These are used to keep track of the
109 // result being sign extended to 64 bit. These saturate out of range inputs.
110 // Used for FP_TO_S/UINT and FP_TO_S/UINT_SAT lowering. Rounding mode
111 // is passed as a TargetConstant operand using the RISCVFPRndMode enum.
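As a point of reference, the saturating behavior these conversions implement for `FP_TO_SINT_SAT` (NaN converts to 0, and out-of-range inputs clamp to the destination range) can be sketched in plain C++. `fpToSIntSat32` is an illustrative helper written for this page, not LLVM code, and it ignores the rounding-mode operand (it truncates toward zero):

```cpp
#include <cmath>
#include <cstdint>
#include <limits>

// Standalone model of saturating fp-to-signed-i32 conversion: unlike plain
// FP_TO_SINT (undefined for out-of-range inputs), out-of-range values clamp
// to INT32_MIN/INT32_MAX and NaN maps to 0.
int32_t fpToSIntSat32(double X) {
  if (std::isnan(X))
    return 0;
  if (X <= static_cast<double>(std::numeric_limits<int32_t>::min()))
    return std::numeric_limits<int32_t>::min();
  if (X >= static_cast<double>(std::numeric_limits<int32_t>::max()))
    return std::numeric_limits<int32_t>::max();
  return static_cast<int32_t>(X); // in range: truncate toward zero
}
```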
114
115 // Rounds an FP value to its corresponding integer in the same FP format.
116 // First operand is the value to round, the second operand is the largest
117 // integer that can be represented exactly in the FP format. This will be
118 // expanded into multiple instructions and basic blocks with a custom
119 // inserter.
121
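The expansion idea behind the "largest exactly-representable integer" operand can be sketched for `float` (a simplification assuming the default round-to-nearest mode; the real custom inserter also honors the node's rounding-mode handling): every float with magnitude at least 2^23 is already an integer, and for smaller magnitudes, adding and then subtracting 2^23 forces rounding to an integer in the current FP rounding mode.

```cpp
#include <cmath>

// Illustrative model, not LLVM's emitted code. 8388608.0f == 2^23 is the
// threshold at which float's ulp reaches 1, so larger magnitudes need no
// rounding; the add/sub trick handles the rest, and copysign restores the
// sign (including -0.0) lost by fabs.
float roundToIntegralNearest(float X) {
  const float Thresh = 8388608.0f; // 2^23
  if (std::isnan(X) || std::fabs(X) >= Thresh)
    return X; // NaN, or already integral
  float R = std::fabs(X) + Thresh - Thresh; // rounds in current FP mode
  return std::copysign(R, X);
}
```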
122 // READ_CYCLE_WIDE - A read of the 64-bit cycle CSR on a 32-bit target
123 // (returns (Lo, Hi)). It takes a chain operand.
125 // brev8, orc.b, zip, and unzip from Zbb and Zbkb. All operands are i32 or
126 // XLenVT.
131 // Vector Extension
132 // VMV_V_X_VL matches the semantics of vmv.v.x but includes an extra operand
133 // for the VL value to be used for the operation. The first operand is
134 // the passthru operand.
136 // VFMV_V_F_VL matches the semantics of vfmv.v.f but includes an extra operand
137 // for the VL value to be used for the operation. The first operand is
138 // the passthru operand.
140 // VMV_X_S matches the semantics of vmv.x.s. The result is always XLenVT sign
141 // extended from the vector element size.
143 // VMV_S_X_VL matches the semantics of vmv.s.x. It carries a VL operand.
145 // VFMV_S_F_VL matches the semantics of vfmv.s.f. It carries a VL operand.
147 // Splats a 64-bit value that has been split into two i32 parts. This is
148 // expanded late to two scalar stores and a stride 0 vector load.
149 // The first operand is the passthru operand.
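The split carried by this node's operands can be shown with a small standalone helper (a hypothetical illustration of the lo/hi decomposition on RV32, not LLVM code; the actual expansion then stores both halves to a stack slot and broadcasts them with a stride-0, 64-bit-element vector load):

```cpp
#include <cstdint>

// The 64-bit splat value is carried as two i32 halves.
struct SplitI64 {
  uint32_t Lo;
  uint32_t Hi;
};

SplitI64 splitForSplat(uint64_t V) {
  return {static_cast<uint32_t>(V), static_cast<uint32_t>(V >> 32)};
}

// Rejoining the halves recovers the original splat value, which is what the
// in-memory {lo, hi} pair read back as one 64-bit element achieves.
uint64_t rejoin(SplitI64 S) {
  return (static_cast<uint64_t>(S.Hi) << 32) | S.Lo;
}
```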
151 // Read VLENB CSR
153 // Truncates an RVV integer vector by one power-of-two. Carries both an extra
154 // mask and VL operand.
156 // Matches the semantics of vslideup/vslidedown. The first operand is the
157 // pass-thru operand, the second is the source vector, the third is the
158 // XLenVT index (either constant or non-constant), the fourth is the mask
159 // and the fifth the VL.
162 // Matches the semantics of vslide1up/vslide1down. The first operand is the
163 // passthru operand, the second is the source vector, the third is the XLenVT
164 // scalar value. The fourth and fifth operands are the mask and VL operands.
167 // Matches the semantics of the vid.v instruction, with a mask and VL
168 // operand.
170 // Matches the semantics of the vfncvt.rod instruction (convert double-width
171 // float to single-width float, rounding towards odd). Takes a double-width
172 // float vector and produces a single-width float vector. Also has a mask and
173 // VL operand.
175 // These nodes match the semantics of the corresponding RVV vector reduction
176 // instructions. They produce a vector result which is the reduction
177 // performed over the second vector operand plus the first element of the
178 // third vector operand. The first operand is the pass-thru operand. The
179 // second operand is an unconstrained vector type, and the result, first, and
180 // third operand's types are expected to be the corresponding full-width
181 // LMUL=1 type for the second operand:
182 // nxv8i8 = vecreduce_add nxv8i8, nxv32i8, nxv8i8
183 // nxv2i32 = vecreduce_add nxv2i32, nxv8i32, nxv2i32
184 // The difference in types does introduce extra vsetvli instructions, but
185 // it also reduces the number of registers consumed per reduction.
186 // Also has a mask and VL operand.
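The scalar effect of the reduction described above can be modelled directly (a sketch assuming an unmasked add reduction at full VL; `vecReduceAdd` is a hypothetical helper, and the LMUL typing constraints do not appear in a scalar model):

```cpp
#include <cstdint>
#include <vector>

// Element 0 of the result is the reduction over every element of the source
// vector (second operand) plus element 0 of the accumulator (third operand).
uint32_t vecReduceAdd(const std::vector<uint32_t> &Src, uint32_t Acc0) {
  uint32_t R = Acc0;
  for (uint32_t E : Src)
    R += E; // unsigned wrap-around matches fixed-width integer addition
  return R;
}
```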
199
200 // Vector binary ops with a merge as a third operand, a mask as a fourth
201 // operand, and VL as a fifth operand.
219
224
233
234 // Vector unary ops with a mask as a second operand and VL as a third operand.
238 FCOPYSIGN_VL, // Has a merge operand
244 VFCVT_RM_X_F_VL, // Has a rounding mode operand.
245 VFCVT_RM_XU_F_VL, // Has a rounding mode operand.
248 VFCVT_RM_F_X_VL, // Has a rounding mode operand.
249 VFCVT_RM_F_XU_VL, // Has a rounding mode operand.
252
253 // Vector FMA ops with a mask as a fourth operand and VL as a fifth operand.
258
259 // Widening instructions with a merge value as a third operand, a mask as a
260 // fourth operand, and VL as a fifth operand.
272
273 // Narrowing logical shift right.
274 // Operands are (source, shift, passthru, mask, vl)
276
277 // Vector compare producing a mask. Fourth operand is input mask. Fifth
278 // operand is VL.
280
281 // Vector select with an additional VL operand. This operation is unmasked.
283 // Vector select with operand #2 (the value when the condition is false) tied
284 // to the destination and an additional VL operand. This operation is
285 // unmasked.
287
288 // Mask binary operators.
292
293 // Set mask vector to all zeros or ones.
296
297 // Matches the semantics of vrgather.vx and vrgather.vv with extra operands
298 // for passthru and VL. Operands are (src, index, mask, passthru, vl).
302
303 // Vector sign/zero extend with additional mask & VL operands.
306
307 // vcpop.m with additional mask and VL operands.
309
310 // vfirst.m with additional mask and VL operands.
312
313 // Reads the value of a CSR.
314 // The first operand is a chain pointer. The second specifies the address of
315 // the required CSR. Two results are produced: the read value and the new
316 // chain pointer.
318 // Writes a value to a CSR.
319 // The first operand is a chain pointer, the second specifies the address of
320 // the required CSR, and the third is the value to write. The result is the
321 // new chain pointer.
323 // Reads and writes the value of a CSR.
324 // The first operand is a chain pointer, the second specifies the address of
325 // the required CSR, and the third is the value to write. Two results are
326 // produced: the value read before the modification and the new chain pointer.
328
329 // FP to 32 bit int conversions for RV64. These are used to keep track of the
330 // result being sign extended to 64 bit. These saturate out of range inputs.
343
344 // WARNING: Do not add anything at the end unless you want the node to
345 // have a memop! Starting from FIRST_TARGET_MEMORY_OPCODE, all
346 // opcodes will be treated as target memory ops!
347
348 // Load address.
351
357};
358} // namespace RISCVISD
359
360class RISCVTargetLowering : public TargetLowering {
361 const RISCVSubtarget &Subtarget;
362
363public:
364 explicit RISCVTargetLowering(const TargetMachine &TM,
365 const RISCVSubtarget &STI);
366
367 const RISCVSubtarget &getSubtarget() const { return Subtarget; }
368
369 bool getTgtMemIntrinsic(IntrinsicInfo &Info, const CallInst &I,
370 MachineFunction &MF,
371 unsigned Intrinsic) const override;
372 bool isLegalAddressingMode(const DataLayout &DL, const AddrMode &AM, Type *Ty,
373 unsigned AS,
374 Instruction *I = nullptr) const override;
375 bool isLegalICmpImmediate(int64_t Imm) const override;
376 bool isLegalAddImmediate(int64_t Imm) const override;
377 bool isTruncateFree(Type *SrcTy, Type *DstTy) const override;
378 bool isTruncateFree(EVT SrcVT, EVT DstVT) const override;
379 bool isZExtFree(SDValue Val, EVT VT2) const override;
380 bool isSExtCheaperThanZExt(EVT SrcVT, EVT DstVT) const override;
381 bool signExtendConstant(const ConstantInt *CI) const override;
382 bool isCheapToSpeculateCttz(Type *Ty) const override;
383 bool isCheapToSpeculateCtlz(Type *Ty) const override;
384 bool isMaskAndCmp0FoldingBeneficial(const Instruction &AndI) const override;
385 bool hasAndNotCompare(SDValue Y) const override;
386 bool hasBitTest(SDValue X, SDValue Y) const override;
389 unsigned OldShiftOpcode, unsigned NewShiftOpcode,
390 SelectionDAG &DAG) const override;
391 /// Return true if the (vector) instruction I will be lowered to an instruction
392 /// with a scalar splat operand for the given Operand number.
393 bool canSplatOperand(Instruction *I, int Operand) const;
394 /// Return true if a vector instruction will lower to a target instruction
395 /// able to splat the given operand.
396 bool canSplatOperand(unsigned Opcode, int Operand) const;
397 bool shouldSinkOperands(Instruction *I,
398 SmallVectorImpl<Use *> &Ops) const override;
399 bool shouldScalarizeBinop(SDValue VecOp) const override;
400 bool isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const override;
401 int getLegalZfaFPImm(const APFloat &Imm, EVT VT) const;
402 bool isFPImmLegal(const APFloat &Imm, EVT VT,
403 bool ForCodeSize) const override;
404 bool isExtractSubvectorCheap(EVT ResVT, EVT SrcVT,
405 unsigned Index) const override;
406
407 bool isIntDivCheap(EVT VT, AttributeList Attr) const override;
408
409 bool preferScalarizeSplat(unsigned Opc) const override;
410
411 bool softPromoteHalfType() const override { return true; }
412
413 /// Return the register type for a given MVT, ensuring vectors are treated
414 /// as a series of gpr sized integers.
416 EVT VT) const override;
417
418 /// Return the number of registers for a given MVT, ensuring vectors are
419 /// treated as a series of gpr sized integers.
422 EVT VT) const override;
423
424 bool shouldFoldSelectWithIdentityConstant(unsigned BinOpcode,
425 EVT VT) const override;
426
427 /// Return true if the given shuffle mask can be codegen'd directly, or if it
428 /// should be stack expanded.
429 bool isShuffleMaskLegal(ArrayRef<int> M, EVT VT) const override;
430
431 bool isMultiStoresCheaperThanBitsMerge(EVT LTy, EVT HTy) const override {
432 // If the pair to store is a mixture of float and int values, we will
433 // save two bitwise instructions and one float-to-int instruction, at
434 // the cost of one extra store. There is potentially a more significant
435 // benefit because it avoids the float->int domain switch for the input
436 // value, so it is more likely a win.
437 if ((LTy.isFloatingPoint() && HTy.isInteger()) ||
438 (LTy.isInteger() && HTy.isFloatingPoint()))
439 return true;
440 // If the pair only contains int values, we will save two bitwise
441 // instructions at the cost of one extra store (consuming one more
442 // store-buffer entry). Since the benefit is less clear-cut, we leave such
443 // a pair out until we have a testcase proving it is a win.
444 return false;
445 }
446
447 bool
449 unsigned DefinedValues) const override;
450
451 // Provide custom lowering hooks for some operations.
452 SDValue LowerOperation(SDValue Op, SelectionDAG &DAG) const override;
454 SelectionDAG &DAG) const override;
455
456 SDValue PerformDAGCombine(SDNode *N, DAGCombinerInfo &DCI) const override;
457
459 const APInt &DemandedElts,
460 TargetLoweringOpt &TLO) const override;
461
462 void computeKnownBitsForTargetNode(const SDValue Op,
463 KnownBits &Known,
464 const APInt &DemandedElts,
465 const SelectionDAG &DAG,
466 unsigned Depth) const override;
468 const APInt &DemandedElts,
469 const SelectionDAG &DAG,
470 unsigned Depth) const override;
471
472 const Constant *getTargetConstantFromLoad(LoadSDNode *LD) const override;
473
474 // This method returns the name of a target specific DAG node.
475 const char *getTargetNodeName(unsigned Opcode) const override;
476
477 ConstraintType getConstraintType(StringRef Constraint) const override;
478
479 unsigned getInlineAsmMemConstraint(StringRef ConstraintCode) const override;
480
481 std::pair<unsigned, const TargetRegisterClass *>
483 StringRef Constraint, MVT VT) const override;
484
485 void LowerAsmOperandForConstraint(SDValue Op, std::string &Constraint,
486 std::vector<SDValue> &Ops,
487 SelectionDAG &DAG) const override;
488
490 MachineBasicBlock *EmitInstrWithCustomInserter(MachineInstr &MI,
491 MachineBasicBlock *BB) const override;
492
494 SDNode *Node) const override;
495
497 EVT VT) const override;
498
499 bool shouldFormOverflowOp(unsigned Opcode, EVT VT,
500 bool MathUsed) const override {
501 if (VT == MVT::i8 || VT == MVT::i16)
502 return false;
503
504 return TargetLowering::shouldFormOverflowOp(Opcode, VT, MathUsed);
505 }
506
507 bool convertSetCCLogicToBitwiseLogic(EVT VT) const override {
508 return VT.isScalarInteger();
509 }
510 bool convertSelectOfConstantsToMath(EVT VT) const override { return true; }
511
512 bool preferZeroCompareBranch() const override { return true; }
513
514 bool shouldInsertFencesForAtomic(const Instruction *I) const override {
515 return isa<LoadInst>(I) || isa<StoreInst>(I);
516 }
517 Instruction *emitLeadingFence(IRBuilderBase &Builder, Instruction *Inst,
518 AtomicOrdering Ord) const override;
520 AtomicOrdering Ord) const override;
521
523 EVT VT) const override;
524
525 ISD::NodeType getExtendForAtomicOps() const override {
526 return ISD::SIGN_EXTEND;
527 }
528
530 return ISD::SIGN_EXTEND;
531 }
532
533 TargetLowering::ShiftLegalizationStrategy
534 preferredShiftLegalizationStrategy(SelectionDAG &DAG, SDNode *N,
535 unsigned ExpansionFactor) const override {
539 ExpansionFactor);
540 }
541
543 CombineLevel Level) const override;
544
545 /// If a physical register, this returns the register that receives the
546 /// exception address on entry to an EH pad.
548 getExceptionPointerRegister(const Constant *PersonalityFn) const override;
549
550 /// If a physical register, this returns the register that receives the
551 /// exception typeid on entry to a landing pad.
553 getExceptionSelectorRegister(const Constant *PersonalityFn) const override;
554
555 bool shouldExtendTypeInLibCall(EVT Type) const override;
556 bool shouldSignExtendTypeInLibCall(EVT Type, bool IsSigned) const override;
557
558 /// Returns the register with the specified architectural or ABI name. This
559 /// method is necessary to lower the llvm.read_register.* and
560 /// llvm.write_register.* intrinsics. Allocatable registers must be reserved
561 /// with the clang -ffixed-xX flag for access to be allowed.
562 Register getRegisterByName(const char *RegName, LLT VT,
563 const MachineFunction &MF) const override;
564
565 // Lower incoming arguments, copy physregs into vregs
567 bool IsVarArg,
569 const SDLoc &DL, SelectionDAG &DAG,
570 SmallVectorImpl<SDValue> &InVals) const override;
572 bool IsVarArg,
574 LLVMContext &Context) const override;
575 SDValue LowerReturn(SDValue Chain, CallingConv::ID CallConv, bool IsVarArg,
577 const SmallVectorImpl<SDValue> &OutVals, const SDLoc &DL,
578 SelectionDAG &DAG) const override;
580 SmallVectorImpl<SDValue> &InVals) const override;
581
583 Type *Ty) const override;
584 bool isUsedByReturnOnly(SDNode *N, SDValue &Chain) const override;
585 bool mayBeEmittedAsTailCall(const CallInst *CI) const override;
586 bool shouldConsiderGEPOffsetSplit() const override { return true; }
587
588 bool decomposeMulByConstant(LLVMContext &Context, EVT VT,
589 SDValue C) const override;
590
592 SDValue ConstNode) const override;
593
595 shouldExpandAtomicRMWInIR(AtomicRMWInst *AI) const override;
597 Value *AlignedAddr, Value *Incr,
598 Value *Mask, Value *ShiftAmt,
599 AtomicOrdering Ord) const override;
604 Value *AlignedAddr, Value *CmpVal,
605 Value *NewVal, Value *Mask,
606 AtomicOrdering Ord) const override;
607
608 /// Returns true if the target allows unaligned memory accesses of the
609 /// specified type.
610 bool allowsMisalignedMemoryAccesses(
611 EVT VT, unsigned AddrSpace = 0, Align Alignment = Align(1),
612 MachineMemOperand::Flags Flags = MachineMemOperand::MONone,
613 unsigned *Fast = nullptr) const override;
614
616 SelectionDAG & DAG, const SDLoc &DL, SDValue Val, SDValue *Parts,
617 unsigned NumParts, MVT PartVT, std::optional<CallingConv::ID> CC)
618 const override;
619
621 SelectionDAG & DAG, const SDLoc &DL, const SDValue *Parts,
622 unsigned NumParts, MVT PartVT, EVT ValueVT,
623 std::optional<CallingConv::ID> CC) const override;
624
625 // Return the value of VLMax for the given vector type (i.e. SEW and LMUL)
626 SDValue computeVLMax(MVT VecVT, SDLoc DL, SelectionDAG &DAG) const;
627
628 static RISCVII::VLMUL getLMUL(MVT VT);
629 inline static unsigned computeVLMAX(unsigned VectorBits, unsigned EltSize,
630 unsigned MinSize) {
631 // Original equation:
632 // VLMAX = (VectorBits / EltSize) * LMUL
633 // where LMUL = MinSize / RISCV::RVVBitsPerBlock
634 // The following equations have been reordered to prevent loss of precision
635 // when calculating fractional LMUL.
636 return ((VectorBits / EltSize) * MinSize) / RISCV::RVVBitsPerBlock;
637 };
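The precision remark above can be demonstrated with a worked example. With the naive operation order, fractional LMUL (`MinSize` smaller than `RVVBitsPerBlock`, which is 64) truncates to zero in integer arithmetic; multiplying before dividing avoids that. The functions below are standalone sketches of the two orderings, using assumed values VLEN = 128, SEW = 32, and MF2 (`MinSize` = 32), for which VLMAX should be 2:

```cpp
// Standalone reproduction of the two orderings of the VLMAX equation:
//   VLMAX = (VectorBits / EltSize) * LMUL, LMUL = MinSize / RVVBitsPerBlock
constexpr unsigned RVVBitsPerBlock = 64;

// Naive order: LMUL = 32 / 64 truncates to 0 in integer arithmetic.
constexpr unsigned vlmaxNaive(unsigned VectorBits, unsigned EltSize,
                              unsigned MinSize) {
  return (VectorBits / EltSize) * (MinSize / RVVBitsPerBlock);
}

// Reordered as in computeVLMAX: multiply first, then divide, so fractional
// LMUL no longer loses precision.
constexpr unsigned vlmaxReordered(unsigned VectorBits, unsigned EltSize,
                                  unsigned MinSize) {
  return ((VectorBits / EltSize) * MinSize) / RVVBitsPerBlock;
}
```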
638 static unsigned getRegClassIDForLMUL(RISCVII::VLMUL LMul);
639 static unsigned getSubregIndexByMVT(MVT VT, unsigned Index);
640 static unsigned getRegClassIDForVecVT(MVT VT);
641 static std::pair<unsigned, unsigned>
642 decomposeSubvectorInsertExtractToSubRegs(MVT VecVT, MVT SubVecVT,
643 unsigned InsertExtractIdx,
644 const RISCVRegisterInfo *TRI);
646
647 bool shouldRemoveExtendFromGSIndex(EVT IndexVT, EVT DataVT) const override;
648
649 bool isLegalElementTypeForRVV(Type *ScalarTy) const;
650
651 bool shouldConvertFpToSat(unsigned Op, EVT FPVT, EVT VT) const override;
652
653 unsigned getJumpTableEncoding() const override;
654
656 const MachineBasicBlock *MBB,
657 unsigned uid,
658 MCContext &Ctx) const override;
659
660 bool isVScaleKnownToBeAPowerOfTwo() const override;
661
662 bool getIndexedAddressParts(SDNode *Op, SDValue &Base, SDValue &Offset,
663 ISD::MemIndexedMode &AM, bool &IsInc,
664 SelectionDAG &DAG) const;
667 SelectionDAG &DAG) const override;
670 SelectionDAG &DAG) const override;
671
673 uint64_t ElemSize) const override {
674 // Scaled addressing is not supported on indexed loads and stores.
675 return Scale == 1;
676 }
677
678 /// If the target has a standard location for the stack protector cookie,
679 /// returns the address of that location. Otherwise, returns nullptr.
680 Value *getIRStackGuard(IRBuilderBase &IRB) const override;
681
682private:
683 /// RISCVCCAssignFn - This target-specific function extends the default
684 /// CCValAssign with additional information used to lower RISC-V calling
685 /// conventions.
686 typedef bool RISCVCCAssignFn(const DataLayout &DL, RISCVABI::ABI,
687 unsigned ValNo, MVT ValVT, MVT LocVT,
688 CCValAssign::LocInfo LocInfo,
689 ISD::ArgFlagsTy ArgFlags, CCState &State,
690 bool IsFixed, bool IsRet, Type *OrigTy,
691 const RISCVTargetLowering &TLI,
692 std::optional<unsigned> FirstMaskArgument);
693
694 void analyzeInputArgs(MachineFunction &MF, CCState &CCInfo,
695 const SmallVectorImpl<ISD::InputArg> &Ins, bool IsRet,
696 RISCVCCAssignFn Fn) const;
697 void analyzeOutputArgs(MachineFunction &MF, CCState &CCInfo,
699 bool IsRet, CallLoweringInfo *CLI,
700 RISCVCCAssignFn Fn) const;
701
702 template <class NodeTy>
703 SDValue getAddr(NodeTy *N, SelectionDAG &DAG, bool IsLocal = true) const;
704 SDValue getStaticTLSAddr(GlobalAddressSDNode *N, SelectionDAG &DAG,
705 bool UseGOT) const;
706 SDValue getDynamicTLSAddr(GlobalAddressSDNode *N, SelectionDAG &DAG) const;
707
708 SDValue lowerGlobalAddress(SDValue Op, SelectionDAG &DAG) const;
709 SDValue lowerBlockAddress(SDValue Op, SelectionDAG &DAG) const;
710 SDValue lowerConstantPool(SDValue Op, SelectionDAG &DAG) const;
711 SDValue lowerJumpTable(SDValue Op, SelectionDAG &DAG) const;
712 SDValue lowerGlobalTLSAddress(SDValue Op, SelectionDAG &DAG) const;
713 SDValue lowerSELECT(SDValue Op, SelectionDAG &DAG) const;
714 SDValue lowerBRCOND(SDValue Op, SelectionDAG &DAG) const;
715 SDValue lowerVASTART(SDValue Op, SelectionDAG &DAG) const;
716 SDValue lowerFRAMEADDR(SDValue Op, SelectionDAG &DAG) const;
717 SDValue lowerRETURNADDR(SDValue Op, SelectionDAG &DAG) const;
718 SDValue lowerShiftLeftParts(SDValue Op, SelectionDAG &DAG) const;
719 SDValue lowerShiftRightParts(SDValue Op, SelectionDAG &DAG, bool IsSRA) const;
720 SDValue lowerSPLAT_VECTOR_PARTS(SDValue Op, SelectionDAG &DAG) const;
721 SDValue lowerVectorMaskSplat(SDValue Op, SelectionDAG &DAG) const;
722 SDValue lowerVectorMaskExt(SDValue Op, SelectionDAG &DAG,
723 int64_t ExtTrueVal) const;
724 SDValue lowerVectorMaskTruncLike(SDValue Op, SelectionDAG &DAG) const;
725 SDValue lowerVectorTruncLike(SDValue Op, SelectionDAG &DAG) const;
726 SDValue lowerVectorFPExtendOrRoundLike(SDValue Op, SelectionDAG &DAG) const;
727 SDValue lowerINSERT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) const;
728 SDValue lowerEXTRACT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) const;
729 SDValue LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) const;
730 SDValue LowerINTRINSIC_W_CHAIN(SDValue Op, SelectionDAG &DAG) const;
731 SDValue LowerINTRINSIC_VOID(SDValue Op, SelectionDAG &DAG) const;
732 SDValue lowerVPREDUCE(SDValue Op, SelectionDAG &DAG) const;
733 SDValue lowerVECREDUCE(SDValue Op, SelectionDAG &DAG) const;
734 SDValue lowerVectorMaskVecReduction(SDValue Op, SelectionDAG &DAG,
735 bool IsVP) const;
736 SDValue lowerFPVECREDUCE(SDValue Op, SelectionDAG &DAG) const;
737 SDValue lowerINSERT_SUBVECTOR(SDValue Op, SelectionDAG &DAG) const;
738 SDValue lowerEXTRACT_SUBVECTOR(SDValue Op, SelectionDAG &DAG) const;
739 SDValue lowerVECTOR_DEINTERLEAVE(SDValue Op, SelectionDAG &DAG) const;
740 SDValue lowerVECTOR_INTERLEAVE(SDValue Op, SelectionDAG &DAG) const;
741 SDValue lowerSTEP_VECTOR(SDValue Op, SelectionDAG &DAG) const;
742 SDValue lowerVECTOR_REVERSE(SDValue Op, SelectionDAG &DAG) const;
743 SDValue lowerVECTOR_SPLICE(SDValue Op, SelectionDAG &DAG) const;
744 SDValue lowerABS(SDValue Op, SelectionDAG &DAG) const;
745 SDValue lowerMaskedLoad(SDValue Op, SelectionDAG &DAG) const;
746 SDValue lowerMaskedStore(SDValue Op, SelectionDAG &DAG) const;
747 SDValue lowerFixedLengthVectorFCOPYSIGNToRVV(SDValue Op,
748 SelectionDAG &DAG) const;
749 SDValue lowerMaskedGather(SDValue Op, SelectionDAG &DAG) const;
750 SDValue lowerMaskedScatter(SDValue Op, SelectionDAG &DAG) const;
751 SDValue lowerFixedLengthVectorLoadToRVV(SDValue Op, SelectionDAG &DAG) const;
752 SDValue lowerFixedLengthVectorStoreToRVV(SDValue Op, SelectionDAG &DAG) const;
753 SDValue lowerFixedLengthVectorSetccToRVV(SDValue Op, SelectionDAG &DAG) const;
754 SDValue lowerFixedLengthVectorLogicOpToRVV(SDValue Op, SelectionDAG &DAG,
755 unsigned MaskOpc,
756 unsigned VecOpc) const;
757 SDValue lowerFixedLengthVectorShiftToRVV(SDValue Op, SelectionDAG &DAG) const;
758 SDValue lowerFixedLengthVectorSelectToRVV(SDValue Op,
759 SelectionDAG &DAG) const;
760 SDValue lowerToScalableOp(SDValue Op, SelectionDAG &DAG, unsigned NewOpc,
761 bool HasMergeOp = false, bool HasMask = true) const;
762 SDValue lowerVPOp(SDValue Op, SelectionDAG &DAG, unsigned RISCVISDOpc,
763 bool HasMergeOp = false) const;
764 SDValue lowerLogicVPOp(SDValue Op, SelectionDAG &DAG, unsigned MaskOpc,
765 unsigned VecOpc) const;
766 SDValue lowerVPExtMaskOp(SDValue Op, SelectionDAG &DAG) const;
767 SDValue lowerVPSetCCMaskOp(SDValue Op, SelectionDAG &DAG) const;
768 SDValue lowerVPFPIntConvOp(SDValue Op, SelectionDAG &DAG,
769 unsigned RISCVISDOpc) const;
770 SDValue lowerVPStridedLoad(SDValue Op, SelectionDAG &DAG) const;
771 SDValue lowerVPStridedStore(SDValue Op, SelectionDAG &DAG) const;
772 SDValue lowerFixedLengthVectorExtendToRVV(SDValue Op, SelectionDAG &DAG,
773 unsigned ExtendOpc) const;
774 SDValue lowerGET_ROUNDING(SDValue Op, SelectionDAG &DAG) const;
775 SDValue lowerSET_ROUNDING(SDValue Op, SelectionDAG &DAG) const;
776
777 SDValue lowerEH_DWARF_CFA(SDValue Op, SelectionDAG &DAG) const;
778 SDValue lowerCTLZ_CTTZ_ZERO_UNDEF(SDValue Op, SelectionDAG &DAG) const;
779
780 SDValue lowerStrictFPExtend(SDValue Op, SelectionDAG &DAG) const;
781
782 SDValue expandUnalignedRVVLoad(SDValue Op, SelectionDAG &DAG) const;
783 SDValue expandUnalignedRVVStore(SDValue Op, SelectionDAG &DAG) const;
784
785 bool isEligibleForTailCallOptimization(
786 CCState &CCInfo, CallLoweringInfo &CLI, MachineFunction &MF,
787 const SmallVector<CCValAssign, 16> &ArgLocs) const;
788
789 /// Generate error diagnostics if any register used by CC has been marked
790 /// reserved.
791 void validateCCReservedRegs(
792 const SmallVectorImpl<std::pair<llvm::Register, llvm::SDValue>> &Regs,
793 MachineFunction &MF) const;
794
795 bool useRVVForFixedLengthVectorVT(MVT VT) const;
796
797 MVT getVPExplicitVectorLengthTy() const override;
798
799 /// RVV code generation for fixed length vectors does not lower all
800 /// BUILD_VECTORs. This makes BUILD_VECTOR legalisation a source of stores to
801 /// merge. However, merging them creates a BUILD_VECTOR that is just as
802 /// illegal as the original, thus leading to an infinite legalisation loop.
803 /// NOTE: Once BUILD_VECTOR can be custom lowered for all legal vector types,
804 /// this override can be removed.
805 bool mergeStoresAfterLegalization(EVT VT) const override;
806
807 /// Disable normalizing
808 /// select(N0&N1, X, Y) => select(N0, select(N1, X, Y), Y) and
809 /// select(N0|N1, X, Y) => select(N0, X, select(N1, X, Y))
810 /// RISC-V doesn't have flags so it's better to perform the and/or in a GPR.
811 bool shouldNormalizeToSelectSequence(LLVMContext &, EVT) const override {
812 return false;
813 };
814
815 /// For available scheduling models FDIV + two independent FMULs are much
816 /// faster than two FDIVs.
817 unsigned combineRepeatedFPDivisors() const override;
818};
819namespace RISCVVIntrinsicsTable {
820
821struct RISCVVIntrinsicInfo {
822 unsigned IntrinsicID;
823 uint8_t ScalarOperand;
824 uint8_t VLOperand;
825 bool hasScalarOperand() const {
826 // 0xF is not valid. See NoScalarOperand in IntrinsicsRISCV.td.
827 return ScalarOperand != 0xF;
828 }
829 bool hasVLOperand() const {
830 // 0x1F is not valid. See NoVLOperand in IntrinsicsRISCV.td.
831 return VLOperand != 0x1F;
832 }
833};
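The sentinel encoding used by the accessors above can be shown with a standalone model (`IntrinsicInfoModel` is a hypothetical stand-in for the table entry type, not LLVM code): 0xF and 0x1F are the "no such operand" markers, so any other value is a valid operand index.

```cpp
#include <cstdint>

// Model of the table entry's sentinel scheme: ScalarOperand == 0xF means the
// intrinsic has no scalar operand, and VLOperand == 0x1F means no VL operand.
struct IntrinsicInfoModel {
  uint8_t ScalarOperand;
  uint8_t VLOperand;
  bool hasScalarOperand() const { return ScalarOperand != 0xF; }
  bool hasVLOperand() const { return VLOperand != 0x1F; }
};
```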
834
835using namespace RISCV;
836
837#define GET_RISCVVIntrinsicsTable_DECL
838#include "RISCVGenSearchableTables.inc"
839
840} // end namespace RISCVVIntrinsicsTable
841
842} // end namespace llvm
843
844#endif
MachineBasicBlock & MBB
MachineBasicBlock MachineBasicBlock::iterator DebugLoc DL
Function Alias Analysis Results
Analysis containing CSE Info
Definition: CSEInfo.cpp:27
static GCMetadataPrinterRegistry::Add< ErlangGCPrinter > X("erlang", "erlang-compatible garbage collector")
IRTranslator LLVM IR MI
#define RegName(no)
#define I(x, y, z)
Definition: MD5.cpp:58
unsigned const TargetRegisterInfo * TRI
static GCMetadataPrinterRegistry::Add< OcamlGCMetadataPrinter > Y("ocaml", "ocaml 3.10-compatible collector")
const char LLVMTargetMachineRef TM
This file describes how to lower LLVM code to machine code.
@ Flags
Definition: TextStubV5.cpp:93
Class for arbitrary precision integers.
Definition: APInt.h:75
ArrayRef - Represent a constant reference to an array (0 or more elements consecutively in memory),...
Definition: ArrayRef.h:41
An instruction that atomically checks whether a specified value is in a memory location,...
Definition: Instructions.h:513
an instruction that atomically reads a memory location, combines it with another value,...
Definition: Instructions.h:718
CCState - This class holds information needed while lowering arguments and return values.
This class represents a function call, abstracting a target machine's calling convention.
This is the shared class of boolean and integer constants.
Definition: Constants.h:78
This is an important base class in LLVM.
Definition: Constant.h:41
A parsed version of the target data layout string in and methods for querying it.
Definition: DataLayout.h:110
bool hasMinSize() const
Optimize this function for minimum size (-Oz).
Definition: Function.h:646
Common base class shared among various IRBuilders.
Definition: IRBuilder.h:94
This is an important class for using LLVM in a threaded context.
Definition: LLVMContext.h:67
This class is used to represent ISD::LOAD nodes.
Context object for machine code objects.
Definition: MCContext.h:76
Base class for the full range of assembler expressions which are needed for parsing.
Definition: MCExpr.h:35
Machine Value Type.
Function & getFunction()
Return the LLVM function that this machine code represents.
Representation of each machine instruction.
Definition: MachineInstr.h:68
Flags
Flags values. These may be or'd together.
static std::pair< unsigned, unsigned > decomposeSubvectorInsertExtractToSubRegs(MVT VecVT, MVT SubVecVT, unsigned InsertExtractIdx, const RISCVRegisterInfo *TRI)
bool getIndexedAddressParts(SDNode *Op, SDValue &Base, SDValue &Offset, ISD::MemIndexedMode &AM, bool &IsInc, SelectionDAG &DAG) const
static unsigned getSubregIndexByMVT(MVT VT, unsigned Index)
Value * getIRStackGuard(IRBuilderBase &IRB) const override
If the target has a standard location for the stack protector cookie, returns the address of that loc...
bool shouldConvertFpToSat(unsigned Op, EVT FPVT, EVT VT) const override
Should we generate fp_to_si_sat and fp_to_ui_sat from type FPVT to type VT from min(max(fptoi)) satur...
bool shouldSinkOperands(Instruction *I, SmallVectorImpl< Use * > &Ops) const override
Check if sinking I's operands to I's basic block is profitable, because the operands can be folded in...
SDValue LowerReturn(SDValue Chain, CallingConv::ID CallConv, bool IsVarArg, const SmallVectorImpl< ISD::OutputArg > &Outs, const SmallVectorImpl< SDValue > &OutVals, const SDLoc &DL, SelectionDAG &DAG) const override
This hook must be implemented to lower outgoing return values, described by the Outs array,...
bool shouldFoldSelectWithIdentityConstant(unsigned BinOpcode, EVT VT) const override
Return true if pulling a binary operation into a select with an identity constant is profitable.
bool mayBeEmittedAsTailCall(const CallInst *CI) const override
Return true if the target may be able emit the call instruction as a tail call.
MachineBasicBlock * EmitInstrWithCustomInserter(MachineInstr &MI, MachineBasicBlock *BB) const override
This method should be implemented by targets that mark instructions with the 'usesCustomInserter' fla...
Instruction * emitLeadingFence(IRBuilderBase &Builder, Instruction *Inst, AtomicOrdering Ord) const override
Inserts in the IR a target-specific intrinsic specifying a fence.
ISD::NodeType getExtendForAtomicOps() const override
Returns how the platform's atomic operations are extended (ZERO_EXTEND, SIGN_EXTEND,...
bool isTruncateFree(Type *SrcTy, Type *DstTy) const override
Return true if it's free to truncate a value of type SrcTy to type DstTy.
unsigned getInlineAsmMemConstraint(StringRef ConstraintCode) const override
bool preferZeroCompareBranch() const override
Return true if the heuristic to prefer icmp eq zero should be used in code gen prepare.
Value * emitMaskedAtomicRMWIntrinsic(IRBuilderBase &Builder, AtomicRMWInst *AI, Value *AlignedAddr, Value *Incr, Value *Mask, Value *ShiftAmt, AtomicOrdering Ord) const override
Perform a masked atomicrmw using a target-specific intrinsic.
bool allowsMisalignedMemoryAccesses(EVT VT, unsigned AddrSpace=0, Align Alignment=Align(1), MachineMemOperand::Flags Flags=MachineMemOperand::MONone, unsigned *Fast=nullptr) const override
Returns true if the target allows unaligned memory accesses of the specified type.
const Constant * getTargetConstantFromLoad(LoadSDNode *LD) const override
This method returns the constant pool value that will be loaded by LD.
const RISCVSubtarget & getSubtarget() const
TargetLowering::ShiftLegalizationStrategy preferredShiftLegalizationStrategy(SelectionDAG &DAG, SDNode *N, unsigned ExpansionFactor) const override
SDValue PerformDAGCombine(SDNode *N, DAGCombinerInfo &DCI) const override
This method will be invoked for all target nodes and for any target-independent nodes that the target...
bool isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const override
Return true if folding a constant offset with the given GlobalAddress is legal.
void computeKnownBitsForTargetNode(const SDValue Op, KnownBits &Known, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth) const override
Determine which of the bits specified in Mask are known to be either zero or one and return them in t...
const char * getTargetNodeName(unsigned Opcode) const override
This method returns the name of a target specific DAG node.
bool canSplatOperand(Instruction *I, int Operand) const
Return true if the (vector) instruction I will be lowered to an instruction with a scalar splat opera...
bool shouldExtendTypeInLibCall(EVT Type) const override
Returns true if arguments should be extended in lib calls.
bool isLegalAddImmediate(int64_t Imm) const override
Return true if the specified immediate is a legal add immediate, that is the target has add instruction...
const MCExpr * LowerCustomJumpTableEntry(const MachineJumpTableInfo *MJTI, const MachineBasicBlock *MBB, unsigned uid, MCContext &Ctx) const override
bool shouldConvertConstantLoadToIntImm(const APInt &Imm, Type *Ty) const override
Return true if it is beneficial to convert a load of a constant to just the constant itself.
bool targetShrinkDemandedConstant(SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, TargetLoweringOpt &TLO) const override
SDValue computeVLMax(MVT VecVT, SDLoc DL, SelectionDAG &DAG) const
bool shouldExpandBuildVectorWithShuffles(EVT VT, unsigned DefinedValues) const override
MVT getRegisterTypeForCallingConv(LLVMContext &Context, CallingConv::ID CC, EVT VT) const override
Return the register type for a given MVT, ensuring vectors are treated as a series of gpr sized integ...
bool decomposeMulByConstant(LLVMContext &Context, EVT VT, SDValue C) const override
Return true if it is profitable to transform an integer multiplication-by-constant into simpler opera...
bool isLegalAddressingMode(const DataLayout &DL, const AddrMode &AM, Type *Ty, unsigned AS, Instruction *I=nullptr) const override
Return true if the addressing mode represented by AM is legal for this target, for a load/store of th...
bool hasAndNotCompare(SDValue Y) const override
Return true if the target should transform: (X & Y) == Y -> (~X & Y) == 0 (X & Y) !...
bool shouldScalarizeBinop(SDValue VecOp) const override
Try to convert an extract element of a vector binary operation into an extract element followed by a ...
bool isDesirableToCommuteWithShift(const SDNode *N, CombineLevel Level) const override
Return true if it is profitable to move this shift by a constant amount through its operand,...
bool hasBitTest(SDValue X, SDValue Y) const override
Return true if the target has a bit-test instruction: (X & (1 << Y)) ==/!= 0 This knowledge can be us...
static unsigned computeVLMAX(unsigned VectorBits, unsigned EltSize, unsigned MinSize)
bool isCheapToSpeculateCtlz(Type *Ty) const override
Return true if it is cheap to speculate a call to intrinsic ctlz.
Value * emitMaskedAtomicCmpXchgIntrinsic(IRBuilderBase &Builder, AtomicCmpXchgInst *CI, Value *AlignedAddr, Value *CmpVal, Value *NewVal, Value *Mask, AtomicOrdering Ord) const override
Perform a masked cmpxchg using a target-specific intrinsic.
bool isFPImmLegal(const APFloat &Imm, EVT VT, bool ForCodeSize) const override
Returns true if the target can instruction select the specified FP immediate natively.
bool convertSelectOfConstantsToMath(EVT VT) const override
Return true if a select of constants (select Cond, C1, C2) should be transformed into simple math ops...
unsigned getJumpTableEncoding() const override
Return the entry encoding for a jump table in the current function.
bool isMulAddWithConstProfitable(SDValue AddNode, SDValue ConstNode) const override
Return true if it may be profitable to transform (mul (add x, c1), c2) -> (add (mul x,...
EVT getSetCCResultType(const DataLayout &DL, LLVMContext &Context, EVT VT) const override
Return the ValueType of the result of SETCC operations.
bool CanLowerReturn(CallingConv::ID CallConv, MachineFunction &MF, bool IsVarArg, const SmallVectorImpl< ISD::OutputArg > &Outs, LLVMContext &Context) const override
This hook should be implemented to check whether the return values described by the Outs array can fi...
unsigned ComputeNumSignBitsForTargetNode(SDValue Op, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth) const override
This method can be implemented by targets that want to expose additional information about sign bits ...
MVT getContainerForFixedLengthVector(MVT VT) const
static unsigned getRegClassIDForVecVT(MVT VT)
Register getExceptionPointerRegister(const Constant *PersonalityFn) const override
If a physical register, this returns the register that receives the exception address on entry to an ...
TargetLowering::AtomicExpansionKind shouldExpandAtomicRMWInIR(AtomicRMWInst *AI) const override
Returns how the IR-level AtomicExpand pass should expand the given AtomicRMW, if at all.
bool isExtractSubvectorCheap(EVT ResVT, EVT SrcVT, unsigned Index) const override
Return true if EXTRACT_SUBVECTOR is cheap for extracting this result type from this source type with ...
std::pair< unsigned, const TargetRegisterClass * > getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI, StringRef Constraint, MVT VT) const override
Given a physical register constraint (e.g.
bool isLegalElementTypeForRVV(Type *ScalarTy) const
bool signExtendConstant(const ConstantInt *CI) const override
Return true if this constant should be sign extended when promoting to a larger type.
bool shouldProduceAndByConstByHoistingConstFromShiftsLHSOfAnd(SDValue X, ConstantSDNode *XC, ConstantSDNode *CC, SDValue Y, unsigned OldShiftOpcode, unsigned NewShiftOpcode, SelectionDAG &DAG) const override
Given the pattern (X & (C l>>/<< Y)) ==/!= 0 return true if it should be transformed into: ((X <</l>>...
Register getRegisterByName(const char *RegName, LLT VT, const MachineFunction &MF) const override
Returns the register with the specified architectural or ABI name.
SDValue LowerOperation(SDValue Op, SelectionDAG &DAG) const override
This callback is invoked for operations that are unsupported by the target, which are registered to u...
static unsigned getRegClassIDForLMUL(RISCVII::VLMUL LMul)
bool isUsedByReturnOnly(SDNode *N, SDValue &Chain) const override
Return true if result of the specified node is used by a return node only.
bool softPromoteHalfType() const override
bool isFMAFasterThanFMulAndFAdd(const MachineFunction &MF, EVT VT) const override
Return true if an FMA operation is faster than a pair of fmul and fadd instructions.
TargetLowering::AtomicExpansionKind shouldExpandAtomicCmpXchgInIR(AtomicCmpXchgInst *CI) const override
Returns how the given atomic cmpxchg should be expanded by the IR-level AtomicExpand pass.
bool shouldSignExtendTypeInLibCall(EVT Type, bool IsSigned) const override
Returns true if arguments should be sign-extended in lib calls.
Register getExceptionSelectorRegister(const Constant *PersonalityFn) const override
If a physical register, this returns the register that receives the exception typeid on entry to a la...
bool convertSetCCLogicToBitwiseLogic(EVT VT) const override
Use bitwise logic to make pairs of compares more efficient.
ISD::NodeType getExtendForAtomicCmpSwapArg() const override
Returns how the platform's atomic compare and swap expects its comparison value to be extended (ZERO_...
void AdjustInstrPostInstrSelection(MachineInstr &MI, SDNode *Node) const override
This method should be implemented by targets that mark instructions with the 'hasPostISelHook' flag.
bool isShuffleMaskLegal(ArrayRef< int > M, EVT VT) const override
Return true if the given shuffle mask can be codegen'd directly, or if it should be stack expanded.
bool isCheapToSpeculateCttz(Type *Ty) const override
Return true if it is cheap to speculate a call to intrinsic cttz.
bool isLegalICmpImmediate(int64_t Imm) const override
Return true if the specified immediate is a legal icmp immediate, that is the target has icmp instructi...
bool isLegalScaleForGatherScatter(uint64_t Scale, uint64_t ElemSize) const override
SDValue LowerFormalArguments(SDValue Chain, CallingConv::ID CallConv, bool IsVarArg, const SmallVectorImpl< ISD::InputArg > &Ins, const SDLoc &DL, SelectionDAG &DAG, SmallVectorImpl< SDValue > &InVals) const override
This hook must be implemented to lower the incoming (formal) arguments, described by the Ins array,...
void ReplaceNodeResults(SDNode *N, SmallVectorImpl< SDValue > &Results, SelectionDAG &DAG) const override
This callback is invoked when a node result type is illegal for the target, and the operation was reg...
bool getTgtMemIntrinsic(IntrinsicInfo &Info, const CallInst &I, MachineFunction &MF, unsigned Intrinsic) const override
Given an intrinsic, checks if on the target the intrinsic will need to map to a MemIntrinsicNode (tou...
bool isVScaleKnownToBeAPowerOfTwo() const override
Return true only if vscale must be a power of two.
static RISCVII::VLMUL getLMUL(MVT VT)
int getLegalZfaFPImm(const APFloat &Imm, EVT VT) const
void LowerAsmOperandForConstraint(SDValue Op, std::string &Constraint, std::vector< SDValue > &Ops, SelectionDAG &DAG) const override
Lower the specified operand into the Ops vector.
bool splitValueIntoRegisterParts(SelectionDAG &DAG, const SDLoc &DL, SDValue Val, SDValue *Parts, unsigned NumParts, MVT PartVT, std::optional< CallingConv::ID > CC) const override
Target-specific splitting of values into parts that fit a register storing a legal type.
Instruction * emitTrailingFence(IRBuilderBase &Builder, Instruction *Inst, AtomicOrdering Ord) const override
unsigned getNumRegistersForCallingConv(LLVMContext &Context, CallingConv::ID CC, EVT VT) const override
Return the number of registers for a given MVT, ensuring vectors are treated as a series of gpr sized...
ConstraintType getConstraintType(StringRef Constraint) const override
getConstraintType - Given a constraint letter, return the type of constraint it is for this target.
bool shouldConsiderGEPOffsetSplit() const override
bool preferScalarizeSplat(unsigned Opc) const override
bool isIntDivCheap(EVT VT, AttributeList Attr) const override
Return true if integer divide is usually cheaper than a sequence of several shifts,...
bool shouldRemoveExtendFromGSIndex(EVT IndexVT, EVT DataVT) const override
bool isMultiStoresCheaperThanBitsMerge(EVT LTy, EVT HTy) const override
Return true if it is cheaper to split the store of a merged int val from a pair of smaller values int...
bool getPostIndexedAddressParts(SDNode *N, SDNode *Op, SDValue &Base, SDValue &Offset, ISD::MemIndexedMode &AM, SelectionDAG &DAG) const override
Returns true by value, base pointer and offset pointer and addressing mode by reference if this node ...
bool shouldFormOverflowOp(unsigned Opcode, EVT VT, bool MathUsed) const override
Try to convert math with an overflow comparison into the corresponding DAG node operation.
bool shouldInsertFencesForAtomic(const Instruction *I) const override
Whether AtomicExpandPass should automatically insert fences and reduce ordering for this atomic.
bool getPreIndexedAddressParts(SDNode *N, SDValue &Base, SDValue &Offset, ISD::MemIndexedMode &AM, SelectionDAG &DAG) const override
Returns true by value, base pointer and offset pointer and addressing mode by reference if the node's...
SDValue joinRegisterPartsIntoValue(SelectionDAG &DAG, const SDLoc &DL, const SDValue *Parts, unsigned NumParts, MVT PartVT, EVT ValueVT, std::optional< CallingConv::ID > CC) const override
Target-specific combining of register parts into its original value.
bool isMaskAndCmp0FoldingBeneficial(const Instruction &AndI) const override
Return if the target supports combining a chain like:
bool isSExtCheaperThanZExt(EVT SrcVT, EVT DstVT) const override
Return true if sign-extension from FromTy to ToTy is cheaper than zero-extension.
SDValue LowerCall(TargetLowering::CallLoweringInfo &CLI, SmallVectorImpl< SDValue > &InVals) const override
This hook must be implemented to lower calls into the specified DAG.
bool isZExtFree(SDValue Val, EVT VT2) const override
Return true if zero-extending the specific node Val to type VT2 is free (either because it's implicit...
Wrapper class representing virtual and physical registers.
Definition: Register.h:19
Wrapper class for IR location info (IR ordering and DebugLoc) to be passed into SDNode creation funct...
Represents one node in the SelectionDAG.
Unlike LLVM values, Selection DAG nodes may return multiple values as the result of a computation.
This is used to represent a portion of an LLVM function in a low-level Data Dependence DAG representa...
Definition: SelectionDAG.h:225
MachineFunction & getMachineFunction() const
Definition: SelectionDAG.h:469
This class consists of common code factored out of the SmallVector class to reduce code duplication b...
Definition: SmallVector.h:577
This is a 'vector' (really, a variable-sized array), optimized for the case when the array is small.
Definition: SmallVector.h:1200
StringRef - Represent a constant reference to a string, i.e.
Definition: StringRef.h:50
virtual bool shouldFormOverflowOp(unsigned Opcode, EVT VT, bool MathUsed) const
Try to convert math with an overflow comparison into the corresponding DAG node operation.
ShiftLegalizationStrategy
Return the preferred strategy to legalize this SHIFT instruction, with ExpansionFactor being the recu...
virtual ShiftLegalizationStrategy preferredShiftLegalizationStrategy(SelectionDAG &DAG, SDNode *N, unsigned ExpansionFactor) const
AtomicExpansionKind
Enum that specifies what an atomic load/AtomicRMWInst is expanded to, if at all.
This class defines information used to lower LLVM code to legal SelectionDAG operators that the targe...
Primary interface to the complete machine description for the target machine.
Definition: TargetMachine.h:78
TargetRegisterInfo base class - We assume that the target defines a static array of TargetRegisterDes...
The instances of the Type class are immutable: once they are created, they are never changed.
Definition: Type.h:45
LLVM Value Representation.
Definition: Value.h:74
@ Fast
Attempts to make calls as fast as possible (e.g.
Definition: CallingConv.h:41
@ C
The default llvm calling convention, compatible with C.
Definition: CallingConv.h:34
NodeType
ISD::NodeType enum - This enum defines the target-independent operators for a SelectionDAG.
Definition: ISDOpcodes.h:40
@ BUILTIN_OP_END
BUILTIN_OP_END - This must be the last enum value in this list.
Definition: ISDOpcodes.h:1324
@ SIGN_EXTEND
Conversion operators.
Definition: ISDOpcodes.h:773
static const int FIRST_TARGET_MEMORY_OPCODE
FIRST_TARGET_MEMORY_OPCODE - Target-specific pre-isel operations which do not reference a specific me...
Definition: ISDOpcodes.h:1336
MemIndexedMode
MemIndexedMode enum - This enum defines the load / store indexed addressing modes.
Definition: ISDOpcodes.h:1396
static const int FIRST_TARGET_STRICTFP_OPCODE
FIRST_TARGET_STRICTFP_OPCODE - Target-specific pre-isel operations which cannot raise FP exceptions s...
Definition: ISDOpcodes.h:1330
@ SELECT_CC
Select with condition operator - This selects between a true value and a false value (ops #3 and #4) ...
static constexpr unsigned RVVBitsPerBlock
This is an optimization pass for GlobalISel generic memory operations.
Definition: AddressRanges.h:18
@ Offset
Definition: DWP.cpp:406
AtomicOrdering
Atomic ordering for LLVM's memory model.
CombineLevel
Definition: DAGCombine.h:15
#define N
This struct is a compact representation of a valid (non-zero power of two) alignment.
Definition: Alignment.h:39
Extended Value Type.
Definition: ValueTypes.h:34
bool isFloatingPoint() const
Return true if this is a FP or a vector FP type.
Definition: ValueTypes.h:139
bool isScalarInteger() const
Return true if this is an integer, but not a vector.
Definition: ValueTypes.h:149
bool isInteger() const
Return true if this is an integer or a vector integer type.
Definition: ValueTypes.h:144
This structure contains all information that is necessary for lowering calls.