LLVM  14.0.0git
MipsTargetInfo.cpp
//===-- MipsTargetInfo.cpp - Mips Target Implementation -------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#include "TargetInfo/MipsTargetInfo.h"
#include "llvm/MC/TargetRegistry.h"
using namespace llvm;

Target &llvm::getTheMipsTarget() {
  static Target TheMipsTarget;
  return TheMipsTarget;
}
Target &llvm::getTheMipselTarget() {
  static Target TheMipselTarget;
  return TheMipselTarget;
}
Target &llvm::getTheMips64Target() {
  static Target TheMips64Target;
  return TheMips64Target;
}
Target &llvm::getTheMips64elTarget() {
  static Target TheMips64elTarget;
  return TheMips64elTarget;
}

extern "C" LLVM_EXTERNAL_VISIBILITY void LLVMInitializeMipsTargetInfo() {
  RegisterTarget<Triple::mips,
                 /*HasJIT=*/true>
      X(getTheMipsTarget(), "mips", "MIPS (32-bit big endian)", "Mips");

  RegisterTarget<Triple::mipsel,
                 /*HasJIT=*/true>
      Y(getTheMipselTarget(), "mipsel", "MIPS (32-bit little endian)", "Mips");

  RegisterTarget<Triple::mips64,
                 /*HasJIT=*/true>
      A(getTheMips64Target(), "mips64", "MIPS (64-bit big endian)", "Mips");

  RegisterTarget<Triple::mips64el,
                 /*HasJIT=*/true>
      B(getTheMips64elTarget(), "mips64el", "MIPS (64-bit little endian)",
        "Mips");
}