```cpp
SymbolSerializer::SymbolSerializer(BumpPtrAllocator &Allocator,
                                   CodeViewContainer Container)
    : Storage(Allocator), RecordBuffer(MaxRecordLength),
      Stream(RecordBuffer, support::little), Writer(Stream),
      Mapping(Writer, Container) {}

Error SymbolSerializer::visitSymbolBegin(CVSymbol &Record) {
  assert(!CurrentSymbol && "Already in a symbol mapping!");

  Writer.setOffset(0);
  if (auto EC = writeRecordPrefix(Record.kind()))
    return EC;

  CurrentSymbol = Record.kind();
  return Mapping.visitSymbolBegin(Record);
}

Error SymbolSerializer::visitSymbolEnd(CVSymbol &Record) {
  assert(CurrentSymbol && "Not in a symbol mapping!");

  if (auto EC = Mapping.visitSymbolEnd(Record))
    return EC;

  // Backpatch the 16-bit record length (which excludes its own two
  // bytes), then copy the finished record into stable storage so the
  // scratch buffer can be reused for the next symbol.
  uint32_t RecordEnd = Writer.getOffset();
  uint16_t Length = RecordEnd - 2;
  Writer.setOffset(0);
  if (auto EC = Writer.writeInteger(Length))
    return EC;

  uint8_t *StableStorage = Storage.Allocate<uint8_t>(RecordEnd);
  ::memcpy(StableStorage, &RecordBuffer[0], RecordEnd);
  Record.RecordData = ArrayRef<uint8_t>(StableStorage, RecordEnd);

  CurrentSymbol.reset();
  return Error::success();
}
```
References:

Error writeInteger(T Value)
    Write the integer Value to the underlying stream in the specified endianness.

static ErrorSuccess success()
    Create a success value.

void setOffset(uint64_t Off)

uint64_t getOffset() const

Error visitSymbolBegin(CVSymbol &Record) override

Error visitSymbolEnd(CVSymbol &Record) override

SymbolSerializer(BumpPtrAllocator &Storage, CodeViewContainer Container)

BumpPtrAllocator
    Allocate memory in an ever growing pool, as if by bump-pointer.
CVRecord
    A fat pointer (base + size pair) to a symbol or type record.

Error
    Lightweight error class with error context and mandatory checking.

LLVM_ATTRIBUTE_RETURNS_NONNULL void *Allocate(size_t Size, Align Alignment)
    Allocate space at the specified alignment.