1//===- InstrRefBasedImpl.cpp - Tracking Debug Value MIs -------------------===//
2//
3// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4// See https://llvm.org/LICENSE.txt for license information.
5// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6//
7//===----------------------------------------------------------------------===//
8/// \file InstrRefBasedImpl.cpp
9///
10/// This is a separate implementation of LiveDebugValues, see
11/// LiveDebugValues.cpp and VarLocBasedImpl.cpp for more information.
12///
13/// This pass propagates variable locations between basic blocks, resolving
14/// control flow conflicts between them. The problem is SSA construction, where
15/// each debug instruction assigns the *value* that a variable has, and every
16/// instruction where the variable is in scope uses that variable. The resulting
17/// map of instruction-to-value is then translated into a register (or spill)
18/// location for each variable over each instruction.
19///
20/// The primary difference from normal SSA construction is that we cannot
21/// _create_ PHI values that contain variable values. CodeGen has already
22/// completed, and we can't alter it just to make debug-info complete. Thus:
23/// we can identify function positions where we would like a PHI value for a
24/// variable, but must search the MachineFunction to see whether such a PHI is
25/// available. If no such PHI exists, the variable location must be dropped.
26///
27/// To achieve this, we perform two kinds of analysis. First, we identify
28/// every value defined by every instruction (ignoring those that only move
29/// another value), then re-compute an SSA-form representation of the
30/// MachineFunction, using value propagation to eliminate any unnecessary
31/// PHI values. This gives us a map of every value computed in the function,
32/// and its location within the register file / stack.
33///
34/// Secondly, for each variable we perform the same analysis, where each debug
35/// instruction is considered a def, and every instruction where the variable
36/// is in lexical scope is a use. Value propagation is used again to eliminate
37/// any unnecessary PHIs. This gives us a map of each variable to the value
38/// it should have in a block.
39///
40/// Once both are complete, we have two maps for each block:
41/// * Variables to the values they should have,
42/// * Values to the register / spill slot they are located in.
43/// We can then marry up variable values with a location, and emit
44/// DBG_VALUE instructions specifying those locations. Variable locations may
45/// be dropped in this process due to the desired variable value not being
46/// resident in any machine location, or because there is no PHI value in any
47/// location that accurately represents the desired value. The building of
48/// location lists for each block is left to DbgEntityHistoryCalculator.
49///
50/// This pass is kept efficient because the size of the first SSA problem
51/// is proportional to the working-set size of the function, which the compiler
52/// tries to keep small. (It's also proportional to the number of blocks).
53/// Additionally, we repeatedly perform the second SSA problem analysis with
54/// only the variables and blocks in a single lexical scope, exploiting their
55/// locality.
56///
57/// ### Terminology
58///
59/// A machine location is a register or spill slot, a value is something that's
60/// defined by an instruction or PHI node, while a variable value is the value
61/// assigned to a variable. A variable location is a machine location that must
62/// contain the appropriate variable value. A value that is a PHI node is
63/// occasionally called an mphi.
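/// For example (an illustrative sketch, not tied to any particular target):
/// after "$rbx = COPY $rdi", $rbx is a machine location, the value it contains
/// is whichever value $rdi contained (COPYs only move values, they do not
/// define new ones), and if a DBG_VALUE points variable "x" at $rbx, that same
/// value is also x's variable value.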
64///
65/// The first SSA problem is the "machine value location" problem,
66/// because we're determining which machine locations contain which values.
67/// The "locations" are constant: what's unknown is what value they contain.
68///
69/// The second SSA problem (the one for variables) is the "variable value
70/// problem", because it's determining what values a variable has, rather than
71/// what location those values are placed in.
72///
73/// TODO:
74/// Overlapping fragments
75/// Entry values
76/// Add back DEBUG statements for debugging this
77/// Collect statistics
78///
79//===----------------------------------------------------------------------===//
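// An illustrative sketch of the final step described above, marrying variable
// values to locations (hypothetical values, not code from this pass): if block
// bb.3 has the live-in variable map {"x" -> V} and the live-in machine-location
// map {$rbx -> V}, then a location for "x" can be emitted on entry to bb.3
// roughly as
//   DBG_VALUE $rbx, $noreg, !"x", !DIExpression()
// whereas if no tracked machine location contains V, the location of "x" is
// dropped for this block.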
80
81#include "llvm/ADT/DenseMap.h"
83#include "llvm/ADT/STLExtras.h"
85#include "llvm/ADT/SmallSet.h"
105#include "llvm/Config/llvm-config.h"
107#include "llvm/IR/DebugLoc.h"
108#include "llvm/IR/Function.h"
110#include "llvm/Support/Casting.h"
112#include "llvm/Support/Debug.h"
118#include <algorithm>
119#include <cassert>
120#include <climits>
121#include <cstdint>
122#include <functional>
123#include <queue>
124#include <tuple>
125#include <utility>
126#include <vector>
127
128#include "InstrRefBasedImpl.h"
129#include "LiveDebugValues.h"
130#include <optional>
131
132using namespace llvm;
133using namespace LiveDebugValues;
134
135// SSAUpdaterImpl sets DEBUG_TYPE, change it.
136#undef DEBUG_TYPE
137#define DEBUG_TYPE "livedebugvalues"
138
139// Act more like the VarLoc implementation, by propagating some locations too
140// far and ignoring some transfers.
141static cl::opt<bool> EmulateOldLDV("emulate-old-livedebugvalues", cl::Hidden,
142 cl::desc("Act like old LiveDebugValues did"),
143 cl::init(false));
144
145// Limit for the maximum number of stack slots we should track, past which we
146// will ignore any spills. InstrRefBasedLDV gathers detailed information on all
147// stack slots which leads to high memory consumption, and in some scenarios
148// (such as asan with very many locals) the working set of the function can be
149// very large, causing many spills. In these scenarios, it is very unlikely that
150// the developer has hundreds of variables live at the same time that they're
151// carefully thinking about -- instead, they probably autogenerated the code.
152// When this happens, gracefully stop tracking excess spill slots, rather than
153// consuming all the developer's memory.
154static cl::opt<unsigned>
155 StackWorkingSetLimit("livedebugvalues-max-stack-slots", cl::Hidden,
156 cl::desc("livedebugvalues-stack-ws-limit"),
157 cl::init(250));
158
159DbgOpID DbgOpID::UndefID = DbgOpID(0xffffffff);
160
161/// Tracker for converting machine value locations and variable values into
162/// variable locations (the output of LiveDebugValues), recorded as DBG_VALUEs
163/// specifying block live-in locations and transfers within blocks.
164///
165/// Operating on a per-block basis, this class takes a (pre-loaded) MLocTracker
166/// and must be initialized with the set of variable values that are live-in to
167/// the block. The caller then repeatedly calls process(). TransferTracker picks
168/// out variable locations for the live-in variable values (if there _is_ a
169/// location) and creates the corresponding DBG_VALUEs. Then, as the block is
170/// stepped through, transfers of values between machine locations are
171/// identified and if profitable, a DBG_VALUE created.
172///
173/// This is where debug use-before-defs would be resolved: a variable with an
174/// unavailable value could materialize in the middle of a block, when the
175/// value becomes available. Or, we could detect clobbers and re-specify the
176/// variable in a backup location. (XXX these are unimplemented).
177class TransferTracker {
178public:
181 /// This machine location tracker is assumed to always contain the up-to-date
182 /// value mapping for all machine locations. TransferTracker only reads
183 /// information from it. (XXX make it const?)
188
189 /// Record of all changes in variable locations at a block position. Awkwardly
190 /// we allow inserting either before or after the point: MBB != nullptr
191 /// indicates it's before, otherwise after.
192 struct Transfer {
193 MachineBasicBlock::instr_iterator Pos; /// Position to insert DBG_VALUEs
194 MachineBasicBlock *MBB; /// non-null if we should insert after.
195 /// Vector of DBG_VALUEs to insert. Store with their DebugVariableID so that
196 /// they can be sorted into a stable order for emission at a later time.
198 };
199
200 /// Stores the resolved operands (machine locations and constants) and
201 /// qualifying meta-information needed to construct a concrete DBG_VALUE-like
202 /// instruction.
206
209 : Ops(Ops.begin(), Ops.end()), Properties(Properties) {}
210
211 /// Returns all the LocIdx values used in this struct, in the order in which
212 /// they appear as operands in the debug value; may contain duplicates.
213 auto loc_indices() const {
214 return map_range(
216 Ops, [](const ResolvedDbgOp &Op) { return !Op.IsConst; }),
217 [](const ResolvedDbgOp &Op) { return Op.Loc; });
218 }
219 };
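  // A minimal usage sketch for ResolvedDbgValue::loc_indices(), mirroring its
  // use later in this class: when a variable stops using its current
  // locations, each machine-location link is dropped with
  //   for (LocIdx Loc : It->second.loc_indices())
  //     ActiveMLocs[Loc].erase(VarID);
  // where It is an iterator into ActiveVLocs.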
220
221 /// Collection of transfers (DBG_VALUEs) to be inserted.
223
224 /// Local cache of what-value-is-in-what-LocIdx. Used to identify differences
225 /// between TransferTracker's view of variable locations and MLocTracker's. For
226 /// example, MLocTracker observes all clobbers, but TransferTracker lazily
227 /// does not.
229
230 /// Map from LocIdxes to which DebugVariables are based in that location.
231 /// Maintained while stepping through the block. Not accurate if
232 /// VarLocs[Idx] != MTracker->LocIdxToIDNum[Idx].
234
235 /// Map from DebugVariable to its current location and qualifying meta
236 /// information. To be used in conjunction with ActiveMLocs to construct
237 /// enough information for the DBG_VALUEs for a particular LocIdx.
239
240 /// Temporary cache of DBG_VALUEs to be entered into the Transfers collection.
242
243 /// Record of a use-before-def: created when a value that's live-in to the
244 /// current block isn't available in any machine location, but it will be
245 /// defined in this block.
247 /// Value of this variable, def'd in block.
249 /// Identity of this variable.
251 /// Additional variable properties.
256 };
257
258 /// Map from instruction index (within the block) to the set of UseBeforeDefs
259 /// that become defined at that instruction.
261
262 /// The set of variables that are in UseBeforeDefs and can become a location
263 /// once the relevant value is defined. An element being erased from this
264 /// collection prevents the use-before-def materializing.
266
269
272 const TargetRegisterInfo &TRI,
273 const BitVector &CalleeSavedRegs, const TargetPassConfig &TPC)
277 auto &TM = TPC.getTM<TargetMachine>();
278 ShouldEmitDebugEntryValues = TM.Options.ShouldEmitDebugEntryValues();
279 }
280
281 bool isCalleeSaved(LocIdx L) const {
282 unsigned Reg = MTracker->LocIdxToLocID[L];
283 if (Reg >= MTracker->NumRegs)
284 return false;
285 for (MCRegAliasIterator RAI(Reg, &TRI, true); RAI.isValid(); ++RAI)
286 if (CalleeSavedRegs.test(*RAI))
287 return true;
288 return false;
289 };
290
291 // An estimate of the expected lifespan of values at a machine location, with
292 // a greater value corresponding to a longer expected lifespan, i.e. spill
293 // slots generally live longer than callee-saved registers which generally
294 // live longer than non-callee-saved registers. The minimum value of 0
295 // corresponds to an illegal location that cannot have a "lifespan" at all.
296 enum class LocationQuality : unsigned char {
297 Illegal = 0,
298 Register,
 299 CalleeSavedRegister,
 300 SpillSlot,
 301 Best = SpillSlot,
 302 };
303
 304 class LocationAndQuality {
 305 unsigned Location : 24;
306 unsigned Quality : 8;
307
308 public:
309 LocationAndQuality() : Location(0), Quality(0) {}
 310 LocationAndQuality(LocIdx L, LocationQuality Q)
 311 : Location(L.asU64()), Quality(static_cast<unsigned>(Q)) {}
312 LocIdx getLoc() const {
313 if (!Quality)
314 return LocIdx::MakeIllegalLoc();
315 return LocIdx(Location);
316 }
317 LocationQuality getQuality() const { return LocationQuality(Quality); }
318 bool isIllegal() const { return !Quality; }
319 bool isBest() const { return getQuality() == LocationQuality::Best; }
320 };
321
322 using ValueLocPair = std::pair<ValueIDNum, LocationAndQuality>;
323
324 static inline bool ValueToLocSort(const ValueLocPair &A,
325 const ValueLocPair &B) {
326 return A.first < B.first;
327 };
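  // A sketch of the lookup pattern this comparator enables (the same pattern
  // appears in loadVarInloc below): ValueToLoc is kept sorted by ValueIDNum so
  // that a value's best location can be found by binary search, e.g.
  //   ValueLocPair Probe(Num, LocationAndQuality());
  //   auto It = std::lower_bound(ValueToLoc.begin(), ValueToLoc.end(), Probe,
  //                              ValueToLocSort);
  //   if (It != ValueToLoc.end() && It->first == Num)
  //     ; // It->second is the best known location for Num.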
328
329 // Returns the LocationQuality for the location L iff the quality of L is
 330 // strictly greater than the provided minimum quality.
331 std::optional<LocationQuality>
333 if (L.isIllegal())
334 return std::nullopt;
336 return std::nullopt;
337 if (MTracker->isSpill(L))
340 return std::nullopt;
341 if (isCalleeSaved(L))
343 if (Min >= LocationQuality::Register)
344 return std::nullopt;
346 }
347
348 /// For a variable \p Var with the live-in value \p Value, attempts to resolve
349 /// the DbgValue to a concrete DBG_VALUE, emitting that value and loading the
350 /// tracking information to track Var throughout the block.
351 /// \p ValueToLoc is a map containing the best known location for every
352 /// ValueIDNum that Value may use.
353 /// \p MBB is the basic block that we are loading the live-in value for.
354 /// \p DbgOpStore is the map containing the DbgOpID->DbgOp mapping needed to
355 /// determine the values used by Value.
357 const SmallVectorImpl<ValueLocPair> &ValueToLoc,
359 SmallVector<DbgOp> DbgOps;
360 SmallVector<ResolvedDbgOp> ResolvedDbgOps;
361 bool IsValueValid = true;
362 unsigned LastUseBeforeDef = 0;
363
364 // If every value used by the incoming DbgValue is available at block
365 // entry, ResolvedDbgOps will contain the machine locations/constants for
366 // those values and will be used to emit a debug location.
367 // If one or more values are not yet available, but will all be defined in
368 // this block, then LastUseBeforeDef will track the instruction index in
369 // this BB at which the last of those values is defined, DbgOps will
370 // contain the values that we will emit when we reach that instruction.
371 // If one or more values are undef or not available throughout this block,
372 // and we can't recover as an entry value, we set IsValueValid=false and
373 // skip this variable.
374 for (DbgOpID ID : Value.getDbgOpIDs()) {
375 DbgOp Op = DbgOpStore.find(ID);
376 DbgOps.push_back(Op);
377 if (ID.isUndef()) {
378 IsValueValid = false;
379 break;
380 }
381 if (ID.isConst()) {
382 ResolvedDbgOps.push_back(Op.MO);
383 continue;
384 }
385
386 // Search for the desired ValueIDNum, to examine the best location found
387 // for it. Use an empty ValueLocPair to search for an entry in ValueToLoc.
388 const ValueIDNum &Num = Op.ID;
389 ValueLocPair Probe(Num, LocationAndQuality());
390 auto ValuesPreferredLoc = std::lower_bound(
391 ValueToLoc.begin(), ValueToLoc.end(), Probe, ValueToLocSort);
392
393 // There must be a legitimate entry found for Num.
394 assert(ValuesPreferredLoc != ValueToLoc.end() &&
395 ValuesPreferredLoc->first == Num);
396
397 if (ValuesPreferredLoc->second.isIllegal()) {
398 // If it's a def that occurs in this block, register it as a
399 // use-before-def to be resolved as we step through the block.
400 // Continue processing values so that we add any other UseBeforeDef
401 // entries needed for later.
402 if (Num.getBlock() == (unsigned)MBB.getNumber() && !Num.isPHI()) {
403 LastUseBeforeDef = std::max(LastUseBeforeDef,
404 static_cast<unsigned>(Num.getInst()));
405 continue;
406 }
407 recoverAsEntryValue(VarID, Value.Properties, Num);
408 IsValueValid = false;
409 break;
410 }
411
412 // Defer modifying ActiveVLocs until after we've confirmed we have a
413 // live range.
414 LocIdx M = ValuesPreferredLoc->second.getLoc();
415 ResolvedDbgOps.push_back(M);
416 }
417
418 // If we cannot produce a valid value for the LiveIn value within this
419 // block, skip this variable.
420 if (!IsValueValid)
421 return;
422
423 // Add UseBeforeDef entry for the last value to be defined in this block.
424 if (LastUseBeforeDef) {
425 addUseBeforeDef(VarID, Value.Properties, DbgOps, LastUseBeforeDef);
426 return;
427 }
428
429 // The LiveIn value is available at block entry, begin tracking and record
430 // the transfer.
431 for (const ResolvedDbgOp &Op : ResolvedDbgOps)
432 if (!Op.IsConst)
433 ActiveMLocs[Op.Loc].insert(VarID);
434 auto NewValue = ResolvedDbgValue{ResolvedDbgOps, Value.Properties};
435 auto Result = ActiveVLocs.insert(std::make_pair(VarID, NewValue));
436 if (!Result.second)
437 Result.first->second = NewValue;
438 auto &[Var, DILoc] = DVMap.lookupDVID(VarID);
440 std::make_pair(VarID, &*MTracker->emitLoc(ResolvedDbgOps, Var, DILoc,
441 Value.Properties)));
442 }
443
444 /// Load object with live-in variable values. \p MLocs contains the live-in
 445 /// values in each machine location, while \p VLocs contains the live-in variable
446 /// values. This method picks variable locations for the live-in variables,
447 /// creates DBG_VALUEs and puts them in #Transfers, then prepares the other
448 /// object fields to track variable locations as we step through the block.
449 /// FIXME: could just examine mloctracker instead of passing in \p mlocs?
450 void
452 const SmallVectorImpl<std::pair<DebugVariableID, DbgValue>> &VLocs,
453 unsigned NumLocs) {
455 ActiveVLocs.clear();
456 VarLocs.clear();
457 VarLocs.reserve(NumLocs);
458 UseBeforeDefs.clear();
460
461 // Mapping of the preferred locations for each value. Collected into this
462 // vector then sorted for easy searching.
464
465 // Initialize the preferred-location map with illegal locations, to be
466 // filled in later.
467 for (const auto &VLoc : VLocs)
468 if (VLoc.second.Kind == DbgValue::Def)
469 for (DbgOpID OpID : VLoc.second.getDbgOpIDs())
470 if (!OpID.ID.IsConst)
471 ValueToLoc.push_back(
472 {DbgOpStore.find(OpID).ID, LocationAndQuality()});
473
474 llvm::sort(ValueToLoc, ValueToLocSort);
475 ActiveMLocs.reserve(VLocs.size());
476 ActiveVLocs.reserve(VLocs.size());
477
478 // Produce a map of value numbers to the current machine locs they live
479 // in. When emulating VarLocBasedImpl, there should only be one
480 // location; when not, we get to pick.
481 for (auto Location : MTracker->locations()) {
482 LocIdx Idx = Location.Idx;
483 ValueIDNum &VNum = MLocs[Idx.asU64()];
484 if (VNum == ValueIDNum::EmptyValue)
485 continue;
486 VarLocs.push_back(VNum);
487
488 // Is there a variable that wants a location for this value? If not, skip.
489 ValueLocPair Probe(VNum, LocationAndQuality());
490 auto VIt = std::lower_bound(ValueToLoc.begin(), ValueToLoc.end(), Probe,
492 if (VIt == ValueToLoc.end() || VIt->first != VNum)
493 continue;
494
495 auto &Previous = VIt->second;
496 // If this is the first location with that value, pick it. Otherwise,
497 // consider whether it's a "longer term" location.
498 std::optional<LocationQuality> ReplacementQuality =
499 getLocQualityIfBetter(Idx, Previous.getQuality());
500 if (ReplacementQuality)
501 Previous = LocationAndQuality(Idx, *ReplacementQuality);
502 }
503
504 // Now map variables to their picked LocIdxes.
505 for (const auto &Var : VLocs) {
506 loadVarInloc(MBB, DbgOpStore, ValueToLoc, Var.first, Var.second);
507 }
509 }
510
511 /// Record that \p VarID is given by the values in \p DbgOps, which become
 512 /// available later in the function.
 513 void addUseBeforeDef(DebugVariableID VarID,
 514 const DbgValueProperties &Properties,
515 const SmallVectorImpl<DbgOp> &DbgOps, unsigned Inst) {
516 UseBeforeDefs[Inst].emplace_back(DbgOps, VarID, Properties);
 517 UseBeforeDefVariables.insert(VarID);
 518 }
519
520 /// After the instruction at index \p Inst and position \p pos has been
521 /// processed, check whether it defines a variable value in a use-before-def.
522 /// If so, and the variable value hasn't changed since the start of the
523 /// block, create a DBG_VALUE.
 524 void checkInstForNewValues(unsigned Inst, MachineBasicBlock::iterator pos) {
 525 auto MIt = UseBeforeDefs.find(Inst);
526 if (MIt == UseBeforeDefs.end())
527 return;
528
529 // Map of values to the locations that store them for every value used by
530 // the variables that may have become available.
532
533 // Populate ValueToLoc with illegal default mappings for every value used by
534 // any UseBeforeDef variables for this instruction.
535 for (auto &Use : MIt->second) {
536 if (!UseBeforeDefVariables.count(Use.VarID))
537 continue;
538
539 for (DbgOp &Op : Use.Values) {
540 assert(!Op.isUndef() && "UseBeforeDef erroneously created for a "
541 "DbgValue with undef values.");
542 if (Op.IsConst)
543 continue;
544
545 ValueToLoc.insert({Op.ID, LocationAndQuality()});
546 }
547 }
548
549 // Exit early if we have no DbgValues to produce.
550 if (ValueToLoc.empty())
551 return;
552
553 // Determine the best location for each desired value.
554 for (auto Location : MTracker->locations()) {
555 LocIdx Idx = Location.Idx;
556 ValueIDNum &LocValueID = Location.Value;
557
558 // Is there a variable that wants a location for this value? If not, skip.
559 auto VIt = ValueToLoc.find(LocValueID);
560 if (VIt == ValueToLoc.end())
561 continue;
562
563 auto &Previous = VIt->second;
564 // If this is the first location with that value, pick it. Otherwise,
565 // consider whether it's a "longer term" location.
566 std::optional<LocationQuality> ReplacementQuality =
567 getLocQualityIfBetter(Idx, Previous.getQuality());
568 if (ReplacementQuality)
569 Previous = LocationAndQuality(Idx, *ReplacementQuality);
570 }
571
572 // Using the map of values to locations, produce a final set of values for
573 // this variable.
574 for (auto &Use : MIt->second) {
575 if (!UseBeforeDefVariables.count(Use.VarID))
576 continue;
577
579
580 for (DbgOp &Op : Use.Values) {
581 if (Op.IsConst) {
582 DbgOps.push_back(Op.MO);
583 continue;
584 }
585 LocIdx NewLoc = ValueToLoc.find(Op.ID)->second.getLoc();
586 if (NewLoc.isIllegal())
587 break;
588 DbgOps.push_back(NewLoc);
589 }
590
591 // If at least one value used by this debug value is no longer available,
592 // i.e. one of the values was killed before we finished defining all of
593 // the values used by this variable, discard.
594 if (DbgOps.size() != Use.Values.size())
595 continue;
596
597 // Otherwise, we're good to go.
598 auto &[Var, DILoc] = DVMap.lookupDVID(Use.VarID);
599 PendingDbgValues.push_back(std::make_pair(
600 Use.VarID, MTracker->emitLoc(DbgOps, Var, DILoc, Use.Properties)));
601 }
602 flushDbgValues(pos, nullptr);
603 }
604
605 /// Helper to move created DBG_VALUEs into Transfers collection.
607 if (PendingDbgValues.size() == 0)
608 return;
609
610 // Pick out the instruction start position.
612 if (MBB && Pos == MBB->begin())
613 BundleStart = MBB->instr_begin();
614 else
615 BundleStart = getBundleStart(Pos->getIterator());
616
617 Transfers.push_back({BundleStart, MBB, PendingDbgValues});
619 }
620
 621 bool isEntryValueVariable(const DebugVariable &Var,
 622 const DIExpression *Expr) const {
623 if (!Var.getVariable()->isParameter())
624 return false;
625
626 if (Var.getInlinedAt())
627 return false;
628
629 if (Expr->getNumElements() > 0 && !Expr->isDeref())
630 return false;
631
632 return true;
633 }
634
635 bool isEntryValueValue(const ValueIDNum &Val) const {
636 // Must be in entry block (block number zero), and be a PHI / live-in value.
637 if (Val.getBlock() || !Val.isPHI())
638 return false;
639
640 // Entry values must enter in a register.
641 if (MTracker->isSpill(Val.getLoc()))
642 return false;
643
647 return Reg != SP && Reg != FP;
648 }
649
 650 bool recoverAsEntryValue(DebugVariableID VarID,
 651 const DbgValueProperties &Prop,
652 const ValueIDNum &Num) {
653 // Is this variable location a candidate to be an entry value? First,
654 // should we be trying this at all?
656 return false;
657
658 const DIExpression *DIExpr = Prop.DIExpr;
659
660 // We don't currently emit entry values for DBG_VALUE_LISTs.
661 if (Prop.IsVariadic) {
662 // If this debug value can be converted to be non-variadic, then do so;
663 // otherwise give up.
664 auto NonVariadicExpression =
666 if (!NonVariadicExpression)
667 return false;
668 DIExpr = *NonVariadicExpression;
669 }
670
671 auto &[Var, DILoc] = DVMap.lookupDVID(VarID);
672
673 // Is the variable appropriate for entry values (i.e., is it a parameter)?
674 if (!isEntryValueVariable(Var, DIExpr))
675 return false;
676
677 // Is the value assigned to this variable still the entry value?
678 if (!isEntryValueValue(Num))
679 return false;
680
681 // Emit a variable location using an entry value expression.
686 PendingDbgValues.push_back(std::make_pair(
687 VarID, &*emitMOLoc(MO, Var, {NewExpr, Prop.Indirect, false})));
688 return true;
689 }
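  // An illustrative outcome (assuming an X86-like target with a parameter
  // passed in $rdi): a successful recovery emits a DBG_VALUE along the lines of
  //   DBG_VALUE $rdi, $noreg, !"param", !DIExpression(DW_OP_LLVM_entry_value, 1)
  // i.e. the parameter's expression wrapped in an entry-value operation,
  // telling the consumer to use the value the register held on entry to the
  // function.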
690
691 /// Change a variable value after encountering a DBG_VALUE inside a block.
692 void redefVar(const MachineInstr &MI) {
693 DebugVariable Var(MI.getDebugVariable(), MI.getDebugExpression(),
694 MI.getDebugLoc()->getInlinedAt());
695 DbgValueProperties Properties(MI);
696 DebugVariableID VarID = DVMap.getDVID(Var);
697
698 // Ignore non-register locations, we don't transfer those.
699 if (MI.isUndefDebugValue() ||
700 all_of(MI.debug_operands(),
701 [](const MachineOperand &MO) { return !MO.isReg(); })) {
702 auto It = ActiveVLocs.find(VarID);
703 if (It != ActiveVLocs.end()) {
704 for (LocIdx Loc : It->second.loc_indices())
705 ActiveMLocs[Loc].erase(VarID);
706 ActiveVLocs.erase(It);
707 }
708 // Any use-before-defs no longer apply.
710 return;
711 }
712
714 for (const MachineOperand &MO : MI.debug_operands()) {
715 if (MO.isReg()) {
716 // Any undef regs have already been filtered out above.
717 Register Reg = MO.getReg();
718 LocIdx NewLoc = MTracker->getRegMLoc(Reg);
719 NewLocs.push_back(NewLoc);
720 } else {
721 NewLocs.push_back(MO);
722 }
723 }
724
725 redefVar(MI, Properties, NewLocs);
726 }
727
728 /// Handle a change in variable location within a block. Terminate the
729 /// variable's current location, and record the value it now refers to, so
730 /// that we can detect location transfers later on.
731 void redefVar(const MachineInstr &MI, const DbgValueProperties &Properties,
733 DebugVariable Var(MI.getDebugVariable(), MI.getDebugExpression(),
734 MI.getDebugLoc()->getInlinedAt());
735 DebugVariableID VarID = DVMap.getDVID(Var);
736 // Any use-before-defs no longer apply.
738
739 // Erase any previous location.
740 auto It = ActiveVLocs.find(VarID);
741 if (It != ActiveVLocs.end()) {
742 for (LocIdx Loc : It->second.loc_indices())
743 ActiveMLocs[Loc].erase(VarID);
744 }
745
746 // If there _is_ no new location, all we had to do was erase.
747 if (NewLocs.empty()) {
748 if (It != ActiveVLocs.end())
749 ActiveVLocs.erase(It);
750 return;
751 }
752
754 for (ResolvedDbgOp &Op : NewLocs) {
755 if (Op.IsConst)
756 continue;
757
758 LocIdx NewLoc = Op.Loc;
759
760 // Check whether our local copy of values-by-location in #VarLocs is out
761 // of date. Wipe old tracking data for the location if it's been clobbered
762 // in the meantime.
763 if (MTracker->readMLoc(NewLoc) != VarLocs[NewLoc.asU64()]) {
764 for (const auto &P : ActiveMLocs[NewLoc]) {
765 auto LostVLocIt = ActiveVLocs.find(P);
766 if (LostVLocIt != ActiveVLocs.end()) {
767 for (LocIdx Loc : LostVLocIt->second.loc_indices()) {
768 // Every active variable mapping for NewLoc will be cleared, no
769 // need to track individual variables.
770 if (Loc == NewLoc)
771 continue;
772 LostMLocs.emplace_back(Loc, P);
773 }
774 }
775 ActiveVLocs.erase(P);
776 }
777 for (const auto &LostMLoc : LostMLocs)
778 ActiveMLocs[LostMLoc.first].erase(LostMLoc.second);
779 LostMLocs.clear();
780 It = ActiveVLocs.find(VarID);
781 ActiveMLocs[NewLoc.asU64()].clear();
782 VarLocs[NewLoc.asU64()] = MTracker->readMLoc(NewLoc);
783 }
784
785 ActiveMLocs[NewLoc].insert(VarID);
786 }
787
788 if (It == ActiveVLocs.end()) {
789 ActiveVLocs.insert(
790 std::make_pair(VarID, ResolvedDbgValue(NewLocs, Properties)));
791 } else {
792 It->second.Ops.assign(NewLocs);
793 It->second.Properties = Properties;
794 }
795 }
796
797 /// Account for a location \p MLoc being clobbered. Examine the variable
 798 /// locations that will be terminated, and try to recover them by using
799 /// another location. Optionally, given \p MakeUndef, emit a DBG_VALUE to
800 /// explicitly terminate a location if it can't be recovered.
 801 void clobberMloc(LocIdx MLoc, MachineBasicBlock::iterator Pos,
 802 bool MakeUndef = true) {
803 auto ActiveMLocIt = ActiveMLocs.find(MLoc);
804 if (ActiveMLocIt == ActiveMLocs.end())
805 return;
806
807 // What was the old variable value?
808 ValueIDNum OldValue = VarLocs[MLoc.asU64()];
809 clobberMloc(MLoc, OldValue, Pos, MakeUndef);
810 }
811 /// Overload that takes an explicit value \p OldValue for when the value in
812 /// \p MLoc has changed and the TransferTracker's locations have not been
813 /// updated yet.
814 void clobberMloc(LocIdx MLoc, ValueIDNum OldValue,
815 MachineBasicBlock::iterator Pos, bool MakeUndef = true) {
816 auto ActiveMLocIt = ActiveMLocs.find(MLoc);
817 if (ActiveMLocIt == ActiveMLocs.end())
818 return;
819
821
822 // Examine the remaining variable locations: if we can find the same value
823 // again, we can recover the location.
824 std::optional<LocIdx> NewLoc;
825 for (auto Loc : MTracker->locations())
826 if (Loc.Value == OldValue)
827 NewLoc = Loc.Idx;
828
829 // If there is no location, and we weren't asked to make the variable
830 // explicitly undef, then stop here.
831 if (!NewLoc && !MakeUndef) {
832 // Try and recover a few more locations with entry values.
833 for (DebugVariableID VarID : ActiveMLocIt->second) {
834 auto &Prop = ActiveVLocs.find(VarID)->second.Properties;
835 recoverAsEntryValue(VarID, Prop, OldValue);
836 }
837 flushDbgValues(Pos, nullptr);
838 return;
839 }
840
841 // Examine all the variables based on this location.
843 // If no new location has been found, every variable that depends on this
844 // MLoc is dead, so end their existing MLoc->Var mappings as well.
846 for (DebugVariableID VarID : ActiveMLocIt->second) {
847 auto ActiveVLocIt = ActiveVLocs.find(VarID);
848 // Re-state the variable location: if there's no replacement then NewLoc
849 // is std::nullopt and a $noreg DBG_VALUE will be created. Otherwise, a
850 // DBG_VALUE identifying the alternative location will be emitted.
851 const DbgValueProperties &Properties = ActiveVLocIt->second.Properties;
852
853 // Produce the new list of debug ops - an empty list if no new location
854 // was found, or the existing list with the substitution MLoc -> NewLoc
855 // otherwise.
857 if (NewLoc) {
858 ResolvedDbgOp OldOp(MLoc);
859 ResolvedDbgOp NewOp(*NewLoc);
860 // Insert illegal ops to overwrite afterwards.
861 DbgOps.insert(DbgOps.begin(), ActiveVLocIt->second.Ops.size(),
863 replace_copy(ActiveVLocIt->second.Ops, DbgOps.begin(), OldOp, NewOp);
864 }
865
866 auto &[Var, DILoc] = DVMap.lookupDVID(VarID);
867 PendingDbgValues.push_back(std::make_pair(
868 VarID, &*MTracker->emitLoc(DbgOps, Var, DILoc, Properties)));
869
870 // Update machine locations <=> variable locations maps. Defer updating
871 // ActiveMLocs to avoid invalidating the ActiveMLocIt iterator.
872 if (!NewLoc) {
873 for (LocIdx Loc : ActiveVLocIt->second.loc_indices()) {
874 if (Loc != MLoc)
875 LostMLocs.emplace_back(Loc, VarID);
876 }
877 ActiveVLocs.erase(ActiveVLocIt);
878 } else {
879 ActiveVLocIt->second.Ops = DbgOps;
880 NewMLocs.insert(VarID);
881 }
882 }
883
884 // Remove variables from ActiveMLocs if they no longer use any other MLocs
885 // due to being killed by this clobber.
886 for (auto &LocVarIt : LostMLocs) {
887 auto LostMLocIt = ActiveMLocs.find(LocVarIt.first);
888 assert(LostMLocIt != ActiveMLocs.end() &&
889 "Variable was using this MLoc, but ActiveMLocs[MLoc] has no "
890 "entries?");
891 LostMLocIt->second.erase(LocVarIt.second);
892 }
893
894 // We lazily track what locations have which values; if we've found a new
895 // location for the clobbered value, remember it.
896 if (NewLoc)
897 VarLocs[NewLoc->asU64()] = OldValue;
898
899 flushDbgValues(Pos, nullptr);
900
901 // Commit ActiveMLoc changes.
902 ActiveMLocIt->second.clear();
903 if (!NewMLocs.empty())
904 for (DebugVariableID VarID : NewMLocs)
905 ActiveMLocs[*NewLoc].insert(VarID);
906 }
907
908 /// Transfer variables based on \p Src to be based on \p Dst. This handles
909 /// both register copies as well as spills and restores. Creates DBG_VALUEs
910 /// describing the movement.
912 // Does Src still contain the value num we expect? If not, it's been
913 // clobbered in the meantime, and our variable locations are stale.
914 if (VarLocs[Src.asU64()] != MTracker->readMLoc(Src))
915 return;
916
917 // assert(ActiveMLocs[Dst].size() == 0);
918 //^^^ Legitimate scenario on account of un-clobbered slot being assigned to?
919
920 // Move set of active variables from one location to another.
921 auto MovingVars = ActiveMLocs[Src];
922 ActiveMLocs[Dst].insert(MovingVars.begin(), MovingVars.end());
923 VarLocs[Dst.asU64()] = VarLocs[Src.asU64()];
924
925 // For each variable based on Src; create a location at Dst.
926 ResolvedDbgOp SrcOp(Src);
927 ResolvedDbgOp DstOp(Dst);
928 for (DebugVariableID VarID : MovingVars) {
929 auto ActiveVLocIt = ActiveVLocs.find(VarID);
930 assert(ActiveVLocIt != ActiveVLocs.end());
931
932 // Update all instances of Src in the variable's tracked values to Dst.
933 std::replace(ActiveVLocIt->second.Ops.begin(),
934 ActiveVLocIt->second.Ops.end(), SrcOp, DstOp);
935
936 auto &[Var, DILoc] = DVMap.lookupDVID(VarID);
937 MachineInstr *MI = MTracker->emitLoc(ActiveVLocIt->second.Ops, Var, DILoc,
938 ActiveVLocIt->second.Properties);
939 PendingDbgValues.push_back(std::make_pair(VarID, MI));
940 }
941 ActiveMLocs[Src].clear();
942 flushDbgValues(Pos, nullptr);
943
944 // XXX XXX XXX "pretend to be old LDV" means dropping all tracking data
945 // about the old location.
946 if (EmulateOldLDV)
947 VarLocs[Src.asU64()] = ValueIDNum::EmptyValue;
948 }
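  // An illustrative example (assuming an X86-like target): for a spill such as
  //   MOV64mr $rsp, 1, $noreg, 8, $noreg, killed $rbx
  // transferMlocs would be called with Src being the location for $rbx and Dst
  // being the tracked stack slot, so every variable currently based on $rbx is
  // re-emitted as a DBG_VALUE based on that slot.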
949
951 const DebugVariable &Var,
952 const DbgValueProperties &Properties) {
953 DebugLoc DL = DILocation::get(Var.getVariable()->getContext(), 0, 0,
954 Var.getVariable()->getScope(),
955 const_cast<DILocation *>(Var.getInlinedAt()));
956 auto MIB = BuildMI(MF, DL, TII->get(TargetOpcode::DBG_VALUE));
957 MIB.add(MO);
958 if (Properties.Indirect)
959 MIB.addImm(0);
960 else
961 MIB.addReg(0);
962 MIB.addMetadata(Var.getVariable());
963 MIB.addMetadata(Properties.DIExpr);
964 return MIB;
965 }
966};
967
968//===----------------------------------------------------------------------===//
969// Implementation
970//===----------------------------------------------------------------------===//
971
972ValueIDNum ValueIDNum::EmptyValue = {UINT_MAX, UINT_MAX, UINT_MAX};
973ValueIDNum ValueIDNum::TombstoneValue = {UINT_MAX, UINT_MAX, UINT_MAX - 1};
974
975#ifndef NDEBUG
976void ResolvedDbgOp::dump(const MLocTracker *MTrack) const {
977 if (IsConst) {
978 dbgs() << MO;
979 } else {
980 dbgs() << MTrack->LocIdxToName(Loc);
981 }
982}
983void DbgOp::dump(const MLocTracker *MTrack) const {
984 if (IsConst) {
985 dbgs() << MO;
986 } else if (!isUndef()) {
987 dbgs() << MTrack->IDAsString(ID);
988 }
989}
990void DbgOpID::dump(const MLocTracker *MTrack, const DbgOpIDMap *OpStore) const {
991 if (!OpStore) {
992 dbgs() << "ID(" << asU32() << ")";
993 } else {
994 OpStore->find(*this).dump(MTrack);
995 }
996}
997void DbgValue::dump(const MLocTracker *MTrack,
998 const DbgOpIDMap *OpStore) const {
999 if (Kind == NoVal) {
1000 dbgs() << "NoVal(" << BlockNo << ")";
1001 } else if (Kind == VPHI || Kind == Def) {
1002 if (Kind == VPHI)
1003 dbgs() << "VPHI(" << BlockNo << ",";
1004 else
1005 dbgs() << "Def(";
1006 for (unsigned Idx = 0; Idx < getDbgOpIDs().size(); ++Idx) {
1007 getDbgOpID(Idx).dump(MTrack, OpStore);
1008 if (Idx != 0)
1009 dbgs() << ",";
1010 }
1011 dbgs() << ")";
1012 }
1013 if (Properties.Indirect)
1014 dbgs() << " indir";
1015 if (Properties.DIExpr)
1016 dbgs() << " " << *Properties.DIExpr;
1017}
1018#endif
1019
1021 const TargetRegisterInfo &TRI,
1022 const TargetLowering &TLI)
1023 : MF(MF), TII(TII), TRI(TRI), TLI(TLI),
1024 LocIdxToIDNum(ValueIDNum::EmptyValue), LocIdxToLocID(0) {
1026 reset();
1028 assert(NumRegs < (1u << NUM_LOC_BITS)); // Detect bit packing failure
1029
1030 // Always track SP. This avoids the implicit clobbering caused by regmasks
1031 // from affecting its values. (LiveDebugValues disbelieves calls and
1032 // regmasks that claim to clobber SP).
1034 if (SP) {
1035 unsigned ID = getLocID(SP);
1037
1038 for (MCRegAliasIterator RAI(SP, &TRI, true); RAI.isValid(); ++RAI)
1039 SPAliases.insert(*RAI);
1040 }
1041
1042 // Build some common stack positions -- full registers being spilt to the
1043 // stack.
1044 StackSlotIdxes.insert({{8, 0}, 0});
1045 StackSlotIdxes.insert({{16, 0}, 1});
1046 StackSlotIdxes.insert({{32, 0}, 2});
1047 StackSlotIdxes.insert({{64, 0}, 3});
1048 StackSlotIdxes.insert({{128, 0}, 4});
1049 StackSlotIdxes.insert({{256, 0}, 5});
1050 StackSlotIdxes.insert({{512, 0}, 6});
1051
1052 // Traverse all the subregister idxes, and ensure there's an index for them.
1053 // Duplicates are no problem: we're interested in their position in the
1054 // stack slot, we don't want to type the slot.
1055 for (unsigned int I = 1; I < TRI.getNumSubRegIndices(); ++I) {
1056 unsigned Size = TRI.getSubRegIdxSize(I);
1057 unsigned Offs = TRI.getSubRegIdxOffset(I);
1058 unsigned Idx = StackSlotIdxes.size();
1059
1060 // Some subregs have -1, -2 and so forth fed into their fields, to mean
1061 // special backend things. Ignore those.
1062 if (Size > 60000 || Offs > 60000)
1063 continue;
1064
1065 StackSlotIdxes.insert({{Size, Offs}, Idx});
1066 }
1067
1068 // There may also be strange register class sizes (think x86 fp80s).
1069 for (const TargetRegisterClass *RC : TRI.regclasses()) {
1070 unsigned Size = TRI.getRegSizeInBits(*RC);
1071
1072 // We might see special reserved values as sizes, and classes for other
1073 // stuff the machine tries to model. If it's more than 512 bits, then it
1074 // is very unlikely to be a register that can be spilt.
1075 if (Size > 512)
1076 continue;
1077
1078 unsigned Idx = StackSlotIdxes.size();
1079 StackSlotIdxes.insert({{Size, 0}, Idx});
1080 }
1081
1082 for (auto &Idx : StackSlotIdxes)
1083 StackIdxesToPos[Idx.second] = Idx.first;
1084
1086}
1087
1089 assert(ID != 0);
1090 LocIdx NewIdx = LocIdx(LocIdxToIDNum.size());
1091 LocIdxToIDNum.grow(NewIdx);
1092 LocIdxToLocID.grow(NewIdx);
1093
1094 // Default: it's an mphi.
1095 ValueIDNum ValNum = {CurBB, 0, NewIdx};
1096 // Was this reg ever touched by a regmask?
1097 for (const auto &MaskPair : reverse(Masks)) {
1098 if (MaskPair.first->clobbersPhysReg(ID)) {
1099 // There was an earlier def we skipped.
1100 ValNum = {CurBB, MaskPair.second, NewIdx};
1101 break;
1102 }
1103 }
1104
1105 LocIdxToIDNum[NewIdx] = ValNum;
1106 LocIdxToLocID[NewIdx] = ID;
1107 return NewIdx;
1108}
1109
1110void MLocTracker::writeRegMask(const MachineOperand *MO, unsigned CurBB,
1111 unsigned InstID) {
1112 // Def any register we track that isn't preserved. The regmask
1113 // terminates the liveness of a register, meaning its value can't be
1114 // relied upon -- we represent this by giving it a new value.
1115 for (auto Location : locations()) {
1116 unsigned ID = LocIdxToLocID[Location.Idx];
1117 // Don't clobber SP, even if the mask says it's clobbered.
1118 if (ID < NumRegs && !SPAliases.count(ID) && MO->clobbersPhysReg(ID))
1119 defReg(ID, CurBB, InstID);
1120 }
1121 Masks.push_back(std::make_pair(MO, InstID));
1122}
1123
1124std::optional<SpillLocationNo> MLocTracker::getOrTrackSpillLoc(SpillLoc L) {
1125 SpillLocationNo SpillID(SpillLocs.idFor(L));
1126
1127 if (SpillID.id() == 0) {
1128 // If there is no location, and we have reached the limit of how many stack
1129 // slots to track, then don't track this one.
1130 if (SpillLocs.size() >= StackWorkingSetLimit)
1131 return std::nullopt;
1132
1133 // Spill location is untracked: create record for this one, and all
1134 // subregister slots too.
1135 SpillID = SpillLocationNo(SpillLocs.insert(L));
1136 for (unsigned StackIdx = 0; StackIdx < NumSlotIdxes; ++StackIdx) {
1137 unsigned L = getSpillIDWithIdx(SpillID, StackIdx);
1138 LocIdx Idx = LocIdx(LocIdxToIDNum.size()); // New idx
1140 LocIdxToLocID.grow(Idx);
1141 LocIDToLocIdx.push_back(Idx);
1142 LocIdxToLocID[Idx] = L;
1143 // Initialize to PHI value; corresponds to the location's live-in value
1144 // during transfer function construction.
1146 }
1147 }
1148 return SpillID;
1149}
1150
1152 unsigned ID = LocIdxToLocID[Idx];
1153 if (ID >= NumRegs) {
1155 ID -= NumRegs;
1156 unsigned Slot = ID / NumSlotIdxes;
1157 return Twine("slot ")
1158 .concat(Twine(Slot).concat(Twine(" sz ").concat(Twine(Pos.first)
1159 .concat(Twine(" offs ").concat(Twine(Pos.second))))))
1160 .str();
1161 } else {
1162 return TRI.getRegAsmName(ID).str();
1163 }
1164}
1165
1166std::string MLocTracker::IDAsString(const ValueIDNum &Num) const {
1167 std::string DefName = LocIdxToName(Num.getLoc());
1168 return Num.asString(DefName);
1169}
1170
1171#ifndef NDEBUG
1173 for (auto Location : locations()) {
1174 std::string MLocName = LocIdxToName(Location.Value.getLoc());
1175 std::string DefName = Location.Value.asString(MLocName);
1176 dbgs() << LocIdxToName(Location.Idx) << " --> " << DefName << "\n";
1177 }
1178}
1179
1181 for (auto Location : locations()) {
1182 std::string foo = LocIdxToName(Location.Idx);
1183 dbgs() << "Idx " << Location.Idx.asU64() << " " << foo << "\n";
1184 }
1185}
1186#endif
1187
1190 const DebugVariable &Var, const DILocation *DILoc,
1191 const DbgValueProperties &Properties) {
1192 DebugLoc DL = DebugLoc(DILoc);
1193
1194 const MCInstrDesc &Desc = Properties.IsVariadic
1195 ? TII.get(TargetOpcode::DBG_VALUE_LIST)
1196 : TII.get(TargetOpcode::DBG_VALUE);
1197
1198#ifdef EXPENSIVE_CHECKS
1199 assert(all_of(DbgOps,
1200 [](const ResolvedDbgOp &Op) {
1201 return Op.IsConst || !Op.Loc.isIllegal();
1202 }) &&
1203 "Did not expect illegal ops in DbgOps.");
1204 assert((DbgOps.size() == 0 ||
1205 DbgOps.size() == Properties.getLocationOpCount()) &&
1206 "Expected to have either one DbgOp per MI LocationOp, or none.");
1207#endif
1208
1209 auto GetRegOp = [](unsigned Reg) -> MachineOperand {
1211 /* Reg */ Reg, /* isDef */ false, /* isImp */ false,
1212 /* isKill */ false, /* isDead */ false,
1213 /* isUndef */ false, /* isEarlyClobber */ false,
1214 /* SubReg */ 0, /* isDebug */ true);
1215 };
1216
1218
1219 auto EmitUndef = [&]() {
1220 MOs.clear();
1221 MOs.assign(Properties.getLocationOpCount(), GetRegOp(0));
1222 return BuildMI(MF, DL, Desc, false, MOs, Var.getVariable(),
1223 Properties.DIExpr);
1224 };
1225
1226 // Don't bother passing any real operands to BuildMI if any of them would be
1227 // $noreg.
1228 if (DbgOps.empty())
1229 return EmitUndef();
1230
1231 bool Indirect = Properties.Indirect;
1232
1233 const DIExpression *Expr = Properties.DIExpr;
1234
1235 assert(DbgOps.size() == Properties.getLocationOpCount());
1236
1237 // If all locations are valid, accumulate them into our list of
1238 // MachineOperands. For any spilled locations, either update the indirectness
1239 // register or apply the appropriate transformations in the DIExpression.
1240 for (size_t Idx = 0; Idx < Properties.getLocationOpCount(); ++Idx) {
1241 const ResolvedDbgOp &Op = DbgOps[Idx];
1242
1243 if (Op.IsConst) {
1244 MOs.push_back(Op.MO);
1245 continue;
1246 }
1247
1248 LocIdx MLoc = Op.Loc;
1249 unsigned LocID = LocIdxToLocID[MLoc];
1250 if (LocID >= NumRegs) {
1251 SpillLocationNo SpillID = locIDToSpill(LocID);
1252 StackSlotPos StackIdx = locIDToSpillIdx(LocID);
1253 unsigned short Offset = StackIdx.second;
1254
1255 // TODO: support variables that are located in spill slots, with non-zero
1256 // offsets from the start of the spill slot. It would require some more
1257 // complex DIExpression calculations. This doesn't seem to be produced by
1258 // LLVM right now, so don't try and support it.
1259 // Accept no-subregister slots and subregisters where the offset is zero.
1260 // The consumer should already have type information to work out how large
1261 // the variable is.
1262 if (Offset == 0) {
1263 const SpillLoc &Spill = SpillLocs[SpillID.id()];
1264 unsigned Base = Spill.SpillBase;
1265
1266 // There are several ways we can dereference things, and several inputs
1267 // to consider:
1268 // * NRVO variables will appear with IsIndirect set, but should have
1269 // nothing else in their DIExpressions,
1270 // * Variables with DW_OP_stack_value in their expr already need an
1271 // explicit dereference of the stack location,
1272 // * Values that don't match the variable size need DW_OP_deref_size,
1273 // * Everything else can just become a simple location expression.
1274
1275 // We need to use deref_size whenever there's a mismatch between the
1276 // size of the value and the size of the variable portion being read.
1277 // Additionally, we should use it whenever dealing with stack_value
1278 // fragments, to avoid the consumer having to determine the deref size
1279 // from DW_OP_piece.
1280 bool UseDerefSize = false;
1281 unsigned ValueSizeInBits = getLocSizeInBits(MLoc);
1282 unsigned DerefSizeInBytes = ValueSizeInBits / 8;
1283 if (auto Fragment = Var.getFragment()) {
1284 unsigned VariableSizeInBits = Fragment->SizeInBits;
1285 if (VariableSizeInBits != ValueSizeInBits || Expr->isComplex())
1286 UseDerefSize = true;
1287 } else if (auto Size = Var.getVariable()->getSizeInBits()) {
1288 if (*Size != ValueSizeInBits) {
1289 UseDerefSize = true;
1290 }
1291 }
1292
1293 SmallVector<uint64_t, 5> OffsetOps;
1294 TRI.getOffsetOpcodes(Spill.SpillOffset, OffsetOps);
1295 bool StackValue = false;
1296
1297 if (Properties.Indirect) {
1298 // This is something like an NRVO variable, where the pointer has been
1299 // spilt to the stack. It should end up being a memory location, with
1300 // the pointer to the variable loaded off the stack with a deref:
1301 assert(!Expr->isImplicit());
1302 OffsetOps.push_back(dwarf::DW_OP_deref);
1303 } else if (UseDerefSize && Expr->isSingleLocationExpression()) {
1304 // TODO: Figure out how to handle deref size issues for variadic
1305 // values.
1306 // We're loading a value off the stack that's not the same size as the
1307 // variable. Add / subtract stack offset, explicitly deref with a
1308 // size, and add DW_OP_stack_value if not already present.
1309 OffsetOps.push_back(dwarf::DW_OP_deref_size);
1310 OffsetOps.push_back(DerefSizeInBytes);
1311 StackValue = true;
1312 } else if (Expr->isComplex() || Properties.IsVariadic) {
1313 // A variable with no size ambiguity, but with extra elements in its
1314 // expression. Manually dereference the stack location.
1315 OffsetOps.push_back(dwarf::DW_OP_deref);
1316 } else {
1317 // A plain value that has been spilt to the stack, with no further
1318 // context. Request a location expression, marking the DBG_VALUE as
1319 // IsIndirect.
1320 Indirect = true;
1321 }
1322
1323 Expr = DIExpression::appendOpsToArg(Expr, OffsetOps, Idx, StackValue);
1324 MOs.push_back(GetRegOp(Base));
1325 } else {
1326 // This is a stack location with a weird subregister offset: emit an
1327 // undef DBG_VALUE instead.
1328 return EmitUndef();
1329 }
1330 } else {
1331 // Non-empty, non-stack slot, must be a plain register.
1332 MOs.push_back(GetRegOp(LocID));
1333 }
1334 }
1335
1336 return BuildMI(MF, DL, Desc, Indirect, MOs, Var.getVariable(), Expr);
1337}
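// An illustrative outcome of the spill handling above (assuming an X86-like
// target and hypothetical sizes/offsets): a value spilled to [$rsp + 8] whose
// size does not match the variable it describes would be emitted roughly as
//   DBG_VALUE $rsp, $noreg, !"x",
//             !DIExpression(DW_OP_plus_uconst, 8, DW_OP_deref_size, 4,
//                           DW_OP_stack_value)
// while a plain, same-sized spill becomes an indirect DBG_VALUE of the base
// register with only the offset ops applied.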
1338
1339/// Default construct and initialize the pass.
1341
1343 unsigned Reg = MTracker->LocIdxToLocID[L];
1344 return isCalleeSavedReg(Reg);
1345}
1347 for (MCRegAliasIterator RAI(R, TRI, true); RAI.isValid(); ++RAI)
1348 if (CalleeSavedRegs.test(*RAI))
1349 return true;
1350 return false;
1351}
1352
1353//===----------------------------------------------------------------------===//
1354// Debug Range Extension Implementation
1355//===----------------------------------------------------------------------===//
1356
1357#ifndef NDEBUG
1358// Something to restore in the future.
1359// void InstrRefBasedLDV::printVarLocInMBB(..)
1360#endif
1361
1362std::optional<SpillLocationNo>
1363InstrRefBasedLDV::extractSpillBaseRegAndOffset(const MachineInstr &MI) {
1364 assert(MI.hasOneMemOperand() &&
1365 "Spill instruction does not have exactly one memory operand?");
1366 auto MMOI = MI.memoperands_begin();
1367 const PseudoSourceValue *PVal = (*MMOI)->getPseudoValue();
1369 "Inconsistent memory operand in spill instruction");
1370 int FI = cast<FixedStackPseudoSourceValue>(PVal)->getFrameIndex();
1371 const MachineBasicBlock *MBB = MI.getParent();
1372 Register Reg;
1374 return MTracker->getOrTrackSpillLoc({Reg, Offset});
1375}
1376
1377std::optional<LocIdx>
1379 std::optional<SpillLocationNo> SpillLoc = extractSpillBaseRegAndOffset(MI);
1380 if (!SpillLoc)
1381 return std::nullopt;
1382
1383 // Where in the stack slot is this value defined -- i.e., what size of value
1384 // is this? An important question, because it could be loaded into a register
1385 // from the stack at some point. Happily the memory operand will tell us
1386 // the size written to the stack.
1387 auto *MemOperand = *MI.memoperands_begin();
1388 LocationSize SizeInBits = MemOperand->getSizeInBits();
1389 assert(SizeInBits.hasValue() && "Expected to find a valid size!");
1390
1391 // Find that position in the stack indexes we're tracking.
1392 auto IdxIt = MTracker->StackSlotIdxes.find({SizeInBits.getValue(), 0});
1393 if (IdxIt == MTracker->StackSlotIdxes.end())
1394 // That index is not tracked. This is surprising, and unlikely to ever
1395 // occur, but the safe action is to indicate the variable is optimised out.
1396 return std::nullopt;
1397
1398 unsigned SpillID = MTracker->getSpillIDWithIdx(*SpillLoc, IdxIt->second);
1399 return MTracker->getSpillMLoc(SpillID);
1400}
1401
1402/// End all previous ranges related to @MI and start a new range from @MI
1403/// if it is a DBG_VALUE instr.
1404bool InstrRefBasedLDV::transferDebugValue(const MachineInstr &MI) {
1405 if (!MI.isDebugValue())
1406 return false;
1407
1408 assert(MI.getDebugVariable()->isValidLocationForIntrinsic(MI.getDebugLoc()) &&
1409 "Expected inlined-at fields to agree");
1410
1411 // If there are no instructions in this lexical scope, do no location tracking
1412 // at all; this variable shouldn't get a legitimate location range.
1413 auto *Scope = LS.findLexicalScope(MI.getDebugLoc().get());
1414 if (Scope == nullptr)
1415 return true; // handled it; by doing nothing
1416
1417 // MLocTracker needs to know that this register is read, even if it's only
1418 // read by a debug inst.
1419 for (const MachineOperand &MO : MI.debug_operands())
1420 if (MO.isReg() && MO.getReg() != 0)
1421 (void)MTracker->readReg(MO.getReg());
1422
1423 // If we're preparing for the second analysis (variables), the machine value
1424 // locations are already solved, and we report this DBG_VALUE and the value
1425 // it refers to to VLocTracker.
1426 if (VTracker) {
1427 SmallVector<DbgOpID> DebugOps;
1428 // Feed defVar the new variable location, or if this is a DBG_VALUE $noreg,
1429 // feed defVar None.
1430 if (!MI.isUndefDebugValue()) {
1431 for (const MachineOperand &MO : MI.debug_operands()) {
1432 // There should be no undef registers here, as we've screened for undef
1433 // debug values.
1434 if (MO.isReg()) {
1435 DebugOps.push_back(DbgOpStore.insert(MTracker->readReg(MO.getReg())));
1436 } else if (MO.isImm() || MO.isFPImm() || MO.isCImm()) {
1437 DebugOps.push_back(DbgOpStore.insert(MO));
1438 } else {
1439 llvm_unreachable("Unexpected debug operand type.");
1440 }
1441 }
1442 }
1443 VTracker->defVar(MI, DbgValueProperties(MI), DebugOps);
1444 }
1445
1446 // If performing final tracking of transfers, report this variable definition
1447 // to the TransferTracker too.
1448 if (TTracker)
1449 TTracker->redefVar(MI);
1450 return true;
1451}
1452
1453std::optional<ValueIDNum> InstrRefBasedLDV::getValueForInstrRef(
1454 unsigned InstNo, unsigned OpNo, MachineInstr &MI,
1455 const FuncValueTable *MLiveOuts, const FuncValueTable *MLiveIns) {
1456 // Various optimizations may have happened to the value during codegen,
1457 // recorded in the value substitution table. Apply any substitutions to
1458 // the instruction / operand number in this DBG_INSTR_REF, and collect
1459 // any subregister extractions performed during optimization.
1460 const MachineFunction &MF = *MI.getParent()->getParent();
1461
1462 // Create dummy substitution with Src set, for lookup.
1463 auto SoughtSub =
1464 MachineFunction::DebugSubstitution({InstNo, OpNo}, {0, 0}, 0);
1465
1466 SmallVector<unsigned, 4> SeenSubregs;
1467 auto LowerBoundIt = llvm::lower_bound(MF.DebugValueSubstitutions, SoughtSub);
1468 while (LowerBoundIt != MF.DebugValueSubstitutions.end() &&
1469 LowerBoundIt->Src == SoughtSub.Src) {
1470 std::tie(InstNo, OpNo) = LowerBoundIt->Dest;
1471 SoughtSub.Src = LowerBoundIt->Dest;
1472 if (unsigned Subreg = LowerBoundIt->Subreg)
1473 SeenSubregs.push_back(Subreg);
1474 LowerBoundIt = llvm::lower_bound(MF.DebugValueSubstitutions, SoughtSub);
1475 }
1476
1477 // Default machine value number is <None> -- if no instruction defines
1478 // the corresponding value, it must have been optimized out.
1479 std::optional<ValueIDNum> NewID;
1480
1481 // Try to lookup the instruction number, and find the machine value number
1482 // that it defines. It could be an instruction, or a PHI.
1483 auto InstrIt = DebugInstrNumToInstr.find(InstNo);
1484 auto PHIIt = llvm::lower_bound(DebugPHINumToValue, InstNo);
1485 if (InstrIt != DebugInstrNumToInstr.end()) {
1486 const MachineInstr &TargetInstr = *InstrIt->second.first;
1487 uint64_t BlockNo = TargetInstr.getParent()->getNumber();
1488
1489 // Pick out the designated operand. It might be a memory reference, if
1490 // a register def was folded into a stack store.
 1491 if (OpNo == MachineFunction::DebugOperandMemNumber &&
 1492 TargetInstr.hasOneMemOperand()) {
1493 std::optional<LocIdx> L = findLocationForMemOperand(TargetInstr);
1494 if (L)
1495 NewID = ValueIDNum(BlockNo, InstrIt->second.second, *L);
1496 } else if (OpNo != MachineFunction::DebugOperandMemNumber) {
1497 // Permit the debug-info to be completely wrong: identifying a nonexistent
1498 // operand, or one that is not a register definition, means something
1499 // unexpected happened during optimisation. Broken debug-info, however,
1500 // shouldn't crash the compiler -- instead leave the variable value as
1501 // None, which will make it appear "optimised out".
1502 if (OpNo < TargetInstr.getNumOperands()) {
1503 const MachineOperand &MO = TargetInstr.getOperand(OpNo);
1504
1505 if (MO.isReg() && MO.isDef() && MO.getReg()) {
1506 unsigned LocID = MTracker->getLocID(MO.getReg());
1507 LocIdx L = MTracker->LocIDToLocIdx[LocID];
1508 NewID = ValueIDNum(BlockNo, InstrIt->second.second, L);
1509 }
1510 }
1511
1512 if (!NewID) {
1513 LLVM_DEBUG(
1514 { dbgs() << "Seen instruction reference to illegal operand\n"; });
1515 }
1516 }
1517 // else: NewID is left as None.
1518 } else if (PHIIt != DebugPHINumToValue.end() && PHIIt->InstrNum == InstNo) {
1519 // It's actually a PHI value. Which value it is might not be obvious, use
1520 // the resolver helper to find out.
1521 assert(MLiveOuts && MLiveIns);
1522 NewID = resolveDbgPHIs(*MI.getParent()->getParent(), *MLiveOuts, *MLiveIns,
1523 MI, InstNo);
1524 }
1525
1526 // Apply any subregister extractions, in reverse. We might have seen code
1527 // like this:
1528 // CALL64 @foo, implicit-def $rax
1529 // %0:gr64 = COPY $rax
1530 // %1:gr32 = COPY %0.sub_32bit
1531 // %2:gr16 = COPY %1.sub_16bit
1532 // %3:gr8 = COPY %2.sub_8bit
1533 // In which case each copy would have been recorded as a substitution with
1534 // a subregister qualifier. Apply those qualifiers now.
1535 if (NewID && !SeenSubregs.empty()) {
1536 unsigned Offset = 0;
1537 unsigned Size = 0;
1538
1539 // Look at each subregister that we passed through, and progressively
1540 // narrow in, accumulating any offsets that occur. Substitutions should
1541 // only ever be the same or narrower width than what they read from;
1542 // iterate in reverse order so that we go from wide to small.
1543 for (unsigned Subreg : reverse(SeenSubregs)) {
1544 unsigned ThisSize = TRI->getSubRegIdxSize(Subreg);
1545 unsigned ThisOffset = TRI->getSubRegIdxOffset(Subreg);
1546 Offset += ThisOffset;
1547 Size = (Size == 0) ? ThisSize : std::min(Size, ThisSize);
1548 }
1549
1550 // If that worked, look for an appropriate subregister within the register
1551 // where the define happens. Don't look at values that were defined during
1552 // a stack write: we can't currently express register locations within
1553 // spills.
1554 LocIdx L = NewID->getLoc();
1555 if (NewID && !MTracker->isSpill(L)) {
1556 // Find the register class for the register where this def happened.
1557 // FIXME: no index for this?
1558 Register Reg = MTracker->LocIdxToLocID[L];
1559 const TargetRegisterClass *TRC = nullptr;
1560 for (const auto *TRCI : TRI->regclasses())
1561 if (TRCI->contains(Reg))
1562 TRC = TRCI;
1563 assert(TRC && "Couldn't find target register class?");
1564
1565 // If the register we have isn't the right size or in the right place,
1566 // try to find a subregister inside it.
1567 unsigned MainRegSize = TRI->getRegSizeInBits(*TRC);
1568 if (Size != MainRegSize || Offset) {
1569 // Enumerate all subregisters, searching.
1570 Register NewReg = 0;
1571 for (MCPhysReg SR : TRI->subregs(Reg)) {
1572 unsigned Subreg = TRI->getSubRegIndex(Reg, SR);
1573 unsigned SubregSize = TRI->getSubRegIdxSize(Subreg);
1574 unsigned SubregOffset = TRI->getSubRegIdxOffset(Subreg);
1575 if (SubregSize == Size && SubregOffset == Offset) {
1576 NewReg = SR;
1577 break;
1578 }
1579 }
1580
1581 // If we didn't find anything: there's no way to express our value.
1582 if (!NewReg) {
1583 NewID = std::nullopt;
1584 } else {
1585 // Re-state the value as being defined within the subregister
1586 // that we found.
1587 LocIdx NewLoc = MTracker->lookupOrTrackRegister(NewReg);
1588 NewID = ValueIDNum(NewID->getBlock(), NewID->getInst(), NewLoc);
1589 }
1590 }
1591 } else {
1592 // If we can't handle subregisters, unset the new value.
1593 NewID = std::nullopt;
1594 }
1595 }
1596
1597 return NewID;
1598}
1599
1600bool InstrRefBasedLDV::transferDebugInstrRef(MachineInstr &MI,
1601 const FuncValueTable *MLiveOuts,
1602 const FuncValueTable *MLiveIns) {
1603 if (!MI.isDebugRef())
1604 return false;
1605
1606 // Only handle this instruction when we are building the variable value
1607 // transfer function.
1608 if (!VTracker && !TTracker)
1609 return false;
1610
1611 const DILocalVariable *Var = MI.getDebugVariable();
1612 const DIExpression *Expr = MI.getDebugExpression();
1613 const DILocation *DebugLoc = MI.getDebugLoc();
1614 const DILocation *InlinedAt = DebugLoc->getInlinedAt();
1615 assert(Var->isValidLocationForIntrinsic(DebugLoc) &&
1616 "Expected inlined-at fields to agree");
1617
1618 DebugVariable V(Var, Expr, InlinedAt);
1619
1620 auto *Scope = LS.findLexicalScope(MI.getDebugLoc().get());
1621 if (Scope == nullptr)
1622 return true; // Handled by doing nothing. This variable is never in scope.
1623
1624 SmallVector<DbgOpID> DbgOpIDs;
1625 for (const MachineOperand &MO : MI.debug_operands()) {
1626 if (!MO.isDbgInstrRef()) {
1627 assert(!MO.isReg() && "DBG_INSTR_REF should not contain registers");
1628 DbgOpID ConstOpID = DbgOpStore.insert(DbgOp(MO));
1629 DbgOpIDs.push_back(ConstOpID);
1630 continue;
1631 }
1632
1633 unsigned InstNo = MO.getInstrRefInstrIndex();
1634 unsigned OpNo = MO.getInstrRefOpIndex();
1635
1636 // Default machine value number is <None> -- if no instruction defines
1637 // the corresponding value, it must have been optimized out.
1638 std::optional<ValueIDNum> NewID =
1639 getValueForInstrRef(InstNo, OpNo, MI, MLiveOuts, MLiveIns);
1640 // We have a value number or std::nullopt. If the latter, then kill the
1641 // entire debug value.
1642 if (NewID) {
1643 DbgOpIDs.push_back(DbgOpStore.insert(*NewID));
1644 } else {
1645 DbgOpIDs.clear();
1646 break;
1647 }
1648 }
1649
1650 // We have a DbgOpID for every value or for none. Tell the variable value
1651 // tracker about it. The rest of this LiveDebugValues implementation acts
1652 // exactly the same for DBG_INSTR_REFs as DBG_VALUEs (just, the former can
1653 // refer to values that aren't immediately available).
1654 DbgValueProperties Properties(Expr, false, true);
1655 if (VTracker)
1656 VTracker->defVar(MI, Properties, DbgOpIDs);
1657
1658 // If we're on the final pass through the function, decompose this INSTR_REF
1659 // into a plain DBG_VALUE.
1660 if (!TTracker)
1661 return true;
1662
1663 // Fetch the concrete DbgOps now, as we will need them later.
1664 SmallVector<DbgOp> DbgOps;
1665 for (DbgOpID OpID : DbgOpIDs) {
1666 DbgOps.push_back(DbgOpStore.find(OpID));
1667 }
1668
1669 // Pick a location for the machine value number, if such a location exists.
1670 // (This information could be stored in TransferTracker to make it faster).
1671 SmallDenseMap<ValueIDNum, TransferTracker::LocationAndQuality> FoundLocs;
1672 SmallVector<ValueIDNum> ValuesToFind;
1673 // Initialize the preferred-location map with illegal locations, to be
1674 // filled in later.
1675 for (const DbgOp &Op : DbgOps) {
1676 if (!Op.IsConst)
1677 if (FoundLocs.insert({Op.ID, TransferTracker::LocationAndQuality()})
1678 .second)
1679 ValuesToFind.push_back(Op.ID);
1680 }
1681
1682 for (auto Location : MTracker->locations()) {
1683 LocIdx CurL = Location.Idx;
1684 ValueIDNum ID = MTracker->readMLoc(CurL);
1685 auto ValueToFindIt = find(ValuesToFind, ID);
1686 if (ValueToFindIt == ValuesToFind.end())
1687 continue;
1688 auto &Previous = FoundLocs.find(ID)->second;
1689 // If this is the first location with that value, pick it. Otherwise,
1690 // consider whether it's a "longer term" location.
1691 std::optional<TransferTracker::LocationQuality> ReplacementQuality =
1692 TTracker->getLocQualityIfBetter(CurL, Previous.getQuality());
1693 if (ReplacementQuality) {
1694 Previous = TransferTracker::LocationAndQuality(CurL, *ReplacementQuality);
1695 if (Previous.isBest()) {
1696 ValuesToFind.erase(ValueToFindIt);
1697 if (ValuesToFind.empty())
1698 break;
1699 }
1700 }
1701 }
1702
1703 SmallVector<ResolvedDbgOp> NewLocs;
1704 for (const DbgOp &DbgOp : DbgOps) {
1705 if (DbgOp.IsConst) {
1706 NewLocs.push_back(DbgOp.MO);
1707 continue;
1708 }
1709 LocIdx FoundLoc = FoundLocs.find(DbgOp.ID)->second.getLoc();
1710 if (FoundLoc.isIllegal()) {
1711 NewLocs.clear();
1712 break;
1713 }
1714 NewLocs.push_back(FoundLoc);
1715 }
1716 // Tell transfer tracker that the variable value has changed.
1717 TTracker->redefVar(MI, Properties, NewLocs);
1718
1719 // If there were values with no location, but all such values are defined in
1720 // later instructions in this block, this is a block-local use-before-def.
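// For example, a DBG_INSTR_REF early in the block may name a value that is
// only defined by a later instruction in the same block; rather than dropping
// it, the location is deferred and the transfer tracker emits it once the
// defining instruction has been processed.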
1721 if (!DbgOps.empty() && NewLocs.empty()) {
1722 bool IsValidUseBeforeDef = true;
1723 uint64_t LastUseBeforeDef = 0;
1724 for (auto ValueLoc : FoundLocs) {
1725 ValueIDNum NewID = ValueLoc.first;
1726 LocIdx FoundLoc = ValueLoc.second.getLoc();
1727 if (!FoundLoc.isIllegal())
1728 continue;
1729 // If we have a value with no location that is not defined in this block,
1730 // then it has no location in this block, leaving this value undefined.
1731 if (NewID.getBlock() != CurBB || NewID.getInst() <= CurInst) {
1732 IsValidUseBeforeDef = false;
1733 break;
1734 }
1735 LastUseBeforeDef = std::max(LastUseBeforeDef, NewID.getInst());
1736 }
1737 if (IsValidUseBeforeDef) {
1738 DebugVariableID VID = DVMap.insertDVID(V, MI.getDebugLoc().get());
1739 TTracker->addUseBeforeDef(VID, {MI.getDebugExpression(), false, true},
1740 DbgOps, LastUseBeforeDef);
1741 }
1742 }
1743
1744 // Produce a DBG_VALUE representing what this DBG_INSTR_REF meant.
1745 // This DBG_VALUE is potentially a $noreg / undefined location, if
1746 // FoundLoc is illegal.
1747 // (XXX -- could morph the DBG_INSTR_REF in the future).
1748 MachineInstr *DbgMI =
1749 MTracker->emitLoc(NewLocs, V, MI.getDebugLoc().get(), Properties);
1750 DebugVariableID ID = DVMap.getDVID(V);
1751
1752 TTracker->PendingDbgValues.push_back(std::make_pair(ID, DbgMI));
1753 TTracker->flushDbgValues(MI.getIterator(), nullptr);
1754 return true;
1755}
1756
1757bool InstrRefBasedLDV::transferDebugPHI(MachineInstr &MI) {
1758 if (!MI.isDebugPHI())
1759 return false;
1760
1761 // Analyse these only when solving the machine value location problem.
1762 if (VTracker || TTracker)
1763 return true;
1764
1765 // First operand is the value location, either a stack slot or register.
1766 // Second is the debug instruction number of the original PHI.
1767 const MachineOperand &MO = MI.getOperand(0);
1768 unsigned InstrNum = MI.getOperand(1).getImm();
1769
1770 auto EmitBadPHI = [this, &MI, InstrNum]() -> bool {
1771 // Helper lambda to do any accounting when we fail to find a location for
1772 // a DBG_PHI. This can happen if DBG_PHIs are malformed, or refer to a
1773 // dead stack slot, for example.
1774 // Record a DebugPHIRecord with an empty value + location.
1775 DebugPHINumToValue.push_back(
1776 {InstrNum, MI.getParent(), std::nullopt, std::nullopt});
1777 return true;
1778 };
1779
1780 if (MO.isReg() && MO.getReg()) {
1781 // The value is whatever's currently in the register. Read and record it,
1782 // to be analysed later.
1783 Register Reg = MO.getReg();
1784 ValueIDNum Num = MTracker->readReg(Reg);
1785 auto PHIRec = DebugPHIRecord(
1786 {InstrNum, MI.getParent(), Num, MTracker->lookupOrTrackRegister(Reg)});
1787 DebugPHINumToValue.push_back(PHIRec);
1788
1789 // Ensure this register is tracked.
1790 for (MCRegAliasIterator RAI(MO.getReg(), TRI, true); RAI.isValid(); ++RAI)
1791 MTracker->lookupOrTrackRegister(*RAI);
1792 } else if (MO.isFI()) {
1793 // The value is whatever's in this stack slot.
1794 unsigned FI = MO.getIndex();
1795
1796 // If the stack slot is dead, then this was optimized away.
1797 // FIXME: stack slot colouring should account for slots that get merged.
1798 if (MFI->isDeadObjectIndex(FI))
1799 return EmitBadPHI();
1800
1801 // Identify this spill slot, ensure it's tracked.
1802 Register Base;
1803 StackOffset Offs = TFI->getFrameIndexReference(*MI.getMF(), FI, Base);
1804 SpillLoc SL = {Base, Offs};
1805 std::optional<SpillLocationNo> SpillNo = MTracker->getOrTrackSpillLoc(SL);
1806
1807 // We might be able to find a value, but have chosen not to, to avoid
1808 // tracking too much stack information.
1809 if (!SpillNo)
1810 return EmitBadPHI();
1811
1812 // Any stack location DBG_PHI should have an associated bit-size.
1813 assert(MI.getNumOperands() == 3 && "Stack DBG_PHI with no size?");
1814 unsigned slotBitSize = MI.getOperand(2).getImm();
1815
1816 unsigned SpillID = MTracker->getLocID(*SpillNo, {slotBitSize, 0});
1817 LocIdx SpillLoc = MTracker->getSpillMLoc(SpillID);
1818 ValueIDNum Result = MTracker->readMLoc(SpillLoc);
1819
1820 // Record this DBG_PHI for later analysis.
1821 auto DbgPHI = DebugPHIRecord({InstrNum, MI.getParent(), Result, SpillLoc});
1822 DebugPHINumToValue.push_back(DbgPHI);
1823 } else {
1824 // Else: if the operand is neither a legal register nor a stack slot, then
1825 // we're being fed illegal debug-info. Record an empty PHI, so that any
1826 // debug users trying to read this number will be put off trying to
1827 // interpret the value.
1828 LLVM_DEBUG(
1829 { dbgs() << "Seen DBG_PHI with unrecognised operand format\n"; });
1830 return EmitBadPHI();
1831 }
1832
1833 return true;
1834}
1835
1836void InstrRefBasedLDV::transferRegisterDef(MachineInstr &MI) {
1837 // Meta Instructions do not affect the debug liveness of any register they
1838 // define.
1839 if (MI.isImplicitDef()) {
1840 // Except when there's an implicit def, and the location it's defining has
1841 // no value number. The whole point of an implicit def is to announce that
1842 // the register is live, without being specific about its value. So define
1843 // a value if there isn't one already.
1844 ValueIDNum Num = MTracker->readReg(MI.getOperand(0).getReg());
1845 // Has a legitimate value -> ignore the implicit def.
1846 if (Num.getLoc() != 0)
1847 return;
1848 // Otherwise, def it here.
1849 } else if (MI.isMetaInstruction())
1850 return;
1851
1852 // We always ignore SP defines on call instructions, they don't actually
1853 // change the value of the stack pointer... except for win32's _chkstk. This
1854 // is rare: filter quickly for the common case (no stack adjustments, not a
1855 // call, etc). If it is a call that modifies SP, recognise the SP register
1856 // defs.
1857 bool CallChangesSP = false;
1858 if (AdjustsStackInCalls && MI.isCall() && MI.getOperand(0).isSymbol() &&
1859 !strcmp(MI.getOperand(0).getSymbolName(), StackProbeSymbolName.data()))
1860 CallChangesSP = true;
1861
1862 // Test whether we should ignore a def of this register due to it being part
1863 // of the stack pointer.
1864 auto IgnoreSPAlias = [this, &MI, CallChangesSP](Register R) -> bool {
1865 if (CallChangesSP)
1866 return false;
1867 return MI.isCall() && MTracker->SPAliases.count(R);
1868 };
1869
1870 // Find the regs killed by MI, and find regmasks of preserved regs.
1871 // Max out the number of statically allocated elements in `DeadRegs`, as this
1872 // prevents fallback to std::set::count() operations.
1873 SmallSet<uint32_t, 32> DeadRegs;
1874 SmallVector<const uint32_t *, 4> RegMasks;
1875 SmallVector<const MachineOperand *, 4> RegMaskPtrs;
1876 for (const MachineOperand &MO : MI.operands()) {
1877 // Determine whether the operand is a register def.
1878 if (MO.isReg() && MO.isDef() && MO.getReg() && MO.getReg().isPhysical() &&
1879 !IgnoreSPAlias(MO.getReg())) {
1880 // Remove ranges of all aliased registers.
1881 for (MCRegAliasIterator RAI(MO.getReg(), TRI, true); RAI.isValid(); ++RAI)
1882 // FIXME: Can we break out of this loop early if no insertion occurs?
1883 DeadRegs.insert(*RAI);
1884 } else if (MO.isRegMask()) {
1885 RegMasks.push_back(MO.getRegMask());
1886 RegMaskPtrs.push_back(&MO);
1887 }
1888 }
1889
1890 // Tell MLocTracker about all definitions, of regmasks and otherwise.
1891 for (uint32_t DeadReg : DeadRegs)
1892 MTracker->defReg(DeadReg, CurBB, CurInst);
1893
1894 for (const auto *MO : RegMaskPtrs)
1895 MTracker->writeRegMask(MO, CurBB, CurInst);
1896
1897 // If this instruction writes to a spill slot, def that slot.
1898 if (hasFoldedStackStore(MI)) {
1899 if (std::optional<SpillLocationNo> SpillNo =
1900 extractSpillBaseRegAndOffset(MI)) {
1901 for (unsigned int I = 0; I < MTracker->NumSlotIdxes; ++I) {
1902 unsigned SpillID = MTracker->getSpillIDWithIdx(*SpillNo, I);
1903 LocIdx L = MTracker->getSpillMLoc(SpillID);
1904 MTracker->setMLoc(L, ValueIDNum(CurBB, CurInst, L));
1905 }
1906 }
1907 }
1908
1909 if (!TTracker)
1910 return;
1911
1912 // When committing variable values to locations: tell transfer tracker that
1913 // we've clobbered things. It may be able to recover the variable from a
1914 // different location.
1915
1916 // Inform TTracker about any direct clobbers.
1917 for (uint32_t DeadReg : DeadRegs) {
1918 LocIdx Loc = MTracker->lookupOrTrackRegister(DeadReg);
1919 TTracker->clobberMloc(Loc, MI.getIterator(), false);
1920 }
1921
1922 // Look for any clobbers performed by a register mask. Only test locations
1923 // that are actually being tracked.
1924 if (!RegMaskPtrs.empty()) {
1925 for (auto L : MTracker->locations()) {
1926 // Stack locations can't be clobbered by regmasks.
1927 if (MTracker->isSpill(L.Idx))
1928 continue;
1929
1930 Register Reg = MTracker->LocIdxToLocID[L.Idx];
1931 if (IgnoreSPAlias(Reg))
1932 continue;
1933
1934 for (const auto *MO : RegMaskPtrs)
1935 if (MO->clobbersPhysReg(Reg))
1936 TTracker->clobberMloc(L.Idx, MI.getIterator(), false);
1937 }
1938 }
1939
1940 // Tell TTracker about any folded stack store.
1941 if (hasFoldedStackStore(MI)) {
1942 if (std::optional<SpillLocationNo> SpillNo =
1943 extractSpillBaseRegAndOffset(MI)) {
1944 for (unsigned int I = 0; I < MTracker->NumSlotIdxes; ++I) {
1945 unsigned SpillID = MTracker->getSpillIDWithIdx(*SpillNo, I);
1946 LocIdx L = MTracker->getSpillMLoc(SpillID);
1947 TTracker->clobberMloc(L, MI.getIterator(), true);
1948 }
1949 }
1950 }
1951}
1952
1953void InstrRefBasedLDV::performCopy(Register SrcRegNum, Register DstRegNum) {
1954 // In all circumstances, re-def all aliases. It's definitely a new value now.
1955 for (MCRegAliasIterator RAI(DstRegNum, TRI, true); RAI.isValid(); ++RAI)
1956 MTracker->defReg(*RAI, CurBB, CurInst);
1957
1958 ValueIDNum SrcValue = MTracker->readReg(SrcRegNum);
1959 MTracker->setReg(DstRegNum, SrcValue);
1960
1961 // Copy subregisters from one location to another.
1962 for (MCSubRegIndexIterator SRI(SrcRegNum, TRI); SRI.isValid(); ++SRI) {
1963 unsigned SrcSubReg = SRI.getSubReg();
1964 unsigned SubRegIdx = SRI.getSubRegIndex();
1965 unsigned DstSubReg = TRI->getSubReg(DstRegNum, SubRegIdx);
1966 if (!DstSubReg)
1967 continue;
1968
1969 // Do copy. There are two matching subregisters; the source value should
1970 // have been def'd when the super-reg was, but the latter might not be
1971 // tracked yet.
1972 // This will force SrcSubReg to be tracked, if it isn't yet. Will read
1973 // mphi values if it wasn't tracked.
1974 LocIdx SrcL = MTracker->lookupOrTrackRegister(SrcSubReg);
1975 LocIdx DstL = MTracker->lookupOrTrackRegister(DstSubReg);
1976 (void)SrcL;
1977 (void)DstL;
1978 ValueIDNum CpyValue = MTracker->readReg(SrcSubReg);
1979
1980 MTracker->setReg(DstSubReg, CpyValue);
1981 }
1982}
1983
1984std::optional<SpillLocationNo>
1985InstrRefBasedLDV::isSpillInstruction(const MachineInstr &MI,
1986 MachineFunction *MF) {
1987 // TODO: Handle multiple stores folded into one.
1988 if (!MI.hasOneMemOperand())
1989 return std::nullopt;
1990
1991 // Reject any memory operand that's aliased -- we can't guarantee its value.
1992 auto MMOI = MI.memoperands_begin();
1993 const PseudoSourceValue *PVal = (*MMOI)->getPseudoValue();
1994 if (PVal->isAliased(MFI))
1995 return std::nullopt;
1996
1997 if (!MI.getSpillSize(TII) && !MI.getFoldedSpillSize(TII))
1998 return std::nullopt; // This is not a spill instruction, since no valid size
1999 // was returned from either function.
2000
2001 return extractSpillBaseRegAndOffset(MI);
2002}
2003
2004bool InstrRefBasedLDV::isLocationSpill(const MachineInstr &MI,
2005 MachineFunction *MF, unsigned &Reg) {
2006 if (!isSpillInstruction(MI, MF))
2007 return false;
2008
2009 int FI;
2010 Reg = TII->isStoreToStackSlotPostFE(MI, FI);
2011 return Reg != 0;
2012}
2013
2014std::optional<SpillLocationNo>
2015InstrRefBasedLDV::isRestoreInstruction(const MachineInstr &MI,
2016 MachineFunction *MF, unsigned &Reg) {
2017 if (!MI.hasOneMemOperand())
2018 return std::nullopt;
2019
2020 // FIXME: Handle folded restore instructions with more than one memory
2021 // operand.
2022 if (MI.getRestoreSize(TII)) {
2023 Reg = MI.getOperand(0).getReg();
2024 return extractSpillBaseRegAndOffset(MI);
2025 }
2026 return std::nullopt;
2027}
2028
2029bool InstrRefBasedLDV::transferSpillOrRestoreInst(MachineInstr &MI) {
2030 // XXX -- it's too difficult to implement VarLocBasedImpl's stack location
2031 // limitations under the new model. Therefore, when comparing them, compare
2032 // versions that don't attempt spills or restores at all.
2033 if (EmulateOldLDV)
2034 return false;
2035
2036 // Strictly limit ourselves to plain loads and stores, not all instructions
2037 // that can access the stack.
2038 int DummyFI = -1;
2039 if (!TII->isStoreToStackSlotPostFE(MI, DummyFI) &&
2040 !TII->isLoadFromStackSlotPostFE(MI, DummyFI))
2041 return false;
2042
2043 MachineFunction *MF = MI.getMF();
2044 unsigned Reg;
2045
2046 LLVM_DEBUG(dbgs() << "Examining instruction: "; MI.dump(););
2047
2048 // Strictly limit ourselves to plain loads and stores, not all instructions
2049 // that can access the stack.
2050 int FIDummy;
2051 if (!TII->isStoreToStackSlotPostFE(MI, FIDummy) &&
2052 !TII->isLoadFromStackSlotPostFE(MI, FIDummy))
2053 return false;
2054
2055 // First, if there are any DBG_VALUEs pointing at a spill slot that is
2056 // written to, terminate that variable location. The value in memory
2057 // will have changed. DbgEntityHistoryCalculator doesn't try to detect this.
2058 if (std::optional<SpillLocationNo> Loc = isSpillInstruction(MI, MF)) {
2059 // Un-set this location and clobber, so that earlier locations don't
2060 // continue past this store.
2061 for (unsigned SlotIdx = 0; SlotIdx < MTracker->NumSlotIdxes; ++SlotIdx) {
2062 unsigned SpillID = MTracker->getSpillIDWithIdx(*Loc, SlotIdx);
2063 std::optional<LocIdx> MLoc = MTracker->getSpillMLoc(SpillID);
2064 if (!MLoc)
2065 continue;
2066
2067 // We need to over-write the stack slot with something (here, a def at
2068 // this instruction) to ensure no values are preserved in this stack slot
2069 // after the spill. It also prevents TTracker from trying to recover the
2070 // location and re-installing it in the same place.
2071 ValueIDNum Def(CurBB, CurInst, *MLoc);
2072 MTracker->setMLoc(*MLoc, Def);
2073 if (TTracker)
2074 TTracker->clobberMloc(*MLoc, MI.getIterator());
2075 }
2076 }
2077
2078 // Try to recognise spill and restore instructions that may transfer a value.
2079 if (isLocationSpill(MI, MF, Reg)) {
2080 // isLocationSpill returning true should guarantee we can extract a
2081 // location.
2082 SpillLocationNo Loc = *extractSpillBaseRegAndOffset(MI);
2083
2084 auto DoTransfer = [&](Register SrcReg, unsigned SpillID) {
2085 auto ReadValue = MTracker->readReg(SrcReg);
2086 LocIdx DstLoc = MTracker->getSpillMLoc(SpillID);
2087 MTracker->setMLoc(DstLoc, ReadValue);
2088
2089 if (TTracker) {
2090 LocIdx SrcLoc = MTracker->getRegMLoc(SrcReg);
2091 TTracker->transferMlocs(SrcLoc, DstLoc, MI.getIterator());
2092 }
2093 };
2094
2095 // Then, transfer subreg bits.
2096 for (MCPhysReg SR : TRI->subregs(Reg)) {
2097 // Ensure this reg is tracked,
2098 (void)MTracker->lookupOrTrackRegister(SR);
2099 unsigned SubregIdx = TRI->getSubRegIndex(Reg, SR);
2100 unsigned SpillID = MTracker->getLocID(Loc, SubregIdx);
2101 DoTransfer(SR, SpillID);
2102 }
2103
2104 // Directly lookup size of main source reg, and transfer.
2105 unsigned Size = TRI->getRegSizeInBits(Reg, *MRI);
2106 unsigned SpillID = MTracker->getLocID(Loc, {Size, 0});
2107 DoTransfer(Reg, SpillID);
2108 } else {
2109 std::optional<SpillLocationNo> Loc = isRestoreInstruction(MI, MF, Reg);
2110 if (!Loc)
2111 return false;
2112
2113 // Assumption: we're reading from the base of the stack slot, not some
2114 // offset into it. It seems very unlikely LLVM would ever generate
2115 // restores where this wasn't true. This then becomes a question of what
2116 // subregisters in the destination register line up with positions in the
2117 // stack slot.
2118
2119 // Def all registers that alias the destination.
2120 for (MCRegAliasIterator RAI(Reg, TRI, true); RAI.isValid(); ++RAI)
2121 MTracker->defReg(*RAI, CurBB, CurInst);
2122
2123 // Now find subregisters within the destination register, and load values
2124 // from stack slot positions.
2125 auto DoTransfer = [&](Register DestReg, unsigned SpillID) {
2126 LocIdx SrcIdx = MTracker->getSpillMLoc(SpillID);
2127 auto ReadValue = MTracker->readMLoc(SrcIdx);
2128 MTracker->setReg(DestReg, ReadValue);
2129 };
2130
2131 for (MCPhysReg SR : TRI->subregs(Reg)) {
2132 unsigned Subreg = TRI->getSubRegIndex(Reg, SR);
2133 unsigned SpillID = MTracker->getLocID(*Loc, Subreg);
2134 DoTransfer(SR, SpillID);
2135 }
2136
2137 // Directly look up this register's slot idx by size, and transfer.
2138 unsigned Size = TRI->getRegSizeInBits(Reg, *MRI);
2139 unsigned SpillID = MTracker->getLocID(*Loc, {Size, 0});
2140 DoTransfer(Reg, SpillID);
2141 }
2142 return true;
2143}
2144
2145bool InstrRefBasedLDV::transferRegisterCopy(MachineInstr &MI) {
2146 auto DestSrc = TII->isCopyLikeInstr(MI);
2147 if (!DestSrc)
2148 return false;
2149
2150 const MachineOperand *DestRegOp = DestSrc->Destination;
2151 const MachineOperand *SrcRegOp = DestSrc->Source;
2152
2153 Register SrcReg = SrcRegOp->getReg();
2154 Register DestReg = DestRegOp->getReg();
2155
2156 // Ignore identity copies. Yep, these make it as far as LiveDebugValues.
2157 if (SrcReg == DestReg)
2158 return true;
2159
2160 // For emulating VarLocBasedImpl:
2161 // We want to recognize instructions where the destination register is a
2162 // callee-saved register. If a register that could be clobbered by the call
2163 // is included, there is a good chance that it is going to be clobbered
2164 // soon. It is more likely that the previous register, which is callee
2165 // saved, is going to stay unclobbered longer, even if it is killed.
2166 //
2167 // For InstrRefBasedImpl, we can track multiple locations per value, so
2168 // ignore this condition.
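// For example, with a value in $rdi that is also copied into callee-saved
// $rbx before a call, following the copy keeps the variable location valid
// across the call, whereas $rdi is likely to be clobbered by it.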
2169 if (EmulateOldLDV && !isCalleeSavedReg(DestReg))
2170 return false;
2171
2172 // The old VarLocBasedImpl only followed killing copies.
2173 if (EmulateOldLDV && !SrcRegOp->isKill())
2174 return false;
2175
2176 // Before we update MTracker, remember which values were present in each of
2177 // the locations about to be overwritten, so that we can recover any
2178 // potentially clobbered variables.
2179 DenseMap<LocIdx, ValueIDNum> ClobberedLocs;
2180 if (TTracker) {
2181 for (MCRegAliasIterator RAI(DestReg, TRI, true); RAI.isValid(); ++RAI) {
2182 LocIdx ClobberedLoc = MTracker->getRegMLoc(*RAI);
2183 auto MLocIt = TTracker->ActiveMLocs.find(ClobberedLoc);
2184 // If ActiveMLocs isn't tracking this location or there are no variables
2185 // using it, don't bother remembering.
2186 if (MLocIt == TTracker->ActiveMLocs.end() || MLocIt->second.empty())
2187 continue;
2188 ValueIDNum Value = MTracker->readReg(*RAI);
2189 ClobberedLocs[ClobberedLoc] = Value;
2190 }
2191 }
2192
2193 // Copy MTracker info, including subregs if available.
2194 InstrRefBasedLDV::performCopy(SrcReg, DestReg);
2195
2196 // The copy might have clobbered variables based on the destination register.
2197 // Tell TTracker about it, passing the old ValueIDNum to search for
2198 // alternative locations (or else terminating those variables).
2199 if (TTracker) {
2200 for (auto LocVal : ClobberedLocs) {
2201 TTracker->clobberMloc(LocVal.first, LocVal.second, MI.getIterator(), false);
2202 }
2203 }
2204
2205 // Only produce a transfer of DBG_VALUE within a block where old LDV
2206 // would have. We might make use of the additional value tracking in some
2207 // other way, later.
2208 if (TTracker && isCalleeSavedReg(DestReg) && SrcRegOp->isKill())
2209 TTracker->transferMlocs(MTracker->getRegMLoc(SrcReg),
2210 MTracker->getRegMLoc(DestReg), MI.getIterator());
2211
2212 // VarLocBasedImpl would quit tracking the old location after copying.
2213 if (EmulateOldLDV && SrcReg != DestReg)
2214 MTracker->defReg(SrcReg, CurBB, CurInst);
2215
2216 return true;
2217}
2218
2219/// Accumulate a mapping between each DILocalVariable fragment and other
2220/// fragments of that DILocalVariable which overlap. This reduces work during
2221/// the data-flow stage from "Find any overlapping fragments" to "Check if the
2222/// known-to-overlap fragments are present".
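/// For example, fragments covering bits [0,16) and [16,48) of a variable do
/// not overlap, whereas [0,32) and [16,48) do; the latter pair would each be
/// recorded in the other's overlap vector.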
2223/// \param MI A previously unprocessed debug instruction to analyze for
2224/// fragment usage.
2225void InstrRefBasedLDV::accumulateFragmentMap(MachineInstr &MI) {
2226 assert(MI.isDebugValueLike());
2227 DebugVariable MIVar(MI.getDebugVariable(), MI.getDebugExpression(),
2228 MI.getDebugLoc()->getInlinedAt());
2229 FragmentInfo ThisFragment = MIVar.getFragmentOrDefault();
2230
2231 // If this is the first sighting of this variable, then we are guaranteed
2232 // there are currently no overlapping fragments either. Initialize the set
2233 // of seen fragments, record no overlaps for the current one, and return.
2234 auto SeenIt = SeenFragments.find(MIVar.getVariable());
2235 if (SeenIt == SeenFragments.end()) {
2236 SmallSet<FragmentInfo, 4> OneFragment;
2237 OneFragment.insert(ThisFragment);
2238 SeenFragments.insert({MIVar.getVariable(), OneFragment});
2239
2240 OverlapFragments.insert({{MIVar.getVariable(), ThisFragment}, {}});
2241 return;
2242 }
2243
2244 // If this particular Variable/Fragment pair already exists in the overlap
2245 // map, it has already been accounted for.
2246 auto IsInOLapMap =
2247 OverlapFragments.insert({{MIVar.getVariable(), ThisFragment}, {}});
2248 if (!IsInOLapMap.second)
2249 return;
2250
2251 auto &ThisFragmentsOverlaps = IsInOLapMap.first->second;
2252 auto &AllSeenFragments = SeenIt->second;
2253
2254 // Otherwise, examine all other seen fragments for this variable, with "this"
2255 // fragment being a previously unseen fragment. Record any pair of
2256 // overlapping fragments.
2257 for (const auto &ASeenFragment : AllSeenFragments) {
2258 // Does this previously seen fragment overlap?
2259 if (DIExpression::fragmentsOverlap(ThisFragment, ASeenFragment)) {
2260 // Yes: Mark the current fragment as being overlapped.
2261 ThisFragmentsOverlaps.push_back(ASeenFragment);
2262 // Mark the previously seen fragment as being overlapped by the current
2263 // one.
2264 auto ASeenFragmentsOverlaps =
2265 OverlapFragments.find({MIVar.getVariable(), ASeenFragment});
2266 assert(ASeenFragmentsOverlaps != OverlapFragments.end() &&
2267 "Previously seen var fragment has no vector of overlaps");
2268 ASeenFragmentsOverlaps->second.push_back(ThisFragment);
2269 }
2270 }
2271
2272 AllSeenFragments.insert(ThisFragment);
2273}
2274
2275void InstrRefBasedLDV::process(MachineInstr &MI,
2276 const FuncValueTable *MLiveOuts,
2277 const FuncValueTable *MLiveIns) {
2278 // Try to interpret an MI as a debug or transfer instruction. Only if it's
2279 // none of these should we interpret its register defs as new value
2280 // definitions.
2281 if (transferDebugValue(MI))
2282 return;
2283 if (transferDebugInstrRef(MI, MLiveOuts, MLiveIns))
2284 return;
2285 if (transferDebugPHI(MI))
2286 return;
2287 if (transferRegisterCopy(MI))
2288 return;
2289 if (transferSpillOrRestoreInst(MI))
2290 return;
2291 transferRegisterDef(MI);
2292}
2293
2294void InstrRefBasedLDV::produceMLocTransferFunction(
2295 MachineFunction &MF, SmallVectorImpl<MLocTransferMap> &MLocTransfer,
2296 unsigned MaxNumBlocks) {
2297 // Because we try to optimize around register mask operands by ignoring regs
2298 // that aren't currently tracked, we set up something ugly for later: RegMask
2299 // operands that are seen earlier than the first use of a register still need
2300 // to clobber that register in the transfer function. But this information
2301 // isn't actively recorded. Instead, we track each RegMask used in each block,
2302 // and accumulate the clobbered but untracked registers in each block into
2303 // the following bitvector. Later, if new values are tracked, we can add
2304 // appropriate clobbers.
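// For example, a call's regmask might clobber $rbx in a block before any
// instruction has caused $rbx to be tracked; the mask is remembered in
// BlockMasks for that block, and if $rbx becomes tracked later, the fix-up
// loop at the end of this function installs an explicit clobber for it.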
2305 SmallVector<BitVector, 32> BlockMasks;
2306 BlockMasks.resize(MaxNumBlocks);
2307
2308 // Reserve one bit per register for the masks described above.
2309 unsigned BVWords = MachineOperand::getRegMaskSize(TRI->getNumRegs());
2310 for (auto &BV : BlockMasks)
2311 BV.resize(TRI->getNumRegs(), true);
2312
2313 // Step through all instructions and inhale the transfer function.
2314 for (auto &MBB : MF) {
2315 // Object fields that are read by trackers to know where we are in the
2316 // function.
2317 CurBB = MBB.getNumber();
2318 CurInst = 1;
2319
2320 // Set all machine locations to a PHI value. For transfer function
2321 // production only, this signifies the live-in value to the block.
2322 MTracker->reset();
2323 MTracker->setMPhis(CurBB);
2324
2325 // Step through each instruction in this block.
2326 for (auto &MI : MBB) {
2327 // Pass in an empty unique_ptr for the value tables when accumulating the
2328 // machine transfer function.
2329 process(MI, nullptr, nullptr);
2330
2331 // Also accumulate fragment map.
2332 if (MI.isDebugValueLike())
2333 accumulateFragmentMap(MI);
2334
2335 // Create a map from the instruction number (if present) to the
2336 // MachineInstr and its position.
2337 if (uint64_t InstrNo = MI.peekDebugInstrNum()) {
2338 auto InstrAndPos = std::make_pair(&MI, CurInst);
2339 auto InsertResult =
2340 DebugInstrNumToInstr.insert(std::make_pair(InstrNo, InstrAndPos));
2341
2342 // There should never be duplicate instruction numbers.
2343 assert(InsertResult.second);
2344 (void)InsertResult;
2345 }
2346
2347 ++CurInst;
2348 }
2349
2350 // Produce the transfer function, a map of machine location to new value. If
2351 // any machine location has the live-in phi value from the start of the
2352 // block, it's live-through and doesn't need recording in the transfer
2353 // function.
2354 for (auto Location : MTracker->locations()) {
2355 LocIdx Idx = Location.Idx;
2356 ValueIDNum &P = Location.Value;
2357 if (P.isPHI() && P.getLoc() == Idx.asU64())
2358 continue;
2359
2360 // Insert-or-update.
2361 auto &TransferMap = MLocTransfer[CurBB];
2362 auto Result = TransferMap.insert(std::make_pair(Idx.asU64(), P));
2363 if (!Result.second)
2364 Result.first->second = P;
2365 }
2366
2367 // Accumulate any bitmask operands into the clobbered reg mask for this
2368 // block.
2369 for (auto &P : MTracker->Masks) {
2370 BlockMasks[CurBB].clearBitsNotInMask(P.first->getRegMask(), BVWords);
2371 }
2372 }
2373
2374 // Compute a bitvector of all the registers that are tracked in this block.
2375 BitVector UsedRegs(TRI->getNumRegs());
2376 for (auto Location : MTracker->locations()) {
2377 unsigned ID = MTracker->LocIdxToLocID[Location.Idx];
2378 // Ignore stack slots, and aliases of the stack pointer.
2379 if (ID >= TRI->getNumRegs() || MTracker->SPAliases.count(ID))
2380 continue;
2381 UsedRegs.set(ID);
2382 }
2383
2384 // Check that any regmask-clobber of a register that gets tracked is not
2385 // live-through in the transfer function. It needs to be clobbered at the
2386 // very least.
2387 for (unsigned int I = 0; I < MaxNumBlocks; ++I) {
2388 BitVector &BV = BlockMasks[I];
2389 BV.flip();
2390 BV &= UsedRegs;
2391 // This produces all the bits that we clobber, but also use. Check that
2392 // they're all clobbered or at least set in the designated transfer
2393 // elem.
2394 for (unsigned Bit : BV.set_bits()) {
2395 unsigned ID = MTracker->getLocID(Bit);
2396 LocIdx Idx = MTracker->LocIDToLocIdx[ID];
2397 auto &TransferMap = MLocTransfer[I];
2398
2399 // Install a value representing the fact that this location is effectively
2400 // written to in this block. As there's no reserved value, instead use
2401 // a value number that is never generated. Pick the value number for the
2402 // first instruction in the block, def'ing this location, which we know
2403 // this block never used anyway.
2404 ValueIDNum NotGeneratedNum = ValueIDNum(I, 1, Idx);
2405 auto Result =
2406 TransferMap.insert(std::make_pair(Idx.asU64(), NotGeneratedNum));
2407 if (!Result.second) {
2408 ValueIDNum &ValueID = Result.first->second;
2409 if (ValueID.getBlock() == I && ValueID.isPHI())
2410 // It was left as live-through. Set it to clobbered.
2411 ValueID = NotGeneratedNum;
2412 }
2413 }
2414 }
2415}
2416
2417bool InstrRefBasedLDV::mlocJoin(
2418 MachineBasicBlock &MBB, SmallPtrSet<const MachineBasicBlock *, 16> &Visited,
2419 FuncValueTable &OutLocs, ValueTable &InLocs) {
2420 LLVM_DEBUG(dbgs() << "join MBB: " << MBB.getNumber() << "\n");
2421 bool Changed = false;
2422
2423 // Handle value-propagation when control flow merges on entry to a block. For
2424 // any location without a PHI already placed, the location has the same value
2425 // as its predecessors. If a PHI is placed, test to see whether it's now a
2426 // redundant PHI that we can eliminate.
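// For example, if every predecessor's live-out for $rax is the same value
// (say the def from instruction 5 of bb.1), the tentative PHI placed here for
// $rax is redundant and the live-in becomes that single value instead.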
2427
2428 SmallVector<const MachineBasicBlock *, 8> BlockOrders(MBB.predecessors());
2429
2430 // Visit predecessors in RPOT order.
2431 auto Cmp = [&](const MachineBasicBlock *A, const MachineBasicBlock *B) {
2432 return BBToOrder.find(A)->second < BBToOrder.find(B)->second;
2433 };
2434 llvm::sort(BlockOrders, Cmp);
2435
2436 // Skip entry block.
2437 if (BlockOrders.size() == 0) {
2438 // FIXME: We don't use assert here to prevent instr-ref-unreachable.mir
2439 // failing.
2440 LLVM_DEBUG(dbgs()
2441 << "Found not reachable block " << MBB.getFullName()
2442 << " from entry which may lead out of "
2443 "bound access to VarLocs\n");
2444 return false;
2445 }
2446
2447 // Step through all machine locations, look at each predecessor and test
2448 // whether we can eliminate redundant PHIs.
2449 for (auto Location : MTracker->locations()) {
2450 LocIdx Idx = Location.Idx;
2451
2452 // Pick out the first predecessor's live-out value for this location. It's
2453 // guaranteed to not be a backedge, as we order by RPO.
2454 ValueIDNum FirstVal = OutLocs[*BlockOrders[0]][Idx.asU64()];
2455
2456 // If we've already eliminated a PHI here, do no further checking, just
2457 // propagate the first live-in value into this block.
2458 if (InLocs[Idx.asU64()] != ValueIDNum(MBB.getNumber(), 0, Idx)) {
2459 if (InLocs[Idx.asU64()] != FirstVal) {
2460 InLocs[Idx.asU64()] = FirstVal;
2461 Changed |= true;
2462 }
2463 continue;
2464 }
2465
2466 // We're now examining a PHI to see whether it's un-necessary. Loop around
2467 // the other live-in values and test whether they're all the same.
2468 bool Disagree = false;
2469 for (unsigned int I = 1; I < BlockOrders.size(); ++I) {
2470 const MachineBasicBlock *PredMBB = BlockOrders[I];
2471 const ValueIDNum &PredLiveOut = OutLocs[*PredMBB][Idx.asU64()];
2472
2473 // Incoming values agree, continue trying to eliminate this PHI.
2474 if (FirstVal == PredLiveOut)
2475 continue;
2476
2477 // We can also accept a PHI value that feeds back into itself.
2478 if (PredLiveOut == ValueIDNum(MBB.getNumber(), 0, Idx))
2479 continue;
2480
2481 // Live-out of a predecessor disagrees with the first predecessor.
2482 Disagree = true;
2483 }
2484
2485 // No disagreement? No PHI. Otherwise, leave the PHI in live-ins.
2486 if (!Disagree) {
2487 InLocs[Idx.asU64()] = FirstVal;
2488 Changed |= true;
2489 }
2490 }
2491
2492 // TODO: Reimplement NumInserted and NumRemoved.
2493 return Changed;
2494}
2495
2496void InstrRefBasedLDV::findStackIndexInterference(
2497 SmallVectorImpl<unsigned> &Slots) {
2498 // We could spend a bit of time finding the exact, minimal, set of stack
2499 // indexes that interfere with each other, much like reg units. Or, we can
2500 // rely on the fact that:
2501 // * The smallest / lowest index will interfere with everything at zero
2502 // offset, which will be the largest set of registers,
2503 // * Most indexes with non-zero offset will end up being interference units
2504 // anyway.
2505 // So just pick those out and return them.
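// For example, if the tracked (size, offset) indexes are {8,0}, {32,0},
// {64,0} and {32,32}, only {8,0} and the non-zero-offset {32,32} are
// returned: anything at offset zero already interferes with {8,0}.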
2506
2507 // We can rely on a single-byte stack index existing already, because we
2508 // initialize them in MLocTracker.
2509 auto It = MTracker->StackSlotIdxes.find({8, 0});
2510 assert(It != MTracker->StackSlotIdxes.end());
2511 Slots.push_back(It->second);
2512
2513 // Find anything that has a non-zero offset and add that too.
2514 for (auto &Pair : MTracker->StackSlotIdxes) {
2515 // Is offset zero? If so, ignore.
2516 if (!Pair.first.second)
2517 continue;
2518 Slots.push_back(Pair.second);
2519 }
2520}
2521
2522void InstrRefBasedLDV::placeMLocPHIs(
2523 MachineFunction &MF, SmallPtrSetImpl<MachineBasicBlock *> &AllBlocks,
2524 FuncValueTable &MInLocs, SmallVectorImpl<MLocTransferMap> &MLocTransfer) {
2525 SmallVector<unsigned, 4> StackUnits;
2526 findStackIndexInterference(StackUnits);
2527
2528 // To avoid repeatedly running the PHI placement algorithm, leverage the
2529 // fact that a def of register MUST also def its register units. Find the
2530 // units for registers, place PHIs for them, and then replicate them for
2531 // aliasing registers. Some inputs that are never def'd (DBG_PHIs of
2532 // arguments) don't lead to register units being tracked, just place PHIs for
2533 // those registers directly. Stack slots have their own form of "unit",
2534 // store them to one side.
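// For example, a def of $rax on x86 also defs every register unit that $rax
// covers; PHI placement is computed once for the roots of those units and
// the result copied to each tracked aliasing register ($eax, $ax, $al, ...),
// rather than re-running the IDF calculation for every alias.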
2535 SmallSet<Register, 32> RegUnitsToPHIUp;
2536 SmallSet<LocIdx, 32> NormalLocsToPHI;
2537 SmallSet<SpillLocationNo, 32> StackSlots;
2538 for (auto Location : MTracker->locations()) {
2539 LocIdx L = Location.Idx;
2540 if (MTracker->isSpill(L)) {
2541 StackSlots.insert(MTracker->locIDToSpill(MTracker->LocIdxToLocID[L]));
2542 continue;
2543 }
2544
2545 Register R = MTracker->LocIdxToLocID[L];
2546 SmallSet<Register, 8> FoundRegUnits;
2547 bool AnyIllegal = false;
2548 for (MCRegUnit Unit : TRI->regunits(R.asMCReg())) {
2549 for (MCRegUnitRootIterator URoot(Unit, TRI); URoot.isValid(); ++URoot) {
2550 if (!MTracker->isRegisterTracked(*URoot)) {
2551 // Not all roots were loaded into the tracking map: this register
2552 // isn't actually def'd anywhere, we only read from it. Generate PHIs
2553 // for this reg, but don't iterate units.
2554 AnyIllegal = true;
2555 } else {
2556 FoundRegUnits.insert(*URoot);
2557 }
2558 }
2559 }
2560
2561 if (AnyIllegal) {
2562 NormalLocsToPHI.insert(L);
2563 continue;
2564 }
2565
2566 RegUnitsToPHIUp.insert(FoundRegUnits.begin(), FoundRegUnits.end());
2567 }
2568
2569 // Lambda to fetch PHIs for a given location, and write into the PHIBlocks
2570 // collection.
2571 SmallVector<MachineBasicBlock *, 32> PHIBlocks;
2572 auto CollectPHIsForLoc = [&](LocIdx L) {
2573 // Collect the set of defs.
2574 SmallPtrSet<MachineBasicBlock *, 32> DefBlocks;
2575 for (unsigned int I = 0; I < OrderToBB.size(); ++I) {
2576 MachineBasicBlock *MBB = OrderToBB[I];
2577 const auto &TransferFunc = MLocTransfer[MBB->getNumber()];
2578 if (TransferFunc.contains(L))
2579 DefBlocks.insert(MBB);
2580 }
2581
2582 // The entry block defs the location too: it's the live-in / argument value.
2583 // Only insert if there are other defs though; everything is trivially live
2584 // through otherwise.
2585 if (!DefBlocks.empty())
2586 DefBlocks.insert(&*MF.begin());
2587
2588 // Ask the SSA construction algorithm where we should put PHIs. Clear
2589 // anything that might have been hanging around from earlier.
2590 PHIBlocks.clear();
2591 BlockPHIPlacement(AllBlocks, DefBlocks, PHIBlocks);
2592 };
2593
2594 auto InstallPHIsAtLoc = [&PHIBlocks, &MInLocs](LocIdx L) {
2595 for (const MachineBasicBlock *MBB : PHIBlocks)
2596 MInLocs[*MBB][L.asU64()] = ValueIDNum(MBB->getNumber(), 0, L);
2597 };
2598
2599 // For locations with no reg units, just place PHIs.
2600 for (LocIdx L : NormalLocsToPHI) {
2601 CollectPHIsForLoc(L);
2602 // Install those PHI values into the live-in value array.
2603 InstallPHIsAtLoc(L);
2604 }
2605
2606 // For stack slots, calculate PHIs for the equivalent of the units, then
2607 // install for each index.
2608 for (SpillLocationNo Slot : StackSlots) {
2609 for (unsigned Idx : StackUnits) {
2610 unsigned SpillID = MTracker->getSpillIDWithIdx(Slot, Idx);
2611 LocIdx L = MTracker->getSpillMLoc(SpillID);
2612 CollectPHIsForLoc(L);
2613 InstallPHIsAtLoc(L);
2614
2615 // Find anything that aliases this stack index, install PHIs for it too.
2616 unsigned Size, Offset;
2617 std::tie(Size, Offset) = MTracker->StackIdxesToPos[Idx];
2618 for (auto &Pair : MTracker->StackSlotIdxes) {
2619 unsigned ThisSize, ThisOffset;
2620 std::tie(ThisSize, ThisOffset) = Pair.first;
2621 if (ThisSize + ThisOffset <= Offset || Size + Offset <= ThisOffset)
2622 continue;
2623
2624 unsigned ThisID = MTracker->getSpillIDWithIdx(Slot, Pair.second);
2625 LocIdx ThisL = MTracker->getSpillMLoc(ThisID);
2626 InstallPHIsAtLoc(ThisL);
2627 }
2628 }
2629 }
2630
2631 // For reg units, place PHIs, and then place them for any aliasing registers.
2632 for (Register R : RegUnitsToPHIUp) {
2633 LocIdx L = MTracker->lookupOrTrackRegister(R);
2634 CollectPHIsForLoc(L);
2635
2636 // Install those PHI values into the live-in value array.
2637 InstallPHIsAtLoc(L);
2638
2639 // Now find aliases and install PHIs for those.
2640 for (MCRegAliasIterator RAI(R, TRI, true); RAI.isValid(); ++RAI) {
2641 // Super-registers that are "above" the largest register read/written by
2642 // the function will alias, but will not be tracked.
2643 if (!MTracker->isRegisterTracked(*RAI))
2644 continue;
2645
2646 LocIdx AliasLoc = MTracker->lookupOrTrackRegister(*RAI);
2647 InstallPHIsAtLoc(AliasLoc);
2648 }
2649 }
2650}
2651
2652void InstrRefBasedLDV::buildMLocValueMap(
2653 MachineFunction &MF, FuncValueTable &MInLocs, FuncValueTable &MOutLocs,
2654 SmallVectorImpl<MLocTransferMap> &MLocTransfer) {
2655 std::priority_queue<unsigned int, std::vector<unsigned int>,
2656 std::greater<unsigned int>>
2657 Worklist, Pending;
2658
2659 // We track what is on the current and pending worklist to avoid inserting
2660 // the same thing twice. We could avoid this with a custom priority queue,
2661 // but this is probably not worth it.
2662 SmallPtrSet<MachineBasicBlock *, 16> OnPending, OnWorklist;
2663
2664 // Initialize worklist with every block to be visited. Also produce list of
2665 // all blocks.
2666 SmallPtrSet<MachineBasicBlock *, 32> AllBlocks;
2667 for (unsigned int I = 0; I < BBToOrder.size(); ++I) {
2668 Worklist.push(I);
2669 OnWorklist.insert(OrderToBB[I]);
2670 AllBlocks.insert(OrderToBB[I]);
2671 }
2672
2673 // Initialize entry block to PHIs. These represent arguments.
2674 for (auto Location : MTracker->locations())
2675 MInLocs.tableForEntryMBB()[Location.Idx.asU64()] =
2676 ValueIDNum(0, 0, Location.Idx);
2677
2678 MTracker->reset();
2679
2680 // Start by placing PHIs, using the usual SSA constructor algorithm. Consider
2681 // any machine-location that isn't live-through a block to be def'd in that
2682 // block.
2683 placeMLocPHIs(MF, AllBlocks, MInLocs, MLocTransfer);
2684
2685 // Propagate values to eliminate redundant PHIs. At the same time, this
2686 // produces the table of Block x Location => Value for the entry to each
2687 // block.
2688 // The kind of PHIs we can eliminate are, for example, where one path in a
2689 // conditional spills and restores a register, and the register still has
2690 // the same value once control flow joins, unbeknownst to the PHI placement
2691 // code. Propagating values allows us to identify such un-necessary PHIs and
2692 // remove them.
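// For example, if one arm of a diamond spills $rbx and later restores it
// while the other arm leaves $rbx alone, both arms reach the join with the
// same value in $rbx, so the PHI placed for $rbx at the join is removed.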
2693 SmallPtrSet<const MachineBasicBlock *, 16> Visited;
2694 while (!Worklist.empty() || !Pending.empty()) {
2695 // Vector for storing the evaluated block transfer function.
2696 SmallVector<std::pair<LocIdx, ValueIDNum>, 32> ToRemap;
2697
2698 while (!Worklist.empty()) {
2699 MachineBasicBlock *MBB = OrderToBB[Worklist.top()];
2700 CurBB = MBB->getNumber();
2701 Worklist.pop();
2702
2703 // Join the values in all predecessor blocks.
2704 bool InLocsChanged;
2705 InLocsChanged = mlocJoin(*MBB, Visited, MOutLocs, MInLocs[*MBB]);
2706 InLocsChanged |= Visited.insert(MBB).second;
2707
2708 // Don't examine transfer function if we've visited this loc at least
2709 // once, and inlocs haven't changed.
2710 if (!InLocsChanged)
2711 continue;
2712
2713 // Load the current set of live-ins into MLocTracker.
2714 MTracker->loadFromArray(MInLocs[*MBB], CurBB);
2715
2716 // Each element of the transfer function can be a new def, or a read of
2717 // a live-in value. Evaluate each element, and store to "ToRemap".
2718 ToRemap.clear();
2719 for (auto &P : MLocTransfer[CurBB]) {
2720 if (P.second.getBlock() == CurBB && P.second.isPHI()) {
2721 // This is a movement of whatever was live in. Read it.
2722 ValueIDNum NewID = MTracker->readMLoc(P.second.getLoc());
2723 ToRemap.push_back(std::make_pair(P.first, NewID));
2724 } else {
2725 // It's a def. Just set it.
2726 assert(P.second.getBlock() == CurBB);
2727 ToRemap.push_back(std::make_pair(P.first, P.second));
2728 }
2729 }
2730
2731 // Commit the transfer function changes into mloc tracker, which
2732 // transforms the contents of the MLocTracker into the live-outs.
2733 for (auto &P : ToRemap)
2734 MTracker->setMLoc(P.first, P.second);
2735
2736 // Now copy out-locs from mloc tracker into out-loc vector, checking
2737 // whether changes have occurred. These changes can have come from both
2738 // the transfer function, and mlocJoin.
2739 bool OLChanged = false;
2740 for (auto Location : MTracker->locations()) {
2741 OLChanged |= MOutLocs[*MBB][Location.Idx.asU64()] != Location.Value;
2742 MOutLocs[*MBB][Location.Idx.asU64()] = Location.Value;
2743 }
2744
2745 MTracker->reset();
2746
2747 // No need to examine successors again if out-locs didn't change.
2748 if (!OLChanged)
2749 continue;
2750
2751 // All successors should be visited: put any back-edges on the pending
2752 // list for the next pass-through, and any other successors to be
2753 // visited this pass, if they're not going to be already.
2754 for (auto *s : MBB->successors()) {
2755 // Does branching to this successor represent a back-edge?
2756 if (BBToOrder[s] > BBToOrder[MBB]) {
2757 // No: visit it during this dataflow iteration.
2758 if (OnWorklist.insert(s).second)
2759 Worklist.push(BBToOrder[s]);
2760 } else {
2761 // Yes: visit it on the next iteration.
2762 if (OnPending.insert(s).second)
2763 Pending.push(BBToOrder[s]);
2764 }
2765 }
2766 }
2767
2768 Worklist.swap(Pending);
2769 std::swap(OnPending, OnWorklist);
2770 OnPending.clear();
2771 // At this point, pending must be empty, since it was just the empty
2772 // worklist
2773 assert(Pending.empty() && "Pending should be empty");
2774 }
2775
2776 // Once all the live-ins don't change on mlocJoin(), we've eliminated all
2777 // redundant PHIs.
2778}
2779
2780void InstrRefBasedLDV::BlockPHIPlacement(
2781 const SmallPtrSetImpl<MachineBasicBlock *> &AllBlocks,
2782 const SmallPtrSetImpl<MachineBasicBlock *> &DefBlocks,
2783 SmallVectorImpl<MachineBasicBlock *> &PHIBlocks) {
2784 // Apply IDF calculator to the designated set of location defs, storing
2785 // required PHIs into PHIBlocks. Uses the dominator tree stored in the
2786 // InstrRefBasedLDV object.
2787 IDFCalculatorBase<MachineBasicBlock, false> IDF(DomTree->getBase());
2788
2789 IDF.setLiveInBlocks(AllBlocks);
2790 IDF.setDefiningBlocks(DefBlocks);
2791 IDF.calculate(PHIBlocks);
2792}
2793
2794bool InstrRefBasedLDV::pickVPHILoc(
2795 SmallVectorImpl<DbgOpID> &OutValues, const MachineBasicBlock &MBB,
2796 const LiveIdxT &LiveOuts, FuncValueTable &MOutLocs,
2797 const SmallVectorImpl<const MachineBasicBlock *> &BlockOrders) {
2798
2799 // No predecessors means no PHIs.
2800 if (BlockOrders.empty())
2801 return false;
2802
2803 // All the location operands that do not already agree need to be joined,
2804 // track the indices of each such location operand here.
2805 SmallDenseSet<unsigned> LocOpsToJoin;
2806
2807 auto FirstValueIt = LiveOuts.find(BlockOrders[0]);
2808 if (FirstValueIt == LiveOuts.end())
2809 return false;
2810 const DbgValue &FirstValue = *FirstValueIt->second;
2811
2812 for (const auto p : BlockOrders) {
2813 auto OutValIt = LiveOuts.find(p);
2814 if (OutValIt == LiveOuts.end())
2815 // If we have a predecessor not in scope, we'll never find a PHI position.
2816 return false;
2817 const DbgValue &OutVal = *OutValIt->second;
2818
2819 // No-values cannot have locations we can join on.
2820 if (OutVal.Kind == DbgValue::NoVal)
2821 return false;
2822
2823 // For unjoined VPHIs where we don't know the location, we definitely
2824 // can't find a join loc unless the VPHI is a backedge.
2825 if (OutVal.isUnjoinedPHI() && OutVal.BlockNo != MBB.getNumber())
2826 return false;
2827
2828 if (!FirstValue.Properties.isJoinable(OutVal.Properties))
2829 return false;
2830
2831 for (unsigned Idx = 0; Idx < FirstValue.getLocationOpCount(); ++Idx) {
2832 // An unjoined PHI has no defined locations, and so a shared location must
2833 // be found for every operand.
2834 if (OutVal.isUnjoinedPHI()) {
2835 LocOpsToJoin.insert(Idx);
2836 continue;
2837 }
2838 DbgOpID FirstValOp = FirstValue.getDbgOpID(Idx);
2839 DbgOpID OutValOp = OutVal.getDbgOpID(Idx);
2840 if (FirstValOp != OutValOp) {
2841 // We can never join constant ops - the ops must either both be equal
2842 // constant ops or non-const ops.
2843 if (FirstValOp.isConst() || OutValOp.isConst())
2844 return false;
2845 else
2846 LocOpsToJoin.insert(Idx);
2847 }
2848 }
2849 }
2850
2851 SmallVector<DbgOpID> NewDbgOps;
2852
2853 for (unsigned Idx = 0; Idx < FirstValue.getLocationOpCount(); ++Idx) {
2854 // If this op doesn't need to be joined because the values agree, use that
2855 // already-agreed value.
2856 if (!LocOpsToJoin.contains(Idx)) {
2857 NewDbgOps.push_back(FirstValue.getDbgOpID(Idx));
2858 continue;
2859 }
2860
2861 std::optional<ValueIDNum> JoinedOpLoc =
2862 pickOperandPHILoc(Idx, MBB, LiveOuts, MOutLocs, BlockOrders);
2863
2864 if (!JoinedOpLoc)
2865 return false;
2866
2867 NewDbgOps.push_back(DbgOpStore.insert(*JoinedOpLoc));
2868 }
2869
2870 OutValues.append(NewDbgOps);
2871 return true;
2872}
2873
2874std::optional<ValueIDNum> InstrRefBasedLDV::pickOperandPHILoc(
2875 unsigned DbgOpIdx, const MachineBasicBlock &MBB, const LiveIdxT &LiveOuts,
2876 FuncValueTable &MOutLocs,
2877 const SmallVectorImpl<const MachineBasicBlock *> &BlockOrders) {
2878
2879 // Collect a set of locations from predecessor where its live-out value can
2880 // be found.
2881 SmallVector<SmallVector<LocIdx, 4>, 8> Locs;
2882 unsigned NumLocs = MTracker->getNumLocs();
2883
2884 for (const auto p : BlockOrders) {
2885 auto OutValIt = LiveOuts.find(p);
2886 assert(OutValIt != LiveOuts.end());
2887 const DbgValue &OutVal = *OutValIt->second;
2888 DbgOpID OutValOpID = OutVal.getDbgOpID(DbgOpIdx);
2889 DbgOp OutValOp = DbgOpStore.find(OutValOpID);
2890 assert(!OutValOp.IsConst);
2891
2892 // Create new empty vector of locations.
2893 Locs.resize(Locs.size() + 1);
2894
2895 // If the live-in value is a def, find the locations where that value is
2896 // present. Do the same for VPHIs where we know the VPHI value.
2897 if (OutVal.Kind == DbgValue::Def ||
2898 (OutVal.Kind == DbgValue::VPHI && OutVal.BlockNo != MBB.getNumber() &&
2899 !OutValOp.isUndef())) {
2900 ValueIDNum ValToLookFor = OutValOp.ID;
2901 // Search the live-outs of the predecessor for the specified value.
2902 for (unsigned int I = 0; I < NumLocs; ++I) {
2903 if (MOutLocs[*p][I] == ValToLookFor)
2904 Locs.back().push_back(LocIdx(I));
2905 }
2906 } else {
2907 assert(OutVal.Kind == DbgValue::VPHI);
2908 // Otherwise: this is a VPHI on a backedge feeding back into itself, i.e.
2909 // a value that's live-through the whole loop. (It has to be a backedge,
2910 // because a block can't dominate itself). We can accept as a PHI location
2911 // any location where the other predecessors agree, _and_ the machine
2912 // locations feed back into themselves. Therefore, add all self-looping
2913 // machine-value PHI locations.
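// For example, for a variable live-through a loop in $rbx, the backedge
// predecessor's live-out for $rbx is exactly the machine PHI placed at this
// block for $rbx (the value loops round unchanged), so $rbx is an acceptable
// candidate location.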
2914 for (unsigned int I = 0; I < NumLocs; ++I) {
2915 ValueIDNum MPHI(MBB.getNumber(), 0, LocIdx(I));
2916 if (MOutLocs[*p][I] == MPHI)
2917 Locs.back().push_back(LocIdx(I));
2918 }
2919 }
2920 }
2921 // We should have found locations for all predecessors, or returned.
2922 assert(Locs.size() == BlockOrders.size());
2923
2924 // Starting with the first set of locations, take the intersection with
2925 // subsequent sets.
2926 SmallVector<LocIdx, 4> CandidateLocs = Locs[0];
2927 for (unsigned int I = 1; I < Locs.size(); ++I) {
2928 auto &LocVec = Locs[I];
2929 SmallVector<LocIdx, 4> NewCandidates;
2930 std::set_intersection(CandidateLocs.begin(), CandidateLocs.end(),
2931 LocVec.begin(), LocVec.end(), std::inserter(NewCandidates, NewCandidates.begin()));
2932 CandidateLocs = NewCandidates;
2933 }
2934 if (CandidateLocs.empty())
2935 return std::nullopt;
2936
2937 // We now have a set of LocIdxes that contain the right output value in
2938 // each of the predecessors. Pick the lowest; if there's a register loc,
2939 // that'll be it.
2940 LocIdx L = *CandidateLocs.begin();
2941
2942 // Return a PHI-value-number for the found location.
2943 ValueIDNum PHIVal = {(unsigned)MBB.getNumber(), 0, L};
2944 return PHIVal;
2945}
2946
2947bool InstrRefBasedLDV::vlocJoin(
2948 MachineBasicBlock &MBB, LiveIdxT &VLOCOutLocs,
2949 SmallPtrSet<const MachineBasicBlock *, 8> &BlocksToExplore,
2950 DbgValue &LiveIn) {
2951 LLVM_DEBUG(dbgs() << "join MBB: " << MBB.getNumber() << "\n");
2952 bool Changed = false;
2953
2954 // Order predecessors by RPOT order, for exploring them in that order.
2955 SmallVector<MachineBasicBlock *, 8> BlockOrders(MBB.predecessors());
2956
2957 auto Cmp = [&](MachineBasicBlock *A, MachineBasicBlock *B) {
2958 return BBToOrder[A] < BBToOrder[B];
2959 };
2960
2961 llvm::sort(BlockOrders, Cmp);
2962
2963 unsigned CurBlockRPONum = BBToOrder[&MBB];
2964
2965 // Collect all the incoming DbgValues for this variable, from predecessor
2966 // live-out values.
2967 SmallVector<InValueT, 8> Values;
2968 bool Bail = false;
2969 int BackEdgesStart = 0;
2970 for (auto *p : BlockOrders) {
2971 // If the predecessor isn't in scope / to be explored, we'll never be
2972 // able to join any locations.
2973 if (!BlocksToExplore.contains(p)) {
2974 Bail = true;
2975 break;
2976 }
2977
2978 // All Live-outs will have been initialized.
2979 DbgValue &OutLoc = *VLOCOutLocs.find(p)->second;
2980
2981 // Keep track of where back-edges begin in the Values vector. Relies on
2982 // BlockOrders being sorted by RPO.
2983 unsigned ThisBBRPONum = BBToOrder[p];
2984 if (ThisBBRPONum < CurBlockRPONum)
2985 ++BackEdgesStart;
2986
2987 Values.push_back(std::make_pair(p, &OutLoc));
2988 }
2989
2990 // If there were no values, or one of the predecessors couldn't have a
2991 // value, then give up immediately. It's not safe to produce a live-in
2992 // value. Leave as whatever it was before.
2993 if (Bail || Values.size() == 0)
2994 return false;
2995
2996 // All (non-entry) blocks have at least one non-backedge predecessor.
2997 // Pick the variable value from the first of these, to compare against
2998 // all others.
2999 const DbgValue &FirstVal = *Values[0].second;
3000
3001 // If the old live-in value is not a PHI then either a) no PHI is needed
3002 // here, or b) we eliminated the PHI that was here. If so, we can just
3003 // propagate in the first parent's incoming value.
3004 if (LiveIn.Kind != DbgValue::VPHI || LiveIn.BlockNo != MBB.getNumber()) {
3005 Changed = LiveIn != FirstVal;
3006 if (Changed)
3007 LiveIn = FirstVal;
3008 return Changed;
3009 }
3010
3011 // Scan for variable values that can never be resolved: if they have
3012 // different DIExpressions, different indirectness, or are mixed constants /
3013 // non-constants.
3014 for (const auto &V : Values) {
3015 if (!V.second->Properties.isJoinable(FirstVal.Properties))
3016 return false;
3017 if (V.second->Kind == DbgValue::NoVal)
3018 return false;
3019 if (!V.second->hasJoinableLocOps(FirstVal))
3020 return false;
3021 }
3022
3023 // Try to eliminate this PHI. Do the incoming values all agree?
3024 bool Disagree = false;
3025 for (auto &V : Values) {
3026 if (*V.second == FirstVal)
3027 continue; // No disagreement.
3028
3029 // If both values are not equal but have equal non-empty IDs then they refer
3030 // to the same value from different sources (e.g. one is VPHI and the other
3031 // is Def), which does not cause disagreement.
3032 if (V.second->hasIdenticalValidLocOps(FirstVal))
3033 continue;
3034
3035 // Eliminate if a backedge feeds a VPHI back into itself.
3036 if (V.second->Kind == DbgValue::VPHI &&
3037 V.second->BlockNo == MBB.getNumber() &&
3038 // Is this a backedge?
3039 std::distance(Values.begin(), &V) >= BackEdgesStart)
3040 continue;
3041
3042 Disagree = true;
3043 }
3044
3045 // No disagreement -> live-through value.
3046 if (!Disagree) {
3047 Changed = LiveIn != FirstVal;
3048 if (Changed)
3049 LiveIn = FirstVal;
3050 return Changed;
3051 } else {
3052 // Otherwise use a VPHI.
3053 DbgValue VPHI(MBB.getNumber(), FirstVal.Properties, DbgValue::VPHI);
3054 Changed = LiveIn != VPHI;
3055 if (Changed)
3056 LiveIn = VPHI;
3057 return Changed;
3058 }
3059}
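// A standalone sketch of the join rule implemented by vlocJoin above, using
// plain standard-library types. Everything in this namespace (SimpleValue,
// joinLiveIn) is hypothetical and only mirrors the logic for illustration; it
// is not LLVM API, and the real pass additionally checks DbgValueProperties,
// the DbgOp operand lists, and the back-edge bookkeeping elided here.
#include <optional>
#include <vector>

namespace vloc_join_sketch {

struct SimpleValue {
  enum KindT { Def, VPHI, NoVal };
  KindT Kind;
  int BlockNo; // Block a VPHI / NoVal was generated in.
  int ValueId; // Identity of the underlying value, for a Def.
  bool operator==(const SimpleValue &O) const {
    return Kind == O.Kind && BlockNo == O.BlockNo && ValueId == O.ValueId;
  }
};

// Join the predecessor live-out values for block BlockNo. If every predecessor
// agrees, the agreed value is live-through and the PHI is eliminated; otherwise
// a VPHI is placed in this block. Joining is abandoned if any predecessor has
// no value at all.
inline std::optional<SimpleValue>
joinLiveIn(int BlockNo, const std::vector<SimpleValue> &PredOuts) {
  if (PredOuts.empty())
    return std::nullopt;
  for (const SimpleValue &V : PredOuts)
    if (V.Kind == SimpleValue::NoVal)
      return std::nullopt;

  const SimpleValue &First = PredOuts.front();
  bool Disagree = false;
  for (const SimpleValue &V : PredOuts) {
    if (V == First)
      continue; // Agreement.
    if (V.Kind == SimpleValue::VPHI && V.BlockNo == BlockNo)
      continue; // The candidate PHI feeding itself (in the real pass, only
                // tolerated along a back-edge).
    Disagree = true;
  }
  if (!Disagree)
    return First; // Live-through value: no PHI needed here.
  return SimpleValue{SimpleValue::VPHI, BlockNo, /*ValueId=*/-1};
}

} // namespace vloc_join_sketch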
3060
3061void InstrRefBasedLDV::getBlocksForScope(
3062 const DILocation *DILoc,
3063 SmallPtrSetImpl<const MachineBasicBlock *> &BlocksToExplore,
3064 const SmallPtrSetImpl<MachineBasicBlock *> &AssignBlocks) {
3065 // Get the set of "normal" in-lexical-scope blocks.
3066 LS.getMachineBasicBlocks(DILoc, BlocksToExplore);
3067
3068 // VarLoc LiveDebugValues tracks variable locations that are defined in
3069 // blocks not in scope. This is something we could legitimately ignore, but
3070 // let's allow it for now for the sake of coverage.
3071 BlocksToExplore.insert(AssignBlocks.begin(), AssignBlocks.end());
3072
3073 // Storage for artificial blocks we intend to add to BlocksToExplore.
3074 SmallPtrSet<const MachineBasicBlock *, 8> ToAdd;
3075
3076 // To avoid needlessly dropping large volumes of variable locations, propagate
3077 // variables through artificial blocks, i.e. those that don't have any
3078 // instructions in scope at all. To accurately replicate VarLoc
3079 // LiveDebugValues, this means exploring all artificial successors too.
3080 // Perform a depth-first-search to enumerate those blocks.
3081 for (const auto *MBB : BlocksToExplore) {
3082 // Depth-first-search state: each node is a block and which successor
3083 // we're currently exploring.
3084 SmallVector<std::pair<const MachineBasicBlock *,
3085 MachineBasicBlock::const_succ_iterator>,
3086 8>
3087 DFS;
3088
3089 // Find any artificial successors not already tracked.
3090 for (auto *succ : MBB->successors()) {
3091 if (BlocksToExplore.count(succ))
3092 continue;
3093 if (!ArtificialBlocks.count(succ))
3094 continue;
3095 ToAdd.insert(succ);
3096 DFS.push_back({succ, succ->succ_begin()});
3097 }
3098
3099 // Search all those blocks, depth first.
3100 while (!DFS.empty()) {
3101 const MachineBasicBlock *CurBB = DFS.back().first;
3102 MachineBasicBlock::const_succ_iterator &CurSucc = DFS.back().second;
3103 // Walk back if we've explored this block's successors to the end.
3104 if (CurSucc == CurBB->succ_end()) {
3105 DFS.pop_back();
3106 continue;
3107 }
3108
3109 // If the current successor is artificial and unexplored, descend into
3110 // it.
3111 if (!ToAdd.count(*CurSucc) && ArtificialBlocks.count(*CurSucc)) {
3112 ToAdd.insert(*CurSucc);
3113 DFS.push_back({*CurSucc, (*CurSucc)->succ_begin()});
3114 continue;
3115 }
3116
3117 ++CurSucc;
3118 }
3119 };
3120
3121 BlocksToExplore.insert(ToAdd.begin(), ToAdd.end());
3122}
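// A standalone sketch of the explicit-stack depth-first search used by
// getBlocksForScope above, over an adjacency list of plain integers. The names
// here (Succs, IsArtificial, expandThroughArtificial) are hypothetical
// stand-ins for the MachineBasicBlock successor lists and the ArtificialBlocks
// set; they are not LLVM API.
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

namespace scope_blocks_sketch {

// Grow ToExplore with every block reachable from it through chains of
// "artificial" blocks, mirroring the DFS in getBlocksForScope.
inline void expandThroughArtificial(const std::vector<std::vector<int>> &Succs,
                                    const std::set<int> &IsArtificial,
                                    std::set<int> &ToExplore) {
  std::set<int> ToAdd;
  for (int Block : ToExplore) {
    // DFS state: (block, index of the next successor to look at).
    std::vector<std::pair<int, std::size_t>> DFS;
    for (int Succ : Succs[Block])
      if (!ToExplore.count(Succ) && IsArtificial.count(Succ) &&
          ToAdd.insert(Succ).second)
        DFS.push_back({Succ, 0});

    while (!DFS.empty()) {
      auto &[Cur, NextIdx] = DFS.back();
      if (NextIdx == Succs[Cur].size()) {
        DFS.pop_back(); // All successors of Cur explored: walk back.
        continue;
      }
      int Succ = Succs[Cur][NextIdx++];
      if (!ToAdd.count(Succ) && IsArtificial.count(Succ)) {
        ToAdd.insert(Succ); // Unexplored artificial successor: descend into it.
        DFS.push_back({Succ, 0});
      }
    }
  }
  ToExplore.insert(ToAdd.begin(), ToAdd.end());
}

} // namespace scope_blocks_sketch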
3123
3124void InstrRefBasedLDV::buildVLocValueMap(
3125 const DILocation *DILoc,
3126 const SmallSet<DebugVariableID, 4> &VarsWeCareAbout,
3127 SmallPtrSetImpl<MachineBasicBlock *> &AssignBlocks, LiveInsT &Output,
3128 FuncValueTable &MOutLocs, FuncValueTable &MInLocs,
3129 SmallVectorImpl<VLocTracker> &AllTheVLocs) {
3130 // This method is much like buildMLocValueMap, but focuses on a single
3131 // LexicalScope at a time. Pick out a set of blocks and variables that are
3132 // to have their value assignments solved, then run our dataflow algorithm
3133 // until a fixed point is reached.
3134 std::priority_queue<unsigned int, std::vector<unsigned int>,
3135 std::greater<unsigned int>>
3136 Worklist, Pending;
3137 SmallPtrSet<MachineBasicBlock *, 16> OnWorklist, OnPending;
3138
3139 // The set of blocks we'll be examining.
3140 SmallPtrSet<const MachineBasicBlock *, 8> BlocksToExplore;
3141
3142 // The order in which to examine them (RPO).
3143 SmallVector<MachineBasicBlock *, 32> BlockOrders;
3144 SmallVector<unsigned, 32> BlockOrderNums;
3145
3146 getBlocksForScope(DILoc, BlocksToExplore, AssignBlocks);
3147
3148 // Single block scope: not interesting! No propagation at all. Note that
3149 // this could probably go above ArtificialBlocks without damage, but
3150 // that then produces output differences from original-live-debug-values,
3151 // which propagates from a single block into many artificial ones.
3152 if (BlocksToExplore.size() == 1)
3153 return;
3154
3155 // Convert a const set to a non-const set. LexicalScopes
3156 // getMachineBasicBlocks returns const MBB pointers, IDF wants mutable ones.
3157 // (Neither of them mutates anything).
3158 SmallPtrSet<MachineBasicBlock *, 8> MutBlocksToExplore;
3159 for (const auto *MBB : BlocksToExplore)
3160 MutBlocksToExplore.insert(const_cast<MachineBasicBlock *>(MBB));
3161
3162 // Pick out the relevant blocks' RPO order-numbers and sort them. Map the
3163 // sorted order-numbers back to MBB pointers later, to avoid repeated
3164 // DenseMap queries during comparisons.
3165 for (const auto *MBB : BlocksToExplore)
3166 BlockOrderNums.push_back(BBToOrder[MBB]);
3167
3168 llvm::sort(BlockOrderNums);
3169 for (unsigned int I : BlockOrderNums)
3170 BlockOrders.push_back(OrderToBB[I]);
3171 BlockOrderNums.clear();
3172 unsigned NumBlocks = BlockOrders.size();
3173
3174 // Allocate some vectors for storing the live ins and live outs. Large.
3175 SmallVector<DbgValue, 32> LiveIns, LiveOuts;
3176 LiveIns.reserve(NumBlocks);
3177 LiveOuts.reserve(NumBlocks);
3178
3179 // Initialize all values to start as NoVals. This signifies "it's live
3180 // through, but we don't know what it is".
3181 DbgValueProperties EmptyProperties(EmptyExpr, false, false);
3182 for (unsigned int I = 0; I < NumBlocks; ++I) {
3183 DbgValue EmptyDbgValue(I, EmptyProperties, DbgValue::NoVal);
3184 LiveIns.push_back(EmptyDbgValue);
3185 LiveOuts.push_back(EmptyDbgValue);
3186 }
3187
3188 // Produce by-MBB indexes of live-in/live-outs, to ease lookup within
3189 // vlocJoin.
3190 LiveIdxT LiveOutIdx, LiveInIdx;
3191 LiveOutIdx.reserve(NumBlocks);
3192 LiveInIdx.reserve(NumBlocks);
3193 for (unsigned I = 0; I < NumBlocks; ++I) {
3194 LiveOutIdx[BlockOrders[I]] = &LiveOuts[I];
3195 LiveInIdx[BlockOrders[I]] = &LiveIns[I];
3196 }
3197
3198 // Loop over each variable and place PHIs for it, then propagate values
3199 // between blocks. This keeps the locality of working on one lexical scope at
3200 // a time, but avoids re-processing variable values because some other
3201 // variable has been assigned.
3202 for (DebugVariableID VarID : VarsWeCareAbout) {
3203 // Re-initialize live-ins and live-outs, to clear the remains of previous
3204 // variables live-ins / live-outs.
3205 for (unsigned int I = 0; I < NumBlocks; ++I) {
3206 DbgValue EmptyDbgValue(I, EmptyProperties, DbgValue::NoVal);
3207 LiveIns[I] = EmptyDbgValue;
3208 LiveOuts[I] = EmptyDbgValue;
3209 }
3210
3211 // Place PHIs for variable values, using the LLVM IDF calculator.
3212 // Collect the set of blocks where variables are def'd.
3213 SmallPtrSet<MachineBasicBlock *, 32> DefBlocks;
3214 for (const MachineBasicBlock *ExpMBB : BlocksToExplore) {
3215 auto &TransferFunc = AllTheVLocs[ExpMBB->getNumber()].Vars;
3216 if (TransferFunc.contains(VarID))
3217 DefBlocks.insert(const_cast<MachineBasicBlock *>(ExpMBB));
3218 }
3219
3220 SmallVector<MachineBasicBlock *, 32> PHIBlocks;
3221
3222 // Request the set of PHIs we should insert for this variable. If there's
3223 // only one value definition, things are very simple.
3224 if (DefBlocks.size() == 1) {
3225 placePHIsForSingleVarDefinition(MutBlocksToExplore, *DefBlocks.begin(),
3226 AllTheVLocs, VarID, Output);
3227 continue;
3228 }
3229
3230 // Otherwise: we need to place PHIs through SSA and propagate values.
3231 BlockPHIPlacement(MutBlocksToExplore, DefBlocks, PHIBlocks);
3232
3233 // Insert PHIs into the per-block live-in tables for this variable.
3234 for (MachineBasicBlock *PHIMBB : PHIBlocks) {
3235 unsigned BlockNo = PHIMBB->getNumber();
3236 DbgValue *LiveIn = LiveInIdx[PHIMBB];
3237 *LiveIn = DbgValue(BlockNo, EmptyProperties, DbgValue::VPHI);
3238 }
3239
3240 for (auto *MBB : BlockOrders) {
3241 Worklist.push(BBToOrder[MBB]);
3242 OnWorklist.insert(MBB);
3243 }
3244
3245 // Iterate over all the blocks we selected, propagating the variable's value.
3246 // This loop does two things:
3247 // * Eliminates un-necessary VPHIs in vlocJoin,
3248 // * Evaluates the block's transfer function (i.e. variable assignments) and
3249 // stores the result to the block's live-outs.
3250 // Always evaluate the transfer function on the first iteration, and when
3251 // the live-ins change thereafter.
3252 bool FirstTrip = true;
3253 while (!Worklist.empty() || !Pending.empty()) {
3254 while (!Worklist.empty()) {
3255 auto *MBB = OrderToBB[Worklist.top()];
3256 CurBB = MBB->getNumber();
3257 Worklist.pop();
3258
3259 auto LiveInsIt = LiveInIdx.find(MBB);
3260 assert(LiveInsIt != LiveInIdx.end());
3261 DbgValue *LiveIn = LiveInsIt->second;
3262
3263 // Join values from predecessors. Updates the block's live-in value (the
3264 // DbgValue pointed to by LiveInIdx[MBB]) in place.
3265 bool InLocsChanged =
3266 vlocJoin(*MBB, LiveOutIdx, BlocksToExplore, *LiveIn);
3267
3268 SmallVector<const MachineBasicBlock *, 8> Preds(MBB->predecessors());
3269
3270 // If this block's live-in value is a VPHI, try to pick a machine-value
3271 // for it. This makes the machine-value available and propagated
3272 // through all blocks by the time value propagation finishes. We can't
3273 // do this any earlier as it needs to read the block live-outs.
3274 if (LiveIn->Kind == DbgValue::VPHI && LiveIn->BlockNo == (int)CurBB) {
3275 // There's a small possibility that on a preceding path, a VPHI is
3276 // eliminated and transitions from VPHI-with-location to
3277 // live-through-value. As a result, the selected location of any VPHI
3278 // might change, so we need to re-compute it on each iteration.
3279 SmallVector<DbgOpID> JoinedOps;
3280
3281 if (pickVPHILoc(JoinedOps, *MBB, LiveOutIdx, MOutLocs, Preds)) {
3282 bool NewLocPicked = !equal(LiveIn->getDbgOpIDs(), JoinedOps);
3283 InLocsChanged |= NewLocPicked;
3284 if (NewLocPicked)
3285 LiveIn->setDbgOpIDs(JoinedOps);
3286 }
3287 }
3288
3289 if (!InLocsChanged && !FirstTrip)
3290 continue;
3291
3292 DbgValue *LiveOut = LiveOutIdx[MBB];
3293 bool OLChanged = false;
3294
3295 // Do transfer function.
3296 auto &VTracker = AllTheVLocs[MBB->getNumber()];
3297 auto TransferIt = VTracker.Vars.find(VarID);
3298 if (TransferIt != VTracker.Vars.end()) {
3299 // Erase on empty transfer (DBG_VALUE $noreg).
3300 if (TransferIt->second.Kind == DbgValue::Undef) {
3301 DbgValue NewVal(MBB->getNumber(), EmptyProperties, DbgValue::NoVal);
3302 if (*LiveOut != NewVal) {
3303 *LiveOut = NewVal;
3304 OLChanged = true;
3305 }
3306 } else {
3307 // Insert new variable value; or overwrite.
3308 if (*LiveOut != TransferIt->second) {
3309 *LiveOut = TransferIt->second;
3310 OLChanged = true;
3311 }
3312 }
3313 } else {
3314 // Just copy live-ins to live-outs, for anything not transferred.
3315 if (*LiveOut != *LiveIn) {
3316 *LiveOut = *LiveIn;
3317 OLChanged = true;
3318 }
3319 }
3320
3321 // If no live-out value changed, there's no need to explore further.
3322 if (!OLChanged)
3323 continue;
3324
3325 // We should visit all successors. Ensure we'll visit any non-backedge
3326 // successors during this dataflow iteration; book backedge successors
3327 // to be visited next time around.
3328 for (auto *s : MBB->successors()) {
3329 // Ignore out of scope / not-to-be-explored successors.
3330 if (!LiveInIdx.contains(s))
3331 continue;
3332
3333 if (BBToOrder[s] > BBToOrder[MBB]) {
3334 if (OnWorklist.insert(s).second)
3335 Worklist.push(BBToOrder[s]);
3336 } else if (OnPending.insert(s).second && (FirstTrip || OLChanged)) {
3337 Pending.push(BBToOrder[s]);
3338 }
3339 }
3340 }
3341 Worklist.swap(Pending);
3342 std::swap(OnWorklist, OnPending);
3343 OnPending.clear();
3344 assert(Pending.empty());
3345 FirstTrip = false;
3346 }
3347
3348 // Save live-ins to output vector. Ignore any that are still marked as being
3349 // VPHIs with no location -- those are variables that we know the value of,
3350 // but are not actually available in the register file.
3351 for (auto *MBB : BlockOrders) {
3352 DbgValue *BlockLiveIn = LiveInIdx[MBB];
3353 if (BlockLiveIn->Kind == DbgValue::NoVal)
3354 continue;
3355 if (BlockLiveIn->isUnjoinedPHI())
3356 continue;
3357 if (BlockLiveIn->Kind == DbgValue::VPHI)
3358 BlockLiveIn->Kind = DbgValue::Def;
3359 [[maybe_unused]] auto &[Var, DILoc] = DVMap.lookupDVID(VarID);
3360 assert(BlockLiveIn->Properties.DIExpr->getFragmentInfo() ==
3361 Var.getFragment() &&
3362 "Fragment info missing during value prop");
3363 Output[MBB->getNumber()].push_back(std::make_pair(VarID, *BlockLiveIn));
3364 }
3365 } // Per-variable loop.
3366
3367 BlockOrders.clear();
3368 BlocksToExplore.clear();
3369}
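// A standalone sketch of the worklist discipline used by buildVLocValueMap
// above (and by buildMLocValueMap): blocks are processed in RPO order from
// Worklist, and successors reached through back-edges are deferred to Pending,
// which becomes the worklist for the next sweep. Everything here is a
// hypothetical simplification; joinAndTransfer stands in for vlocJoin plus the
// block transfer function and returns true if the block's live-out changed.
#include <functional>
#include <queue>
#include <set>
#include <vector>

namespace dataflow_sketch {

inline void
solveToFixedPoint(const std::vector<std::vector<int>> &Succs, // By RPO number.
                  const std::function<bool(int)> &joinAndTransfer) {
  std::priority_queue<int, std::vector<int>, std::greater<int>> Worklist, Pending;
  std::set<int> OnWorklist, OnPending;

  for (int RPONum = 0; RPONum != (int)Succs.size(); ++RPONum) {
    Worklist.push(RPONum);
    OnWorklist.insert(RPONum);
  }

  bool FirstTrip = true;
  while (!Worklist.empty() || !Pending.empty()) {
    while (!Worklist.empty()) {
      int Cur = Worklist.top();
      Worklist.pop();

      // Re-examine successors only when something changed (or on the first
      // sweep, when every block must be evaluated at least once).
      if (!joinAndTransfer(Cur) && !FirstTrip)
        continue;

      for (int Succ : Succs[Cur]) {
        if (Succ > Cur) { // Forward edge: visit later in this sweep.
          if (OnWorklist.insert(Succ).second)
            Worklist.push(Succ);
        } else if (OnPending.insert(Succ).second) { // Back edge: next sweep.
          Pending.push(Succ);
        }
      }
    }
    std::swap(Worklist, Pending);
    std::swap(OnWorklist, OnPending);
    OnPending.clear();
    FirstTrip = false;
  }
}

} // namespace dataflow_sketch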
3370
3371void InstrRefBasedLDV::placePHIsForSingleVarDefinition(
3372 const SmallPtrSetImpl<MachineBasicBlock *> &InScopeBlocks,
3373 MachineBasicBlock *AssignMBB, SmallVectorImpl<VLocTracker> &AllTheVLocs,
3374 DebugVariableID VarID, LiveInsT &Output) {
3375 // If there is a single definition of the variable, then working out its
3376 // value everywhere is very simple: it's every block dominated by the
3377 // definition. At the dominance frontier, the usual algorithm would:
3378 // * Place PHIs,
3379 // * Propagate values into them,
3380 // * Find there's no incoming variable value from the other incoming branches
3381 // of the dominance frontier,
3382 // * Specify there's no variable value in blocks past the frontier.
3383 // This is a common case, hence it's worth special-casing it.
3384
3385 // Pick out the variable's value from the block transfer function.
3386 VLocTracker &VLocs = AllTheVLocs[AssignMBB->getNumber()];
3387 auto ValueIt = VLocs.Vars.find(VarID);
3388 const DbgValue &Value = ValueIt->second;
3389
3390 // If it's an explicit assignment of "undef", that means there is no location
3391 // anyway, anywhere.
3392 if (Value.Kind == DbgValue::Undef)
3393 return;
3394
3395 // Assign the variable value on entry to each dominated block that's in scope.
3396 // Skip the definition block -- it's assigned the variable value in the middle
3397 // of the block somewhere.
3398 for (auto *ScopeBlock : InScopeBlocks) {
3399 if (!DomTree->properlyDominates(AssignMBB, ScopeBlock))
3400 continue;
3401
3402 Output[ScopeBlock->getNumber()].push_back({VarID, Value});
3403 }
3404
3405 // All blocks that aren't dominated have no live-in value, thus no variable
3406 // value will be given to them.
3407}
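// A standalone sketch of the single-definition shortcut above: with one
// defining block, the value is simply live into every in-scope block that the
// definition properly dominates. properlyDominates is a hypothetical dominance
// oracle standing in for MachineDominatorTree::properlyDominates.
#include <functional>
#include <map>
#include <set>

namespace single_def_sketch {

// Returns a map from block number to the live-in value id for that block;
// blocks not dominated by the definition get no entry (no live-in value).
inline std::map<int, int>
spreadSingleDef(int DefBlock, int ValueId, const std::set<int> &InScopeBlocks,
                const std::function<bool(int, int)> &properlyDominates) {
  std::map<int, int> LiveIns;
  for (int Block : InScopeBlocks)
    if (properlyDominates(DefBlock, Block))
      LiveIns[Block] = ValueId;
  return LiveIns;
}

} // namespace single_def_sketch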
3408
3409#if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
3410 LLVM_DUMP_METHOD void InstrRefBasedLDV::dump_mloc_transfer(
3411 const MLocTransferMap &mloc_transfer) const {
3412 for (const auto &P : mloc_transfer) {
3413 std::string foo = MTracker->LocIdxToName(P.first);
3414 std::string bar = MTracker->IDAsString(P.second);
3415 dbgs() << "Loc " << foo << " --> " << bar << "\n";
3416 }
3417}
3418#endif
3419
3420void InstrRefBasedLDV::initialSetup(MachineFunction &MF) {
3421 // Build some useful data structures.
3422
3423 LLVMContext &Context = MF.getFunction().getContext();
3424 EmptyExpr = DIExpression::get(Context, {});
3425
3426 auto hasNonArtificialLocation = [](const MachineInstr &MI) -> bool {
3427 if (const DebugLoc &DL = MI.getDebugLoc())
3428 return DL.getLine() != 0;
3429 return false;
3430 };
3431
3432 // Collect a set of all the artificial blocks. Count the blocks too, since
3433 // ilist size calls are O(n).
3434 unsigned int Size = 0;
3435 for (auto &MBB : MF) {
3436 ++Size;
3437 if (none_of(MBB.instrs(), hasNonArtificialLocation))
3438 ArtificialBlocks.insert(&MBB);
3439 }
3440
3441 // Compute mappings of block <=> RPO order.
3442 ReversePostOrderTraversal<MachineFunction *> RPOT(&MF);
3443 unsigned int RPONumber = 0;
3444 OrderToBB.reserve(Size);
3445 BBToOrder.reserve(Size);
3446 BBNumToRPO.reserve(Size);
3447 auto processMBB = [&](MachineBasicBlock *MBB) {
3448 OrderToBB.push_back(MBB);
3449 BBToOrder[MBB] = RPONumber;
3450 BBNumToRPO[MBB->getNumber()] = RPONumber;
3451 ++RPONumber;
3452 };
3453 for (MachineBasicBlock *MBB : RPOT)
3454 processMBB(MBB);
3455 for (MachineBasicBlock &MBB : MF)
3456 if (!BBToOrder.contains(&MBB))
3457 processMBB(&MBB);
3458
3459 // Order value substitutions by their "source" operand pair, for quick lookup.
3460 llvm::sort(MF.DebugValueSubstitutions);
3461
3462#ifdef EXPENSIVE_CHECKS
3463 // As an expensive check, test whether there are any duplicate substitution
3464 // sources in the collection.
3465 if (MF.DebugValueSubstitutions.size() > 2) {
3466 for (auto It = MF.DebugValueSubstitutions.begin();
3467 It != std::prev(MF.DebugValueSubstitutions.end()); ++It) {
3468 assert(It->Src != std::next(It)->Src && "Duplicate variable location "
3469 "substitution seen");
3470 }
3471 }
3472#endif
3473}
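// A standalone sketch of the block <=> RPO mapping built by initialSetup
// above: number blocks in reverse post-order from the entry, then append any
// blocks the traversal never reached, just as initialSetup does by walking the
// function a second time. The adjacency list and the assumption that block 0
// is the entry are hypothetical simplifications.
#include <cstddef>
#include <utility>
#include <vector>

namespace rpo_sketch {

inline std::vector<int>
reversePostOrder(const std::vector<std::vector<int>> &Succs) {
  if (Succs.empty())
    return {};
  std::vector<bool> Seen(Succs.size(), false);
  std::vector<int> PostOrder;

  // Iterative DFS from the entry block, recording nodes in post-order.
  std::vector<std::pair<int, std::size_t>> Stack;
  Stack.push_back({0, 0});
  Seen[0] = true;
  while (!Stack.empty()) {
    auto &[Node, NextIdx] = Stack.back();
    if (NextIdx == Succs[Node].size()) {
      PostOrder.push_back(Node);
      Stack.pop_back();
      continue;
    }
    int Succ = Succs[Node][NextIdx++];
    if (!Seen[Succ]) {
      Seen[Succ] = true;
      Stack.push_back({Succ, 0});
    }
  }

  // RPO is the reverse of the post-order.
  std::vector<int> Order(PostOrder.rbegin(), PostOrder.rend());
  // Unreachable blocks still need a position: append them at the end.
  for (int Node = 0; Node != (int)Succs.size(); ++Node)
    if (!Seen[Node])
      Order.push_back(Node);
  return Order;
}

} // namespace rpo_sketch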
3474
3475 // Produce an "ejection map" for blocks: for each block, the DFSOut number of
3476 // the last lexical scope that uses it. When the depth-first exploration passes
3477 // that scope, the block can be processed and any tracking information freed.
3478void InstrRefBasedLDV::makeDepthFirstEjectionMap(
3479 SmallVectorImpl<unsigned> &EjectionMap,
3480 const ScopeToDILocT &ScopeToDILocation,
3481 ScopeToAssignBlocksT &ScopeToAssignBlocks) {
3482 SmallPtrSet<const MachineBasicBlock *, 8> BlocksToExplore;
3483 SmallVector<std::pair<LexicalScope *, ssize_t>, 4> WorkStack;
3484 auto *TopScope = LS.getCurrentFunctionScope();
3485
3486 // Unlike lexical scope explorers, we explore in reverse order, to find the
3487 // "last" lexical scope used for each block early.
3488 WorkStack.push_back({TopScope, TopScope->getChildren().size() - 1});
3489
3490 while (!WorkStack.empty()) {
3491 auto &ScopePosition = WorkStack.back();
3492 LexicalScope *WS = ScopePosition.first;
3493 ssize_t ChildNum = ScopePosition.second--;
3494
3495 auto &Children = WS->getChildren();
3496 if (ChildNum >= 0) {
3497 // If ChildNum is non-negative, there are remaining children to explore.
3498 // Push the child and its children-count onto the stack.
3499 auto &ChildScope = Children[ChildNum];
3500 WorkStack.push_back(
3501 std::make_pair(ChildScope, ChildScope->getChildren().size() - 1));
3502 } else {
3503 WorkStack.pop_back();
3504
3505 // We've explored all children and any later blocks: examine all blocks
3506 // in our scope. If they haven't yet had an ejection number set, then
3507 // this scope will be the last to use that block.
3508 auto DILocationIt = ScopeToDILocation.find(WS);
3509 if (DILocationIt != ScopeToDILocation.end()) {
3510 getBlocksForScope(DILocationIt->second, BlocksToExplore,
3511 ScopeToAssignBlocks.find(WS)->second);
3512 for (const auto *MBB : BlocksToExplore) {
3513 unsigned BBNum = MBB->getNumber();
3514 if (EjectionMap[BBNum] == 0)
3515 EjectionMap[BBNum] = WS->getDFSOut();
3516 }
3517
3518 BlocksToExplore.clear();
3519 }
3520 }
3521 }
3522}
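// A standalone sketch of the ejection map built above: walk a scope tree
// visiting children in reverse order and record, for each block, the DFSOut
// number of the last scope that uses it (the first writer wins because later
// scopes are visited first). The Scope struct is a hypothetical stand-in for
// LexicalScope; EjectionMap must already be zero-initialised and sized to
// cover every block number.
#include <utility>
#include <vector>

namespace ejection_sketch {

struct Scope {
  unsigned DFSOut = 0;                 // Depth-first "out" number of the scope.
  std::vector<int> Blocks;             // Blocks used by this scope.
  std::vector<const Scope *> Children;
};

inline void buildEjectionMap(const Scope &Root,
                             std::vector<unsigned> &EjectionMap) {
  // Explicit work stack: (scope, index of the next child, counting down).
  std::vector<std::pair<const Scope *, long>> Stack;
  Stack.push_back({&Root, (long)Root.Children.size() - 1});

  while (!Stack.empty()) {
    auto &[WS, ChildNum] = Stack.back();
    if (ChildNum >= 0) {
      // Children remain: descend into the next one (in reverse order).
      const Scope *Child = WS->Children[ChildNum--];
      Stack.push_back({Child, (long)Child->Children.size() - 1});
      continue;
    }
    // All children done: any block not yet claimed is last used by this scope.
    const Scope *Done = WS;
    Stack.pop_back();
    for (int Block : Done->Blocks)
      if (EjectionMap[Block] == 0)
        EjectionMap[Block] = Done->DFSOut;
  }
}

} // namespace ejection_sketch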
3523
3524bool InstrRefBasedLDV::depthFirstVLocAndEmit(
3525 unsigned MaxNumBlocks, const ScopeToDILocT &ScopeToDILocation,
3526 const ScopeToVarsT &ScopeToVars, ScopeToAssignBlocksT &ScopeToAssignBlocks,
3527 LiveInsT &Output, FuncValueTable &MOutLocs, FuncValueTable &MInLocs,
3528 SmallVectorImpl<VLocTracker> &AllTheVLocs, MachineFunction &MF,
3529 const TargetPassConfig &TPC) {
3530 TTracker =
3531 new TransferTracker(TII, MTracker, MF, DVMap, *TRI, CalleeSavedRegs, TPC);
3532 unsigned NumLocs = MTracker->getNumLocs();
3533 VTracker = nullptr;
3534
3535 // No scopes? No variable locations.
3536 if (!LS.getCurrentFunctionScope())
3537 return false;
3538
3539 // Build map from block number to the last scope that uses the block.
3540 SmallVector<unsigned, 16> EjectionMap;
3541 EjectionMap.resize(MaxNumBlocks, 0);
3542 makeDepthFirstEjectionMap(EjectionMap, ScopeToDILocation,
3543 ScopeToAssignBlocks);
3544
3545 // Helper lambda for ejecting a block -- if nothing is going to use the block,
3546 // we can translate the variable location information into DBG_VALUEs and then
3547 // free all of InstrRefBasedLDV's data structures.
3548 auto EjectBlock = [&](MachineBasicBlock &MBB) -> void {
3549 unsigned BBNum = MBB.getNumber();
3550 AllTheVLocs[BBNum].clear();
3551
3552 // Prime the transfer-tracker, and then step through all the block
3553 // instructions, installing transfers.
3554 MTracker->reset();
3555 MTracker->loadFromArray(MInLocs[MBB], BBNum);
3556 TTracker->loadInlocs(MBB, MInLocs[MBB], DbgOpStore, Output[BBNum], NumLocs);
3557
3558 CurBB = BBNum;
3559 CurInst = 1;
3560 for (auto &MI : MBB) {
3561 process(MI, &MOutLocs, &MInLocs);
3562 TTracker->checkInstForNewValues(CurInst, MI.getIterator());
3563 ++CurInst;
3564 }
3565
3566 // Free machine-location tables for this block.
3567 MInLocs.ejectTableForBlock(MBB);
3568 MOutLocs.ejectTableForBlock(MBB);
3569 // We don't need live-in variable values for this block either.
3570 Output[BBNum].clear();
3571 AllTheVLocs[BBNum].clear();
3572 };
3573
3574 SmallPtrSet<const MachineBasicBlock *, 8> BlocksToExplore;
3575 SmallVector<std::pair<LexicalScope *, ssize_t>, 4> WorkStack;
3576 WorkStack.push_back({LS.getCurrentFunctionScope(), 0});
3577 unsigned HighestDFSIn = 0;
3578
3579 // Proceed to explore in depth first order.
3580 while (!WorkStack.empty()) {
3581 auto &ScopePosition = WorkStack.back();
3582 LexicalScope *WS = ScopePosition.first;
3583 ssize_t ChildNum = ScopePosition.second++;
3584
3585 // We observe scopes with children twice here, once descending in, once
3586 // ascending out of the scope nest. Use HighestDFSIn as a ratchet to ensure
3587 // we don't process a scope twice. Additionally, ignore scopes that don't
3588 // have a DILocation -- by proxy, this means we never tracked any variable
3589 // assignments in that scope.
3590 auto DILocIt = ScopeToDILocation.find(WS);
3591 if (HighestDFSIn <= WS->getDFSIn() && DILocIt != ScopeToDILocation.end()) {
3592 const DILocation *DILoc = DILocIt->second;
3593 auto &VarsWeCareAbout = ScopeToVars.find(WS)->second;
3594 auto &BlocksInScope = ScopeToAssignBlocks.find(WS)->second;
3595
3596 buildVLocValueMap(DILoc, VarsWeCareAbout, BlocksInScope, Output, MOutLocs,
3597 MInLocs, AllTheVLocs);
3598 }
3599
3600 HighestDFSIn = std::max(HighestDFSIn, WS->getDFSIn());
3601
3602 // Descend into any scope nests.
3603 auto &Children = WS->getChildren();
3604 if (ChildNum < (ssize_t)Children.size()) {
3605 // There are children to explore -- push onto stack and continue.
3606 auto &ChildScope = Children[ChildNum];
3607 WorkStack.push_back(std::make_pair(ChildScope, 0));
3608 } else {
3609 WorkStack.pop_back();
3610
3611 // We've explored a leaf, or have explored all the children of a scope.
3612 // Try to eject any blocks where this is the last scope it's relevant to.
3613 auto DILocationIt = ScopeToDILocation.find(WS);
3614 if (DILocationIt == ScopeToDILocation.end())
3615 continue;
3616
3617 getBlocksForScope(DILocationIt->second, BlocksToExplore,
3618 ScopeToAssignBlocks.find(WS)->second);
3619 for (const auto *MBB : BlocksToExplore)
3620 if (WS->getDFSOut() == EjectionMap[MBB->getNumber()])
3621 EjectBlock(const_cast<MachineBasicBlock &>(*MBB));
3622
3623 BlocksToExplore.clear();
3624 }
3625 }
3626
3627 // Some artificial blocks may not have been ejected, meaning they're not
3628 // connected to an actual legitimate scope. This can technically happen
3629 // with things like the entry block. In theory, we shouldn't need to do
3630 // anything for such out-of-scope blocks, but for the sake of being similar
3631 // to VarLocBasedLDV, eject these too.
3632 for (auto *MBB : ArtificialBlocks)
3633 if (MInLocs.hasTableFor(*MBB))
3634 EjectBlock(*MBB);
3635
3636 return emitTransfers();
3637}
3638
3639bool InstrRefBasedLDV::emitTransfers() {
3640 // Go through all the transfers recorded in the TransferTracker -- this is
3641 // both the live-ins to a block, and any movements of values that happen
3642 // in the middle.
3643 for (auto &P : TTracker->Transfers) {
3644 // We have to insert DBG_VALUEs in a consistent order, otherwise they
3645 // appear in DWARF in different orders. Use the order that they appear
3646 // when walking through each block / each instruction, stored in
3647 // DVMap.
3648 llvm::sort(P.Insts, llvm::less_first());
3649
3650 // Insert either before or after the designated point...
3651 if (P.MBB) {
3652 MachineBasicBlock &MBB = *P.MBB;
3653 for (const auto &Pair : P.Insts)
3654 MBB.insert(P.Pos, Pair.second);
3655 } else {
3656 // Terminators, like tail calls, can clobber things. Don't try and place
3657 // transfers after them.
3658 if (P.Pos->isTerminator())
3659 continue;
3660
3661 MachineBasicBlock &MBB = *P.Pos->getParent();
3662 for (const auto &Pair : P.Insts)
3663 MBB.insertAfterBundle(P.Pos, Pair.second);
3664 }
3665 }
3666
3667 return TTracker->Transfers.size() != 0;
3668}
3669
3670/// Calculate the liveness information for the given machine function and
3671/// extend ranges across basic blocks.
3672bool InstrRefBasedLDV::ExtendRanges(MachineFunction &MF,
3673 MachineDominatorTree *DomTree,
3674 TargetPassConfig *TPC,
3675 unsigned InputBBLimit,
3676 unsigned InputDbgValLimit) {
3677 // No subprogram means this function contains no debuginfo.
3678 if (!MF.getFunction().getSubprogram())
3679 return false;
3680
3681 LLVM_DEBUG(dbgs() << "\nDebug Range Extension\n");
3682 this->TPC = TPC;
3683
3684 this->DomTree = DomTree;
3685 TRI = MF.getSubtarget().getRegisterInfo();
3686 MRI = &MF.getRegInfo();
3687 TII = MF.getSubtarget().getInstrInfo();
3688 TFI = MF.getSubtarget().getFrameLowering();
3689 TFI->getCalleeSaves(MF, CalleeSavedRegs);
3690 MFI = &MF.getFrameInfo();
3691 LS.initialize(MF);
3692
3693 const auto &STI = MF.getSubtarget();
3694 AdjustsStackInCalls = MFI->adjustsStack() &&
3695 STI.getFrameLowering()->stackProbeFunctionModifiesSP();
3696 if (AdjustsStackInCalls)
3697 StackProbeSymbolName = STI.getTargetLowering()->getStackProbeSymbolName(MF);
3698
3699 MTracker =
3700 new MLocTracker(MF, *TII, *TRI, *MF.getSubtarget().getTargetLowering());
3701 VTracker = nullptr;
3702 TTracker = nullptr;
3703
3704 SmallVector<MLocTransferMap, 32> MLocTransfer;
3705 SmallVector<VLocTracker, 8> vlocs;
3706 LiveInsT SavedLiveIns;
3707
3708 int MaxNumBlocks = -1;
3709 for (auto &MBB : MF)
3710 MaxNumBlocks = std::max(MBB.getNumber(), MaxNumBlocks);
3711 assert(MaxNumBlocks >= 0);
3712 ++MaxNumBlocks;
3713
3714 initialSetup(MF);
3715
3716 MLocTransfer.resize(MaxNumBlocks);
3717 vlocs.resize(MaxNumBlocks, VLocTracker(DVMap, OverlapFragments, EmptyExpr));
3718 SavedLiveIns.resize(MaxNumBlocks);
3719
3720 produceMLocTransferFunction(MF, MLocTransfer, MaxNumBlocks);
3721
3722 // Allocate and initialize two array-of-arrays for the live-in and live-out
3723 // machine values. The outer dimension is the block number; while the inner
3724 // dimension is a LocIdx from MLocTracker.
3725 unsigned NumLocs = MTracker->getNumLocs();
3726 FuncValueTable MOutLocs(MaxNumBlocks, NumLocs);
3727 FuncValueTable MInLocs(MaxNumBlocks, NumLocs);
3728
3729 // Solve the machine value dataflow problem using the MLocTransfer function,
3730 // storing the computed live-ins / live-outs into the array-of-arrays. We use
3731 // both live-ins and live-outs for decision making in the variable value
3732 // dataflow problem.
3733 buildMLocValueMap(MF, MInLocs, MOutLocs, MLocTransfer);
3734
3735 // Patch up debug phi numbers, turning unknown block-live-in values into
3736 // either live-through machine values, or PHIs.
3737 for (auto &DBG_PHI : DebugPHINumToValue) {
3738 // Identify unresolved block-live-ins.
3739 if (!DBG_PHI.ValueRead)
3740 continue;
3741
3742 ValueIDNum &Num = *DBG_PHI.ValueRead;
3743 if (!Num.isPHI())
3744 continue;
3745
3746 unsigned BlockNo = Num.getBlock();
3747 LocIdx LocNo = Num.getLoc();
3748 ValueIDNum ResolvedValue = MInLocs[BlockNo][LocNo.asU64()];
3749 // If there is no resolved value for this live-in then it is not directly
3750 // reachable from the entry block -- model it as a PHI on entry to this
3751 // block, which means we leave the ValueIDNum unchanged.
3752 if (ResolvedValue != ValueIDNum::EmptyValue)
3753 Num = ResolvedValue;
3754 }
3755 // Later, we'll be looking up ranges of instruction numbers.
3756 llvm::sort(DebugPHINumToValue);
3757
3758 // Walk back through each block / instruction, collecting DBG_VALUE
3759 // instructions and recording what machine value their operands refer to.
3760 for (MachineBasicBlock *MBB : OrderToBB) {
3761 CurBB = MBB->getNumber();
3762 VTracker = &vlocs[CurBB];
3763 VTracker->MBB = MBB;
3764 MTracker->loadFromArray(MInLocs[*MBB], CurBB);
3765 CurInst = 1;
3766 for (auto &MI : *MBB) {
3767 process(MI, &MOutLocs, &MInLocs);
3768 ++CurInst;
3769 }
3770 MTracker->reset();
3771 }
3772
3773 // Map from one LexicalScope to all the variables in that scope.
3774 ScopeToVarsT ScopeToVars;
3775
3776 // Map from one lexical scope to all blocks where assignments happen for
3777 // that scope.
3778 ScopeToAssignBlocksT ScopeToAssignBlocks;
3779
3780 // Store map of DILocations that describes scopes.
3781 ScopeToDILocT ScopeToDILocation;
3782
3783 // To mirror old LiveDebugValues, enumerate variables in RPOT order. Otherwise
3784 // the order is unimportant, it just has to be stable.
3785 unsigned VarAssignCount = 0;
3786 for (unsigned int I = 0; I < OrderToBB.size(); ++I) {
3787 auto *MBB = OrderToBB[I];
3788 auto *VTracker = &vlocs[MBB->getNumber()];
3789 // Collect each variable with a DBG_VALUE in this block.
3790 for (auto &idx : VTracker->Vars) {
3791 DebugVariableID VarID = idx.first;
3792 const DILocation *ScopeLoc = VTracker->Scopes[VarID];
3793 assert(ScopeLoc != nullptr);
3794 auto *Scope = LS.findLexicalScope(ScopeLoc);
3795
3796 // No insts in scope -> shouldn't have been recorded.
3797 assert(Scope != nullptr);
3798
3799 ScopeToVars[Scope].insert(VarID);
3800 ScopeToAssignBlocks[Scope].insert(VTracker->MBB);
3801 ScopeToDILocation[Scope] = ScopeLoc;
3802 ++VarAssignCount;
3803 }
3804 }
3805
3806 bool Changed = false;
3807
3808 // If we have an extremely large number of variable assignments and blocks,
3809 // bail out at this point. We've burnt some time doing analysis already,
3810 // however we should cut our losses.
3811 if ((unsigned)MaxNumBlocks > InputBBLimit &&
3812 VarAssignCount > InputDbgValLimit) {
3813 LLVM_DEBUG(dbgs() << "Disabling InstrRefBasedLDV: " << MF.getName()
3814 << " has " << MaxNumBlocks << " basic blocks and "
3815 << VarAssignCount
3816 << " variable assignments, exceeding limits.\n");
3817 } else {
3818 // Otherwise, solve the variable value problem and emit locations to blocks,
3819 // using a lexical-scope depth-first search to keep the amount of tracking
3820 // state live at any one time small.
3821 Changed = depthFirstVLocAndEmit(
3822 MaxNumBlocks, ScopeToDILocation, ScopeToVars, ScopeToAssignBlocks,
3823 SavedLiveIns, MOutLocs, MInLocs, vlocs, MF, *TPC);
3824 }
3825
3826 delete MTracker;
3827 delete TTracker;
3828 MTracker = nullptr;
3829 VTracker = nullptr;
3830 TTracker = nullptr;
3831
3832 ArtificialBlocks.clear();
3833 OrderToBB.clear();
3834 BBToOrder.clear();
3835 BBNumToRPO.clear();
3836 DebugInstrNumToInstr.clear();
3837 DebugPHINumToValue.clear();
3838 OverlapFragments.clear();
3839 SeenFragments.clear();
3840 SeenDbgPHIs.clear();
3841 DbgOpStore.clear();
3842 DVMap.clear();
3843
3844 return Changed;
3845}
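// A standalone sketch of the size-based bail-out in ExtendRanges above, as a
// plain predicate. The parameter names are hypothetical; in the pass they
// correspond to the block count, the variable assignment count, and the
// InputBBLimit / InputDbgValLimit arguments.
namespace bailout_sketch {

inline bool shouldRunValuePropagation(unsigned NumBlocks,
                                      unsigned NumVarAssigns,
                                      unsigned BlockLimit,
                                      unsigned VarAssignLimit) {
  // Propagation is skipped only when *both* limits are exceeded, matching the
  // condition checked in ExtendRanges.
  return !(NumBlocks > BlockLimit && NumVarAssigns > VarAssignLimit);
}

} // namespace bailout_sketch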
3846
3847 LDVImpl *llvm::makeInstrRefBasedLiveDebugValues() {
3848 return new InstrRefBasedLDV();
3849}
3850
3851namespace {
3852class LDVSSABlock;
3853class LDVSSAUpdater;
3854
3855// Pick a type to identify incoming block values as we construct SSA. We
3856// can't use anything more robust than an integer unfortunately, as SSAUpdater
3857// expects to zero-initialize the type.
3858typedef uint64_t BlockValueNum;
3859
3860/// Represents an SSA PHI node for the SSA updater class. Contains the block
3861/// this PHI is in, the value number it would have, and the expected incoming
3862/// values from parent blocks.
3863class LDVSSAPhi {
3864public:
3865 SmallVector<std::pair<LDVSSABlock *, BlockValueNum>, 4> IncomingValues;
3866 LDVSSABlock *ParentBlock;
3867 BlockValueNum PHIValNum;
3868 LDVSSAPhi(BlockValueNum PHIValNum, LDVSSABlock *ParentBlock)
3869 : ParentBlock(ParentBlock), PHIValNum(PHIValNum) {}
3870
3871 LDVSSABlock *getParent() { return ParentBlock; }
3872};
3873
3874/// Thin wrapper around a block predecessor iterator. Only difference from a
3875/// normal block iterator is that it dereferences to an LDVSSABlock.
3876class LDVSSABlockIterator {
3877public:
3878 MachineBasicBlock::pred_iterator PredIt;
3879 LDVSSAUpdater &Updater;
3880
3881 LDVSSABlockIterator(MachineBasicBlock::pred_iterator PredIt,
3882 LDVSSAUpdater &Updater)
3883 : PredIt(PredIt), Updater(Updater) {}
3884
3885 bool operator!=(const LDVSSABlockIterator &OtherIt) const {
3886 return OtherIt.PredIt != PredIt;
3887 }
3888
3889 LDVSSABlockIterator &operator++() {
3890 ++PredIt;
3891 return *this;
3892 }
3893
3894 LDVSSABlock *operator*();
3895};
3896
3897/// Thin wrapper around a block for SSA Updater interface. Necessary because
3898/// we need to track the PHI value(s) that we may have observed as necessary
3899/// in this block.
3900class LDVSSABlock {
3901public:
3902 MachineBasicBlock &BB;
3903 LDVSSAUpdater &Updater;
3904 using PHIListT = SmallVector<LDVSSAPhi, 1>;
3905 /// List of PHIs in this block. There should only ever be one.
3906 PHIListT PHIList;
3907
3908 LDVSSABlock(MachineBasicBlock &BB, LDVSSAUpdater &Updater)
3909 : BB(BB), Updater(Updater) {}
3910
3911 LDVSSABlockIterator succ_begin() {
3912 return LDVSSABlockIterator(BB.succ_begin(), Updater);
3913 }
3914
3915 LDVSSABlockIterator succ_end() {
3916 return LDVSSABlockIterator(BB.succ_end(), Updater);
3917 }
3918
3919 /// SSAUpdater has requested a PHI: create that within this block record.
3920 LDVSSAPhi *newPHI(BlockValueNum Value) {
3921 PHIList.emplace_back(Value, this);
3922 return &PHIList.back();
3923 }
3924
3925 /// SSAUpdater wishes to know what PHIs already exist in this block.
3926 PHIListT &phis() { return PHIList; }
3927};
3928
3929/// Utility class for the SSAUpdater interface: tracks blocks, PHIs and values
3930/// while SSAUpdater is exploring the CFG. It's passed as a handle / baton to
3931 /// SSAUpdaterTraits<LDVSSAUpdater>.
3932class LDVSSAUpdater {
3933public:
3934 /// Map of value numbers to PHI records.
3935 DenseMap<BlockValueNum, LDVSSAPhi *> PHIs;
3936 /// Map of which blocks generate Undef values -- blocks that are not
3937 /// dominated by any Def.
3938 DenseMap<MachineBasicBlock *, BlockValueNum> PoisonMap;
3939 /// Map of machine blocks to our own records of them.
3940 DenseMap<MachineBasicBlock *, LDVSSABlock *> BlockMap;
3941 /// Machine location where any PHI must occur.
3942 LocIdx Loc;
3943 /// Table of live-in machine value numbers for blocks / locations.
3944 const FuncValueTable &MLiveIns;
3945
3946 LDVSSAUpdater(LocIdx L, const FuncValueTable &MLiveIns)
3947 : Loc(L), MLiveIns(MLiveIns) {}
3948
3949 void reset() {
3950 for (auto &Block : BlockMap)
3951 delete Block.second;
3952
3953 PHIs.clear();
3954 PoisonMap.clear();
3955 BlockMap.clear();
3956 }
3957
3958 ~LDVSSAUpdater() { reset(); }
3959
3960 /// For a given MBB, create a wrapper block for it. Stores it in the
3961 /// LDVSSAUpdater block map.
3962 LDVSSABlock *getSSALDVBlock(MachineBasicBlock *BB) {
3963 auto it = BlockMap.find(BB);
3964 if (it == BlockMap.end()) {
3965 BlockMap[BB] = new LDVSSABlock(*BB, *this);
3966 it = BlockMap.find(BB);
3967 }
3968 return it->second;
3969 }
3970
3971 /// Find the live-in value number for the given block. Looks up the value at
3972 /// the PHI location on entry.
3973 BlockValueNum getValue(LDVSSABlock *LDVBB) {
3974 return MLiveIns[LDVBB->BB][Loc.asU64()].asU64();
3975 }
3976};
3977
3978LDVSSABlock *LDVSSABlockIterator::operator*() {
3979 return Updater.getSSALDVBlock(*PredIt);
3980}
3981
3982#ifndef NDEBUG
3983
3984raw_ostream &operator<<(raw_ostream &out, const LDVSSAPhi &PHI) {
3985 out << "SSALDVPHI " << PHI.PHIValNum;
3986 return out;
3987}
3988
3989#endif
3990
3991} // namespace
3992
3993namespace llvm {
3994
3995/// Template specialization to give SSAUpdater access to CFG and value
3996/// information. SSAUpdater calls methods in these traits, passing in the
3997/// LDVSSAUpdater object, to learn about blocks and the values they define.
3998/// It also provides methods to create PHI nodes and track them.
3999template <> class SSAUpdaterTraits<LDVSSAUpdater> {
4000public:
4001 using BlkT = LDVSSABlock;
4002 using ValT = BlockValueNum;
4003 using PhiT = LDVSSAPhi;
4004 using BlkSucc_iterator = LDVSSABlockIterator;
4005
4006 // Methods to access block successors -- dereferencing to our wrapper class.
4007 static BlkSucc_iterator BlkSucc_begin(BlkT *BB) { return BB->succ_begin(); }
4008 static BlkSucc_iterator BlkSucc_end(BlkT *BB) { return BB->succ_end(); }
4009
4010 /// Iterator for PHI operands.
4011 class PHI_iterator {
4012 private:
4013 LDVSSAPhi *PHI;
4014 unsigned Idx;
4015
4016 public:
4017 explicit PHI_iterator(LDVSSAPhi *P) // begin iterator
4018 : PHI(P), Idx(0) {}
4019 PHI_iterator(LDVSSAPhi *P, bool) // end iterator
4020 : PHI(P), Idx(PHI->IncomingValues.size()) {}
4021
4022 PHI_iterator &operator++() {
4023 Idx++;
4024 return *this;
4025 }
4026 bool operator==(const PHI_iterator &X) const { return Idx == X.Idx; }
4027 bool operator!=(const PHI_iterator &X) const { return !operator==(X); }
4028
4029 BlockValueNum getIncomingValue() { return PHI->IncomingValues[Idx].second; }
4030
4031 LDVSSABlock *getIncomingBlock() { return PHI->IncomingValues[Idx].first; }
4032 };
4033
4034 static inline PHI_iterator PHI_begin(PhiT *PHI) { return PHI_iterator(PHI); }
4035
4036 static inline PHI_iterator PHI_end(PhiT *PHI) {
4037 return PHI_iterator(PHI, true);
4038 }
4039
4040 /// FindPredecessorBlocks - Put the predecessors of BB into the Preds
4041 /// vector.
4042 static void FindPredecessorBlocks(LDVSSABlock *BB,
4043 SmallVectorImpl<LDVSSABlock *> *Preds) {
4044 for (MachineBasicBlock *Pred : BB->BB.predecessors())
4045 Preds->push_back(BB->Updater.getSSALDVBlock(Pred));
4046 }
4047
4048 /// GetPoisonVal - Normally creates an IMPLICIT_DEF instruction with a new
4049 /// register. For LiveDebugValues, represents a block identified as not having
4050 /// any DBG_PHI predecessors.
4051 static BlockValueNum GetPoisonVal(LDVSSABlock *BB, LDVSSAUpdater *Updater) {
4052 // Create a value number for this block -- it needs to be unique and in the
4053 // "poison" collection, so that we know it's not real. Use a number
4054 // representing a PHI into this block.
4055 BlockValueNum Num = ValueIDNum(BB->BB.getNumber(), 0, Updater->Loc).asU64();
4056 Updater->PoisonMap[&BB->BB] = Num;
4057 return Num;
4058 }
4059
4060 /// CreateEmptyPHI - Create a (representation of a) PHI in the given block.
4061 /// SSAUpdater will populate it with information about incoming values. The
4062 /// value number of this PHI is whatever the machine value number problem
4063 /// solution determined it to be. This includes non-phi values if SSAUpdater
4064 /// tries to create a PHI where the incoming values are identical.
4065 static BlockValueNum CreateEmptyPHI(LDVSSABlock *BB, unsigned NumPreds,
4066 LDVSSAUpdater *Updater) {
4067 BlockValueNum PHIValNum = Updater->getValue(BB);
4068 LDVSSAPhi *PHI = BB->newPHI(PHIValNum);
4069 Updater->PHIs[PHIValNum] = PHI;
4070 return PHIValNum;
4071 }
4072
4073 /// AddPHIOperand - Add the specified value as an operand of the PHI for
4074 /// the specified predecessor block.
4075 static void AddPHIOperand(LDVSSAPhi *PHI, BlockValueNum Val, LDVSSABlock *Pred) {
4076 PHI->IncomingValues.push_back(std::make_pair(Pred, Val));
4077 }
4078
4079 /// ValueIsPHI - Check if the instruction that defines the specified value
4080 /// is a PHI instruction.
4081 static LDVSSAPhi *ValueIsPHI(BlockValueNum Val, LDVSSAUpdater *Updater) {
4082 return Updater->PHIs.lookup(Val);
4083 }
4084
4085 /// ValueIsNewPHI - Like ValueIsPHI but also check if the PHI has no source
4086 /// operands, i.e., it was just added.
4087 static LDVSSAPhi *ValueIsNewPHI(BlockValueNum Val, LDVSSAUpdater *Updater) {
4088 LDVSSAPhi *PHI = ValueIsPHI(Val, Updater);
4089 if (PHI && PHI->IncomingValues.size() == 0)
4090 return PHI;
4091 return nullptr;
4092 }
4093
4094 /// GetPHIValue - For the specified PHI instruction, return the value
4095 /// that it defines.
4096 static BlockValueNum GetPHIValue(LDVSSAPhi *PHI) { return PHI->PHIValNum; }
4097};
4098
4099} // end namespace llvm
4100
4101std::optional<ValueIDNum> InstrRefBasedLDV::resolveDbgPHIs(
4102 MachineFunction &MF, const FuncValueTable &MLiveOuts,
4103 const FuncValueTable &MLiveIns, MachineInstr &Here, uint64_t InstrNum) {
4104 // This function will be called twice per DBG_INSTR_REF, and might end up
4105 // computing lots of SSA information: memoize it.
4106 auto SeenDbgPHIIt = SeenDbgPHIs.find(std::make_pair(&Here, InstrNum));
4107 if (SeenDbgPHIIt != SeenDbgPHIs.end())
4108 return SeenDbgPHIIt->second;
4109
4110 std::optional<ValueIDNum> Result =
4111 resolveDbgPHIsImpl(MF, MLiveOuts, MLiveIns, Here, InstrNum);
4112 SeenDbgPHIs.insert({std::make_pair(&Here, InstrNum), Result});
4113 return Result;
4114}
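// A standalone sketch of the memoization used by resolveDbgPHIs above, with
// standard containers: the cached value is an optional, so a failed resolution
// (std::nullopt) is remembered just like a successful one, and the expensive
// SSA computation runs at most once per (instruction, number) query. The key
// and value types here are hypothetical simplifications.
#include <cstdint>
#include <map>
#include <optional>
#include <utility>

namespace memo_sketch {

struct PHIResolver {
  // Cache keyed by (use instruction, instruction-reference number).
  std::map<std::pair<const void *, std::uint64_t>, std::optional<std::uint64_t>>
      Seen;

  std::optional<std::uint64_t> resolve(const void *UseInst,
                                       std::uint64_t InstrNum) {
    auto Key = std::make_pair(UseInst, InstrNum);
    auto It = Seen.find(Key);
    if (It != Seen.end())
      return It->second; // Cached result, including cached failures.
    std::optional<std::uint64_t> Result = resolveImpl(UseInst, InstrNum);
    Seen.emplace(Key, Result);
    return Result;
  }

  // Stand-in for the expensive SSA computation in resolveDbgPHIsImpl.
  std::optional<std::uint64_t> resolveImpl(const void *, std::uint64_t) {
    return std::nullopt;
  }
};

} // namespace memo_sketch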
4115
4116std::optional<ValueIDNum> InstrRefBasedLDV::resolveDbgPHIsImpl(
4117 MachineFunction &MF, const FuncValueTable &MLiveOuts,
4118 const FuncValueTable &MLiveIns, MachineInstr &Here, uint64_t InstrNum) {
4119 // Pick out records of DBG_PHI instructions that have been observed. If there
4120 // are none, then we cannot compute a value number.
4121 auto RangePair = std::equal_range(DebugPHINumToValue.begin(),
4122 DebugPHINumToValue.end(), InstrNum);
4123 auto LowerIt = RangePair.first;
4124 auto UpperIt = RangePair.second;
4125
4126 // No DBG_PHI means there can be no location.
4127 if (LowerIt == UpperIt)
4128 return std::nullopt;
4129
4130 // If any DBG_PHIs referred to a location we didn't understand, don't try to
4131 // compute a value. There might be scenarios where we could recover a value
4132 // for some range of DBG_INSTR_REFs, but at this point we can have high
4133 // confidence that we've seen a bug.
4134 auto DBGPHIRange = make_range(LowerIt, UpperIt);
4135 for (const DebugPHIRecord &DBG_PHI : DBGPHIRange)
4136 if (!DBG_PHI.ValueRead)
4137 return std::nullopt;
4138
4139 // If there's only one DBG_PHI, then that is our value number.
4140 if (std::distance(LowerIt, UpperIt) == 1)
4141 return *LowerIt->ValueRead;
4142
4143 // Pick out the location (physreg, slot) where any PHIs must occur. It's
4144 // technically possible for us to merge values in different registers in each
4145 // block, but highly unlikely that LLVM will generate such code after register
4146 // allocation.
4147 LocIdx Loc = *LowerIt->ReadLoc;
4148
4149 // We have several DBG_PHIs, and a use position (the Here inst). Each
4150 // DBG_PHI identifies a value at a program position, so we can treat each
4151 // DBG_PHI like a Def of a value, and the use position as a Use of that
4152 // value, just like SSA. We use the standard LLVM SSA updater class to
4153 // determine which Def is used at the Use, and any PHIs that happen along
4154 // the way.
4155 // Adapted LLVM SSA Updater:
4156 LDVSSAUpdater Updater(Loc, MLiveIns);
4157 // Map of which Def or PHI is the current value in each block.
4159 // Set of PHIs that we have created along the way.
4160 SmallVector<LDVSSAPhi *, 8> CreatedPHIs;
4161
4162 // Each existing DBG_PHI is a Def'd value under this model. Record these Defs
4163 // for the SSAUpdater.
4164 for (const auto &DBG_PHI : DBGPHIRange) {
4165 LDVSSABlock *Block = Updater.getSSALDVBlock(DBG_PHI.MBB);
4166 const ValueIDNum &Num = *DBG_PHI.ValueRead;
4167 AvailableValues.insert(std::make_pair(Block, Num.asU64()));
4168 }
4169
4170 LDVSSABlock *HereBlock = Updater.getSSALDVBlock(Here.getParent());
4171 const auto &AvailIt = AvailableValues.find(HereBlock);
4172 if (AvailIt != AvailableValues.end()) {
4173 // Actually, we already know what the value is -- the Use is in the same
4174 // block as the Def.
4175 return ValueIDNum::fromU64(AvailIt->second);
4176 }
4177
4178 // Otherwise, we must use the SSA Updater. It will identify the value number
4179 // that we are to use, and the PHIs that must happen along the way.
4180 SSAUpdaterImpl<LDVSSAUpdater> Impl(&Updater, &AvailableValues, &CreatedPHIs);
4181 BlockValueNum ResultInt = Impl.GetValue(Updater.getSSALDVBlock(Here.getParent()));
4182 ValueIDNum Result = ValueIDNum::fromU64(ResultInt);
4183
4184 // We have the number for a PHI, or possibly live-through value, to be used
4185 // at this Use. There are a number of things we have to check about it though:
4186 // * Does any PHI use an 'Undef' (like an IMPLICIT_DEF) value? If so, this
4187 // Use was not completely dominated by DBG_PHIs and we should abort.
4188 // * Are the Defs or PHIs clobbered in a block? SSAUpdater isn't aware that
4189 // we've left SSA form. Validate that the inputs to each PHI are the
4190 // expected values.
4191 // * Is a PHI we've created actually a merging of values, or are all the
4192 // predecessor values the same, leading to a non-PHI machine value number?
4193 // (SSAUpdater doesn't know that either). Remap validated PHIs into the
4194 // ValidatedValues collection below to sort this out.
4195 DenseMap<LDVSSABlock *, ValueIDNum> ValidatedValues;
4196
4197 // Define all the input DBG_PHI values in ValidatedValues.
4198 for (const auto &DBG_PHI : DBGPHIRange) {
4199 LDVSSABlock *Block = Updater.getSSALDVBlock(DBG_PHI.MBB);
4200 const ValueIDNum &Num = *DBG_PHI.ValueRead;
4201 ValidatedValues.insert(std::make_pair(Block, Num));
4202 }
4203
4204 // Sort PHIs to validate into RPO-order.
4205 SmallVector<LDVSSAPhi *, 8> SortedPHIs;
4206 for (auto &PHI : CreatedPHIs)
4207 SortedPHIs.push_back(PHI);
4208
4209 llvm::sort(SortedPHIs, [&](LDVSSAPhi *A, LDVSSAPhi *B) {
4210 return BBToOrder[&A->getParent()->BB] < BBToOrder[&B->getParent()->BB];
4211 });
4212
4213 for (auto &PHI : SortedPHIs) {
4214 ValueIDNum ThisBlockValueNum = MLiveIns[PHI->ParentBlock->BB][Loc.asU64()];
4215
4216 // Are all these things actually defined?
4217 for (auto &PHIIt : PHI->IncomingValues) {
4218 // Any undef input means DBG_PHIs didn't dominate the use point.
4219 if (Updater.PoisonMap.contains(&PHIIt.first->BB))
4220 return std::nullopt;
4221
4222 ValueIDNum ValueToCheck;
4223 const ValueTable &BlockLiveOuts = MLiveOuts[PHIIt.first->BB];
4224
4225 auto VVal = ValidatedValues.find(PHIIt.first);
4226 if (VVal == ValidatedValues.end()) {
4227 // We cross a loop, and this is a backedge. LLVM's tail duplication
4228 // happens so late that DBG_PHI instructions should not be able to
4229 // migrate into loops -- meaning we can only be live-through this
4230 // loop.
4231 ValueToCheck = ThisBlockValueNum;
4232 } else {
4233 // Does the block have as a live-out, in the location we're examining,
4234 // the value that we expect? If not, it's been moved or clobbered.
4235 ValueToCheck = VVal->second;
4236 }
4237
4238 if (BlockLiveOuts[Loc.asU64()] != ValueToCheck)
4239 return std::nullopt;
4240 }
4241
4242 // Record this value as validated.
4243 ValidatedValues.insert({PHI->ParentBlock, ThisBlockValueNum});
4244 }
4245
4246 // All the PHIs are valid: we can return what the SSAUpdater said our value
4247 // number was.
4248 return Result;
4249}
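// A standalone sketch of the sorted-record lookup at the top of
// resolveDbgPHIsImpl above: records are kept sorted by instruction number, so
// every DBG_PHI for one number can be found with std::equal_range. PHIRecord
// is a hypothetical simplification of DebugPHIRecord.
#include <algorithm>
#include <cstdint>
#include <vector>

namespace phi_lookup_sketch {

struct PHIRecord {
  std::uint64_t InstrNum; // Instruction-reference number of the DBG_PHI.
  std::uint64_t Value;    // Machine value read at that point.
};

// Heterogeneous comparisons so equal_range can probe with a bare number.
inline bool operator<(const PHIRecord &R, std::uint64_t Num) {
  return R.InstrNum < Num;
}
inline bool operator<(std::uint64_t Num, const PHIRecord &R) {
  return Num < R.InstrNum;
}

// Return every record for InstrNum; Records must be sorted by InstrNum.
inline std::vector<PHIRecord> lookup(const std::vector<PHIRecord> &Records,
                                     std::uint64_t InstrNum) {
  auto Range = std::equal_range(Records.begin(), Records.end(), InstrNum);
  return std::vector<PHIRecord>(Range.first, Range.second);
}

} // namespace phi_lookup_sketch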
Rewrite undef for PHI
MachineBasicBlock & MBB
MachineBasicBlock MachineBasicBlock::iterator DebugLoc DL
static cl::opt< unsigned > MaxNumBlocks("debug-ata-max-blocks", cl::init(10000), cl::desc("Maximum num basic blocks before debug info dropped"), cl::Hidden)
static const Function * getParent(const Value *V)
static GCRegistry::Add< OcamlGC > B("ocaml", "ocaml 3.10-compatible GC")
static GCRegistry::Add< ErlangGC > A("erlang", "erlang-compatible garbage collector")
#define LLVM_DUMP_METHOD
Mark debug helper function definitions like dump() that should not be stripped from debug builds.
Definition: Compiler.h:537
Returns the sub type a function will return at a given Idx Should correspond to the result type of an ExtractValue instruction executed with just that one unsigned Idx
#define LLVM_DEBUG(X)
Definition: Debug.h:101
This file defines the DenseMap class.
This file contains constants used for implementing Dwarf debug support.
uint64_t Size
static GCMetadataPrinterRegistry::Add< ErlangGCPrinter > X("erlang", "erlang-compatible garbage collector")
Compute iterated dominance frontiers using a linear time algorithm.
const HexagonInstrInfo * TII
IRTranslator LLVM IR MI
static cl::opt< unsigned > StackWorkingSetLimit("livedebugvalues-max-stack-slots", cl::Hidden, cl::desc("livedebugvalues-stack-ws-limit"), cl::init(250))
static cl::opt< bool > EmulateOldLDV("emulate-old-livedebugvalues", cl::Hidden, cl::desc("Act like old LiveDebugValues did"), cl::init(false))
#define NUM_LOC_BITS
static cl::opt< unsigned > InputBBLimit("livedebugvalues-input-bb-limit", cl::desc("Maximum input basic blocks before DBG_VALUE limit applies"), cl::init(10000), cl::Hidden)
#define I(x, y, z)
Definition: MD5.cpp:58
unsigned const TargetRegisterInfo * TRI
#define P(N)
const char LLVMTargetMachineRef TM
This file builds on the ADT/GraphTraits.h file to build a generic graph post order iterator.
assert(ImpDefSCC.getReg()==AMDGPU::SCC &&ImpDefSCC.isDef())
This file contains some templates that are useful if you are working with the STL at all.
This file defines the SmallPtrSet class.
This file defines the SmallSet class.
This file defines the SmallVector class.
This file describes how to lower LLVM code to machine code.
Target-Independent Code Generator Pass Configuration Options pass.
Class storing the complete set of values that are observed by DbgValues within the current function.
DbgOp find(DbgOpID ID) const
Returns the DbgOp associated with ID.
DbgOpID insert(DbgOp Op)
If Op does not already exist in this map, it is inserted and the corresponding DbgOpID is returned.
Meta qualifiers for a value.
bool isJoinable(const DbgValueProperties &Other) const
Class recording the (high level) value of a variable.
int BlockNo
For a NoVal or VPHI DbgValue, which block it was generated in.
DbgValueProperties Properties
Qualifiers for the ValueIDNum above.
ArrayRef< DbgOpID > getDbgOpIDs() const
void setDbgOpIDs(ArrayRef< DbgOpID > NewIDs)
void dump(const MLocTracker *MTrack=nullptr, const DbgOpIDMap *OpStore=nullptr) const
DbgOpID getDbgOpID(unsigned Index) const
KindT Kind
Discriminator for whether this is a constant or an in-program value.
unsigned getLocationOpCount() const
Mapping from DebugVariable to/from a unique identifying number.
const VarAndLoc & lookupDVID(DebugVariableID ID) const
DebugVariableID insertDVID(DebugVariable &Var, const DILocation *Loc)
DebugVariableID getDVID(const DebugVariable &Var) const
DenseMap< const MachineBasicBlock *, DbgValue * > LiveIdxT
Live in/out structure for the variable values: a per-block map of variables to their values.
DenseMap< const LexicalScope *, const DILocation * > ScopeToDILocT
Mapping from lexical scopes to a DILocation in that scope.
std::optional< LocIdx > findLocationForMemOperand(const MachineInstr &MI)
SmallVector< SmallVector< VarAndLoc, 8 >, 8 > LiveInsT
Vector (per block) of a collection (inner smallvector) of live-ins.
InstrRefBasedLDV()
Default construct and initialize the pass.
DIExpression::FragmentInfo FragmentInfo
bool hasFoldedStackStore(const MachineInstr &MI)
DenseMap< const LexicalScope *, SmallPtrSet< MachineBasicBlock *, 4 > > ScopeToAssignBlocksT
Mapping from lexical scopes to blocks where variables in that scope are assigned.
LLVM_DUMP_METHOD void dump_mloc_transfer(const MLocTransferMap &mloc_transfer) const
DenseMap< const LexicalScope *, SmallSet< DebugVariableID, 4 > > ScopeToVarsT
Mapping from lexical scopes to variables in that scope.
Handle-class for a particular "location".
static LocIdx MakeIllegalLoc()
Tracker for what values are in machine locations.
unsigned getLocSizeInBits(LocIdx L) const
How large is this location (aka, how wide is a value defined there?).
bool isRegisterTracked(Register R)
Is register R currently tracked by MLocTracker?
std::optional< SpillLocationNo > getOrTrackSpillLoc(SpillLoc L)
Find LocIdx for SpillLoc L, creating a new one if it's not tracked.
void loadFromArray(ValueTable &Locs, unsigned NewCurBB)
Load values for each location from array of ValueIDNums.
IndexedMap< unsigned, LocIdxToIndexFunctor > LocIdxToLocID
Inverse map of LocIDToLocIdx.
unsigned getSpillIDWithIdx(SpillLocationNo Spill, unsigned Idx)
Given a spill number, and a slot within the spill, calculate the ID number for that location.
iterator_range< MLocIterator > locations()
Return a range over all locations currently tracked.
SmallSet< Register, 8 > SPAliases
When clobbering register masks, we chose to not believe the machine model and don't clobber SP.
unsigned getLocID(Register Reg)
Produce location ID number for a Register.
const TargetLowering & TLI
const TargetRegisterInfo & TRI
unsigned NumRegs
Cached local copy of the number of registers the target has.
DenseMap< StackSlotPos, unsigned > StackSlotIdxes
Map from a size/offset pair describing a position in a stack slot, to a numeric identifier for that p...
LocIdx lookupOrTrackRegister(unsigned ID)
void setReg(Register R, ValueIDNum ValueID)
Set a register to a value number.
SpillLocationNo locIDToSpill(unsigned ID) const
Return the spill number that a location ID corresponds to.
void reset()
Wipe any un-necessary location records after traversing a block.
DenseMap< unsigned, StackSlotPos > StackIdxesToPos
Inverse of StackSlotIdxes.
std::string IDAsString(const ValueIDNum &Num) const
void writeRegMask(const MachineOperand *MO, unsigned CurBB, unsigned InstID)
Record a RegMask operand being executed.
std::pair< unsigned short, unsigned short > StackSlotPos
Pair for describing a position within a stack slot – first the size in bits, then the offset.
const TargetInstrInfo & TII
bool isSpill(LocIdx Idx) const
Return true if Idx is a spill machine location.
LocIdx getRegMLoc(Register R)
Determine the LocIdx of an existing register.
MachineInstrBuilder emitLoc(const SmallVectorImpl< ResolvedDbgOp > &DbgOps, const DebugVariable &Var, const DILocation *DILoc, const DbgValueProperties &Properties)
Create a DBG_VALUE based on debug operands DbgOps.
void setMLoc(LocIdx L, ValueIDNum Num)
Set a locaiton to a certain value.
LocToValueType LocIdxToIDNum
Map of LocIdxes to the ValueIDNums that they store.
std::vector< LocIdx > LocIDToLocIdx
"Map" of machine location IDs (i.e., raw register or spill number) to the LocIdx key / number for tha...
SmallVector< std::pair< const MachineOperand *, unsigned >, 32 > Masks
Collection of register mask operands that have been observed.
unsigned NumSlotIdxes
Number of slot indexes the target has – distinct segments of a stack slot that can take on the value ...
UniqueVector< SpillLoc > SpillLocs
Unique-ification of spill.
ValueIDNum readMLoc(LocIdx L)
Read the value of a particular location.
void setMPhis(unsigned NewCurBB)
Reset all locations to contain a PHI value at the designated block.
ValueIDNum readReg(Register R)
void defReg(Register R, unsigned BB, unsigned Inst)
Record a definition of the specified register at the given block / inst.
LLVM_DUMP_METHOD void dump()
LocIdx trackRegister(unsigned ID)
Create a LocIdx for an untracked register ID.
MLocTracker(MachineFunction &MF, const TargetInstrInfo &TII, const TargetRegisterInfo &TRI, const TargetLowering &TLI)
LLVM_DUMP_METHOD void dump_mloc_map()
StackSlotPos locIDToSpillIdx(unsigned ID) const
Returns the spill-slot size/offs that a location ID corresponds to.
LocIdx getSpillMLoc(unsigned SpillID)
std::string LocIdxToName(LocIdx Idx) const
Thin wrapper around an integer – designed to give more type safety to spill location numbers.
Collection of DBG_VALUEs observed when traversing a block.
SmallDenseMap< DebugVariableID, const DILocation *, 8 > Scopes
void defVar(const MachineInstr &MI, const DbgValueProperties &Properties, const SmallVectorImpl< DbgOpID > &DebugOps)
MapVector< DebugVariableID, DbgValue > Vars
Map DebugVariable to the latest Value it's defined to have.
Unique identifier for a value defined by an instruction, as a value type.
static ValueIDNum fromU64(uint64_t v)
std::string asString(const std::string &mlocname) const
static ValueIDNum TombstoneValue
LocationAndQuality(LocIdx L, LocationQuality Q)
Tracker for converting machine value locations and variable values into variable locations (the outpu...
const DebugVariableMap & DVMap
DenseSet< DebugVariableID > UseBeforeDefVariables
The set of variables that are in UseBeforeDefs and can become a location once the relevant value is d...
const BitVector & CalleeSavedRegs
void loadInlocs(MachineBasicBlock &MBB, ValueTable &MLocs, DbgOpIDMap &DbgOpStore, const SmallVectorImpl< std::pair< DebugVariableID, DbgValue > > &VLocs, unsigned NumLocs)
Load object with live-in variable values.
const TargetLowering * TLI
void addUseBeforeDef(DebugVariableID VarID, const DbgValueProperties &Properties, const SmallVectorImpl< DbgOp > &DbgOps, unsigned Inst)
Record that Var has value ID, a value that becomes available later in the function.
SmallVector< ValueIDNum, 32 > VarLocs
Local cache of what-value-is-in-what-LocIdx.
MLocTracker * MTracker
This machine location tracker is assumed to always contain the up-to-date value mapping for all machi...
void transferMlocs(LocIdx Src, LocIdx Dst, MachineBasicBlock::iterator Pos)
Transfer variables based on Src to be based on Dst.
MachineFunction & MF
std::optional< LocationQuality > getLocQualityIfBetter(LocIdx L, LocationQuality Min) const
SmallVector< std::pair< DebugVariableID, MachineInstr * >, 4 > PendingDbgValues
Temporary cache of DBG_VALUEs to be entered into the Transfers collection.
bool isEntryValueVariable(const DebugVariable &Var, const DIExpression *Expr) const
void checkInstForNewValues(unsigned Inst, MachineBasicBlock::iterator pos)
After the instruction at index Inst and position pos has been processed, check whether it defines a v...
const TargetInstrInfo * TII
DenseMap< LocIdx, SmallSet< DebugVariableID, 4 > > ActiveMLocs
Map from LocIdxes to which DebugVariables are based that location.
MachineInstrBuilder emitMOLoc(const MachineOperand &MO, const DebugVariable &Var, const DbgValueProperties &Properties)
TransferTracker(const TargetInstrInfo *TII, MLocTracker *MTracker, MachineFunction &MF, const DebugVariableMap &DVMap, const TargetRegisterInfo &TRI, const BitVector &CalleeSavedRegs, const TargetPassConfig &TPC)
bool isEntryValueValue(const ValueIDNum &Val) const
const TargetRegisterInfo & TRI
void redefVar(const MachineInstr &MI)
Change a variable value after encountering a DBG_VALUE inside a block.
bool recoverAsEntryValue(DebugVariableID VarID, const DbgValueProperties &Prop, const ValueIDNum &Num)
bool isCalleeSaved(LocIdx L) const
void clobberMloc(LocIdx MLoc, MachineBasicBlock::iterator Pos, bool MakeUndef=true)
Account for a location mloc being clobbered.
void flushDbgValues(MachineBasicBlock::iterator Pos, MachineBasicBlock *MBB)
Helper to move created DBG_VALUEs into Transfers collection.
DenseMap< DebugVariableID, ResolvedDbgValue > ActiveVLocs
Map from DebugVariable to its current location and qualifying meta information.
DenseMap< unsigned, SmallVector< UseBeforeDef, 1 > > UseBeforeDefs
Map from instruction index (within the block) to the set of UseBeforeDefs that become defined at that...
void clobberMloc(LocIdx MLoc, ValueIDNum OldValue, MachineBasicBlock::iterator Pos, bool MakeUndef=true)
Overload that takes an explicit value OldValue for when the value in MLoc has changed and the Transfe...
SmallVector< Transfer, 32 > Transfers
Collection of transfers (DBG_VALUEs) to be inserted.
void redefVar(const MachineInstr &MI, const DbgValueProperties &Properties, SmallVectorImpl< ResolvedDbgOp > &NewLocs)
Handle a change in variable location within a block.
void loadVarInloc(MachineBasicBlock &MBB, DbgOpIDMap &DbgOpStore, const SmallVectorImpl< ValueLocPair > &ValueToLoc, DebugVariableID VarID, DbgValue Value)
For a variable Var with the live-in value Value, attempts to resolve the DbgValue to a concrete DBG_V...
std::pair< ValueIDNum, LocationAndQuality > ValueLocPair
static bool ValueToLocSort(const ValueLocPair &A, const ValueLocPair &B)
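The ValueLocPair and ValueToLocSort entries describe a vector of value-to-location pairs kept sorted on the value key; a minimal sketch of that idea, using placeholder types ValueKey and Location rather than the real ValueIDNum and LocationAndQuality:

#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

using ValueKey = uint64_t;  // stand-in for ValueIDNum
using Location = unsigned;  // stand-in for LocationAndQuality
using ValueLoc = std::pair<ValueKey, Location>;

// Order pairs by the value key only, so a sorted vector can be probed with
// std::lower_bound when resolving a variable's live-in value to a location.
static bool valueToLocSort(const ValueLoc &A, const ValueLoc &B) {
  return A.first < B.first;
}

static const Location *findLoc(const std::vector<ValueLoc> &Sorted,
                               ValueKey V) {
  auto It = std::lower_bound(Sorted.begin(), Sorted.end(), ValueLoc{V, 0},
                             valueToLocSort);
  return (It != Sorted.end() && It->first == V) ? &It->second : nullptr;
}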
ArrayRef - Represent a constant reference to an array (0 or more elements consecutively in memory),...
Definition: ArrayRef.h:41
bool test(unsigned Idx) const
Definition: BitVector.h:461
BitVector & flip()
Definition: BitVector.h:431
iterator_range< const_set_bits_iterator > set_bits() const
Definition: BitVector.h:140
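A brief, self-contained usage sketch of the BitVector operations listed above (test, flip, set_bits), independent of this pass:

#include "llvm/ADT/BitVector.h"
using namespace llvm;

void bitVectorExample() {
  BitVector CalleeSaved(64);     // 64 bits, all initially clear
  CalleeSaved.set(19);           // mark one register as callee-saved
  if (CalleeSaved.test(19)) {    // query a single bit
    for (unsigned Idx : CalleeSaved.set_bits())
      (void)Idx;                 // visit the index of every set bit
  }
  CalleeSaved.flip();            // invert all bits in place
}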
DWARF expression.
unsigned getNumElements() const
bool isImplicit() const
Return whether this is an implicit location description.
static bool fragmentsOverlap(const FragmentInfo &A, const FragmentInfo &B)
Check if fragments overlap between a pair of FragmentInfos.
static DIExpression * appendOpsToArg(const DIExpression *Expr, ArrayRef< uint64_t > Ops, unsigned ArgNo, bool StackValue=false)
Create a copy of Expr by appending the given list of Ops to each instance of the operand DW_OP_LLVM_a...
bool isComplex() const
Return whether the location is computed on the expression stack, meaning it cannot be a simple regist...
static std::optional< FragmentInfo > getFragmentInfo(expr_op_iterator Start, expr_op_iterator End)
Retrieve the details of this fragment expression.
static std::optional< const DIExpression * > convertToNonVariadicExpression(const DIExpression *Expr)
If Expr is a valid single-location expression, i.e.
bool isDeref() const
Return whether there is exactly one operator and it is a DW_OP_deref.
static DIExpression * prepend(const DIExpression *Expr, uint8_t Flags, int64_t Offset=0)
Prepend DIExpr with a deref and offset operation and optionally turn it into a stack value or/and an ...
bool isSingleLocationExpression() const
Return whether the evaluated expression makes use of a single location at the start of the expression...
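A usage sketch of DIExpression::prepend as listed above; DIExpression::DerefBefore is an existing prepend flag, while the helper name is made up for illustration:

#include "llvm/IR/DebugInfoMetadata.h"
using namespace llvm;

// Produce a copy of Expr that first dereferences the location and then adds a
// constant byte offset, as one might do when a value moves to a spill slot.
const DIExpression *derefWithOffset(const DIExpression *Expr, int64_t Offset) {
  return DIExpression::prepend(Expr, DIExpression::DerefBefore, Offset);
}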
DILocalScope * getScope() const
Get the local scope for this variable.
bool isValidLocationForIntrinsic(const DILocation *DL) const
Check that a location is valid for this variable.
Debug location.
std::optional< uint64_t > getSizeInBits() const
Determines the size of the variable's type.
This class represents an Operation in the Expression.
A debug info location.
Definition: DebugLoc.h:33
Identifies a unique instance of a variable.
const DILocation * getInlinedAt() const
std::optional< FragmentInfo > getFragment() const
const DILocalVariable * getVariable() const
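A sketch of how DebugVariable distinguishes instances of the same source variable; this assumes the three-argument (variable, fragment, inlined-at) constructor from LLVM's DebugInfoMetadata.h, and the helper name is hypothetical:

#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/DebugInfoMetadata.h"
#include <optional>
using namespace llvm;

// Two uses of the same DILocalVariable at different inlined-at locations (or
// with different fragments) map to different DebugVariable keys.
unsigned noteVariable(DenseMap<DebugVariable, unsigned> &Counts,
                      const DILocalVariable *Var, const DILocation *InlinedAt) {
  DebugVariable DV(Var, std::nullopt, InlinedAt); // whole variable, no fragment
  return ++Counts[DV];
}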
iterator find(const_arg_type_t< KeyT > Val)
Definition: DenseMap.h:155
bool erase(const KeyT &Val)
Definition: DenseMap.h:336
unsigned size() const
Definition: DenseMap.h:99
bool empty() const
Definition: DenseMap.h:98
iterator end()
Definition: DenseMap.h:84
std::pair< iterator, bool > insert(const std::pair< KeyT, ValueT > &KV)
Definition: DenseMap.h:211
void reserve(size_type NumEntries)
Grow the densemap so that it can contain at least NumEntries items before resizing again.
Definition: DenseMap.h:103
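For reference, a small usage sketch covering the DenseMap members listed above:

#include "llvm/ADT/DenseMap.h"
using namespace llvm;

void denseMapExample() {
  DenseMap<unsigned, int> M;
  M.reserve(16);            // pre-size to avoid rehashing
  M.insert({1, 42});        // returns {iterator, inserted?}
  auto It = M.find(1);      // end() when the key is absent
  if (It != M.end())
    It->second += 1;
  M.erase(1);               // true if an entry was removed
  bool E = M.empty();
  unsigned N = M.size();
  (void)E;
  (void)N;
}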
Implements a dense probed hash-table based set.
Definition: DenseSet.h:271
DISubprogram * getSubprogram() const
Get the attached subprogram.
Definition: Metadata.cpp:1837
LLVMContext & getContext() const
getContext - Return a reference to the LLVMContext associated with this function.
Definition: Function.cpp:380
Determine the iterated dominance frontier, given a set of defining blocks, and optionally,...
StorageT::size_type size() const
Definition: IndexedMap.h:79
void grow(IndexT n)
Definition: IndexedMap.h:69
This is an important class for using LLVM in a threaded context.
Definition: LLVMContext.h:67
LexicalScope - This class is used to track scope information.
Definition: LexicalScopes.h:44
unsigned getDFSIn() const
SmallVectorImpl< LexicalScope * > & getChildren()
Definition: LexicalScopes.h:65
unsigned getDFSOut() const
void initialize(const MachineFunction &)
initialize - Scan machine function and construct lexical scope nest, resets the instance if necessary.
LexicalScope * findLexicalScope(const DILocation *DL)
findLexicalScope - Find lexical scope, either regular or inlined, for the given DebugLoc.
void getMachineBasicBlocks(const DILocation *DL, SmallPtrSetImpl< const MachineBasicBlock * > &MBBs)
getMachineBasicBlocks - Populate given set using machine basic blocks which have machine instructions...
LexicalScope * getCurrentFunctionScope() const
getCurrentFunctionScope - Return lexical scope for the current function.
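A usage sketch of the LexicalScopes interface above; the helper name and set size are arbitrary choices for illustration:

#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/CodeGen/LexicalScopes.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/IR/DebugInfoMetadata.h"
using namespace llvm;

// Collect the machine basic blocks covered by the lexical scope of DL.
void blocksInScope(const MachineFunction &MF, const DILocation *DL,
                   SmallPtrSet<const MachineBasicBlock *, 8> &MBBs) {
  LexicalScopes LS;
  LS.initialize(MF);            // (re)build the scope nest for this function
  if (LS.findLexicalScope(DL))  // regular or inlined scope, if one exists
    LS.getMachineBasicBlocks(DL, MBBs);
}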
bool hasValue() const
TypeSize getValue() const
Describe properties that are true of each instruction in the target description file.
Definition: MCInstrDesc.h:198
const MCInstrDesc & get(unsigned Opcode) const
Return the machine instruction descriptor that corresponds to the specified instruction opcode.
Definition: MCInstrInfo.h:63
MCRegAliasIterator enumerates all registers aliasing Reg.
MCRegUnitRootIterator enumerates the root registers of a register unit.
bool isValid() const
Check if the iterator is at the end of the list.
unsigned getNumSubRegIndices() const
Return the number of sub-register indices understood by the target.
iterator_range< MCSubRegIterator > subregs(MCRegister Reg) const
Return an iterator range over all sub-registers of Reg, excluding Reg.
unsigned getSubRegIndex(MCRegister RegNo, MCRegister SubRegNo) const
For a given register pair, return the sub-register index if the second register is a sub-register of ...
iterator_range< MCRegUnitIterator > regunits(MCRegister Reg) const
Returns an iterator range over all regunits for Reg.
unsigned getNumRegs() const
Return the number of registers this target has (useful for sizing arrays holding per register informa...
Iterator that enumerates the sub-registers of a Reg and the associated sub-register indices.
bool isValid() const
Returns true if this iterator is not yet at the end.
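A sketch of iterating the register structures listed above; which sub-registers and register units exist depends entirely on the target:

#include "llvm/CodeGen/TargetRegisterInfo.h"
using namespace llvm;

// Visit the sub-registers of Reg (excluding Reg itself) and its register
// units, the smallest aliasing components the target defines.
void visitRegisterPieces(const TargetRegisterInfo &TRI, MCRegister Reg) {
  for (auto SubReg : TRI.subregs(Reg))
    (void)SubReg;
  for (auto Unit : TRI.regunits(Reg))
    (void)Unit;
}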
LLVMContext & getContext() const
Definition: Metadata.h:1233
instr_iterator instr_begin()
SmallVectorImpl< MachineBasicBlock * >::const_iterator const_succ_iterator
instr_iterator insert(instr_iterator I, MachineInstr *M)
Insert MI into the instruction list before I, possibly inside a bundle.
int getNumber() const
MachineBasicBlocks are uniquely numbered at the function level, unless they're not in a MachineFuncti...
bool isEntryBlock() const
Returns true if this is the entry block of the function.
SmallVectorImpl< MachineBasicBlock * >::iterator pred_iterator
Instructions::iterator instr_iterator
const MachineFunction * getParent() const
Return the MachineFunction containing this basic block.
std::string getFullName() const
Return a formatted string to identify this block and its parent function.
iterator_range< succ_iterator > successors()
iterator_range< pred_iterator > predecessors()
instr_iterator insertAfterBundle(instr_iterator I, MachineInstr *MI)
If I is bundled then insert MI into the instruction list after the end of the bundle,...
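A small sketch using the MachineBasicBlock accessors listed above (the counting function itself is made up):

#include "llvm/CodeGen/MachineBasicBlock.h"
using namespace llvm;

// Count the CFG neighbours of a block; getNumber() and getFullName() would be
// the natural choices when printing this for debugging.
unsigned countNeighbours(const MachineBasicBlock &MBB) {
  unsigned N = 0;
  for (const MachineBasicBlock *Pred : MBB.predecessors()) {
    (void)Pred;
    ++N;
  }
  for (const MachineBasicBlock *Succ : MBB.successors()) {
    (void)Succ;
    ++N;
  }
  return N;
}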
DominatorTree Class - Concrete subclass of DominatorTreeBase that is used to compute a normal dominat...
bool properlyDominates(const MachineDomTreeNode *A, const MachineDomTreeNode *B) const
MachineDominatorTree & getBase()
bool adjustsStack() const
Return true if this function adjusts the stack – e.g., when calling another function.
bool isDeadObjectIndex(int ObjectIdx) const
Returns true if the specified index corresponds to a dead object.
Replacement definition for a debug instruction reference.
SmallVector< DebugSubstitution, 8 > DebugValueSubstitutions
Debug value substitutions: a collection of DebugSubstitution objects, recording changes in where a va...
const TargetSubtargetInfo & getSubtarget() const
getSubtarget - Return the subtarget for which this machine code is being compiled.
MachineFrameInfo & getFrameInfo()
getFrameInfo - Return the frame info object for the current function.
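Finally, a sketch tying together the MachineFunction accessors above; nothing here is specific to this pass, and the function name is illustrative only:

#include "llvm/CodeGen/MachineFrameInfo.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/TargetInstrInfo.h"
#include "llvm/CodeGen/TargetSubtargetInfo.h"
using namespace llvm;

// Pull the subtarget's instruction info and the frame info for a function.
void inspectFunction(MachineFunction &MF) {
  const TargetInstrInfo *TII = MF.getSubtarget().getInstrInfo();
  MachineFrameInfo &MFI = MF.getFrameInfo();
  bool AdjustsStack = MFI.adjustsStack(); // e.g. the function makes calls
  (void)TII;
  (void)AdjustsStack;
}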