LLVM 22.0.0git
llvm::InteractiveModelRunner Class Reference

An MLModelRunner that asks for advice from an external agent, or host. More...

#include "llvm/Analysis/InteractiveModelRunner.h"

Inheritance diagram for llvm::InteractiveModelRunner: inherits llvm::MLModelRunner.

Public Member Functions

 InteractiveModelRunner (LLVMContext &Ctx, const std::vector< TensorSpec > &Inputs, const TensorSpec &Advice, StringRef OutboundName, StringRef InboundName)
void switchContext (StringRef Name) override
virtual ~InteractiveModelRunner ()
Public Member Functions inherited from llvm::MLModelRunner
 MLModelRunner (const MLModelRunner &)=delete
MLModelRunner & operator= (const MLModelRunner &)=delete
virtual ~MLModelRunner ()=default
template<typename T>
T evaluate ()
template<typename T, typename I>
T * getTensor (I FeatureID)
template<typename T, typename I>
const T * getTensor (I FeatureID) const
void * getTensorUntyped (size_t Index)
const void * getTensorUntyped (size_t Index) const
Kind getKind () const

Static Public Member Functions

static bool classof (const MLModelRunner *R)

Additional Inherited Members

Public Types inherited from llvm::MLModelRunner
enum class Kind : int { Unknown, Release, Development, NoOp, Interactive }
Protected Member Functions inherited from llvm::MLModelRunner
 MLModelRunner (LLVMContext &Ctx, Kind Type, size_t NumInputs)
void setUpBufferForTensor (size_t Index, const TensorSpec &Spec, void *Buffer)
Protected Attributes inherited from llvm::MLModelRunner
LLVMContext & Ctx
const Kind Type

Detailed Description

An MLModelRunner that asks for advice from an external agent, or host.

It uses two files, ideally named pipes: one to send data to the agent, and one to receive advice. The data exchange uses the training logger (Utils/TrainingLogger.h) format. Specifically, the compiler sends the log header, sets the context, and sends observations; after each observation, the host is expected to reply with a tensor value, as a binary buffer conforming to the shape of the advice. Interleaved, the data closely resembles a training log in which the reward signal is not captured.
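For illustration, the following sketch (not part of the LLVM sources) shows how compiler-side code might drive this protocol. The feature names, shapes, pipe paths, and the single int64 advice value are hypothetical assumptions for the example.

// Minimal compiler-side sketch, assuming two int64 observation features and
// one int64 advice value; names, shapes, and pipe paths are made up.
#include "llvm/Analysis/InteractiveModelRunner.h"
#include "llvm/Analysis/TensorSpec.h"
#include "llvm/IR/LLVMContext.h"
#include <vector>

using namespace llvm;

int64_t askHostForAdvice(LLVMContext &Ctx) {
  // Observation tensors sent to the host, and the advice tensor expected back.
  std::vector<TensorSpec> Inputs{
      TensorSpec::createSpec<int64_t>("caller_size", {1}),
      TensorSpec::createSpec<int64_t>("callee_size", {1})};
  TensorSpec Advice = TensorSpec::createSpec<int64_t>("should_inline", {1});

  // Outbound: compiler -> host; Inbound: host -> compiler. Ideally named pipes.
  InteractiveModelRunner Runner(Ctx, Inputs, Advice,
                                /*OutboundName=*/"/tmp/compiler_out",
                                /*InboundName=*/"/tmp/compiler_in");

  Runner.switchContext("some_function"); // set the logging context
  *Runner.getTensor<int64_t>(0) = 1234;  // populate the observation features
  *Runner.getTensor<int64_t>(1) = 56;
  // Send the observation and block until the host replies with a binary
  // buffer matching the advice shape.
  return Runner.evaluate<int64_t>();
}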

Note that the correctness of the received data is the responsibility of the host. In particular, if insufficient data is sent, the compiler will block while waiting for advice.

Note that, to avoid deadlock, the host can either open the pipes read-write, or first open the pipe to the compiler (the "Inbound") and then the "Outbound". This is because the compiler first tries to open the inbound pipe, which will hang until there is a writer on the other end.
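As an illustrative, POSIX-only sketch (not from the LLVM sources), a host could honor that open order as follows; the pipe paths are assumed to match the ones passed to the InteractiveModelRunner constructor above.

// Host-side sketch of the deadlock-free open order: open the pipe the
// compiler reads from (its "Inbound") for writing first, then the pipe the
// compiler writes to (its "Outbound") for reading. Paths are hypothetical.
#include <fcntl.h>
#include <unistd.h>

int main() {
  // The compiler blocks opening its inbound pipe until a writer appears,
  // so the host opens that end first.
  int AdviceToCompiler = open("/tmp/compiler_in", O_WRONLY);
  int DataFromCompiler = open("/tmp/compiler_out", O_RDONLY);
  // ... read the log header and each observation from DataFromCompiler,
  // then write one raw advice buffer per observation to AdviceToCompiler ...
  close(DataFromCompiler);
  close(AdviceToCompiler);
  return 0;
}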

Definition at line 39 of file InteractiveModelRunner.h.

Constructor & Destructor Documentation

◆ InteractiveModelRunner()

InteractiveModelRunner::InteractiveModelRunner ( LLVMContext & Ctx,
const std::vector< TensorSpec > & Inputs,
const TensorSpec & Advice,
StringRef OutboundName,
StringRef InboundName )

◆ ~InteractiveModelRunner()

InteractiveModelRunner::~InteractiveModelRunner ( )
virtual

Member Function Documentation

◆ classof()

bool llvm::InteractiveModelRunner::classof ( const MLModelRunner * R)
inline static

◆ switchContext()

void llvm::InteractiveModelRunner::switchContext ( StringRef Name)
inline override virtual

Reimplemented from llvm::MLModelRunner.

Definition at line 49 of file InteractiveModelRunner.h.


The documentation for this class was generated from the following files: