Debugging Models in ExecuTorch¶
With the ExecuTorch Developer Tools, users can debug their models for numerical inaccuracies and extract model outputs from their devices for quality analysis (such as signal-to-noise ratio, mean squared error, etc.).
Currently, ExecuTorch supports the following debugging flows:
Extraction of model-level outputs via ETDump.
Extraction of intermediate outputs (outside of delegates) via ETDump:
Linking these intermediate outputs back to the eager model Python code.
Steps to Debug a Model in ExecuTorch¶
Runtime¶
For a real-world example reflecting the steps below, please refer to example_runner.cpp.
[Optional] Generate an ETRecord while exporting your model. If provided, this enables users to link profiling information back to the eager model source code (with stack traces and module hierarchy).
Integrate ETDump generation into the runtime and set the debug level by configuring the ETDumpGen object. Then provide an additional buffer into which the intermediate outputs and program outputs will be written. Currently we support two levels of debugging (a sketch of how this fits into a runner follows the two snippets below):
Program-level outputs
Span<uint8_t> buffer((uint8_t*)debug_buffer, debug_buffer_size);
etdump_gen.set_debug_buffer(buffer);
etdump_gen.set_event_tracer_debug_level(
    EventTracerDebugLogLevel::kProgramOutputs);
Intermediate outputs of executed (non-delegated) operations (this will also include the program-level outputs)
Span<uint8_t> buffer((uint8_t*)debug_buffer, debug_buffer_size);
etdump_gen.set_debug_buffer(buffer);
etdump_gen.set_event_tracer_debug_level(
    EventTracerDebugLogLevel::kIntermediateOutputs);
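To show how these calls might fit into an actual runner, here is a minimal sketch. It assumes the flatcc-based ETDumpGen from executorch/devtools/etdump/etdump_flatcc.h, a Program and MemoryManager set up as in example_runner.cpp, and a hypothetical debug buffer size; exact header paths and namespaces differ between ExecuTorch releases, so treat this as illustrative rather than canonical.

#include <executorch/devtools/etdump/etdump_flatcc.h>
#include <cstdio>
#include <cstdlib>

using executorch::etdump::ETDumpGen;
using executorch::runtime::EventTracerDebugLogLevel;
using executorch::runtime::Span;

// Inside the runner's setup code. Assumes `program` (a loaded Program) and
// `memory_manager` already exist, as in example_runner.cpp.
ETDumpGen etdump_gen;

// Allocate the buffer that intermediate and program outputs are written into.
// 20 MiB is a hypothetical size; pick one large enough for your model.
constexpr size_t debug_buffer_size = 20 * 1024 * 1024;
void* debug_buffer = malloc(debug_buffer_size);
Span<uint8_t> buffer((uint8_t*)debug_buffer, debug_buffer_size);
etdump_gen.set_debug_buffer(buffer);
etdump_gen.set_event_tracer_debug_level(
    EventTracerDebugLogLevel::kIntermediateOutputs);

// Pass the generator as the event tracer when loading the method so that
// debug events (and their outputs) are recorded during execution.
auto method = program->load_method("forward", &memory_manager, &etdump_gen);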
Build the runtime with the pre-processor flag that enables tracking of debug events. Instructions are in the ETDump documentation.
Run your model and dump out the ETDump buffer as described here. (If a debug buffer was configured above, do the same for it; a sketch is shown below.)
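As a rough sketch of this step (loosely following what example_runner.cpp does, with hypothetical output paths), the serialized ETDump data can be fetched from the generator after execution and written to disk, together with the debug buffer that the Inspector will later load via buffer_path. The exact type returned by get_etdump_data() has varied across releases, so this is illustrative only.

// After method->execute() has completed successfully:
auto result = etdump_gen.get_etdump_data();
if (result.buf != nullptr && result.size > 0) {
  // "model.etdump" is a hypothetical output path.
  FILE* f = fopen("model.etdump", "w+");
  fwrite((uint8_t*)result.buf, 1, result.size, f);
  fclose(f);
  free(result.buf);
}

// If a debug buffer was configured in step 2, write it out as well.
FILE* df = fopen("model.debug_buffer", "w+");
fwrite((uint8_t*)debug_buffer, 1, debug_buffer_size, df);
fclose(df);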
Accessing the Debug Outputs Using the Inspector API¶
Once the model has been run, users can leverage the Inspector API with the generated ETDump and debug buffer to inspect these debug outputs.
from executorch.devtools import Inspector

# Create an Inspector instance with the etdump and the debug buffer.
inspector = Inspector(etdump_path=etdump_path,
                      buffer_path=buffer_path,
                      # etrecord is optional; if provided, it'll link the
                      # runtime events back to the eager model python source code.
                      etrecord=etrecord_path)

# Accessing program outputs is as simple as this:
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        print(event_block.run_output)

# Accessing intermediate outputs from each event (an event here is essentially
# an instruction that executed in the runtime).
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        for event in event_block.events:
            print(event.debug_data)
            # If an ETRecord was provided by the user during Inspector
            # initialization, users can also print the stack traces and
            # module hierarchy of these events.
            print(event.stack_traces)
            print(event.module_hierarchy)
We also provide a simple set of utilities that let users perform quality analysis of their model outputs against a set of reference outputs (possibly from the eager mode model).
from executorch.devtools.inspector import compare_results
# Run a simple quality analysis between the model outputs sourced from the
# runtime and a set of reference outputs.
#
# Setting plot to True will result in the quality metrics being graphed
# and displayed (when run from a notebook) and will be written out to the
# filesystem. A dictionary will always be returned which will contain the
# results.
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        compare_results(event_block.run_output, ref_outputs, plot=True)