Explore the important role callbacks play in machine learning: tools that monitor, control, and automate model training to improve accuracy, flexibility, and efficiency.
A callback is a function, or set of functions, executed at specific stages during the run of a larger process, such as training a machine learning model. In the context of AI and ML, callbacks provide a powerful mechanism to monitor internal state, influence the behavior of the training loop, and automate actions without modifying the core code of the training framework. They act as hooks in the training pipeline, allowing developers to inject custom logic at predefined points, such as the start or end of an epoch, a batch, or the entire training run.
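Conceptually, the pattern is simple: the training loop exposes named hook points and invokes whatever functions have been registered for them. The sketch below illustrates this in plain Python; the loop, event names, and callback signatures are purely illustrative and do not correspond to any particular framework's API:
# Minimal sketch of the callback pattern: a training loop exposing hooks.
# The loop, event names, and callback signatures here are illustrative only.
def train(num_epochs, callbacks=None):
    callbacks = callbacks or {}

    def fire(event, **info):
        for fn in callbacks.get(event, []):  # run every callback registered for this event
            fn(**info)

    fire("on_train_start")
    for epoch in range(num_epochs):
        fire("on_epoch_start", epoch=epoch)
        # ... forward pass, loss computation, and weight updates would happen here ...
        fire("on_epoch_end", epoch=epoch, metrics={"loss": 1.0 / (epoch + 1)})
    fire("on_train_end")

# Custom logic is injected at predefined points without modifying the loop itself.
train(3, callbacks={"on_epoch_end": [lambda epoch, metrics: print(epoch, metrics)]})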
During model training, a sequence of events unfolds: training begins, an epoch starts, batches are processed, validation runs, the epoch ends, and finally training finishes. Callbacks let you trigger specific actions tied to these events. For example, you might want to save the model's weights whenever validation accuracy improves, log metrics to a visualization tool such as TensorBoard, or stop training early when the model stops improving. Frameworks such as Keras and libraries such as the Ultralytics Python package make heavy use of callbacks to provide flexibility and extensibility.
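Keras, for instance, ships ready-made callbacks for exactly these tasks. The sketch below is a minimal illustration; it assumes TensorFlow/Keras is installed and that a compiled model and training/validation arrays (model, x_train, y_train, x_val, y_val) are already defined:
from tensorflow import keras

# model, x_train, y_train, x_val, y_val are assumed to be defined elsewhere.
callbacks = [
    # Save weights whenever validation accuracy improves.
    keras.callbacks.ModelCheckpoint("best.weights.h5", monitor="val_accuracy",
                                    save_best_only=True, save_weights_only=True),
    # Stop training once validation loss has not improved for 3 consecutive epochs.
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True),
    # Log metrics so they can be visualized in TensorBoard.
    keras.callbacks.TensorBoard(log_dir="./logs"),
]

model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50, callbacks=callbacks)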
The Ultralytics training engine provides a callback system that triggers callbacks at different stages of the training, validation, prediction, and export processes. These events include on_train_start, on_train_epoch_end, on_fit_epoch_end (which includes validation), on_train_batch_end, on_train_end, and more. Users can define custom callbacks to perform actions such as detailed logging, sending notifications, or interacting with platforms like Ultralytics HUB.
from ultralytics import YOLO

# Define a custom callback function.
# Ultralytics passes the active Trainer object to training callbacks.
def log_epoch_metrics(trainer):
    """Runs after each epoch's validation pass (the 'on_fit_epoch_end' event)."""
    print(f"Epoch {trainer.epoch + 1} finished.")
    if trainer.metrics:
        # Example metric key for detection models; inspect trainer.metrics for your task.
        val_box_loss = trainer.metrics.get('val/box_loss')
        if val_box_loss is not None:
            print(f"Validation box loss: {val_box_loss:.4f}")

# Load a model: build from YAML and transfer pretrained weights
model = YOLO('yolov8n.yaml').load('yolov8n.pt')

# Register the callback for a specific event; the Trainer fires it automatically
model.add_callback('on_fit_epoch_end', log_epoch_metrics)

# Train the model (registered callbacks are triggered at the corresponding events)
results = model.train(data='coco128.yaml', epochs=3, imgsz=640)
print("Training finished.")

# Integration callbacks such as the TensorBoard logger are enabled via Ultralytics
# settings (for example `yolo settings tensorboard=True`), not per-call train() arguments.
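Because each callback receives the active trainer object, it can inspect training state such as the current epoch, loss values, and the metrics dictionary populated after validation. The integration callbacks bundled with the package (for example the TensorBoard, MLflow, Comet, and Weights & Biases loggers) hook into the same event system and are toggled through Ultralytics settings.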
During ML model development, callbacks enable many useful capabilities, such as checkpointing model weights when validation performance improves, stopping training early once the model plateaus, adjusting the learning rate on a schedule, and logging metrics to monitoring and visualization tools.
Callbacks are fundamental to building flexible, automated, and observable machine learning workflows, allowing developers to extend and customize the training process efficiently. They differ slightly from general-purpose software event listeners in that they are tightly integrated into the specific lifecycle stages of machine learning training and evaluation.