
RLHF (Reinforcement Learning from Human Feedback)

Machine learning

RLHF is an extension of Reinforcement Learning (RL), a training technique in which an AI model learns by receiving rewards and penalties for its actions. In RLHF, human evaluations stand in for a hand-designed reward signal: annotators compare or rate model outputs, a reward model is trained to predict those human preferences, and the model is then fine-tuned with RL against that learned reward to improve its decision-making.
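Below is a minimal sketch of the reward-modelling step, which sits at the core of this pipeline. It assumes toy fixed-size embeddings in place of a real language model, and all names here (`RewardModel`, `chosen`, `rejected`) are hypothetical illustrations rather than part of any specific library. The reward model is trained with the Bradley-Terry pairwise loss so that human-preferred responses score higher than rejected ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a response embedding to a scalar reward."""
    def __init__(self, dim: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x).squeeze(-1)

dim = 16
model = RewardModel(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each pair holds embeddings of a human-preferred ("chosen") and a
# rejected response; in practice these come from annotator comparisons
# over outputs of the model being trained. Random data here keeps the
# sketch self-contained.
chosen = torch.randn(32, dim)
rejected = torch.randn(32, dim)

for step in range(100):
    # Bradley-Terry pairwise loss: maximize the log-probability that
    # the chosen response's reward exceeds the rejected one's.
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, a reward model like this would score candidate responses during the RL fine-tuning stage, typically with a policy-gradient algorithm such as PPO, replacing direct human judgment on every sample.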