PaLM: an implementation of RLHF on top of the PaLM architecture

Overview

Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM.

Installation

$ pip install palm-rlhf-pytorch

Usage


First, train PaLM like any other autoregressive transformer.

import torch
from palm_rlhf_pytorch import PaLM

palm = PaLM(
    num_tokens = 20000,
    dim = 512,
    depth = 12,
    flash_attn = True # https://arxiv.org/abs/2205.14135
).cuda()

seq = torch.randint(0, 20000, (1, 2048)).cuda()

loss = palm(seq, return_loss = True)
loss.backward()

# after much training, you can now generate sequences

generated = palm.generate(2048) # (1, 2048)
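
A minimal sketch of what the "much training" above might look like, continuing from the snippet: a plain Adam optimizer over random token batches. The optimizer, learning rate, step count, and batch sampling are illustrative assumptions, not part of the library.

from torch.optim import Adam

optim = Adam(palm.parameters(), lr = 3e-4)  # hypothetical optimizer and learning rate

for step in range(1000):
    seq = torch.randint(0, 20000, (4, 2048)).cuda()  # substitute batches from your own tokenized corpus

    loss = palm(seq, return_loss = True)
    loss.backward()

    optim.step()
    optim.zero_grad()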

Then train your reward model with curated human feedback. In the original paper, the authors could not fine-tune a reward model from the pretrained transformer without overfitting, but the option to fine-tune with LoRA is provided anyway, since this is still open research.

import torch
from palm_rlhf_pytorch import PaLM, RewardModel

palm = PaLM(
    num_tokens = 20000,
    dim = 512,
    depth = 12,
    causal = False
)

reward_model = RewardModel(
    palm,
    num_binned_output = 5 # say rating from 1 to 5
).cuda()

# mock data

seq = torch.randint(0, 20000, (1, 1024)).cuda()
prompt_mask = torch.zeros(1, 1024).bool().cuda() # which part of the sequence is prompt, which part is response
labels = torch.randint(0, 5, (1,)).cuda()

# train

loss = reward_model(seq, prompt_mask = prompt_mask, labels = labels)
loss.backward()

# after much training

reward = reward_model(seq, prompt_mask = prompt_mask)
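
With real data, prompt_mask should mark which positions of each sequence belong to the prompt rather than being all zeros as in the mock data above. Below is a minimal sketch of building it from a known prompt length, assuming True marks prompt positions; prompt_len and the rating value are illustrative assumptions.

seq_len = 1024
prompt_len = 200  # hypothetical: the first 200 tokens are the prompt, the rest the response

prompt_mask = (torch.arange(seq_len) < prompt_len).unsqueeze(0).cuda()  # (1, 1024) bool, True over the prompt
labels = torch.tensor([3]).cuda()  # hypothetical human rating, binned into 0..4

loss = reward_model(seq, prompt_mask = prompt_mask, labels = labels)
loss.backward()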

Then pass the transformer and the reward model to the RLHFTrainer.

import torch
from palm_rlhf_pytorch import PaLM, RewardModel, RLHFTrainer

# load your pretrained palm

palm = PaLM(
    num_tokens = 20000,
    dim = 512,
    depth = 12
).cuda()

palm.load('./path/to/pretrained/palm.pt')

# load your pretrained reward model

reward_model = RewardModel(
    palm,
    num_binned_output = 5
).cuda()

reward_model.load('./path/to/pretrained/reward_model.pt')

# ready your list of prompts for reinforcement learning

prompts = torch.randint(0, 256, (50000, 512)).cuda() # 50k prompts

# pass it all to the trainer and train

trainer = RLHFTrainer(
    palm = palm,
    reward_model = reward_model,
    prompt_token_ids = prompts
)

trainer.train(num_episodes = 50000)

# then, if it succeeded...
# generate say 10 samples and use the reward model to return the best one

answer = trainer.generate(2048, prompt = prompts[0], num_samples = 10) # (<= 2048,)
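
To sanity-check a single completion yourself, here is a sketch of scoring one prompt/answer pair directly with the reward model. The concatenation, the assumption that the returned answer excludes the prompt tokens, and the mask polarity (True = prompt) are all illustrative, not a documented API.

prompt = prompts[0]                              # (512,)
full_seq = torch.cat((prompt, answer))[None, :]  # (1, 512 + len(answer)), assuming answer excludes the prompt

prompt_mask = torch.zeros_like(full_seq, dtype = torch.bool)
prompt_mask[:, :prompt.shape[-1]] = True         # mark the prompt portion

reward = reward_model(full_seq, prompt_mask = prompt_mask)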
