# P-tuning
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
Xiao Liu*, Yanan Zheng*, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang
You may also be interested in our other work GLM: All NLP Tasks Are Generation Tasks: A General Pretraining Framework.
## How to use our code
We have released the code and datasets for the LAMA and few-shot SuperGLUE (32-dev) experiments. Please check the README.md and requirement.txt in the corresponding subdirectories for details.

The LAMA and FewGLUE_32dev datasets are available for download. Place the LAMA dataset in the ./data directory and the SuperGLUE dataset in the project root (./).
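To make the expected layout concrete, here is a minimal sketch that checks whether the datasets are where the scripts would look for them. The exact subdirectory names (`LAMA` under `./data`, `FewGLUE_32dev` in the root) are assumptions based on the dataset names above, not confirmed paths from this repository.

```python
# Minimal layout check for the datasets described above.
# The subdirectory names are assumptions, not confirmed by this repo.
from pathlib import Path

expected = {
    "LAMA dataset": Path("data") / "LAMA",               # assumed: ./data/LAMA
    "SuperGLUE (FewGLUE_32dev)": Path("FewGLUE_32dev"),  # assumed: ./FewGLUE_32dev
}

for name, path in expected.items():
    status = "found" if path.is_dir() else "missing"
    print(f"{name}: ./{path} [{status}]")
```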
## Citation
If you find our work useful, please cite the following paper:
```bibtex
@article{liu2021gpt,
  title={GPT Understands, Too},
  author={Liu, Xiao and Zheng, Yanan and Du, Zhengxiao and Ding, Ming and Qian, Yujie and Yang, Zhilin and Tang, Jie},
  journal={arXiv preprint arXiv:2103.10385},
  year={2021}
}
```