Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca LLMs with local CPU/GPU training and deployment
Chinese-BERT-wwm - Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
Chinese-LLaMA-Alpaca-2 - Chinese LLaMA-2 & Alpaca-2 LLMs (second-phase project) with 64K long-context models
Chinese-XLNet - Pre-trained Chinese XLNet models
Chinese-ELECTRA - Pre-trained Chinese ELECTRA models
Chinese-LLaMA-Alpaca-3 - Chinese Llama-3 LLMs (third-phase project), developed from Meta Llama 3
MacBERT - Revisiting Pre-trained Models for Chinese Natural Language Processing (MacBERT)
Chinese-Mixtral - Chinese Mixtral mixture-of-experts (MoE) LLMs
cmrc2018 - A Span-Extraction Dataset for Chinese Machine Reading Comprehension (CMRC 2018)
PERT - PERT: Pre-training BERT with Permuted Language Model
Chinese-RC-Datasets - Collections of Chinese reading comprehension datasets
LERT - LERT: A Linguistically-motivated Pre-trained Language Model
Chinese-Cloze-RC - A Chinese Cloze-style RC Dataset: People's Daily & Children's Fairy Tale (CFT)
cmrc2019 - A Sentence Cloze Dataset for Chinese Machine Reading Comprehension (CMRC 2019)
LAMB_Optimizer_TF - LAMB Optimizer for Large Batch Training (TensorFlow version)
cmrc2017 - The First Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC 2017)
Eval-on-NN-of-RC - Empirical Evaluation of Current Neural Networks on Cloze-style Reading Comprehension
Chinese-MobileBERT - Chinese MobileBERT models
ChatGPT-in-Academia - Policies of scientific publishers and conferences on large language models (LLMs) such as ChatGPT
expmrc - ExpMRC: Explainability Evaluation for Machine Reading Comprehension
NLP-Review-Scorer - Score your NLP paper review
ACL2020-PC-Blogs-Chinese - Chinese version of the ACL 2020 Program Committee blogs
mrc-model-analysis - Multilingual Multi-Aspect Explainability Analyses on Machine Reading Comprehension Models (iScience)
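Several of the pretrained models listed above (for example Chinese-BERT-wwm, MacBERT, and LERT) are also published on the Hugging Face Hub under the hfl organization. The following is a minimal sketch, assuming the `transformers` library and the `hfl/chinese-macbert-base` checkpoint; the exact model ID for each project is given in that project's own README.

```python
# Minimal sketch: loading one of the Chinese pretrained models from the Hugging Face Hub.
# Assumes the `transformers` library (with PyTorch installed) and the
# `hfl/chinese-macbert-base` checkpoint; other projects use different model IDs.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "hfl/chinese-macbert-base"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Encode a short Chinese sentence and run a forward pass.
inputs = tokenizer("哈工大讯飞联合实验室发布了多个中文预训练模型。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```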