Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis
Paper Link: https://arxiv.org/abs/2407.12857
Project Page: https://ecnu-sea.github.io/
The SEA-E model uses Mistral-7B-Instruct-v0.2 as its backbone. It is derived by performing supervised fine-tuning (SFT) on a high-quality peer-review instruction dataset that has been standardized by the SEA-S model. The resulting model can provide comprehensive and insightful review feedback for submitted papers.
from transformers import AutoModelForCausalLM, AutoTokenizer

# `system_prompt_dict` and `read_txt_file` are helpers defined in the SEA
# repository (https://github.com/ecnu-sea/sea).
instruction = system_prompt_dict["instruction_e"]

# Load the paper text and drop everything from the references section onward.
paper = read_txt_file(mmd_file_path)
idx = paper.find("## References")
if idx != -1:
    paper = paper[:idx].strip()

model_name = "/root/sea/"  # local path to the SEA-E checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
chat_model = AutoModelForCausalLM.from_pretrained(model_name)
chat_model.to("cuda:0")

messages = [
    {"role": "system", "content": instruction},
    {"role": "user", "content": paper},
]
encodes = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda:0")
len_input = encodes.shape[1]

generated_ids = chat_model.generate(encodes, max_new_tokens=8192, do_sample=True)
# Decode only the newly generated tokens (everything after the prompt).
response = tokenizer.batch_decode(generated_ids[:, len_input:], skip_special_tokens=True)[0]
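The snippet above relies on `read_txt_file` and `system_prompt_dict`, which are defined in the SEA repository. A minimal sketch of what they might look like is shown below; the instruction text here is a hypothetical placeholder, not the actual prompt shipped with the project.

```python
# Hypothetical stand-ins for helpers defined in the SEA repository
# (https://github.com/ecnu-sea/sea). The real instruction text differs.

def read_txt_file(path: str) -> str:
    """Read a paper (e.g. an .mmd file produced by a PDF parser) as plain text."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

# Placeholder review instruction keyed the same way as in the example above.
system_prompt_dict = {
    "instruction_e": (
        "You are a professional reviewer. Read the following paper and "
        "write a structured peer review covering strengths, weaknesses, "
        "questions, and a final rating."
    ),
}
```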
The code provided above is an example. For detailed usage instructions, please refer to https://github.com/ecnu-sea/sea.
If you find our paper or models helpful, please consider citing us as follows:
@inproceedings{yu2024automated,
title={Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis},
author={Yu, Jianxiang and Ding, Zichen and Tan, Jiaqi and Luo, Kangyang and Weng, Zhenmin and Gong, Chenghua and Zeng, Long and Cui, RenJing and Han, Chengcheng and Sun, Qiushi and others},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
pages={10164--10184},
year={2024}
}