HELM-BERT: A Transformer for Medium-sized Peptide Property Prediction
Paper: [arXiv:2512.23175](https://arxiv.org/abs/2512.23175)
A language model for peptide representation learning using HELM (Hierarchical Editing Language for Macromolecules) notation.
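To make the notation concrete: a HELM string names a polymer (e.g. `PEPTIDE1`), lists its monomers dot-separated inside braces (non-standard monomers in square brackets), and appends connection sections after `$`. A minimal illustrative parser for the monomer list of a single-polymer string (this is only a sketch of the notation, not the model's actual tokenizer):

```python
import re

def helm_monomers(helm: str) -> list[str]:
    """Extract the monomer sequence from a single-polymer HELM string.
    Illustrative only: real HELM also carries multi-polymer and
    connection sections, which this sketch ignores."""
    body = re.search(r"\{([^}]*)\}", helm)
    if body is None:
        raise ValueError("no polymer body found")
    # Monomers are dot-separated; multi-character monomers are bracketed.
    return [tok.strip("[]") for tok in body.group(1).split(".")]

print(helm_monomers("PEPTIDE1{[Abu].[Sar].[meL].V}$$$$"))
# ['Abu', 'Sar', 'meL', 'V']
```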
HELM-BERT is built on the DeBERTa architecture and configured for peptide sequences in HELM notation:
| Parameter | Value |
|---|---|
| Parameters | 54.8M |
| Hidden size | 768 |
| Layers | 6 |
| Attention heads | 12 |
| Vocab size | 78 |
| Max token length | 512 |
```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("Flansma/helm-bert", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Flansma/helm-bert", trust_remote_code=True)

# Cyclosporine A, a cyclic peptide, in HELM notation
inputs = tokenizer("PEPTIDE1{[Abu].[Sar].[meL].V.[meL].A.[dA].[meL].[meL].[meV].[Me_Bmt(E)]}$PEPTIDE1,PEPTIDE1,1:R1-11:R2$$$", return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # per-token embeddings
```
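`last_hidden_state` is a per-token representation; for a single fixed-size peptide embedding, a common choice is a mean over non-padding tokens using the attention mask. The pooling math, sketched in pure Python for clarity (in practice you would do this with tensor operations on `outputs.last_hidden_state` and `inputs["attention_mask"]`):

```python
def masked_mean_pool(hidden_states: list[list[float]],
                     attention_mask: list[int]) -> list[float]:
    """Average token vectors where the mask is 1 (real tokens),
    skipping padding positions where the mask is 0."""
    dim = len(hidden_states[0])
    total = [0.0] * dim
    count = 0
    for vec, m in zip(hidden_states, attention_mask):
        if m:
            count += 1
            for i, v in enumerate(vec):
                total[i] += v
    return [t / count for t in total]

# Two real tokens and one padding token, hidden size 2:
print(masked_mean_pool([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [1, 1, 0]))
# [2.0, 3.0]
```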
Pretrained on deduplicated peptide sequences from:
| R² | Pearson | RMSE | MAE |
|---|---|---|---|
| 0.758 | 0.871 | 0.384 | 0.283 |
Split: 90:10 train/test, with 10% of the training set held out for validation.
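For reference, the regression metrics in the table above can be computed from predictions as follows (a minimal pure-Python sketch; libraries such as scikit-learn or scipy provide the same metrics):

```python
import math

def regression_metrics(y_true: list[float], y_pred: list[float]):
    """Return (R^2, Pearson r, RMSE, MAE) for paired predictions."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    mean_p = sum(y_pred) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(y_true, y_pred))
    var_p = sum((p - mean_p) ** 2 for p in y_pred)
    pearson = cov / math.sqrt(ss_tot * var_p)
    return r2, pearson, rmse, mae
```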
| Split | ROC-AUC | PR-AUC | F1 | MCC | Balanced Acc |
|---|---|---|---|---|---|
| Random | 0.964 | 0.886 | 0.826 | 0.784 | 0.887 |
| aCSM | 0.870 | 0.700 | 0.608 | 0.549 | 0.734 |
Split: 80:20 train/test, with 10% of the training set held out for validation; the dataset has a 1:4 positive-to-negative class ratio.
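MCC and balanced accuracy are the table's most informative columns under the 1:4 class imbalance, since plain accuracy is inflated by the majority class. Both follow directly from the confusion matrix (a pure-Python sketch; scikit-learn offers the same via `matthews_corrcoef` and `balanced_accuracy_score`):

```python
import math

def mcc_balanced_acc(y_true: list[int], y_pred: list[int]):
    """Return (Matthews correlation coefficient, balanced accuracy)
    for binary labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    # Balanced accuracy: mean of sensitivity and specificity.
    bal_acc = 0.5 * (tp / (tp + fn) + tn / (tn + fp))
    return mcc, bal_acc
```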
```bibtex
@article{lee2025helmbert,
  title={HELM-BERT: A Transformer for Medium-sized Peptide Property Prediction},
  author={Seungeon Lee and Takuto Koyama and Itsuki Maeda and Shigeyuki Matsumoto and Yasushi Okuno},
  journal={arXiv preprint arXiv:2512.23175},
  year={2025},
  url={https://arxiv.org/abs/2512.23175}
}
```
MIT License