LongBEL: Long-Context and Document-Consistent Biomedical Entity Linking

LongBEL

LongBEL is a document-level framework for biomedical entity linking (BEL). Instead of normalizing each mention independently, LongBEL conditions each prediction on the document context and on the previous normalizations produced in the same document. This design enforces document-level consistency, and a robust memory mechanism makes the model tolerant to errors in its earlier predictions. The method is introduced in our paper, currently under review.

LongBEL (SPACCC Edition)

This is a fine-tuned version of Llama-3.2-1B-Instruct trained on SPACCC, applying the LongBEL framework to produce long-context, document-consistent predictions with robust memory.

| Field | Value |
|---|---|
| Base model | meta-llama/Llama-3.2-1B-Instruct |
| Task | Biomedical entity linking |
| Dataset | SPACCC |
| Knowledge base | SNOMED CT, Spanish edition (July 31, 2021 release) |
| Input | BigBio-like documents with mention spans and semantic groups |
| Output | Ranked SNOMED CT concept predictions |
| Decoding | Semantic-guided constrained decoding |
| Main metric | Recall@1 |

Intended Use

This model is intended for research on biomedical entity linking and document-level consistency.

It assumes that mention spans and semantic groups are already provided. It does not perform named entity recognition. In a full pipeline, a NER model should first detect mentions and assign semantic groups, then LongBEL can normalize these mentions to SNOMED concepts.
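As a minimal sketch of that hand-off, the snippet below converts hypothetical NER output into the entity format used in the usage example further down. The `ner_mentions` structure and its field order are illustrative assumptions, not part of LongBEL's API.

```python
# Hypothetical NER output: (mention text, start offset, end offset, semantic group).
# This structure is illustrative only; any NER model with spans and groups works.
ner_mentions = [
    ("mujer embarazada", 4, 20, "Living Beings"),
    ("hipertensión grave", 45, 63, "Disorders"),
]

# Convert to the BigBio-like entity dicts that LongBEL consumes.
entities = [
    {
        "id": f"T{i + 1}",
        "type": group,
        "text": [mention],
        "offsets": [[start, end]],
    }
    for i, (mention, start, end, group) in enumerate(ner_mentions)
]
```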

Usage

Loading the model

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "AnonymousARR42/LongBEL_1B_SPACCC",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

Inference example

The model expects BigBio-like documents. Each entity should include a mention text, character offsets, and a semantic group in the type field.

num_beams = 5

bigbio_pages = [
    {
        "id": "001",
        "document_id": "doc_001",
        "passages": [
            {
                "id": "0",
                "type": "paragraph",
                "text": [
                    "Una mujer embarazada de 29 años consultó por hipertensión grave, "
                    "cefalea y dolor epigástrico. Las pruebas de laboratorio mostraron proteinuria "
                    "y una ligera elevación de las enzimas hepáticas. Fue ingresada durante la noche "
                    "por sospecha de PET y se inició tratamiento urgente."
                ],
                "offsets": [[0, 275]],
            }
        ],
        "entities": [
            {
                "id": "T1",
                "type": "Living Beings",
                "text": ["mujer embarazada"],
                "offsets": [[4, 20]],
            },
            {
                "id": "T2",
                "type": "Disorders",
                "text": ["hipertensión grave"],
                "offsets": [[45, 63]],
            },
            {
                "id": "T3",
                "type": "Disorders",
                "text": ["proteinuria"],
                "offsets": [[131, 142]],
            },
            {
                "id": "T4",
                "type": "Disorders",
                "text": ["PET"],
                "offsets": [[239, 242]],
            },
        ],
        "events": [],
        "coreferences": [],
        "relations": [],
    }
]

predictions = model.sample(
    bigbio_pages=bigbio_pages,
    num_beams=num_beams,
)

for i in range(0, len(predictions), num_beams):
    mention = predictions[i]["mention"]
    print(f"## Mention {(i // num_beams) + 1}: {mention}")

    for j in range(num_beams):
        pred = predictions[i + j]
        print(
            f"   - Beam {j + 1}:\n"
            f"     Predicted concept name: {pred['pred_concept_name']}\n"
            f"     Predicted code: {pred['pred_concept_code']}\n"
            f"     Beam score: {pred['beam_score']:.3f}\n"
        )

Example Output:

## Mention 1: pregnant woman
   - Beam 1:
     Predicted concept name: Pregnant Woman
     Predicted code: C0033011
     Beam score: 1.000

   - Beam 2:
     Predicted concept name: Pregnant woman
     Predicted code: C0033011
     Beam score: 0.003

   - Beam 3:
     Predicted concept name: Pregnant woman (person)
     Predicted code: C0033011
     Beam score: 0.001

   - Beam 4:
     Predicted concept name: Pregnancy Partner
     Predicted code: C3538996
     Beam score: 0.000

   - Beam 5:
     Predicted concept name: Pregnant woman (person)
     Predicted code: C0033011
     Beam score: 0.000

## Mention 2: severe-range hypertension
   - Beam 1:
     Predicted concept name: Hypertensive disease
     Predicted code: C0020538
     Beam score: 0.078

   - Beam 2:
     Predicted concept name: Hypertension (in some patients)
     Predicted code: C3280936
     Beam score: 0.022

   - Beam 3:
     Predicted concept name: Hypertensive disease (disorder)
     Predicted code: C0020538
     Beam score: 0.010

   - Beam 4:
     Predicted concept name: Hypertension, severe
     Predicted code: C4013784
     Beam score: 0.010

   - Beam 5:
     Predicted concept name: Hypertension (patient A)
     Predicted code: C4313262
     Beam score: 0.004

## Mention 3: proteinuria
   - Beam 1:
     Predicted concept name: Proteinurias
     Predicted code: C0033687
     Beam score: 1.000

   - Beam 2:
     Predicted concept name: Proteinuric diabetic nephropathy (disorder)
     Predicted code: C0403519
     Beam score: 0.003

   - Beam 3:
     Predicted concept name: Proteinuria
     Predicted code: C0033687
     Beam score: 0.003

   - Beam 4:
     Predicted concept name: Proteinuric diabetic nephropathy
     Predicted code: C0403519
     Beam score: 0.002

   - Beam 5:
     Predicted concept name: Proteinuric hypertension of pregnancy (disorder)
     Predicted code: C0032914
     Beam score: 0.001

## Mention 4: PET
   - Beam 1:
     Predicted concept name: PET - Pre-eclamptic toxemia
     Predicted code: C0032914
     Beam score: 0.075

   - Beam 2:
     Predicted concept name: PET - Pre-eclamptic toxaemia
     Predicted code: C0032914
     Beam score: 0.039

   - Beam 3:
     Predicted concept name: Preeclamptic toxemia
     Predicted code: C2931877
     Beam score: 0.027

   - Beam 4:
     Predicted concept name: Preeclampsia
     Predicted code: C0032914
     Beam score: 0.023

   - Beam 5:
     Predicted concept name: Preeclampsia with Severe Features
     Predicted code: C0341950
     Beam score: 0.019
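
Because `sample` returns a flat list with `num_beams` consecutive entries per mention (the layout the print loop above assumes), it can be convenient to regroup the output per mention. This small helper is a sketch based on that assumed layout:

```python
def group_by_mention(predictions, num_beams):
    """Split the flat beam list into one sub-list of candidates per mention.

    Assumes the flat list holds num_beams consecutive entries for each
    mention, ordered by beam rank, as in the loop above.
    """
    return [
        predictions[i:i + num_beams]
        for i in range(0, len(predictions), num_beams)
    ]
```

Each inner list then holds the `num_beams` candidates for one mention, so downstream code can, for example, keep only the top-1 candidate per mention.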

Evaluation

Entity linking performance is reported using Recall@1 with bootstrap confidence intervals. The best result is shown in bold, and the second-best result is underlined.

| Model | MM-ST21PV (English) | QUAERO-EMEA (French) | SympTEMIST (Spanish) | DisTEMIST (Spanish) | MedProcNER (Spanish) |
|---|---|---|---|---|---|
| **Context-Free BEL** | | | | | |
| SciSpacy | 53.8 ± 1.0 | 37.1 ± 4.3 | 9.8 ± 1.3 | 21.1 ± 1.9 | 10.3 ± 1.2 |
| SapBERT | 65.6 ± 1.0 | 59.7 ± 3.8 | 34.2 ± 2.0 | 38.6 ± 2.6 | 30.4 ± 2.1 |
| CODER-all | 62.9 ± 1.1 | 66.9 ± 4.0 | 42.2 ± 2.2 | 47.0 ± 2.6 | 42.7 ± 2.1 |
| SapBERT-all | 64.6 ± 1.1 | 67.9 ± 3.9 | 49.8 ± 2.4 | 49.6 ± 2.6 | 45.1 ± 2.2 |
| BERGAMOT | 60.9 ± 1.1 | 63.8 ± 4.9 | 48.0 ± 2.7 | 48.9 ± 2.4 | 42.3 ± 2.2 |
| **Local-Context BEL** | | | | | |
| ArboEL | 76.9 ± 0.9 | 63.0 ± 3.9 | 55.4 ± 2.5 | 54.7 ± 2.6 | 59.7 ± 2.6 |
| GENRE / mBART-large | 69.6 ± 1.0 | 69.3 ± 5.4 | 59.8 ± 2.7 | 58.7 ± 2.7 | 66.0 ± 2.3 |
| GENRE / Llama-1B | 73.1 ± 1.0 | 75.1 ± 3.6 | 60.5 ± 2.4 | 62.5 ± 2.3 | 67.4 ± 2.1 |
| GENRE / Llama-8B | 75.0 ± 0.9 | 73.8 ± 4.0 | 61.7 ± 2.5 | 63.2 ± 2.5 | 68.3 ± 2.2 |
| **Global-Context BEL: LongBEL** | | | | | |
| LongBEL-1B | 77.6 ± 0.9 | 74.5 ± 3.7 | 59.8 ± 2.5 | 61.9 ± 2.4 | 66.6 ± 2.1 |
| LongBEL-1B + Ensemble | 78.6 ± 0.8 | <u>77.2 ± 3.0</u> | 61.8 ± 2.5 | <u>64.3 ± 2.2</u> | <u>69.0 ± 2.0</u> |
| LongBEL-8B | <u>79.3 ± 0.8</u> | 75.4 ± 3.4 | <u>62.0 ± 2.6</u> | 63.6 ± 2.1 | <u>69.0 ± 2.1</u> |
| LongBEL-8B + Ensemble | **80.0 ± 0.8** | **77.6 ± 3.0** | **63.3 ± 2.5** | **65.8 ± 2.2** | **71.0 ± 2.0** |

The scores reported for this checkpoint correspond to the single LongBEL-1B model. The ensemble results require fusing several LongBEL input configurations and are not produced by this checkpoint alone.
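
As a sketch of how Recall@1 with a bootstrap confidence interval can be computed (this is a generic percentile-bootstrap implementation, not the paper's exact evaluation script; it assumes gold codes and top-1 predicted codes are aligned lists):

```python
import random


def recall_at_1(gold, pred):
    """Fraction of mentions whose top-ranked predicted code matches the gold code."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)


def bootstrap_ci(gold, pred, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample mentions with replacement and take the
    alpha/2 and 1 - alpha/2 quantiles of the resampled Recall@1 scores."""
    rng = random.Random(seed)
    n = len(gold)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(recall_at_1([gold[i] for i in idx], [pred[i] for i in idx]))
    scores.sort()
    lo = scores[int(n_boot * alpha / 2)]
    hi = scores[min(int(n_boot * (1 - alpha / 2)), n_boot - 1)]
    return recall_at_1(gold, pred), (lo, hi)
```

Reporting the point estimate together with the interval half-width then yields scores in the "value ± margin" form used in the table above.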

Speed and Memory

Measured on a single NVIDIA H100 80GB GPU.

| Model | Model memory | Candidate memory | Speed |
|---|---|---|---|
| GENRE-Llama-1B baseline | 2.4 GB | 5.4 GB | 69.6 mentions/s |
| LongBEL-1B | 2.4 GB | 5.4 GB | 48.5 mentions/s |

LongBEL has the same model memory footprint as the sentence-level Llama-1B baseline, but its throughput is about 30% lower because it processes longer contexts and updates document-level memory during inference.

Limitations

This model assumes that mention spans and semantic groups are given. It does not perform mention detection.

LongBEL is most useful when concepts recur within a document. When most concepts appear only once, the memory mechanism has less information to exploit.

Because LongBEL uses previous predictions as memory, early mistakes can still influence later predictions. Robust memory training reduces this risk but does not remove it completely.

This model is intended for research use. It should not be used for clinical decision-making without additional validation and human oversight.

Reproducibility

Code and evaluation scripts are available in this GitHub repository.

Trained model checkpoints and processed datasets are available in the anonymous Hugging Face collection associated with LongBEL.
