15 results for “topic:pretrained-language-models”
A curated list of awesome papers related to pre-trained models for information retrieval (a.k.a., pretraining for IR).
Efficient Inference for Big Models
On Transferability of Prompt Tuning for Natural Language Processing
[EMNLP 2025] FusionDTI utilises a Token-level Fusion module to effectively learn fine-grained information for Drug-Target Interaction Prediction.
The code for the ACL 2023 paper "Linear Classifier: An Often-Forgotten Baseline for Text Classification".
Code for the paper "Exploiting Pretrained Biochemical Language Models for Targeted Drug Design", to appear in Bioinformatics, Proceedings of ECCB2022.
The official repository for AAAI 2024 Oral paper "Structured Probabilistic Coding".
A Keras-based NLP models toolkit with a TensorFlow backend.
This research examines the performance of Large Language Models (GPT-3.5 Turbo and Gemini 1.5 Pro) in Bengali Natural Language Inference, comparing them with state-of-the-art models using the XNLI dataset. It explores zero-shot and few-shot scenarios to evaluate their efficacy in low-resource settings.
A python tool for evaluating the quality of few-shot prompt learning.
Identified adverse drug events (ADEs) and associated terms in an annotated corpus with Named Entity Recognition (NER) modeling using Flair and PyTorch. Fine-tuned pre-trained transformer models such as XLM-RoBERTa, SpanBERT, and Bio_ClinicalBERT. Achieved F1 scores of 0.73 and 0.77 for the BIOES and BIO tagging models, respectively.
Code for the paper "An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question Answering".
LSTM models for text classification on character embeddings.
Codebase to reproduce the submission of team CompLx for sub-task 2 of the 2022 FinSim4-ESG shared task
Fine-tuned BERT, mBERT, and XLM-RoBERTa for Abusive Comment Detection in Telugu, Code-Mixed Telugu, and Telugu-English.
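The BIOES and BIO tagging schemes mentioned in the ADE entry above differ only in how entity boundaries are marked: BIO uses Begin/Inside/Outside tags, while BIOES additionally distinguishes the End token of a multi-token entity and Single-token entities. A minimal conversion sketch in plain Python (the `bio_to_bioes` helper is illustrative, not taken from any of the listed repositories):

```python
def bio_to_bioes(tags):
    """Convert a well-formed BIO tag sequence to BIOES.

    BIO marks tokens as B(egin), I(nside), or O(utside) an entity;
    BIOES additionally marks S(ingle)-token entities and the
    E(nd) token of multi-token entities.
    """
    bioes = []
    for i, tag in enumerate(tags):
        if tag == "O":
            bioes.append(tag)
            continue
        prefix, label = tag.split("-", 1)
        # The current token is the last of its span if the next tag
        # does not continue the same entity label.
        next_tag = tags[i + 1] if i + 1 < len(tags) else "O"
        continues = next_tag == f"I-{label}"
        if prefix == "B":
            bioes.append(("B-" if continues else "S-") + label)
        else:  # prefix == "I"
            bioes.append(("I-" if continues else "E-") + label)
    return bioes
```

For example, `bio_to_bioes(["B-ADE", "I-ADE", "O", "B-ADE"])` yields `["B-ADE", "E-ADE", "O", "S-ADE"]`: the two-token entity gains an explicit end tag, and the lone `B-ADE` becomes a single-token entity.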
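Character-embedding classifiers like the LSTM entry above feed the model a sequence of character indices rather than word indices. A minimal sketch of the vocabulary and encoding step (hypothetical helper names, assumed preprocessing, not taken from the listed repository):

```python
def build_char_vocab(texts, pad="<pad>", unk="<unk>"):
    """Map each distinct character to an integer id.

    Ids 0 and 1 are reserved for the padding and unknown symbols,
    as is conventional for embedding lookup tables.
    """
    vocab = {pad: 0, unk: 1}
    for text in texts:
        for ch in text:
            vocab.setdefault(ch, len(vocab))
    return vocab

def encode(text, vocab, max_len):
    """Encode a string as a fixed-length list of character ids.

    Characters beyond max_len are truncated; shorter strings are
    right-padded with the <pad> id so batches have uniform shape.
    """
    ids = [vocab.get(ch, vocab["<unk>"]) for ch in text[:max_len]]
    return ids + [vocab["<pad>"]] * (max_len - len(ids))
```

The resulting fixed-length id sequences would then index into a character embedding matrix before being consumed by the LSTM; characters unseen at training time map to the `<unk>` id.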