4 results for “topic:commonsenseqa”
An implementation of DeBERTaV3-based commonsense question answering on CommonsenseQA.
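A minimal sketch of what scoring one CommonsenseQA item with a DeBERTaV3 multiple-choice head could look like, using the Hugging Face Transformers API. This is not the project's code; the checkpoint name is an assumption, and the classification head is freshly initialized, so it would need fine-tuning before the prediction is meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

# Assumed checkpoint; the multiple-choice head is untrained until fine-tuned.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForMultipleChoice.from_pretrained("microsoft/deberta-v3-base")

question = "Where would you find a single shelf with books on it?"
choices = ["library", "bookstore", "classroom", "house", "university"]

# Pair the question with each of the five answer choices.
enc = tokenizer([question] * len(choices), choices,
                padding=True, truncation=True, return_tensors="pt")
# Multiple-choice models expect shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(choices[logits.argmax(-1).item()])
```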
Reasoning-style fine-tuning of an instruction-tuned LLM using LoRA vs QLoRA, analyzing the accuracy–memory trade-off on CommonsenseQA under real GPU constraints.
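A hedged sketch of how such a LoRA-vs-QLoRA comparison is typically set up with `peft` and `bitsandbytes`: the same low-rank adapter config in both cases, with QLoRA additionally loading the base model in 4-bit NF4. The model name and hyperparameters below are illustrative assumptions, not the project's actual settings.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Shared adapter config (rank, scaling, and target modules are assumptions).
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")

# LoRA: half-precision base weights plus trainable low-rank adapters.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16)
lora_model = get_peft_model(base, lora_cfg)

# QLoRA: identical adapters on top of a 4-bit NF4-quantized base model.
bnb_cfg = BitsAndBytesConfig(load_in_4bit=True,
                             bnb_4bit_quant_type="nf4",
                             bnb_4bit_compute_dtype=torch.bfloat16)
qbase = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_cfg)
qlora_model = get_peft_model(qbase, lora_cfg)
```

The adapter parameters are identical in both setups, so any accuracy gap isolates the effect of base-model quantization, while the 4-bit weights account for the memory savings.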
The project trains Google's T5 on the ANLG and CoS-E datasets and uses the resulting models to generate explanations for ReClor contexts; the explanations, together with the context and question, are then passed to ALBERT for prediction.
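An illustrative sketch of that two-stage pipeline: T5 generates a free-text explanation, which is concatenated with the context and question and scored by an ALBERT multiple-choice head. The checkpoints, prompt format, and placeholder inputs are assumptions; the project fine-tunes T5 on its explanation datasets first.

```python
import torch
from transformers import (T5Tokenizer, T5ForConditionalGeneration,
                          AlbertTokenizer, AlbertForMultipleChoice)

# Stage 1: a (fine-tuned) T5 generates an explanation for the context/question.
t5_tok = T5Tokenizer.from_pretrained("t5-base")
t5 = T5ForConditionalGeneration.from_pretrained("t5-base")

context = "..."   # placeholder ReClor passage
question = "..."  # placeholder question
options = ["option A", "option B", "option C", "option D"]

prompt = f"explain: {context} question: {question}"  # assumed prompt format
ids = t5_tok(prompt, return_tensors="pt").input_ids
explanation = t5_tok.decode(t5.generate(ids, max_new_tokens=64)[0],
                            skip_special_tokens=True)

# Stage 2: ALBERT scores each option given context + question + explanation.
al_tok = AlbertTokenizer.from_pretrained("albert-base-v2")
albert = AlbertForMultipleChoice.from_pretrained("albert-base-v2")
first = f"{context} {question} {explanation}"
enc = al_tok([first] * len(options), options,
             padding=True, truncation=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    pred = albert(**inputs).logits.argmax(-1).item()
```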
LoRA vs QLoRA fine-tuning on CommonsenseQA, measuring accuracy, GPU memory, and real-world PEFT trade-offs.
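One common way such a memory comparison is measured is with PyTorch's built-in CUDA statistics; the sketch below shows the idea, where `train_one_epoch` is a hypothetical stand-in for the actual fine-tuning loop, not a function from the project.

```python
import torch

def peak_gpu_gb(model, train_one_epoch):
    """Return peak GPU memory (GiB) observed while fine-tuning `model`."""
    torch.cuda.reset_peak_memory_stats()
    train_one_epoch(model)  # hypothetical training step(s)
    return torch.cuda.max_memory_allocated() / 1024**3

# e.g. compare peak_gpu_gb(lora_model, train_one_epoch)
#         with peak_gpu_gb(qlora_model, train_one_epoch)
```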