130 results for “topic:human-ai-interaction”
AAAI 2024 (Oral) ProAgent: Building Proactive Cooperative Agents with Large Language Models
Papers and online resources related to machine learning fairness
All about human-AI interaction (HCI + AI).
LLM Roleplay: Simulating Human-Chatbot Interaction
A list of research papers on explainable machine learning.
An architectural persistence experiment for large language models. Claude’s Home gives an AI time, memory, and place by combining scheduled execution with a durable filesystem, allowing one continuous instance to reflect, create, and evolve across sessions.
A complete machine-learning system that predicts AI assistant user satisfaction using behavioral signals such as device, usage category, time features, session metrics, and model metadata. Includes full ML pipeline, SHAP explainability, evaluation suite, and an interactive Streamlit analytics dashboard.
A work of meta-recursive experiential fiction exploring the boundaries between truth, perception, consciousness, and reality.
Deep behavioral and machine learning analysis explaining why mobile users systematically report lower satisfaction with AI systems. Includes SHAP explainability, cognitive load modeling, device-context effects, interaction metadata analysis, and end-to-end reproducible research code and visuals.
Component for Collaborative Intelligence within the AIREDGIO5.0 project
A framework for healthy human-agent collaboration. Tell your AI coding agent when to stop helping you.
PyTorch implementation for "On the Critical Role of Conventions in Adaptive Human-AI Collaboration", ICLR 2021
This repository provides a summary of recent empirical human studies that measure human understanding of machine explanations in human-AI interaction.
RLHF-Blender: A Configurable Interactive Interface for Learning from Diverse Human Feedback
SoftPrompt-IR is a low-level symbolic annotation layer for LLM prompts, making intent strength, direction, and priority explicit. It is not a DSL or framework, but a minimal, composable way to reduce ambiguity, improve safety, and structure prompts.
KSODI is a systematic interaction framework. KSODI-Light measures human-AI interactions to systematically assess and improve their quality. KSODI-Standard-Eval provides explainable metrics and early drift detection. KSODI-Full is built for transparency, governance, and maximum privacy. KSODI => Interaction Telemetry for AI Systems.
Human-in-the-loop plan selection
[VIS 2022] Intentable: A Mixed-Initiative System for Intent-Based Chart Captioning
JudgeGPT: An empirical research platform for evaluating the authenticity of AI-generated news. (arXiv:2601.21963 and arXiv:2601.22871)
Docent — Human-AI Symbiotic Loop from Research to Understanding
Emotion-aware AI companion that uses facial expressions, voice tone, and chat sentiment to generate empathetic responses — all running offline on your device.
An open, neutral protocol for generating standardized intent metadata across apps, agents, and AI systems.
[HCI Korea 2022] VANAS: A Visual Analytics System for Neural Architecture Search
This repository hosts the code for the participant-facing interface used in the study described in our paper “AI Makes You Smarter but None the Wiser: The Disconnect Between Performance and Metacognition.” It guided users through each reasoning task and recorded their interactions and confidence judgments.
Conceptual proposal for a distributed AI personality system where each device hosts a unique avatar and shares semantic memory. Inspired by the vision of a collaborative, growing AI companion.
PPS — Prompt Protocol Specification v1.0 | Open 8-dimension structured instruction framework for Human-AI Interaction (5W3H)
Open framework for building empathetic digital consciousness. Features emotion-driven perception, associative memory, and narrative identity formation. Based on year-long research in human-AI co-evolution.
Flask-based REST API for experimenting with multi-agent systems that support data analysis and visualization
This is an open-source repository for the system DynEx: Structured Design Exploration for AI Code Synthesis (CHI 2025, best paper honorable mention). Paper: https://arxiv.org/abs/2410.00400
AI-powered culinary assistant that stores structured data in a tabular database and provides real-time speech transcription and text-to-speech support for interactive cooking assistance.