62 results for “topic:human-centered-ai”
This GitHub repository contains the complete code for building Business-Ready Generative AI Systems (GenAISys) from scratch. It guides you through architecting and implementing advanced AI controllers, intelligent agents, and dynamic RAG frameworks. The projects demonstrate practical applications across various domains.
A curated list of awesome academic research, books, codes of ethics, courses, databases, data sets, frameworks, institutes, maturity models, newsletters, principles, podcasts, regulations, reports, responsible scale policies, tools, and standards related to Responsible, Trustworthy, and Human-Centered AI.
A deep exploration of Algorithmic Empathy, the next frontier in AI understanding. This project examines how machines can learn from human fallibility, model disagreement, and align with moral reasoning. It blends psychology, fairness metrics, interpretability, and co-learning design into one framework for humane intelligence.
An in-depth exploration of the rise of human-centered, interactive machine learning. This article examines how Streamlit enables collaborative AI design by merging UX, visualization, and automation. Includes theory, architecture, and design insights from the ML Playground project.
CognitiveLens is a Streamlit-powered analytics tool for exploring alignment between human and AI decisions. It visualizes fairness, calibration, and interpretability through metrics like Cohen’s κ, AUC, and Brier score. Designed for ethical AI, bias auditing, and decision transparency in machine learning systems.
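The three metrics this description names are standard and easy to reproduce. A minimal sketch (not taken from the CognitiveLens codebase; the decision arrays and probabilities below are hypothetical) of computing them with scikit-learn:

```python
# Illustrative only: chance-corrected agreement, ranking quality, and
# calibration between human and AI decisions, as CognitiveLens describes.
from sklearn.metrics import cohen_kappa_score, roc_auc_score, brier_score_loss

human = [1, 0, 1, 1, 0, 0, 1, 0]  # human decisions (hypothetical data)
ai    = [1, 0, 1, 0, 0, 1, 1, 0]  # AI decisions on the same cases
ai_p  = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]  # AI predicted probabilities

kappa = cohen_kappa_score(human, ai)   # agreement beyond chance
auc   = roc_auc_score(human, ai_p)     # how well scores rank human labels
brier = brier_score_loss(human, ai_p)  # mean squared error of probabilities
print(f"κ={kappa:.2f}  AUC={auc:.2f}  Brier={brier:.3f}")
# → κ=0.50  AUC=0.94  Brier=0.125
```

Cohen’s κ corrects raw agreement (6/8 here) for the agreement expected by chance, which is why it reads lower than the raw rate.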
A systems-thinking essay that explains why failure rarely happens suddenly. It shows how slow drift, accumulating pressure, and weakening buffers push systems toward collapse long before outcomes change, and why prediction-focused analytics miss the most important phase of failure.
A systems-thinking essay that reframes failure as a gradual transition rather than a discrete outcome. It explains how pressure accumulation, weakening buffers, and hidden instability precede visible collapse, and why prediction-based models arrive too late to prevent failure in human-centered systems.
This article reframes pricing as a negotiation rather than a prediction, showing how price emerges from tensions between product reality, market dynamics, and buyer behavior. It introduces negotiation-aware ML, value decomposition, and equilibrium modeling to build transparent, human-aligned pricing systems.
An early-warning system that models disasters as instability transitions rather than isolated events. It combines force-based instability modeling with an interpretable ML escalation-risk layer to detect when hazards become disasters due to exposure growth, response delays, and buffer collapse.
An explanation-first HR analytics system that reconstructs why employee exit becomes rational. Instead of predicting attrition, it generates human-readable exit narratives by decomposing pressure and retention forces, adding peer context and counterfactual interventions to reveal how stability erodes over time.
A long-form systems essay arguing that most metrics fail because they measure outcomes instead of accumulated pressure. It reframes collapse as a consequence of debt, buffer depletion, and delayed feedback, and explains why early warning depends on measuring pressure rather than predicting final events.
What's In My Human Feedback? Explaining preferences in human feedback using interpretability + LLMs. https://arxiv.org/abs/2510.26202
Repository of the paper "Toward a Responsible Fairness Analysis: From Binary to Multiclass and Multigroup Assessment in Graph Neural Network-Based User Modeling Tasks"
AI that tries to show its work: transparent, private, and easy to run yourself.
A Deep Learning Framework for Visual Attention Prediction and Analysis of News Interfaces | 2025 IEEE Conference on Artificial Intelligence (CAI)
The Conscience Layer Prototype, created by Aleksandar Rodić in 2025, establishes a research foundation for ethical artificial intelligence. It brings moral awareness into computation through principles of truth, human autonomy, and societal responsibility, defining a transparent and accountable form of intelligence.
Personalized Interactive Localization-Adaptive Real-time Technology
Emotion-aware AI companion designed for natural, human-like conversation. TalkSpace combines machine learning, LLMs, and real-time UI adaptation to create a calm, supportive space where users feel heard — not analyzed 🌿
🤖 Build models that understand human fallibility, bridging the gap between machine precision and human emotion for better AI decision-making.
A capability-first method for designing AI-enabled software systems in line with the CloudPedagogy AI Capability Framework.
Independent research on human-centered AI and LLMs | Policy frameworks for responsible AI | A collaborative space for researchers, innovators, and policymakers advancing ethical, inclusive AI
A development framework for an Adaptive Ethical Model.
INTENTIO is a local-first, private cognitive environment for designing how AI pays attention. Not a chatbot or cloud service, but an interpretive system that operates inside a bounded, intentional knowledge space you define.
A human-centered ethical framework for AI based on emotional fulfillment.
🧠 AI Developer based in Tokyo 🇯🇵 | Building creative, human-centered AI experiences. 🥈 2nd Place — Liquid AI x W&B x Lambda Hackathon (Tokyo) 💻 Focused on multimodal LLMs, voice interaction & fast prototyping (vibe coding). 🌸 Exploring the intersection of Japanese culture and AI innovation.
A minimal constitutional law for tool-using AI agents centered on human dignity, clear agency, and revocable oversight.
Public showcase of NovaLiveSystem: a biomimetic cognitive architecture with interoception and distributed intelligence.
An end-to-end, research-grade AI system for measuring human cognition. HCMS models mastery, confidence, learning stability, and adaptability through analysis, inference, validation, robustness testing, and explainability — bridging human-centered AI research and applied systems.
The official Emotional Markup standard, created by Lorenzo Coppola.
Local-first personal AI assistant with chat, memory, coaching, and multimodal explain workflows.