40 results for “topic:kullback-leibler-divergence”
Production-ready K-Means clustering for Apache Spark with pluggable Bregman divergences (KL, Itakura-Saito, L1, etc.). 6 algorithms, 740 tests, cross-version persistence. A drop-in replacement for MLlib with mathematically correct distance functions for probability distributions, spectral data, and count data.
Decoupled Kullback-Leibler Divergence Loss (DKL), NeurIPS 2024 / Generalized Kullback-Leibler Divergence Loss (GKL)
Trending algorithm based on the article "Trending at Instagram"
Maximum entropy and minimum divergence models in Python
Kullback-Leibler projections for Bayesian model selection in Python
[CVPR 2023] Modeling Inter-Class and Intra-Class Constraints in Novel Class Discovery
Code for Variable Selection in Black Box Methods with RelATive cEntrality (RATE) Measures
Experiments with the three PPO algorithms (PPO, clipped PPO, and PPO with KL penalty) proposed by John Schulman et al., run on the CartPole-v1 environment.
[Python] Comparison of empirical probability distributions. Integral probability metrics (e.g. Kantorovich metric). f-divergences (e.g. Kullback-Leibler). Application to the Choquet integral.
PyTorch implementations of the beta divergence loss.
Code, data, and tutorials for "Sense organ control in moths to moles is a gamble on information through motion"
🐍 🔬 Fast Python implementation of various Kullback-Leibler divergences for 1D and 2D parametric distributions. Also provides optimized code for kl-UCB indexes
Python implementations of common statistical analysis methods used in data analysis, including entropy, mutual information, the Kolmogorov–Smirnov test, Kullback-Leibler divergence (KLD), and A/B tests (Mann-Whitney U and t-tests).
Non-Negative Matrix Factorization for Gene Expression Clustering
Basic GANs with a variety of loss functions (KL, reverse KL, JS, and Wasserstein GAN) as an exercise for my thesis with Prof. Randy Paffenroth.
💫 Fast Julia implementation of various Kullback-Leibler divergences for 1D parametric distributions. 🏋 Also provides optimized code for kl-UCB indexes
Using entities from NER on GOV.UK content to power personalisation.
Can we identify key events in a war by analyzing raw text from news stories?
TATTER (Two-sAmple TesT EstimatoR) is a tool for performing two-sample hypothesis tests.
Particle Filter tracker and square-shape detection
A repository of statistics algorithms.
Sequential KMeans algorithm implementation
Teach sample-specific knowledge: Separated distillation based on samples, Engineering Applications of Artificial Intelligence 2025
Building a corpus whose unit distribution approximately matches a given target distribution, using a greedy algorithm driven by the Kullback-Leibler divergence. Can be used for text-to-speech synthesis applications.
No description provided.
Statistical analysis investigating the relationship between information revelation patterns (measured via Kullback-Leibler divergence) and book popularity in English fiction. Features OLS and LASSO regression analysis, genre-specific modeling, and Project Gutenberg metadata analysis.
Influence diagnostics for ridge regression based on the Kullback-Leibler divergence
Giant Language Model Test Room, most up to date
NLP implementations including information-theoretic measures of distributional similarity, text preprocessing using shell commands, a Naive Bayes text categorization model, and Cocke-Younger-Kasami parsing.
Kullback-Leibler divergence in Python
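For reference, the quantity all of these projects share can be sketched in a few lines. This is a minimal illustrative implementation of the KL divergence between two discrete distributions, not code from any of the repositories listed above; the function name `kl_divergence` is my own.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) for discrete distributions.

    p and q are probability vectors over the same support; q must be
    nonzero wherever p is nonzero, otherwise the divergence is infinite.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# KL divergence is asymmetric: D(P || Q) != D(Q || P) in general.
p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, q))  # 0.5*ln(0.5/0.9) + 0.5*ln(0.5/0.1)
print(kl_divergence(q, p))  # a different value
```

Several of the listed repositories generalize exactly this formula (Bregman divergences, f-divergences, beta divergences) or use it as a building block (kl-UCB indexes, KL-penalty PPO, corpus selection).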