**Update (5/28/2025):** The root cause turned out to be unrelated, and Axios works fine with its default settings; there's no need for custom HTTPS agents, keep-alive, or timeout configurations.
If you're running a high-traffic Node.js or Next.js app with APIs (like Feathers.js) and noticing high numbers of `TIME_WAIT` or `SYN_RECV` in `netstat`, you're not alone.
At SwimStandards.com, we serve thousands of daily users, mostly over HTTPS. Recently, we noticed:
```bash
sudo netstat -anp | grep :443 | awk '{print $6}' | sort | uniq -c
```

```
  9 CLOSING
 42 ESTABLISHED
  4 FIN_WAIT1
  7 LAST_ACK
  1 LISTEN
256 SYN_RECV
127 TIME_WAIT
```
These numbers raised concerns. Was it normal? Was performance at risk?
- `TIME_WAIT`: indicates a closed connection that the OS keeps around for 60s (the default) to prevent delayed packets from interfering with future connections.
- `SYN_RECV`: part of the TCP handshake. A high number may mean your server is getting a lot of new connection attempts — possibly from bots or slow clients.
Is `TIME_WAIT` a problem?

| Count | Typical Cause | Problem? | Action |
|---|---|---|---|
| < 500 | Normal HTTPS/HTTP1.1 traffic | ❌ No | None |
| 500–2000 | Moderate load, short connections | ⚠️ Maybe | Monitor |
| > 3000–5000 | High turnover or scraping bots | ✅ Yes | Optimize + tune OS |
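The table's thresholds are easy to turn into a monitoring check. A minimal sketch (the function name and return shape are our own; counts falling in the table's unspecified 2000–3000 gap are treated conservatively as a problem):

```javascript
// Classify a TIME_WAIT count using the thresholds from the table above.
function assessTimeWait(count) {
  if (count < 500) return { problem: "no", action: "none" };
  if (count <= 2000) return { problem: "maybe", action: "monitor" };
  // 2000+ (including the table's 2000–3000 gap): treat as needing attention
  return { problem: "yes", action: "optimize + tune OS" };
}

console.log(assessTimeWait(127)); // → { problem: 'no', action: 'none' }
```

At the 127 `TIME_WAIT` connections we saw above, no action is needed.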
```javascript
import axios from "axios";
import https from "https";

const httpsAgent = new https.Agent({
  keepAlive: true,
  keepAliveMsecs: 5000,
  timeout: 30000,
  maxSockets: 100,
  maxFreeSockets: 10,
});

const client = axios.create({
  httpsAgent,
  timeout: 30000,
});
```
This helped reduce the number of new TCP connections created per request.
We didn’t customize the Feathers.js server’s HTTP agent. Node.js by default reuses sockets where possible on the server side. No immediate action was needed.
We also applied a few kernel-level tweaks with `sysctl`:

```bash
# Enable TCP SYN cookies (protect against SYN flood)
sudo sysctl -w net.ipv4.tcp_syncookies=1

# Allow more concurrent connections in backlog queue
sudo sysctl -w net.core.somaxconn=1024

# Optional: Reduce TIME_WAIT retention
sudo sysctl -w net.ipv4.tcp_fin_timeout=30
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
```
To persist these changes:

```bash
sudo nano /etc/sysctl.conf
```

```
# Add or uncomment:
net.ipv4.tcp_syncookies=1
net.core.somaxconn=1024
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_tw_reuse=1
```

Then reload the settings:

```bash
sudo sysctl -p
```
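If you manage these settings from a deploy script, it's worth sanity-checking what the conf file will actually apply. A hypothetical helper (not part of any tool) that parses `sysctl.conf`-style lines, skipping comments and blanks:

```javascript
// Hypothetical helper: parse "key = value" lines in sysctl.conf format.
function parseSysctlConf(text) {
  const settings = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip comments/blanks
    const [key, value] = trimmed.split("=").map((s) => s.trim());
    if (key && value !== undefined) settings[key] = value;
  }
  return settings;
}

const conf = `
# net.core.somaxconn = 1024
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_fin_timeout = 30
`;
console.log(parseSysctlConf(conf));
// → { 'net.ipv4.tcp_syncookies': '1', 'net.ipv4.tcp_fin_timeout': '30' }
```

Note that the commented-out `somaxconn` line is ignored — which matters later, when we revert that setting by commenting it out.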
Check connection states with:

```bash
sudo netstat -anp | grep :443 | awk '{print $6}' | sort | uniq -c
```

Check which IPs are flooding your server:

```bash
ss -tn state syn-recv | awk 'NR > 1 {split($5,a,":"); print a[1]}' | sort | uniq -c | sort -nr | head
```
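If you'd rather post-process that `ss` output in Node (say, for a dashboard or alerting), the same top-offenders count can be sketched as follows — the sample output below is fabricated for illustration:

```javascript
// Count SYN_RECV connections per remote IP from `ss -tn state syn-recv` output.
function topSynRecvIps(ssOutput) {
  const counts = {};
  for (const line of ssOutput.trim().split("\n").slice(1)) { // skip header row
    const cols = line.trim().split(/\s+/);
    const peer = cols[cols.length - 1];              // "Peer Address:Port" column
    const ip = peer.slice(0, peer.lastIndexOf(":")); // strip the port
    if (ip) counts[ip] = (counts[ip] || 0) + 1;
  }
  // Sort [ip, count] pairs, most frequent first
  return Object.entries(counts).sort((a, b) => b[1] - a[1]);
}

const sample = `Recv-Q Send-Q Local Address:Port Peer Address:Port
0      0      10.0.0.5:443       203.0.113.7:51234
0      0      10.0.0.5:443       203.0.113.7:51240
0      0      10.0.0.5:443       198.51.100.9:40022
`;
console.log(topSynRecvIps(sample));
// → [ [ '203.0.113.7', 2 ], [ '198.51.100.9', 1 ] ]
```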
After these changes, we monitored TCP states with:

```bash
sudo netstat -anp | grep :443 | awk '{print $6}' | sort | uniq -c
```
While the `SYN_RECV` and `TIME_WAIT` counts did fluctuate, we noticed an increase in `TIME_WAIT` in some cases — suggesting that `net.core.somaxconn=1024` may not be ideal for our traffic. So we reverted `somaxconn` to the Linux default:
```bash
sudo sysctl -w net.core.somaxconn=128
```
And updated `/etc/sysctl.conf` to comment the line out:

```
# net.core.somaxconn = 1024
```
Then reloaded again:

```bash
sudo sysctl -p
```
This helped stabilize things, and we learned that tuning these values isn’t always linear — it depends on how your app handles slow handshakes, timeouts, and bot traffic.
- `TIME_WAIT` is normal — only take action if it causes port exhaustion.
- `SYN_RECV` spikes are often bot-related and may be harmless if your server is handling them.
- Keep-alive agents in Axios reduce load and improve socket reuse.
- Use `sysctl` carefully — tune only when your server shows real-world pressure.
If you're running an API + Next.js combo like us and want to reduce connection churn and socket errors, these steps are low-risk and effective.