How Port Monitoring Prevents Backend Surprises
Your website loads perfectly. The homepage looks great, the CSS renders, images appear. Your HTTP monitoring shows a comforting green checkmark. But behind the scenes, your PostgreSQL database stopped accepting connections 12 minutes ago. Your Redis cache crashed silently. Your SMTP server has not sent an email in two hours. Users are seeing error messages, failed transactions, and missing data -- and your monitoring dashboard has no idea.
This is the blind spot that port monitoring eliminates. While HTTP checks verify that your web server responds, port monitoring verifies that every individual service your application depends on is actually reachable and accepting connections. It is the difference between knowing your front door is open and knowing that every room in the building has working lights.
What Is Port Monitoring and How Does It Work?
Port monitoring performs TCP connection attempts against specific ports on your servers. Each network service listens on a designated port number. When a port monitor attempts to connect and receives a successful TCP handshake, the service is confirmed to be running and accepting connections. When the connection is refused, times out, or gets no response, the service is either down, overloaded, or blocked by a firewall.
Unlike HTTP monitoring, which tests the full application stack by requesting a web page, port monitoring operates at the transport layer. It answers a more fundamental question: "Is this service process running and listening on its expected port?"
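In code, a basic port check is nothing more than a timed TCP connect attempt. The sketch below (plain Python standard library; the host and port are placeholders) classifies the outcomes described above -- accepted, refused, timed out, or unreachable:

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> str:
    """Attempt a TCP handshake and classify the result."""
    try:
        # create_connection() returns only if the service completes the handshake
        with socket.create_connection((host, port), timeout=timeout):
            return "open"          # service is listening and accepting connections
    except ConnectionRefusedError:
        return "refused"           # host reachable, but nothing listening on the port
    except socket.timeout:
        return "timeout"           # overloaded service, or a firewall silently dropping packets
    except OSError:
        return "unreachable"       # DNS/routing failure or host down

# A high port on localhost with no listener typically reports "refused"
print(check_port("127.0.0.1", 59999))
```

A monitoring agent would run this on a schedule and alert on anything other than `"open"`. Note that `ConnectionRefusedError` must be caught before the more general `OSError`, since it is a subclass.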
For a deeper comparison of these approaches, see our guide on HTTP vs. TCP monitoring and why you need both.
Critical Ports Every Business Should Monitor
The specific ports you need to monitor depend on your infrastructure, but here are the services and ports that most web applications rely on:
| Service | Default Port | Why Monitor It | What Happens When It Fails |
|---|---|---|---|
| HTTP | 80 | Unencrypted web traffic, redirects to HTTPS | Users on http:// links get connection errors |
| HTTPS | 443 | Primary web traffic for most sites | Website completely inaccessible |
| SSH | 22 | Remote server management | Cannot deploy code, cannot debug issues remotely |
| PostgreSQL | 5432 | Primary database for many applications | All dynamic content fails, transactions break |
| MySQL / MariaDB | 3306 | Database for WordPress, Laravel, etc. | Application errors, data unavailable |
| Redis | 6379 | Caching, session storage, message queue | Slow pages, lost sessions, stuck background jobs |
| MongoDB | 27017 | Document database | Application data unavailable, API failures |
| Elasticsearch | 9200 | Search functionality | Search broken, product discovery fails |
| SMTP | 25 / 587 | Outgoing email delivery | Password resets, order confirmations, and alerts stop |
| IMAP | 993 | Email retrieval | Support team cannot read incoming emails |
| FTP / SFTP | 21 / 22 | File transfers | Uploads, exports, and integrations break |
| DNS | 53 | Name resolution | All services become unreachable by domain name |
| RabbitMQ | 5672 | Message broker | Background tasks stop processing, data pipeline halts |
| Memcached | 11211 | Object caching | Increased database load, slower responses |
| Custom API | Various (3000, 8080, etc.) | Internal microservices, Node.js apps | Dependent services fail silently |
Real-World Scenarios: When Port Monitoring Saves the Day
Scenario 1: The Silent Database Crash
A mid-size SaaS application runs a PostgreSQL database on a separate server from the web application. One night, PostgreSQL runs out of shared memory and crashes. The web server keeps running -- it even serves cached versions of some pages. The HTTP health check returns 200 because the Nginx welcome page is accessible. But every dynamic request fails with a database connection error.
Without port monitoring: The team discovers the issue the next morning when customer complaints start flowing in. Six hours of degraded service, dozens of failed signups, and hundreds of broken API requests.
With port monitoring on port 5432: The monitor detects the closed port within 2 minutes. An alert fires to the on-call engineer's phone via Telegram. The database is restarted within 10 minutes. Total impact: 12 minutes instead of 6 hours.
Scenario 2: Redis Disappears After a Server Update
After a routine OS security update, the server reboots. Most services come back automatically -- the web server, the application process, SSH. But Redis, configured to start manually rather than via systemd, does not restart. The application falls back to database queries for session management, response times triple, and the session store starts losing user login states.
Without port monitoring: Users report sporadic logouts and slow pages for hours. The team assumes it is a traffic spike.
With port monitoring on port 6379: Redis port unreachable alert fires immediately after the reboot. The engineer SSHs in, starts Redis, and adds it to the systemd auto-start configuration. Problem resolved before any user impact.
Scenario 3: Email Stops Working But Nobody Notices
An e-commerce store's SMTP relay service silently stops accepting connections. Order confirmation emails, shipping notifications, and password reset emails all stop sending. The store's website works perfectly. HTTP monitoring shows 100% uptime. But customers are not receiving their order confirmations and are calling support to ask "Did my order go through?"
Without port monitoring: The support team spends two days manually confirming orders and resending emails.
With port monitoring on port 587: SMTP port failure detected within minutes. The team switches to a backup SMTP provider within the hour.
Scenario 4: Firewall Rule Change Breaks Internal Services
A DevOps engineer updates firewall rules to harden the production server. They accidentally tighten a rule that blocks the application server from connecting to the Elasticsearch instance on port 9200. The main site still loads, but the search functionality returns empty results. Product pages render but the "Related Products" section is blank. E-commerce conversion rates drop because customers cannot find products.
With port monitoring on port 9200: The connectivity failure is detected immediately, directly pointing to the firewall change as the cause.
Port Monitoring vs. HTTP Monitoring: Why You Need Both
A common question: "If HTTP monitoring checks my web server, why do I also need port monitoring?" The answer lies in what each check type can and cannot see:
| Failure Type | HTTP Check Detects? | Port Check Detects? |
|---|---|---|
| Web server crash | Yes | Yes (ports 80/443) |
| Application error (500 response) | Yes | No (port is still open) |
| Database service stopped | Maybe (if health check queries DB) | Yes |
| Cache service crashed | No (pages still load, just slower) | Yes |
| Email service down | No | Yes (SMTP port) |
| Search engine unresponsive | No (site loads without search) | Yes |
| SSH access blocked | No | Yes (port 22) |
| Message queue down | No (async jobs just stop processing) | Yes |
| SSL certificate problem | Yes (HTTPS check) | Partial (port open but cert invalid) |
The pattern is clear: HTTP monitoring covers the front door. Port monitoring covers the entire building. Together, they give you full visibility. For more details on this complementary approach, read our guide on TCP port monitoring: why it matters and how to do it right.
Setting Up Port Monitoring with UptyBots: A Practical Guide
Step 1: Map Your Infrastructure
Before configuring monitors, document every service your application depends on and the port it runs on. For each service, note:
- The server hostname or IP address
- The port number
- Whether the service is externally accessible or internal-only
- The criticality level (what breaks if this service goes down)
Step 2: Prioritize by Business Impact
Not all ports deserve the same monitoring frequency. Organize your ports into tiers:
- Tier 1 (check every 1-2 minutes): Database ports, web server ports, primary API ports -- services where failure immediately impacts users
- Tier 2 (check every 5 minutes): Cache servers, search engines, message queues -- services where failure degrades performance but does not cause immediate visible outages
- Tier 3 (check every 10-15 minutes): SMTP, FTP, SSH, monitoring/logging ports -- services where brief interruptions are tolerable
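One lightweight way to encode these tiers is a plain configuration mapping that a scheduler consumes. The structure below is purely illustrative (the hostnames and service list are hypothetical); the intervals mirror the tiers above:

```python
# Hypothetical tier configuration: one entry per monitored port,
# with the check interval in seconds.
MONITORS = [
    {"name": "postgres", "host": "db.internal",     "port": 5432, "interval": 60},   # Tier 1
    {"name": "https",    "host": "www.example.com", "port": 443,  "interval": 60},   # Tier 1
    {"name": "redis",    "host": "cache.internal",  "port": 6379, "interval": 300},  # Tier 2
    {"name": "smtp",     "host": "mail.internal",   "port": 587,  "interval": 900},  # Tier 3
]

def due_checks(monitors, last_run, now):
    """Return the monitors whose interval has elapsed since their last run."""
    return [m for m in monitors
            if now - last_run.get(m["name"], 0) >= m["interval"]]

# At t=160s, with postgres last checked at t=100s:
# postgres is due again (60s elapsed) and https has never run.
print([m["name"] for m in due_checks(MONITORS, {"postgres": 100}, now=160)])
```

Keeping the tiers in data rather than code makes it easy to promote a service to a tighter interval after an incident.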
Step 3: Configure Alerts Based on Severity
Route alerts appropriately:
- Tier 1 failures: Immediate alert via Telegram and email. These warrant waking someone up at 3 AM.
- Tier 2 failures: Alert via email and Telegram. Investigate during business hours unless it persists.
- Tier 3 failures: Alert via email. Review in the next business day unless it escalates.
Step 4: Handle Firewall Considerations
Some services (especially databases) should not be directly accessible from the internet. For these internal-only services, you have two options:
- Whitelist your monitoring service's IP range in your firewall rules to allow TCP checks from the monitoring network only
- Monitor indirectly via HTTP by creating a health check endpoint in your application that verifies internal service connectivity and exposes the result as an HTTP response. For example, a `/health/db` endpoint that attempts a database query and returns the status
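A minimal version of such an endpoint, sketched with Python's standard library. The `/health/db` path, port 8080, and the database address are assumptions for illustration; this sketch only verifies TCP reachability, whereas a production endpoint would run an actual query:

```python
import json
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_HOST, DB_PORT = "127.0.0.1", 5432  # internal-only database (assumed values)

def db_reachable(timeout: float = 2.0) -> bool:
    """True if the database port accepts a TCP connection."""
    try:
        with socket.create_connection((DB_HOST, DB_PORT), timeout=timeout):
            return True
    except OSError:
        return False

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health/db":
            ok = db_reachable()
            body = json.dumps({"database": "up" if ok else "down"}).encode()
            # 503 lets any HTTP monitor treat the check as failing
            self.send_response(200 if ok else 503)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve: HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

Your external HTTP monitor then watches `/health/db`: a 503 (or timeout) means the private database is unreachable, without ever exposing port 5432 to the internet.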
If your hosting environment blocks certain check types, read our guide on what to do when your hosting blocks monitoring.
Common Port Monitoring Mistakes
Mistake 1: Monitoring Only Web Ports
Checking ports 80 and 443 is essentially HTTP monitoring with less information. The value of port monitoring comes from checking the non-web services -- databases, caches, queues, email servers. If you only monitor web ports, you are duplicating your HTTP checks without gaining new visibility.
Mistake 2: Ignoring Connection Timeouts
A port that accepts a connection but takes 15 seconds to respond is almost as bad as a port that does not respond at all. Configure your port monitors with meaningful timeout thresholds. If your database normally accepts connections in under 100ms, set a 2-3 second timeout to catch degradation before it becomes a complete outage.
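Measuring connect latency, not just success, lets a check flag degradation before a hard failure. A sketch along these lines (the 500 ms warning threshold is a placeholder to tune per service):

```python
import socket
import time

def connect_latency_ms(host: str, port: int, timeout: float = 3.0):
    """Return the TCP connect time in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None

def classify(latency_ms, warn_ms: float = 500):
    """Map a measured connect time to a monitor state."""
    if latency_ms is None:
        return "down"        # refused, timed out, or unreachable
    if latency_ms > warn_ms:
        return "degraded"    # accepting connections, but slowly
    return "healthy"
```

A database that normally connects in under 100 ms but suddenly takes a full second would surface as `"degraded"` here, well before it hits the hard timeout.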
Mistake 3: Not Monitoring After Deployments and Updates
Server reboots, OS updates, configuration changes, and code deployments are the most common causes of unexpected port closures. Make it a standard practice to verify that all monitored ports are responding after any infrastructure change.
Mistake 4: Monitoring Database Ports from the Internet
If your database port (3306, 5432) is accessible from the public internet, that is a security issue, not a monitoring strategy. Database ports should only be accessible from your application servers and your monitoring service's whitelisted IPs. If you cannot whitelist monitoring IPs, use an HTTP health check endpoint instead.
Mistake 5: Alerting on Every Single Port Blip
Brief network blips can cause momentary connection failures that resolve on their own. Configure your port monitors to confirm failures with retries before sending alerts. Two consecutive failures 30 seconds apart is a real problem. A single failed attempt that succeeds on immediate retry is usually noise. Learn more about distinguishing false positives from real downtime.
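The retry-before-alert pattern is simple to express: require N consecutive failures, spaced a few seconds apart, before declaring the port down. A sketch, where the `check` callable stands in for any single connection attempt:

```python
import time

def confirmed_down(check, retries: int = 2, delay: float = 30.0) -> bool:
    """Alert only if `check()` fails `retries` times in a row."""
    for attempt in range(retries):
        if check():
            return False           # a single success clears the failure
        if attempt < retries - 1:
            time.sleep(delay)      # wait before re-checking
    return True                    # every attempt failed: treat as a real outage

# A flaky check that fails once, then succeeds, does not trigger an alert:
results = iter([False, True])
assert confirmed_down(lambda: next(results), retries=2, delay=0) is False
```

Two retries 30 seconds apart is a reasonable default; latency-sensitive Tier 1 services may warrant shorter delays so confirmation does not meaningfully slow detection.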
Port Monitoring Checklist for Production Systems
- Inventory all services and their port numbers across every server
- Create port monitors for all database services (PostgreSQL, MySQL, MongoDB, etc.)
- Monitor cache and session stores (Redis, Memcached)
- Monitor message brokers and queues (RabbitMQ, Redis queues)
- Monitor email services (SMTP on port 587, IMAP on port 993)
- Monitor search services (Elasticsearch on 9200, Solr on 8983)
- Monitor SSH access on all servers (port 22 or custom SSH port)
- Set timeout thresholds appropriate for each service type
- Configure retry policies to prevent false positive alerts
- Route critical port failure alerts to Telegram or webhook for instant notification
- Verify port monitors pass after every server reboot or firewall change
- Review port monitoring alerts weekly for patterns that indicate emerging issues
Frequently Asked Questions
What is the difference between port monitoring and ping monitoring?
Ping (ICMP) monitoring checks whether a server is reachable at the network level. Port monitoring checks whether a specific service on that server is running and accepting connections. A server can be pingable while its database port is closed. Port monitoring gives you service-level visibility that ping cannot provide.
How often should port checks run?
For critical services (databases, primary APIs): every 1-2 minutes. For supporting services (cache, search): every 5 minutes. For non-urgent services (SMTP, FTP): every 10-15 minutes. Adjust based on how quickly a failure in each service would impact users.
Can I monitor ports on servers behind a NAT or private network?
External port monitoring requires that the port is reachable from the internet (or from your monitoring service's network). For purely internal services, use an application-level health check endpoint that your monitoring service can reach via HTTP, which internally verifies that private services are running.
Does an open port mean the service is working correctly?
An open port confirms the service process is running and accepting TCP connections. It does not guarantee that the service is functioning correctly -- a database might accept connections but fail all queries due to disk corruption. For deeper verification, combine port monitoring with API monitoring or synthetic checks that validate actual service behavior.
My port check shows the port is closed but the service is running. Why?
The most common cause is a firewall between the monitoring service and your server that blocks the port. The service is running locally but is not accessible from outside. Verify your firewall rules allow inbound TCP connections on that port from your monitoring service's IP addresses.
See setup tutorials or get started with UptyBots monitoring today.