How to Use iostat and iotop to Improve Performance

By Raman Kumar

Updated on Sep 22, 2025

In this tutorial, we'll learn how to use iostat and iotop to find and fix disk I/O bottlenecks that drag down server performance.

Disk I/O bottlenecks are one of those invisible issues that quietly strangle server performance. Everything looks fine—CPU is calm, memory isn’t screaming—yet applications crawl. The culprit? The disks are overworked, waiting too long to read or write data. Let’s walk through how we can catch these issues in real time and fix them before they ruin uptime.

What is Disk I/O?

Disk I/O (Input/Output) is the process of reading and writing data to storage. When the disk is too busy or slow, requests pile up and cause performance issues.

Step 1: Install the right tools

Two utilities are essential:

  • iostat (part of the sysstat package) – gives historical and live performance data.
  • iotop – shows which processes are hogging disk I/O in real time.

On Debian/Ubuntu:

sudo apt install sysstat iotop

On CentOS/RHEL (use dnf in place of yum on newer releases):

sudo yum install sysstat iotop
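
To confirm both tools installed correctly, each one can print its version (exact output varies by distribution):

iostat -V
sudo iotop --version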

Step 2: Check the big picture with iostat

Run:

iostat -x 1

The -x flag gives extended stats, and 1 refreshes every second.

Key metrics to watch:

  • %util – How busy the disk is. If it’s consistently close to 100%, the drive is saturated.
  • await – Average time (ms) requests wait to be served. Higher than 20–30 ms on SSDs is a red flag.
  • svctm – Average service time. When await is much higher than svctm, requests are queuing up. Note that svctm is deprecated and has been dropped from newer sysstat releases, so it may not appear in your output.
  • r/s & w/s – Read and write requests per second. Useful for spotting spikes.

If %util is maxed and await is climbing, the disk is the choke point.
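
To narrow the view, iostat also accepts a device name, so we can watch just the suspect drive. For example, assuming that drive is sda:

iostat -xm 2 sda

The -m flag reports throughput in MB/s, and the two-second interval smooths out momentary blips.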

Step 3: Zoom into processes with iotop

While iostat tells us the disk is suffering, iotop tells us who’s responsible.

Run:

sudo iotop -o

The -o option hides idle processes so we only see the real hogs.

Look at:

  • DISK READ / DISK WRITE columns – which process is hammering the disk.
  • PID – the process ID we can investigate or stop if needed.

If a database, log collector, or misbehaving script is hogging I/O, iotop will call it out.
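
For spikes that come and go before we can watch them live, iotop can also log in batch mode. A simple sketch that appends ten timestamped samples to a file (the log path is just an illustration):

sudo iotop -obt -n 10 >> /var/log/iotop-samples.log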

Step 4: Apply practical fixes

Once we know the bottleneck, we fix it with targeted action:

Busy databases

  • Move the database to SSD or NVMe storage.
  • Enable caching layers like Redis or Memcached.
  • Optimize queries and indexes.

Log-heavy apps

  • Rotate logs more aggressively with logrotate (a sample policy follows this list).
  • Send logs to a remote logging service to offload disk writes.
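
As a sketch, a logrotate policy for a chatty application might look like this, dropped into /etc/logrotate.d/ (the /var/log/myapp path and retention values are placeholders to adapt):

/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}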

File system optimization

  • Use the noatime mount option to skip access-time updates (see the fstab example after this list).
  • Align block sizes with the workload (important for databases).
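
A hedged /etc/fstab sketch with noatime added to a data partition (the UUID and mount point are placeholders; keep your existing entry and just append the option):

UUID=xxxx-xxxx  /var/lib/mysql  ext4  defaults,noatime  0  2

Remount the filesystem (sudo mount -o remount /var/lib/mysql) or reboot for the change to take effect.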

Distribute workload

  • Spread data across multiple disks or use RAID setups.
  • Move less critical workloads to slower disks.

Check background jobs

  • Cron jobs running backups or sync tasks often cause spikes. Schedule them off-peak.
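
For example, a nightly backup can be moved to 3 AM and run at idle I/O priority so it yields to real traffic (the backup script path is hypothetical):

0 3 * * * ionice -c3 nice -n 19 /usr/local/bin/nightly-backup.sh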

Step 5: Monitor continuously

Disk issues are rarely one-off problems. We need to keep monitoring:

  • Add iostat checks into monitoring tools like Prometheus + Grafana or Zabbix.
  • Set alerts when %util exceeds 90% for sustained periods (a minimal shell sketch follows this list).
  • Track application-level latency to spot the effects early.
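
Outside a full monitoring stack, a minimal shell sketch can serve as a starting point. It assumes the device is sda, that mail is set up for notifications, and that %util is the last column of iostat -dx output (true for common sysstat versions, but worth verifying on yours):

#!/bin/sh
# Minimal sketch: warn when %util for one device exceeds a threshold.
DEVICE=sda
THRESHOLD=90
# Take two samples one second apart; the last matching line is the live interval reading.
UTIL=$(iostat -dx 1 2 | awk -v dev="$DEVICE" '$1 == dev {u=$NF} END {print u}')
if awk -v u="$UTIL" -v t="$THRESHOLD" 'BEGIN {exit !(u+0 > t+0)}'; then
    echo "WARNING: ${DEVICE} %util at ${UTIL}%" | mail -s "Disk saturation on $(hostname)" admin@example.com
fi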

Final Thoughts

Disk I/O bottlenecks are sneaky, but with iostat and iotop we can expose them in minutes. By watching %util and await, then tracing the culprit process, we turn guesswork into action. The fixes—faster drives, smarter queries, or better log handling—pay off quickly in performance and stability.

We don’t need to fear slow disks once we understand how to read the numbers. Instead, we can make informed decisions that keep our systems fast and reliable.

Check out robust instant dedicated servers, Instant KVM VPS, premium shared hosting, and data center services in New Zealand.