
EC2/EBS Storage I/O Diagnostic Troubleshooter (Part 2)

Use the quick reference table below to match your EC2/EBS storage I/O symptom to the raw evidence, understand the root cause, and apply the recommended fix.

Quick Reference Table

| # | Scenario | Key Error Signal | Root Cause | The Fix |
| --- | --- | --- | --- | --- |
| 5 | Latency — SSD write amplification and dramatically reduced I/O performance | "This pattern results in significantly increased write amplification, increased latency, and dramatically reduced I/O performance." | Writing unaligned or small-block I/O to SSD instance store volumes filled the disk, forcing the controller to run expensive garbage collection. | Leave 10% of the volume unpartitioned for over-provisioning, or configure the OS to issue the TRIM command. |
| 6 | Abort — NVMe Abort issued after the I/O timeout threshold was exceeded | "The Abort command is an NVMe Admin command that is issued to abort a specific command... typically issued by the device driver to storage devices that have exceeded the I/O operation timeout threshold." | The Linux device driver issued an NVMe Abort because the storage device exceeded the configured I/O timeout threshold. | Increase `nvme_core.io_timeout` to 4294967295, or use an instance type that natively supports Abort (e.g., R5b, M6i). |
| 7 | BurstBalance — I/O credit bucket depleted on a gp2 volume | "When I/O demand is greater than baseline performance, the volume spends I/O credits... You can monitor the I/O credit balance for a volume using the Amazon EBS BurstBalance metric in Amazon CloudWatch." | The workload's I/O demand exceeded the gp2 volume's baseline performance, fully depleting its 5.4 million I/O credit bucket. | Migrate the volume to gp3, or increase its size with `aws ec2 modify-volume`. |
| 8 | Volume — instance boots from the wrong volume due to a duplicate partition label | `sudo e2label /dev/xvda1` and `sudo e2label /dev/xvdf1` both return `/`. | The initial ramdisk booted from the wrong attached volume because multiple attached volumes carried the identical `/` partition label (or UUID). | `sudo e2label /dev/xvdf1 old/` (ext4) or `sudo xfs_admin -L old/ /dev/xvdf1` (XFS). |
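For scenario 5, the TRIM half of the fix can be applied with standard Linux tools. A minimal sketch, assuming a systemd distro and a hypothetical instance store device `/dev/nvme1n1` mounted at `/mnt/instance-store` (both names are examples, not from the document):

```shell
# Check whether the device advertises discard (TRIM) support:
# non-zero DISC-GRAN / DISC-MAX columns mean TRIM is available.
lsblk -D /dev/nvme1n1

# One-off TRIM of the mounted filesystem:
sudo fstrim -v /mnt/instance-store

# Or enable the periodic TRIM timer shipped with most distros:
sudo systemctl enable --now fstrim.timer
```

Periodic `fstrim` is generally preferred over the `discard` mount option, which issues TRIM synchronously on every delete.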
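For scenario 6, the timeout can be inspected at runtime and made persistent via the kernel command line. A sketch for a GRUB2-based distro such as Amazon Linux 2 (file paths vary by distro; adjust accordingly):

```shell
# Current NVMe driver I/O timeout in seconds (often defaults to 30):
cat /sys/module/nvme_core/parameters/io_timeout

# Append the maximum timeout to the kernel boot parameters so it
# survives reboots, then regenerate the GRUB configuration:
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&nvme_core.io_timeout=4294967295 /' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```

4294967295 (2^32 − 1) is the largest value the parameter accepts, effectively disabling driver-side aborts.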
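To see why the gp2 bucket in scenario 7 empties, the arithmetic can be sketched directly. This is a simplified model (3 IOPS per GiB baseline with a 100 IOPS floor, 3,000 IOPS burst, 5.4 million credits); the 100 GiB volume size is an illustrative assumption, and the estimate only holds while the baseline is below the burst rate:

```shell
#!/bin/sh
# Estimate how long a gp2 volume can sustain full burst before its
# I/O credit bucket is depleted (simplified model from the table above).
SIZE_GIB=100                     # example volume size (assumption)
BUCKET=5400000                   # gp2 I/O credit bucket size
BURST=3000                       # gp2 burst rate in IOPS
BASE=$((SIZE_GIB * 3))           # baseline: 3 IOPS per GiB
[ "$BASE" -lt 100 ] && BASE=100  # baseline floor of 100 IOPS
SPEND=$((BURST - BASE))          # net credits drained per second at full burst
SECS=$((BUCKET / SPEND))
echo "baseline=${BASE} IOPS; full burst empties the bucket in ~${SECS}s"
```

For the 100 GiB example this gives a 300 IOPS baseline and roughly 2,000 seconds (about 33 minutes) of full-rate burst, which is why sustained workloads on small gp2 volumes hit the BurstBalance wall.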
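For scenario 8, the duplicate can be found before it is renamed. A sketch using the device names from the table (your device names will differ):

```shell
# List labels and UUIDs of all attached filesystems to spot duplicates:
sudo blkid

# Rename the label on the secondary volume so only one "/" label remains:
sudo e2label /dev/xvdf1 old/            # ext4
sudo xfs_admin -L old/ /dev/xvdf1       # XFS equivalent
```

Running `blkid` first confirms which attached volume is the stray one, so the relabel is applied to the correct device.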