Ceph Disk write slow on dd oflag=dsync on small block sizes




We have deployed a ceph cluster with ceph version 12.2.5, using Dell R730xd servers as storage nodes, each with 10 × 7.2k NL-SAS drives as OSDs. We have 3 storage nodes.



We did not configure any RAID and used the drives directly to create the OSDs.



We are using ceph-ansible-stable-3.1 to deploy the ceph cluster.



We encountered slow disk write performance when testing inside a VM that uses an RBD image.


[root@test-vm-1 vol2_common]# dd if=/dev/zero of=disk-test bs=512 count=1000 oflag=direct ; dd if=/dev/zero of=disk-test bs=512 count=1000 oflag=dsync ; dd if=/dev/zero of=disk-test bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.101852 s, 5.0 MB/s
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 21.7985 s, 23.5 kB/s
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.00702407 s, 72.9 MB/s
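For a more controlled measurement of the same pattern, a small fio job (not part of the original test; the file name and size here simply mirror the dd run) can issue one synchronous 512-byte write at a time, roughly matching dd with oflag=dsync:

fio --name=dsync-test --filename=disk-test --rw=write --bs=512 --size=512k \
    --ioengine=sync --fdatasync=1 --iodepth=1 --numjobs=1

With a spinning NL-SAS drive and no write-back cache in the path, the resulting IOPS should be close to the ~45 writes/second implied by the dd result above (1000 writes in ~22 s).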



When checking on an OSD node, under the OSD data directory, we observed the same low disk write speeds.


[root@storage01moc ~]# cd /var/lib/ceph/osd/ceph-26
[root@storage01moc ceph-26]# dd if=/dev/zero of=disk-test bs=512 count=1000 oflag=direct ; dd if=/dev/zero of=disk-test bs=512 count=1000 oflag=dsync ; dd if=/dev/zero of=disk-test bs=512 count=1000
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 14.6416 s, 35.0 kB/s
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 9.93967 s, 51.5 kB/s
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.00591158 s, 86.6 MB/s
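The OSD's own write path can also be benchmarked without going through a file in the data directory. Assuming osd.26 from the path above (the byte count and block size below are arbitrary), something like this reports the raw OSD write throughput:

ceph tell osd.26 bench 12288000 4096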



We suspect the cause of the issue is that no hardware write caching is available when the individual OSD drives are not configured with any RAID level (such as single-drive RAID 0).
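One way to verify this is to check whether the drives' own volatile write cache (WCE) is enabled. On SAS drives, something along these lines should show it (/dev/sdb is just a placeholder for one of the OSD drives):

sdparm --get=WCE /dev/sdb
smartctl -g wcache /dev/sdb

If the write cache is disabled and there is no controller write-back cache in front of the drive, every O_DSYNC write has to reach the platters, which matches the observed tens-of-kB/s figures at a 512-byte block size.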



Ceph Configurations


[global]
fsid = ....
mon initial members = ...
mon host = ....
public network = ...
cluster network = ...
mon_pg_warn_max_object_skew=500

[osd]
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = noatime,largeio,inode64,swalloc
osd journal size = 10240

[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd_concurrent_management_ops = 20



Disk Details


=== START OF INFORMATION SECTION ===
Vendor: TOSHIBA
Product: MG04SCA60EE
Revision: DR07
Compliance: SPC-4
User Capacity: 6,001,175,126,016 bytes [6.00 TB]
Logical block size: 512 bytes
Physical block size: 4096 bytes
Formatted with type 2 protection
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Wed Aug 1 20:59:52 2018 +08
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Disabled or Not Supported



Please let me know: if we remove the OSDs, configure each drive as a single-drive RAID 0 on the controller, and recreate the OSDs, will it help increase the disk write speed?



Thanks in advance.




1 Answer



When we configured each OSD drive as a single-drive RAID 0 on the storage controller, the disk write issue was resolved.



The reason for the slowness was that the RAID controller's write cache is not applied to drives that are not configured with any RAID level.
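For reference, on Dell PERC (LSI/Avago-based) controllers the cache policy of the per-drive RAID 0 virtual disks can be checked and, if the controller supports it, switched to write-back with perccli/storcli; the controller and virtual-disk indices below are placeholders:

perccli /c0/vall show all
perccli /c0/vall set wrcache=wb

A battery- or flash-backed cache is normally required for write-back to be safe; without one, the controller may fall back to write-through.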






