How to use fio
eMMC throughput writing to QNX partition
time sh -c "dd if=/dev/zero of=/dev/hd0.qnx6.ifs_qnxhyp_b bs=4096 count=12700 && sync"
The system or data partition can also be used for the write test, since they are larger. The other partitions are only around 10 MB, so the same check can be run against them with a correspondingly smaller transfer size. Check the available space first with the df -h command.
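As a minimal sketch of sizing the transfer before running dd (here /tmp stands in for the partition under test, e.g. /data on the target; the mount point is an assumption):

```shell
# Show free space on the target filesystem before picking a dd transfer size.
# Substitute the partition you actually intend to test for /tmp.
df -h /tmp

# bs=4096 count=12700 moves about 49.6 MiB; make sure the target reports at
# least that much available space before running the write test.
```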
eMMC throughput reading from QNX partition
time sh -c "dd of=/dev/null if=/dev/hd0.qnx6.ifs_qnxhyp_b bs=4096 count=12700 && sync"
eMMC throughput writing to Android partition
time sh -c "dd if=/dev/zero of=/dev/block/vda4 bs=4096 count=12700 && sync"
eMMC throughput reading from Android partition
time sh -c "dd of=/dev/null if=/dev/block/vda4 bs=4096 count=12700 && sync"
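dd (via time) reports the elapsed seconds; throughput is simply bytes moved divided by elapsed time. A small sketch of the arithmetic (the 1.05 s elapsed value is a made-up example, not a measurement from the device):

```shell
# Throughput = (bs * count) / elapsed_seconds.
# bs=4096 count=12700 moves 52,019,200 bytes (~49.6 MiB).
BS=4096
COUNT=12700
ELAPSED=1.05   # example value; take this from the real time reported

awk -v bs="$BS" -v count="$COUNT" -v t="$ELAPSED" \
    'BEGIN { printf "%.1f MiB/s\n", (bs * count) / t / 1048576 }'
# prints 47.2 MiB/s for this example
```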
Verify QNX eMMC partitions are not available to Android
From the Android console:
time sh -c "dd if=/dev/zero of=TBD bs=512000 count=1024 && sync"
From the QNX console:
time sh -c "dd if=/dev/zero of=TBD bs=512000 count=1024 && sync"
Read from an Android eMMC partition and a QNX partition simultaneously
From the Android console:
time sh -c "dd of=/dev/null if=/dev/block/vda4 bs=4096 count=12700 && sync"
From the QNX console:
time sh -c "dd of=/dev/null if=/dev/hd0.qnx6.ifs_qnxhyp_b bs=4096 count=12700 && sync"
Verify QNX eMMC partitions are not available to non-root Android processes.
QNX side: list the contents of /dev/hd0 (tab completion) to get the full list of partitions; some are QNX-specific and some are Android-specific. On the Android side these appear as vda partitions, and vda3 to vda14 are the QNX-owned ones. Android should not be able to write to a QNX partition except as root.
Android console: run ls /dev/block/vda (tab completion). The complete list should be visible, but writing to any partition from vda3 to vda14 should fail for everything except su (root).
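The Android-side check above can be scripted. A hedged sketch (the vda3 to vda14 range is taken from the text; on a host without these device nodes every attempt also fails, exercising the same "denied" branch):

```shell
# Attempt a one-block write to each QNX-owned partition node. Run as a
# non-root Android process, every attempt should be denied; any line
# reporting WRITABLE indicates a partition that is wrongly exposed.
for n in 3 4 5 6 7 8 9 10 11 12 13 14; do
  dev="/dev/block/vda$n"
  if dd if=/dev/zero of="$dev" bs=4096 count=1 conv=notrunc 2>/dev/null; then
    echo "WRITABLE (unexpected): $dev"
  else
    echo "denied: $dev"
  fi
done
```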
titan_vhywrd_a0:/storage/self/primary # time dd if=/dev/zero of=testfile.img bs=1024 count=102400
102400+0 records in
102400+0 records out
104857600 bytes (100 M) copied, 0.990338 s, 101 M/s
0m01.01s real 0m00.02s user 0m00.26s system
titan_vhywrd_a0:/storage/self/primary # time dd if=testfile.img of=/dev/null bs=1024 count=102400
102400+0 records in
102400+0 records out
104857600 bytes (100 M) copied, 0.273368 s, 366 M/s
0m00.29s real 0m00.03s user 0m00.21s system
fio in Android
--------------
fio random read
---------------
# ./fio -filename=/data/test_rand -direct=1 -iodepth=1 -thread -rw=randread -ioengine=psync -bs=1M -size=100M -numjobs=4 -group_reporting -name=rand_read
rand_read: (g=0): rw=randread, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
...
fio-3.28-128-gfab60
Starting 4 threads
rand_read: Laying out IO file (1 file / 100MiB)
Jobs: 4 (f=4): [r(4)][87.5%][r=61.9MiB/s][r=61 IOPS][eta 00m:01s]
rand_read: (groupid=0, jobs=4): err= 0: pid=7477: Mon Nov 8 22:14:24 2021
read: IOPS=62, BW=62.2MiB/s (65.2MB/s)(400MiB/6435msec)
clat (msec): min=14, max=163, avg=62.66, stdev=21.32
lat (msec): min=14, max=163, avg=62.66, stdev=21.32
clat percentiles (msec):
| 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 46],
| 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 66],
| 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 91], 95.00th=[ 102],
| 99.00th=[ 123], 99.50th=[ 138], 99.90th=[ 163], 99.95th=[ 163],
| 99.99th=[ 163]
bw ( KiB/s): min=53165, max=73687, per=99.93%, avg=63607.67, stdev=1857.41, samples=48
iops : min= 49, max= 71, avg=60.50, stdev= 1.87, samples=48
lat (msec) : 20=1.25%, 50=29.75%, 100=63.50%, 250=5.50%
cpu : usr=0.04%, sys=1.12%, ctx=425, majf=0, minf=1030
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=62.2MiB/s (65.2MB/s), 62.2MiB/s-62.2MiB/s (65.2MB/s-65.2MB/s), io=400MiB (419MB), run=6435-6435msec
Disk stats (read/write):
sda: ios=12490/6, merge=0/6, ticks=686299/203, in_queue=661588, util=98.44%
titan_vfmt_ghqc:/data #
titan_vfmt_ghqc:/data # ./fio -filename=/data/test_rand -direct=1 -iodepth=1 -thread -rw=randread -ioengine=psync -bs=1M -size=100M -numjobs=4 -group_reporting -name=rand_read
rand_read: (g=0): rw=randread, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
...
fio-3.28-128-gfab60
Starting 4 threads
Jobs: 4 (f=4): [r(4)][100.0%][r=61.0MiB/s][r=61 IOPS][eta 00m:00s]
rand_read: (groupid=0, jobs=4): err= 0: pid=7846: Mon Nov 8 22:16:06 2021
read: IOPS=59, BW=59.9MiB/s (62.9MB/s)(400MiB/6673msec)
clat (msec): min=14, max=149, avg=65.88, stdev=20.80
lat (msec): min=14, max=149, avg=65.88, stdev=20.80
clat percentiles (msec):
| 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 42], 20.00th=[ 49],
| 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 65], 60.00th=[ 69],
| 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 104],
| 99.00th=[ 131], 99.50th=[ 134], 99.90th=[ 150], 99.95th=[ 150],
| 99.99th=[ 150]
bw ( KiB/s): min=49077, max=73687, per=99.64%, avg=61161.61, stdev=1923.48, samples=51
iops : min= 45, max= 71, avg=58.53, stdev= 1.97, samples=51
lat (msec) : 20=0.25%, 50=23.50%, 100=69.25%, 250=7.00%
cpu : usr=0.04%, sys=0.98%, ctx=434, majf=0, minf=1031
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=59.9MiB/s (62.9MB/s), 59.9MiB/s-59.9MiB/s (62.9MB/s-62.9MB/s), io=400MiB (419MB), run=6673-6673msec
Disk stats (read/write):
sda: ios=12730/2, merge=0/4, ticks=723659/80, in_queue=698188, util=98.63%
fio sequential write
---------------------
./fio -filename=/data/test_ordwrite -direct=1 -iodepth=1 -thread -rw=write -ioengine=psync -bs=1M -size=100M -numjobs=4 -group_reporting -name=seq_write
fio sequential read
-------------------
./fio -filename=/data/test_ordread -direct=1 -iodepth=1 -thread -rw=read -ioengine=psync -bs=1M -size=100M -numjobs=4 -group_reporting -name=seq_read
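For completeness, the same flags can be expressed as a fio job file, here filled in for the random-write case the sections above do not cover. This is a sketch mirroring the command-line options used throughout, not output from the device; the filename /data/test_rand_write is an assumption.

```ini
; rand_write.fio -- hypothetical job file matching the flag style above.
; Run with: ./fio rand_write.fio
[rand_write]
filename=/data/test_rand_write   ; made-up test file path
direct=1
iodepth=1
thread
rw=randwrite
ioengine=psync
bs=1M
size=100M
numjobs=4
group_reporting
```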