Benchmark JuiceFS on AWS (1)

Tried JuiceFS v1.0.2 with a PostgreSQL metadata database, which runs on a separate t3a.medium instance in the same VPC but a different availability zone.

Used a t2.micro instance to run the benchmark. Mounted EFS as the cache directory, which provides about 100 MB/s of throughput.
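
For reference, the volume was created and mounted with commands roughly like the ones below; the bucket URL, PostgreSQL credentials, cache path, and mount point are placeholders rather than the exact values used (S3 credentials are omitted here; juicefs format also accepts them via --access-key / --secret-key).

$ juicefs format --storage s3 \
    --bucket https://my-bucket.s3.us-east-1.amazonaws.com \
    "postgres://jfs:mypassword@pg-host.example.com:5432/juicefs" \
    jfs-test

$ juicefs mount -d --cache-dir /mnt/efs/jfscache \
    "postgres://jfs:mypassword@pg-host.example.com:5432/juicefs" \
    ./jfs-test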

$ juicefs bench jfs-test -p 2

Cleaning kernel cache, may ask for root privilege...
Write big blocks count: 2048 / 2048 [==============================================================] done
Read big blocks count: 2048 / 2048 [==============================================================] done
Write small blocks count: 200 / 200 [==============================================================] done
Read small blocks count: 200 / 200 [==============================================================] done
Stat small files count: 200 / 200 [==============================================================] done
Benchmark finished!
BlockSize: 1 MiB, BigFileSize: 1024 MiB, SmallFileSize: 128 KiB, SmallFileCount: 100, NumThreads: 2
Time used: 64.2 s, CPU: 43.4%, Memory: 679.5 MiB
+------------------+------------------+----------------+
|       ITEM       |       VALUE      |      COST      |
+------------------+------------------+----------------+
|   Write big file |     114.83 MiB/s |   17.84 s/file |
|    Read big file |      73.38 MiB/s |   27.91 s/file |
| Write small file |     19.9 files/s | 100.29 ms/file |
|  Read small file |     99.2 files/s |  20.16 ms/file |
|        Stat file |    450.6 files/s |   4.44 ms/file |
|   FUSE operation | 37855 operations |     2.81 ms/op |
|      Update meta |  4412 operations |    17.85 ms/op |
|       Put object |   712 operations |   507.14 ms/op |
|       Get object |   516 operations |   173.71 ms/op |
|    Delete object |     0 operations |     0.00 ms/op |
| Write into cache |   351 operations |   100.58 ms/op |
|  Read from cache |   196 operations |     6.98 ms/op |
+------------------+------------------+----------------+

Change -p to 4, i.e. run the benchmark with 4 concurrent threads.

$ juicefs bench jfs-test -p 4
Cleaning kernel cache, may ask for root privilege...
Write big blocks count: 4096 / 4096 [==============================================================] done
Read big blocks count: 4096 / 4096 [==============================================================] done
Write small blocks count: 400 / 400 [==============================================================] done
Read small blocks count: 400 / 400 [==============================================================] done
Stat small files count: 400 / 400 [==============================================================] done
Benchmark finished!
BlockSize: 1 MiB, BigFileSize: 1024 MiB, SmallFileSize: 128 KiB, SmallFileCount: 100, NumThreads: 4
Time used: 97.4 s, CPU: 49.8%, Memory: 560.9 MiB
+------------------+------------------+----------------+
|       ITEM       |       VALUE      |      COST      |
+------------------+------------------+----------------+
|   Write big file |     114.89 MiB/s |   35.65 s/file |
|    Read big file |     116.10 MiB/s |   35.28 s/file |
| Write small file |     35.1 files/s | 114.12 ms/file |
|  Read small file |    131.4 files/s |  30.44 ms/file |
|        Stat file |    874.9 files/s |   4.57 ms/file |
|   FUSE operation | 75752 operations |     4.28 ms/op |
|      Update meta |  8842 operations |    17.43 ms/op |
|       Put object |  1424 operations |   509.62 ms/op |
|       Get object |  1107 operations |   593.01 ms/op |
|    Delete object |     0 operations |     0.00 ms/op |
| Write into cache |   669 operations |    73.71 ms/op |
|  Read from cache |   317 operations |     9.26 ms/op |
+------------------+------------------+----------------+

With 8 threads.

$ juicefs bench jfs-test -p 8
Cleaning kernel cache, may ask for root privilege...
Write big blocks count: 8192 / 8192 [==============================================================] done
Read big blocks count: 8192 / 8192 [==============================================================] done
Write small blocks count: 800 / 800 [==============================================================] done
Read small blocks count: 800 / 800 [==============================================================] done
Stat small files count: 800 / 800 [==============================================================] done
Benchmark finished!
BlockSize: 1 MiB, BigFileSize: 1024 MiB, SmallFileSize: 128 KiB, SmallFileCount: 100, NumThreads: 8
Time used: 189.0 s, CPU: 49.9%, Memory: 502.6 MiB
+------------------+-------------------+----------------+
|       ITEM       |       VALUE       |      COST      |
+------------------+-------------------+----------------+
|   Write big file |      117.95 MiB/s |   69.45 s/file |
|    Read big file |      114.59 MiB/s |   71.49 s/file |
| Write small file |      40.1 files/s | 199.39 ms/file |
|  Read small file |     240.1 files/s |  33.33 ms/file |
|        Stat file |    1256.4 files/s |   6.37 ms/file |
|   FUSE operation | 151773 operations |     8.35 ms/op |
|      Update meta |  17826 operations |    24.64 ms/op |
|       Put object |   2858 operations |   497.79 ms/op |
|       Get object |   2259 operations |   968.98 ms/op |
|    Delete object |      0 operations |     0.00 ms/op |
| Write into cache |   1184 operations |    88.53 ms/op |
|  Read from cache |    604 operations |    14.63 ms/op |
+------------------+-------------------+----------------+

As we can see, the Write big file throughput barely changes as the thread count increases. Since the t2.micro has only 1 vCPU, the write path is limited by CPU.
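
One way to confirm this, not captured in the runs above, is to watch CPU utilization in a second terminal while the benchmark is running, for example with mpstat from the sysstat package (or top); if user plus system time stays pinned near 100% on the single vCPU, the write path is CPU-bound.

$ mpstat 1    # print overall CPU utilization once per second while juicefs bench runs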

The Stat file throughput and the FUSE operation / Update meta counts scale nearly linearly with the number of threads, because metadata operations are handled by the PostgreSQL database, which is not a bottleneck in this test.
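
JuiceFS also ships a built-in real-time view of these counters for a mounted volume, which makes it easy to watch metadata, FUSE, and object-store activity while a benchmark is running (the path below is the mount point used in this test):

$ juicefs stats jfs-test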

In conclusion, JuiceFS is a good candidate for cloud applications that read heavily from an object store (AWS S3, etc.). Such programs can also be optimized to make heavy use of symbolic links, since creating a symlink is only a database (metadata) operation rather than real I/O. Meanwhile, a sufficiently large and fast cache is also needed for the best write performance.
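
To illustrate the symbolic-link point: creating a symlink inside a JuiceFS mount only inserts a record through the metadata engine (PostgreSQL here); no object is written to S3. The paths below are made up for illustration:

$ ln -s datasets/v1/huge-input.bin ./jfs-test/current-input.bin    # metadata-only operation, no data copied and no S3 request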

posted on 2022-12-20 19:52 by Bo Schwarzstein