---
layout: post
title: "Boost ZFS Performance with a Special VDEV in TrueNAS"
date: 2025-05-10 08:00:00 -0500
categories: homelab
tags: homelab zfs truenas
image:
  path: /assets/img/headers/special-vdev-truenas-hero.webp
  lqip: data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAf/AABEIAAUACgMBEQACEQEDEQH/xAGiAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgsQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5+gEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoLEQACAQIEBAMEBwUEBAABAncAAQIDEQQFITEGEkFRB2FxEyIygQgUQpGhscEJIzNS8BVictEKFiQ04SXxFxgZGiYnKCkqNTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqCg4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2dri4+Tl5ufo6ery8/T19vf4+fr/2gAMAwEAAhEDEQA/APsfRP2MP2ffjt+z78O/DfxF0bxpeaf4q+HHw5udauNJ8bTaXqTtZaMmrW0dlcNpV5a2kUd+YZdpsJz5cmpRKyi9tzp/7X4hZxxtlOZcd18BxfjKlGni+I6WGyvMqFbE5RT5sxxPs5VaOAx+VY+p7FeyjFUMzwjccPQSceWXN+FeH3B/COY0uDp47K8ZKpVwOU161fD5hQpVpSWXRqNU/rGXYuhCLqTnJKph61vaTTbunH8Nvib+zN8IPA/xI+IPgvw/Z+L49A8IeN/FnhfQ477xTFd3qaRoGvX+k6al5drolut1dLZWkK3FwIIRNKHkEUYbYP7F4BqYuvwJwVXxVWlPE1uEuHKuInSji4Up16mT4OdadKFfH4uvCnKo5OEa2KxNWMWlUxFaadSX8xcc4LB4XjbjHC4WnUhhsNxTxBQw8KssPOpGhRzbF06Uak6eEoU5zjTjFTnCjRhKSbjSpxagv//Z
---

Curious about how a special metadata VDEV can boost ZFS performance, especially with spinning disks? In this video, I walk through what it is, why it matters, and how to set it up in TrueNAS, both in the UI and from the terminal. I'll also share real-world benchmarks comparing pools with and without a special VDEV, so you can see the difference for yourself.

{% include embed/youtube.html id='2PdLHsSRHto' %}

📺 [Watch Video](https://www.youtube.com/watch?v=2PdLHsSRHto)

## Testing

I created a test script that you can find here: <https://github.com/techno-tim/zfs-tools>

The test script will try to create pools from three drives: `HDD1`, `HDD2`, and `NVME_SPECIAL`. You can modify these to match your disk IDs.
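
If you'd rather see the idea before reading the script, here's a minimal sketch of the kind of pools it builds. The disk IDs below are placeholders, and the exact commands in the repo may differ:

```shell
# Placeholder disk IDs -- replace with real entries from /dev/disk/by-id/
HDD1="/dev/disk/by-id/ata-EXAMPLE-HDD-1"
HDD2="/dev/disk/by-id/ata-EXAMPLE-HDD-2"
NVME_SPECIAL="/dev/disk/by-id/nvme-EXAMPLE-SSD"

# test-1: plain single-disk pool (the baseline)
zpool create test-1 "$HDD1"

# test-2: the same single disk, plus a special vdev for metadata and small blocks
zpool create test-2 "$HDD2" special "$NVME_SPECIAL"
zfs set special_small_blocks=128k test-2
```

Keep in mind that a special VDEV is pool-critical: if it fails, the whole pool is lost, so outside of throwaway benchmarks like these it should be at least a mirror.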

You can find your disk IDs by running:

```shell
ls -l /dev/disk/by-id/
```

You can also adjust the number of test files by changing `TEST_COUNT`; I found that `100,000` is a good number to get consistent results.

Update the script with your disk IDs.

You can then run the script:
35+
```shell
36+
chmod +x zfs-metadata.sh # make it executable
37+
./zfs-metadata.sh # run the script
38+
```

It will create two pools, `test-1` and `test-2`; `test-2` has the special VDEV.

This will take a long time to run depending on your system, disks, and `TEST_COUNT`.

When it's done, you will find the results in `/mnt/test-results/`.
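
The 4K/iodepth numbers below are the kind of thing `fio` reports. If you want to spot-check a pool by hand instead of running the full script, an invocation along these lines exercises the same random-read pattern. This is my own sketch, not necessarily the exact job the script runs:

```shell
# Hypothetical one-off 4K random-read test against the test-1 pool mount
fio --name=randread-4k \
    --directory=/mnt/test-1 \
    --rw=randread --bs=4k --iodepth=1 \
    --size=2g --runtime=60 --time_based \
    --group_reporting
```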

If you want to check the `special_small_blocks` value for your pool:

List the pool:

```shell
zpool list -v pond # change based on your pool name
```

Get the value:

```shell
zfs get special_small_blocks pond -r # change based on your pool name
```

Set the value:

```shell
zfs set special_small_blocks=128k pond # change based on your pool name and the small block value you want to use
```
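
Two caveats worth knowing when you change this value: `special_small_blocks` only applies to data written after you set it, and if you set it greater than or equal to a dataset's `recordsize`, every block of that dataset will land on the special VDEV. You can check both properties at once:

```shell
# Show recordsize alongside special_small_blocks for the pool/dataset
zfs get recordsize,special_small_blocks pond # change based on your pool name
```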

## My Test Results

Here are my results from the video.

- `test-1` pool was a single 14TB EXOS drive
- `test-2` pool was a single 14TB EXOS drive + a Samsung 990 Pro NVMe

### Random Read (4K, iodepth=1)

| Metric                | test-1 (HDD only) | test-2 (Special VDEV) | Improvement |
|-----------------------|-------------------|-----------------------|-------------|
| **IOPS**              | 4,331             | 57,200                | +1,220%     |
| **Bandwidth**         | 16.9 MiB/s        | 223 MiB/s             | +1,220%     |
| **Avg Latency**       | 865.88 µs         | 66.35 µs              | −92%        |
| **99% Latency**       | 8.0 µs            | 2.8 µs                | −65%        |
| **Read IO Completed** | 2.0 GiB           | 26.2 GiB              | +1,210%     |
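
If you want to sanity-check the improvement columns in these tables yourself, each one is just a percentage change over the `test-1` baseline. A quick check in plain Python against the IOPS and latency figures above:

```python
def pct_change(before: float, after: float) -> float:
    """Percentage change from the test-1 baseline to the test-2 result."""
    return (after - before) / before * 100

# Values taken from the random-read table above
print(round(pct_change(4331, 57200)))    # IOPS: about +1221%, reported as +1,220%
print(round(pct_change(865.88, 66.35)))  # average latency: about -92%
```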

### Random Write (4K, iodepth=1)

| Metric          | test-1 (HDD only) | test-2 (Special VDEV) | Change         |
|-----------------|-------------------|-----------------------|----------------|
| **IOPS**        | 209               | 195                   | −6.7%          |
| **Bandwidth**   | 838 KiB/s         | 782 KiB/s             | −6.7%          |
| **Avg Latency** | 18.8 ms           | 20.2 ms               | +7.4% (worse)  |
| **99% Latency** | 10.18 µs          | 10.82 µs              | Slightly worse |

### Random Write (4K, iodepth=16)

| Metric             | test-1 (HDD only) | test-2 (Special VDEV) | Improvement |
|--------------------|-------------------|-----------------------|-------------|
| **IOPS**           | 210               | 229                   | +9%         |
| **Bandwidth**      | 840 KiB/s         | 919 KiB/s             | +9%         |
| **Avg Latency**    | 303.8 ms          | 277.4 ms              | −8.7%       |
| **99% Latency**    | 701 ms            | 376 ms                | −46.4%      |
| **99.95% Latency** | 776 ms            | 443 ms                | −43%        |
| **Max Latency**    | 842 ms            | 516 ms                | −38.7%      |

### Random Read/Write (4K, iodepth=16)

| Metric                | test-1 (HDD only) | test-2 (Special VDEV) | Improvement |
|-----------------------|-------------------|-----------------------|-------------|
| **Read IOPS**         | 196               | 247                   | +26%        |
| **Write IOPS**        | 196               | 246                   | +25%        |
| **Read Bandwidth**    | 788 KiB/s         | 989 KiB/s             | +25.5%      |
| **Write Bandwidth**   | 786 KiB/s         | 986 KiB/s             | +25.5%      |
| **Avg Read Latency**  | 160.15 ms         | 127.45 ms             | −20%        |
| **Avg Write Latency** | 163.93 ms         | 130.24 ms             | −20.5%      |
| **99% Read Latency**  | 502 ms            | 207 ms                | −58.8%      |
| **99% Write Latency** | 506 ms            | 209 ms                | −58.7%      |

### Metadata – Random Access (20,000 files)

| Pool   | Duration | Improvement   |
|--------|----------|---------------|
| test-1 | 71.91 s  | baseline      |
| test-2 | 65.73 s  | +8.6% faster  |

### Metadata – Sequential Access (20,000 files)

| Pool   | Duration | Improvement   |
|--------|----------|---------------|
| test-1 | 139.80 s | baseline      |
| test-2 | 81.62 s  | +41.6% faster |
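
The metadata tests above essentially touch the metadata of 20,000 files. As a rough stand-in (my own sketch, not the repo's benchmark), you can time a full stat sweep of a dataset; on a spinning disk the wall-clock time is dominated by metadata reads, which is exactly what a special VDEV serves from NVMe instead:

```shell
# stat_all DIR: read the metadata of every regular file under DIR,
# then print how many files were touched. Wrap the call in `time` to measure.
stat_all() {
  find "$1" -type f -print0 | xargs -0 stat > /dev/null
  find "$1" -type f | wc -l
}

# Example: time stat_all /mnt/test-1
```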

## 📦 Products in this video 📦

While enterprise gear is great for businesses, I have found that consumer gear works great at home if you have a good warranty, redundancy, and an understanding of the trade-offs.

- Samsung 990 Pro NVMe: <https://amzn.to/4d9JKXk> (affiliate link)

## Join the conversation

<blockquote class="twitter-tweet" data-dnt="true" data-theme="dark"><p lang="en" dir="ltr">I tested how much a special metadata VDEV can actually speed up ZFS. Turns out it makes directory browsing faster, snappier containers, and a smart use of NVMe. I built a test script, ran benchmarks, and walk through it all in <a href="https://twitter.com/TrueNAS?ref_src=twsrc%5Etfw">@TrueNAS</a> <a href="https://t.co/czyXEFkSd3">https://t.co/czyXEFkSd3</a> <a href="https://t.co/6hpIve8HR9">pic.twitter.com/6hpIve8HR9</a></p>&mdash; Techno Tim (@TechnoTimLive) <a href="https://twitter.com/TechnoTimLive/status/1921242653628199171?ref_src=twsrc%5Etfw">May 10, 2025</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

## Links

🛍️ Check out the new Merch Shop at <https://l.technotim.live/shop>

⚙️ See all the hardware I recommend at <https://l.technotim.live/gear>

🚀 Don't forget to check out the [🚀Launchpad repo](https://l.technotim.live/quick-start) with all of the quick start source files

🤝 Support me and [help keep this site ad-free!](/sponsor)