14 Mar 2021

Building a storage multimedia server - Part 2

Finally, all parts were received and I found some time to assemble everything, which worked perfectly, no issues at all.


I prepared a USB drive with the latest Debian 10 netinstall image and started the installation.

During the installation, I learned that with those NVMe SSDs, legacy boot is no longer an option, so it was my first time installing Debian in UEFI mode. I read several discussions on whether to put /boot/efi on a RAID1 or not. I decided to set up two separate EFI partitions and only put the root filesystem on RAID1.


The final partition layout looks like this:

nvme0n1           259:0   0 232.9G  0 disk
├─nvme0n1p1       259:1   0   953M  0 part
└─nvme0n1p2       259:2   0   232G  0 part
  └─md0           9:0     0 231.8G  0 raid1
    ├─system-swap 253:0   0   7.6G  0 lvm   [SWAP]
    └─system-root 253:1   0 224.2G  0 lvm   /
nvme1n1           259:3   0 232.9G  0 disk
├─nvme1n1p1       259:4   0   953M  0 part  /boot/efi
└─nvme1n1p2       259:5   0   232G  0 part
  └─md0           9:0     0 231.8G  0 raid1
    ├─system-swap 253:0   0   7.6G  0 lvm   [SWAP]
    └─system-root 253:1   0 224.2G  0 lvm   /
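The Debian installer set all of this up for me, but for reference, the equivalent on the command line looks roughly like this (a sketch; the LV sizes are taken from the layout above):

```shell
# RAID1 across the second partition of both NVMe drives.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/nvme0n1p2 /dev/nvme1n1p2

# LVM on top of the array: one volume group "system"
# holding the swap and root logical volumes.
pvcreate /dev/md0
vgcreate system /dev/md0
lvcreate -L 7.6G -n swap system
lvcreate -l 100%FREE -n root system
```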

After doing all the basic stuff like Puppet and monitoring, I read up on how to set up hardware acceleration (HWA) for video encoding/decoding using FFmpeg.

Jellyfin has good documentation for that.

To make full use of HWA, you need to switch to the non-free iHD driver. The problem was that the kernel and driver in Debian 10 are too old for my setup, so I had to switch to Debian Testing (bullseye).
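On bullseye, getting the iHD driver in place boils down to a couple of commands (assuming the non-free repository component is already enabled in sources.list):

```shell
# Install the non-free Intel iHD VA-API driver and the vainfo tool.
apt install intel-media-va-driver-non-free vainfo

# Verify that VA-API picks up the iHD driver and list the
# supported encode/decode profiles.
LIBVA_DRIVER_NAME=iHD vainfo
```

If vainfo lists the expected H.264/HEVC encode and decode entrypoints, the driver side is done.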

Installing Jellyfin and enabling HWA was pretty straightforward. No issues there.

After that I finally ran some tests and everything worked as expected. I wanted to test some 4K content, but I don't have a device powerful enough to play 4K, so I tried transcoding 4K to 1080p instead. I quickly noticed some weird bugs, especially with the colors. But it looks like Jellyfin 10.7 will address those issues, so I will probably redo the tests with that version.
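Outside of Jellyfin, a VAAPI transcode can be tested directly with FFmpeg, roughly like this (the file names are placeholders; /dev/dri/renderD128 is the usual render node for the first GPU):

```shell
# Transcode 4K to 1080p entirely on the GPU via VAAPI:
# decode, scale, and encode without copying frames back to system memory.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi \
       -i input-4k.mkv \
       -vf 'scale_vaapi=w=1920:h=1080' \
       -c:v h264_vaapi -b:v 8M \
       -c:a copy \
       output-1080p.mkv
```

If this runs at well above realtime speed while the CPU stays mostly idle, hardware acceleration is working.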

I also created the RAID5 with mdadm for my data. After creating an ext4 filesystem on this 26TB block device, I noticed that ext4 was initializing the inode tables in the background. There is a kernel process named ext4lazyinit. It usually runs in the background with low priority, causing only little I/O load. But for 26TB this never came to an end, even after days. The solution is to mount the filesystem with init_itable=0. After a few hours, the process was finished.
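The whole data setup is just a few commands (a sketch; the number of member disks and their device names are assumptions, adjust them to your drives):

```shell
# Create the RAID5 array for the data disks.
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Create the ext4 filesystem on the array.
mkfs.ext4 -L data /dev/md1

# Mount with init_itable=0 so ext4lazyinit zeroes the inode tables
# as fast as possible instead of throttling itself between block groups.
mount -o init_itable=0 /dev/md1 /srv/data
```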

The man page says:
The lazy itable init code will wait n times the number of
milliseconds it took to zero out the previous block
group's inode table. This minimizes the impact on system
performance while the filesystem's inode table is being
initialized.