Gvinum on FreeBSD

Two years ago I documented how I used Vinum on FreeBSD. Since then Vinum has been replaced by Gvinum, although it's not always clear which term to use when. The Handbook documentation isn't easy to follow, either. Luckily, combining my old notes with this helpful tutorial got me to my goal.

I wanted to take two separate partitions, /nsm1 on one disk and /nsm2 on a second disk, and make them look like a single /nsm partition. I had already been using /nsm1, but I was prepared to lose that data since it was only for testing thus far. This is what df -m produced.

cel433:/root# df -m
Filesystem 1M-blocks Used Avail Capacity Mounted on
/dev/ad0s1a 495 36 419 8% /
devfs 0 0 0 100% /dev
/dev/ad0s1f 989 0 910 0% /home
/dev/ad0s1h 10553 8655 1053 89% /nsm1
/dev/ad1s1d 18491 0 17012 0% /nsm2
/dev/ad0s1g 989 25 884 3% /tmp
/dev/ad0s1d 1978 328 1492 18% /usr
/dev/ad0s1e 2973 25 2710 1% /var

Here's bsdlabel output.

cel433:/root# bsdlabel /dev/ad0s1
# /dev/ad0s1:
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
a: 1048576 0 4.2BSD 2048 16384 8
b: 1048576 1048576 swap
c: 39102273 0 unused 0 0 # "raw" part, don't edit
d: 4194304 2097152 4.2BSD 2048 16384 28552
e: 6291456 6291456 4.2BSD 2048 16384 28552
f: 2097152 12582912 4.2BSD 2048 16384 28552
g: 2097152 14680064 4.2BSD 2048 16384 28552
h: 22325057 16777216 4.2BSD 2048 16384 28552
cel433:/root# bsdlabel /dev/ad1s1
# /dev/ad1s1:
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
c: 39102273 0 unused 0 0 # "raw" part, don't edit
d: 39102273 0 4.2BSD 2048 16384 28552

So, I needed to combine /dev/ad0s1h and /dev/ad1s1d into one larger virtual disk.

First I made sure both /nsm1 and /nsm2 were unmounted. Next I edited each disk's label with 'bsdlabel -e'. These were the results.

cel433:/root# bsdlabel /dev/ad0s1
# /dev/ad0s1:
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
a: 1048576 0 4.2BSD 2048 16384 8
b: 1048576 1048576 swap
c: 39102273 0 unused 0 0 # "raw" part, don't edit
d: 4194304 2097152 4.2BSD 2048 16384 28552
e: 6291456 6291456 4.2BSD 2048 16384 28552
f: 2097152 12582912 4.2BSD 2048 16384 28552
g: 2097152 14680064 4.2BSD 2048 16384 28552
h: 22325057 16777216 vinum
cel433:/root# bsdlabel /dev/ad1s1
# /dev/ad1s1:
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
c: 39102273 0 unused 0 0 # "raw" part, don't edit
d: 39102273 0 vinum

That's an example where you use the term 'vinum' even though the implementation is Gvinum.
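
For reference, the edit itself is interactive: 'bsdlabel -e' opens the label in your editor so you can change the fstype of the target partition by hand. A minimal sketch of the two invocations (comments added; the exact keystrokes depend on your editor):

bsdlabel -e /dev/ad0s1    # change partition h from 4.2BSD to vinum
bsdlabel -e /dev/ad1s1    # change partition d from 4.2BSD to vinum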

Then I created /etc/gvinum.conf, describing the single large nsm volume I wanted built from both drives. I used the size figures from the df -m output shown earlier.

cel433:/root# cat /etc/gvinum.conf
drive drive1 device /dev/ad0s1h
drive drive2 device /dev/ad1s1d
volume nsm
plex org concat
sd length 10553m drive drive1
sd length 18491m drive drive2

Now I loaded the Gvinum kernel module and invoked gvinum:

cel433:/root# kldload geom_vinum
cel433:/root# kldstat
Id Refs Address Size Name
1 4 0xc0400000 691a48 kernel
2 1 0xc0a92000 58554 acpi.ko
3 1 0xc1d2c000 10000 geom_vinum.ko
cel433:/root# gvinum create /etc/gvinum.conf
2 drives:
D drive2 State: up /dev/ad1s1 A: 601/19092 MB (3%)
D drive1 State: up /dev/ad0s1h A: 347/10900 MB (3%)

1 volume:
V nsm State: up Plexes: 1 Size: 28 GB

1 plex:
P nsm.p0 C State: up Subdisks: 2 Size: 28 GB

2 subdisks:
S nsm.p0.s1 State: up D: drive2 Size: 18 GB
S nsm.p0.s0 State: up D: drive1 Size: 10 GB

That's good news. Time to prepare /dev/gvinum/nsm for data.

cel433:/root# newfs /dev/gvinum/nsm
/dev/gvinum/nsm: 29044.0MB (59482112 sectors) block size 16384, fragment size 2048
using 159 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
super-block backups (for fsck -b #) at:
160, 376512, 752864, 1129216, 1505568, 1881920, 2258272, 2634624, 3010976,
3387328, 3763680, 4140032, 4516384, 4892736, 5269088, 5645440, 6021792,
6398144, 6774496, 7150848, 7527200, 7903552, 8279904, 8656256, 9032608,
9408960, 9785312, 10161664, 10538016, 10914368, 11290720, 11667072, 12043424,
...truncated...

Finally I created a /nsm mount point and mounted the new drive.

cel433:/root# mkdir /nsm
cel433:/root# mount /dev/gvinum/nsm /nsm
cel433:/root# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 496M 36M 420M 8% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad0s1f 989M 74K 910M 0% /home
/dev/ad0s1g 989M 26M 885M 3% /tmp
/dev/ad0s1d 1.9G 328M 1.5G 18% /usr
/dev/ad0s1e 2.9G 25M 2.6G 1% /var
/dev/gvinum/nsm 27G 4.0K 25G 0% /nsm

To enable Gvinum at boot, I added the following to /boot/loader.conf:

geom_vinum_load="YES"

I also added this entry to /etc/fstab:

/dev/gvinum/nsm /nsm ufs rw 2 2

Unfortunately, after a reboot, I had problems with the new /nsm:

Nov 9 15:52:38 cel433 kernel: GEOM_VINUM: subdisk nsm.p0.s1 state change: down -> stale
Nov 9 15:52:38 cel433 kernel: GEOM_VINUM: subdisk nsm.p0.s0 state change: down -> stale
Nov 9 15:52:47 cel433 kernel: g_vfs_done():gvinum/nsm[READ(offset=65536, length=8192)]error = 6
Nov 9 15:52:56 cel433 kernel: g_vfs_done():gvinum/nsm[READ(offset=65536, length=8192)]error = 6

When I tried to mount /nsm I got this error (error 6 above is ENXIO, "Device not configured"):

mount: /dev/gvinum/nsm: Device not configured

Gvinum didn't look happy:

cel433:/root# gvinum list
2 drives:
D drive1 State: up /dev/ad0s1h A: 347/10900 MB (3%)
D drive2 State: up /dev/ad1s1 A: 601/19092 MB (3%)

1 volume:
V nsm State: down Plexes: 1 Size: 28 GB

1 plex:
P nsm.p0 C State: down Subdisks: 2 Size: 28 GB

2 subdisks:
S nsm.p0.s0 State: stale D: drive1 Size: 10 GB
S nsm.p0.s1 State: stale D: drive2 Size: 18 GB

Thankfully I found this post which solved the problem.

cel433:/root# gvinum start nsm
2 drives:
D drive1 State: up /dev/ad0s1h A: 347/10900 MB (3%)
D drive2 State: up /dev/ad1s1 A: 601/19092 MB (3%)

1 volume:
V nsm State: up Plexes: 1 Size: 28 GB

1 plex:
P nsm.p0 C State: up Subdisks: 2 Size: 28 GB

2 subdisks:
S nsm.p0.s0 State: up D: drive1 Size: 10 GB
S nsm.p0.s1 State: up D: drive2 Size: 18 GB

Then I was able to access /nsm:

cel433:/root# mount /nsm
cel433:/root# ls -al /nsm
total 6
drwxr-xr-x 3 root wheel 512 Nov 9 15:28 .
drwxr-xr-x 23 root wheel 512 Nov 9 15:29 ..
drwxrwxr-x 2 root operator 512 Nov 9 15:28 .snap
cel433:/root# df -h /nsm
Filesystem Size Used Avail Capacity Mounted on
/dev/gvinum/nsm 27G 4.0K 25G 0% /nsm

This process survived a reboot, so I am all set now.

Comments

Anonymous said…
Personally I would have chosen gconcat(8) instead, which requires only:

gconcat create nsm ad0s1h ad1s1d
newfs /dev/concat/nsm

and of course:

echo 'geom_concat_load="YES"' >> /boot/loader.conf

Or probably even gstripe(8) :)

I really think that vinum's heyday was before GEOM made this kind of stuff absurdly easy.
Hi Ceri,

That is too cool for words. I'd never heard of gconcat until now. It's not mentioned in the Handbook.
Anonymous said…
Yeah, we (read: me) have been somewhat remiss in getting that stuff in. There are a whole bunch of these gfoo commands that implement mirroring, striping, raid3 and, as of last week in -HEAD, journaling.

And nobody knows... :-(
Anonymous said…
ya.. man gstripe
shuttle01# gconcat create -v nsm ad5s1h ad7s1d
Done.
shuttle01# newfs /dev/concat/nsm
newfs: /dev/concat/nsm: could not open special device
shuttle01# kldstat
Id Refs Address Size Name
1 4 0xc0400000 70794c kernel
2 1 0xc0b08000 59f20 acpi.ko
3 1 0xc8301000 5000 geom_concat.ko
shuttle01# ls /dev/concat
nsm

Rats, what's the problem?
Tried the label method after destroying nsm:

shuttle01# gconcat label -v nsm ad5s1h ad7s1d
Can't store metadata on ad5s1h: Operation not permitted.
Here is df -h output for the box I'm trying to set up:

shuttle01# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad5s1a 496M 36M 420M 8% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad5s1f 989M 22K 910M 0% /home
/dev/ad5s1h 54G 4.0K 50G 0% /nsm1
/dev/ad7s1d 361G 4.0K 332G 0% /nsm2
/dev/ad5s1g 989M 12K 910M 0% /tmp
/dev/ad5s1d 1.9G 531M 1.3G 29% /usr
/dev/ad5s1e 4.8G 1.6M 4.5G 0% /var
rwatson pointed out I needed to unmount /nsm1 and /nsm2 first. Dur. Thanks rwatson!
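
Presumably the working sequence then looks something like this (a sketch, unmounting both filesystems before labeling):

umount /nsm1
umount /nsm2
gconcat label -v nsm ad5s1h ad7s1d
newfs /dev/concat/nsm

The label variant also writes metadata to the disks, so with geom_concat_load="YES" in /boot/loader.conf the concat device is assembled automatically at boot, unlike create, which configures it manually without storing metadata.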
Anonymous said…
But note that gconcat (like striping) actually makes you *more* vulnerable to disk loss, since losing a single disk means losing all of your data.
RAID5, which vinum provided, solves this problem.
zilzal,

I know. I think RAID for two drives is silly given the risk. For three drives, I would definitely do RAID.
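
For what it's worth, the three-drive case is also easy with GEOM via graid3(8), one of the gfoo classes mentioned above. A rough sketch, assuming three spare disks with the hypothetical names da0, da1 and da2:

graid3 label -v data da0 da1 da2    # build a RAID3 array named 'data' (hypothetical name)
newfs /dev/raid3/data
mount /dev/raid3/data /mnt
echo 'geom_raid3_load="YES"' >> /boot/loader.conf    # assemble it at boot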
