I'm in the process of switching from a classic mdadm RAID to a ZFS pool, and I've made a few missteps that I'm trying to recover from.
Originally I had two 4 TB drives in a RAID 1 mirror.
I then put two new 4 TB drives in the machine and disconnected the originals. I created a zpool with the new drives in a mirror, but I used /dev/sda and /dev/sdb because that's what the guide I was following told me to do, and I wasn't thinking. So of course when I reconnected the old drives to copy the data over, they took /dev/sdb and /dev/sdc, which pushed one of my two ZFS drives to /dev/sdd. That messed up the pool and left one device showing as UNAVAIL.
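For reference, the create step looked roughly like this (from memory; the by-id paths below are placeholders for what I should have used instead):

    # What I actually ran -- pool built on unstable /dev/sdX names:
    zpool create pool mirror /dev/sda /dev/sdb

    # What I should have run -- stable /dev/disk/by-id paths (IDs are placeholders):
    zpool create pool mirror \
        /dev/disk/by-id/ata-WDC_WD..._1 \
        /dev/disk/by-id/ata-WDC_WD..._2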
After working with someone online I managed to get the pool onto UUID-based device names with zpool export pool followed by zpool import -d /dev/disk/by-uuid pool. That allowed me to detach the UNAVAIL drive, which I then wiped clean and attached back to the pool as a mirror of the first, this time using its /dev/disk/by-id path. After a few days it resilvered successfully.
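The sequence was roughly this (reconstructed from memory; the device placeholders, wipe command, and by-id path are approximate):

    zpool export pool
    zpool import -d /dev/disk/by-uuid pool
    zpool detach pool <unavail-device>       # however the UNAVAIL disk showed in zpool status
    wipefs -a /dev/sdd                       # wiped the old labels off that disk
    zpool attach pool <existing-device> /dev/disk/by-id/ata-WDC_WD...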
Now I have a zpool where one device shows a long integer as its identifier and the other shows a string along the lines of ata-WDC_WD.... I wanted to get them both on the same scheme, so I planned to detach the disk with the integer identifier and re-add it using its /dev/disk/by-id path. However, attempting to detach gives me this error: cannot detach 13419994393693470939: only applicable to mirror and replacing vdevs.
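Concretely, this is what I ran (the long number is the identifier zpool status shows for that disk):

    zpool detach pool 13419994393693470939
    # cannot detach 13419994393693470939: only applicable to mirror and replacing vdevs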
OK, so I tried to replace it with a different drive instead, and got this error: cannot open '13419994393693470939': name must begin with a letter.
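That attempt was roughly this (the new disk's by-id path is abbreviated):

    zpool replace pool 13419994393693470939 /dev/disk/by-id/ata-WDC_WD...
    # cannot open '13419994393693470939': name must begin with a letter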
While the pool is working, I would like everything to be in a consistent state. I could use the two old drives to make a new pool, copy the data back over, destroy the old pool, and then add its drives to the new one (which would also mean renaming the pools, causing some interruption in service in the meantime), but I hope there is a way around this that I just haven't found.
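For completeness, the fallback I'm trying to avoid would look roughly like this (untested; pool and device names below are placeholders):

    # Build a temporary pool on the two old drives (may need -f if mdadm metadata is still on them):
    zpool create pool2 mirror /dev/disk/by-id/ata-OLD_DISK_1 /dev/disk/by-id/ata-OLD_DISK_2

    # Copy everything over with a recursive snapshot send/receive:
    zfs snapshot -r pool@migrate
    zfs send -R pool@migrate | zfs receive -F pool2

    # Destroy the old pool, rename the new one back to the original name,
    # then attach the freed disks to the new pool as needed:
    zpool destroy pool
    zpool export pool2
    zpool import pool2 pool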