LVM: How to remove a physical volume using pvremove
First, I ran into this error: "Can't pvremove physical volume "/dev/sdc1" of volume group "nova-volumes" without -ff" (I'm actually using OpenStack, but that's another story).
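Aside: before doing anything destructive, `pvdisplay` gives a read-only view of what LVM thinks of a device. A quick check, using this post's device name:
~# pvdisplay /dev/sdc1
This reports the PV's size, the VG it belongs to (if any), and how many physical extents are allocated, which tells you whether the steps below are safe.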
So to remove it, first scan the physical volumes:
~# pvscan
PV /dev/sdb1 VG nova-volumes lvm2 [1.82 TiB / 1.82 TiB free]
PV /dev/sda5 VG cloudmaster lvm2 [297.85 GiB / 12.00 MiB free]
PV /dev/sdc1 lvm2 [1.82 TiB]
Total: 3 [3.93 TiB] / in use: 2 [2.11 TiB] / in no VG: 1 [1.82 TiB]
Then, when I try:
~# pvremove /dev/sdc1
Can't pvremove physical volume "/dev/sdc1" of volume group "nova-volumes" without -ff
So instead of forcing it with -ff, I first need to pull the PV out of its volume group:
~# vgreduce nova-volumes /dev/sdc1
Removed "/dev/sdc1" from volume group "nova-volumes"
Then run `pvscan` again:
~# pvscan
PV /dev/sdb1 VG nova-volumes lvm2 [1.82 TiB / 1.82 TiB free]
PV /dev/sda5 VG cloudmaster lvm2 [297.85 GiB / 12.00 MiB free]
PV /dev/sdc1 lvm2 [1.82 TiB]
Total: 3 [3.93 TiB] / in use: 2 [2.11 TiB] / in no VG: 1 [1.82 TiB]
Now I can remove it:
~# pvremove /dev/sdc1
Labels on physical volume "/dev/sdc1" successfully wiped
Then, running `pvscan` one last time:
~# pvscan
PV /dev/sdb1 VG nova-volumes lvm2 [1.82 TiB / 1.82 TiB free]
PV /dev/sda5 VG cloudmaster lvm2 [297.85 GiB / 12.00 MiB free]
Total: 2 [2.11 TiB] / in use: 2 [2.11 TiB] / in no VG: 0 [0 ]
This shows that the /dev/sdc1 physical volume is now gone. Hope this helps.
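For reference, here is the whole procedure condensed; the VG and PV names are placeholders to substitute for your own:
~# vgreduce <vg_name> <pv_device>   # detach the PV from its volume group
~# pvremove <pv_device>             # wipe the LVM label from the device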
Comments
I have an external HDD which, when connected to my laptop, now shows up as LVM2_member. It did not auto-mount, so I opened a terminal to take a look. Trying to mount it from the command line gave: unknown filesystem type 'LVM2_member'.
Now, when I use fdisk to look at the disk, it shows one partition of type HPFS/NTFS/exFAT, which I presume is a Windows filesystem format.
My questions:
1. Is my hard disk bricked, i.e., have I lost all the data?
2. In the happy case that the answer to 1 is "no", can I recover the data?
I am contemplating using pvremove on the disk. Will that wipe the data too? Just want to check before I do anything else stupid.
Btw, pvs shows the partition on the external HDD as a physical volume, but it is not assigned to any VG.
Thanks!
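One caution for situations like this: pvremove wipes the LVM label, so if the partition really is a PV holding logical volumes, removing the label makes any data on them unreachable. The standard read-only LVM commands below are a safer first step; a sketch, with the output depending entirely on what is actually on the disk:
~# pvs           # list physical volumes and the VGs they belong to
~# vgscan        # look for volume groups on all visible devices
~# vgchange -ay  # activate any VGs that were found
~# lvs           # list their logical volumes, which can then be mounted read-only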