Anything else to try?
Hello All
In Solaris 11.1 x86, I have a ZFS pool consisting of 4 mirrors, 2 disks each. After some hardware manipulation, both disks of mirror-0 ended up on the same controller, and one day that controller generated many I/O errors.
Both disks were marked faulty by fmadm, show as UNAVAIL in zpool status, and the whole pool is UNAVAIL, too.
I'm pretty sure the data is still intact, even though zpool status says it's corrupted. The controller has since been replaced.
The disks show up in format -e, although under different names, and the labels look good.
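For reference, this is roughly how I've been checking the disks (the device name below is a placeholder for one of the renamed 2 TB disks, and I'm assuming zdb -l is the right way to read the ZFS labels directly):

  # Pool and vdev state as the system currently reports it
  zpool status -v

  # The failed disks show up here, under new controller/target names
  format -e

  # Dump the ZFS vdev labels straight from the raw device
  zdb -l /dev/rdsk/c2t3d0s0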
What I tried:
Ran fmadm repaired for all faulty FMRIs. Each was marked repaired successfully, but they then showed up in fmadm faulty again (exact commands sketched after this list).
Booted from a backup BE and from a Live 11.3 USB; the pool still shows those disks as UNAVAIL, even though fmadm faulty shows no entries (import attempts sketched after this list).
Re-shuffled the disks across controllers. The failed disks still appear as UNAVAIL under their OLD names, i.e. c1t0d0s0, even though that c1t0d0s0 is now a small SSD in rpool, not the 2 TB spindle from the failed pool.
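For reference, the fmadm repaired step was basically this (<fmri> stands for each entry listed by fmadm faulty):

  # List the current faults with their FMRIs/UUIDs
  fmadm faulty

  # Mark each faulted resource as repaired; this reported success,
  # but the same entries came back in fmadm faulty shortly after
  fmadm repaired <fmri>

  # Not tried yet: acquit, which as far as I understand tells fmd to
  # withdraw the diagnosis itself rather than just flag the part repaired
  fmadm acquit <fmri>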
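And for the Live 11.3 USB attempt, the pool and its vdevs show up in a plain zpool import listing; is something along these lines a sensible next step, or too risky? ('tank' below is a placeholder for my real pool name.)

  # Listing still shows mirror-0's disks as UNAVAIL
  zpool import

  # Forced read-only import first, so nothing gets written while I check the data
  zpool import -o readonly=on -f tank
  zpool status -v tank

  # If the data looks fine: export, re-import read-write, clear the errors
  zpool export tank
  zpool import -f tank
  zpool clear tank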
Where exactly is this FAILED/UNAVAIL info kept? Can I clear it? (A sketch of what I have in mind is below.)
Would copying the disk with dd to a fresh 2 TB disk carry that FAILED mark over as well?
Anything else to try?
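To show what I mean, this is what I was planning to poke at next. As far as I understand, the imported-pool config (including vdev state) is cached in /etc/zfs/zpool.cache and in the on-disk vdev labels, and fmd keeps its persistent resource cache under /var/fm/fmd, but I may well be wrong on the details:

  # Dump the pool configuration(s) the system has cached
  zdb -C

  # Would moving this file aside and re-importing force a clean device scan?
  ls -l /etc/zfs/zpool.cache

  # fmd's persistent state; would clearing the resource cache here
  # (with fmd temporarily disabled via svcadm) stop the faults from returning?
  ls /var/fm/fmd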
Thanks
Andrei