Situation: Server uses two mirrored drives to save backup data. One drive
was throwing bad sectors. Dreaded 3TB Constellation drives, so replaced
them with 4TB IronWolf. After transferring 2.2TB of data from the one good
drive to the two new drives, installed the drives in the server. Server
would only boot into emergency mode. In emergency mode a manual mount
command would mount the two drives and you can exit to a regular login.
Diagnosis: Replacing the drives triggered fsck upon boot, but
systemd-fsck@dev-disk-by...service failed for each new drive. The UUIDs
were correctly updated in fstab. Tried running fsck on the drives
manually, and e2fsck failed with an incompatible-version error.
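For anyone hitting the same wall: the mismatch should be visible directly in the filesystem's feature flags. Below is a sketch, assuming e2fsprogs is installed; the throwaway image file stands in for the real partition (e.g. /dev/sdb1), and the specific feature names I mention (orphan_file, metadata_csum_seed, which newer e2fsprogs releases enable by default, if I remember right) are my guess at the culprit, not something confirmed in the thread.

```shell
PATH="$PATH:/usr/sbin:/sbin"   # mkfs.ext4 and dumpe2fs usually live in sbin

# Make a throwaway ext4 image so nothing here needs root or a real disk.
truncate -s 64M /tmp/ext4-probe.img
mkfs.ext4 -F -q /tmp/ext4-probe.img

# Show the enabled feature flags. Comparing this output between a drive
# formatted on 23.04 and one formatted on 18.04 should reveal whichever
# flag (likely orphan_file or metadata_csum_seed) the old e2fsck rejects.
dumpe2fs -h /tmp/ext4-probe.img | grep -i 'filesystem features'
```

Running dumpe2fs -h against both the new and the old drive and diffing the "Filesystem features" lines pins down exactly which flag the 18.04 e2fsck chokes on.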
Conclusion: (Or the DOH moment.) That server is still running 18.04 LTS,
and I formatted the new drives with a USB drive dock on a machine running
23.04! The files are currently accessible and writable, so I temporarily
disabled fsck for those two drives. In a live 18.04 session I then
formatted a new drive with the correct version. Luckily I have a 3rd spare
drive to expedite this. Another system is copying files from the old good
drive to this new drive so it can later be swapped into the server, and
then I'll repeat the process for the other drive... Just have to get it
done before Sunday when the backup occurs.
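For reference, "temporarily disabled fsck for those two drives" amounts to setting the sixth fstab field (fs_passno) to 0, which tells systemd-fsck to skip the filesystem at boot. A sketch with placeholder UUIDs and mount points:

```
# /etc/fstab (UUIDs and mount points are placeholders)
# <file system>                            <mount>   <type> <options> <dump> <pass>
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeffff0001 /backup1  ext4   defaults  0      0
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeffff0002 /backup2  ext4   defaults  0      0
```

With pass 0 the filesystems still mount at boot; they just never get checked, so remember to set it back once the drives are reformatted.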
Penalty: Humiliation and time. Just passing on the info; maybe it will
save someone else from the same mistake.
BTW: I still have not found what they changed in the ext4 format between
the different OS versions. Both drives were GPT and formatted the same
way. One would think that would not matter...
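A possible shortcut for next time, instead of booting a live 18.04 session: a newer mkfs.ext4 can be told to leave individual features off with -O ^feature. This is only a sketch on a throwaway image; metadata_csum is used purely to demonstrate the syntax, and which flags actually need disabling for an 18.04 target depends on what dumpe2fs shows differs between the two systems (my guess would be orphan_file and/or metadata_csum_seed on recent releases).

```shell
PATH="$PATH:/usr/sbin:/sbin"   # mkfs.ext4 and dumpe2fs usually live in sbin

# Throwaway image standing in for the real partition (e.g. /dev/sdb1).
truncate -s 64M /tmp/ext4-compat.img

# -O ^feature disables a feature at format time. metadata_csum (and its
# companion metadata_csum_seed) are disabled here only to show the syntax;
# substitute whatever flags the older e2fsck actually rejects.
mkfs.ext4 -F -q -O ^metadata_csum,^metadata_csum_seed /tmp/ext4-compat.img

# Confirm the flags are gone from the feature list.
dumpe2fs -h /tmp/ext4-compat.img | grep -i 'filesystem features'
```

The same trick works on a real partition; the point is that the formatting machine's e2fsprogs defaults, not the OS or the partition table, decide what the older e2fsck can cope with.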
Jonathan N. Little wrote:
[quoted post snipped]

It is surprising that something that significant would have changed in the
formatting. Perhaps the older fsck is just being cautious, and refusing to
try fixing a partition formatted with a newer version of ext4, which might
use features the older fsck doesn't know about. And then the rest of the
system refuses to mount it because fsck failed.
However, a couple of years ago I did have an issue with a couple of disks
that I'd been using with a USB-SATA adapter, which then couldn't be read
when attached directly to the PC's SATA bus. It turns out that some
USB-SATA adapters misreport the logical block size used by the disk (it
seems to be an issue with a commonly used chipset). When I attached a disk
to the USB adapter and formatted it, the adapter reported 4096-byte
logical blocks. So the GPT partition table was placed at block 1, 4096
bytes from the start of the disk; attached directly, with 512-byte logical
blocks, the kernel looked for the GPT at byte 512 and couldn't find it.
Some relevant information I came across while trying to figure this one out:
<https://askubuntu.com/questions/909041/harddrive-on-usb-to-sata-adapter-not-showing-full-size>
<https://bugzilla.redhat.com/show_bug.cgi?id=734015>
<https://www.spinics.net/lists/linux-usb/msg70029.html>
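If anyone wants to check whether a bridge is lying about the sector size, the kernel exposes what it was told for each block device under sysfs. A sketch (the device paths are whatever happens to be attached):

```shell
# Print the logical block size the kernel sees for each block device.
# A disk partitioned while a USB bridge reported 4096-byte sectors will
# show 512 here when attached natively, and its GPT (written at LBA 1 =
# byte 4096) ends up where a 512-byte-sector kernel never looks.
for q in /sys/block/*/queue/logical_block_size; do
    [ -e "$q" ] || continue
    printf '%s: %s bytes\n' "${q%/queue/*}" "$(cat "$q")"
done
```

Comparing this value for the same disk on the USB dock and on native SATA makes the mismatch obvious before any data gets written.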
That doesn't seem like the same issue you had, since your problems were with fsck, so in your case it sounds like the partitions were read correctly and the filesystem was found.