The one I aborted was because it was stuck on 10% for well over a day.
The whole test doesn't take that long, or shouldn't anyway. I restarted
it shortly after that. I might add, the test did take many hours longer than it estimated, which from my past experience is quite odd. It's
usually pretty accurate. Still, it completed and shows it passed, just
has a boo boo on it. I also did a file system check; it fixed a couple of problems and a bunch of little things I see corrected often on bootup. Something about the length of something. Seems trivial.
Given the low number and it showing it corrected that error, and then
passed a short and long test, is this drive "safe enough" to keep in service? I have backups just in case but just curious what others know
from experience. At least this isn't one of those nasty messages that
the drive will die within 24 hours. I got one of those ages ago and it didn't miss it by much. A little over 30 hours or so later, it was a
door stop. It would spin but it couldn't even be seen by the BIOS.
Maybe drives are getting better and SMART is getting better as well.
Thoughts? Replace as soon as the drive arrives, or wait and see?
On 12/04/2022 02:27, Dale wrote:
> The one I aborted was because it was stuck on 10% for well over a day.
> The whole test doesn't take that long, or shouldn't anyway. I restarted
> it shortly after that. I might add, the test did take many hours longer
> than it estimated which from my past experience is quite odd. It's
> usually pretty accurate. Still, it completed and shows it passed, just
> has a boo boo on it. I also did a file system check it fixed a couple
> problems and a bunch of little things I see corrected often on bootup.
> Something about length of something. Seems trivial.
Given that the firmware SOMETIMES gets its knickers in a twist,
especially consumer drives (not sure what yours are?), and read errors
are a dime a dozen, I wouldn't worry that much about ONE error.
Do another SMART test after your next reboot. Any NEW errors will be a
red flag, but just this one again? Don't worry.
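For reference, the self-tests being discussed are driven with smartmontools; something like this (the device name is just an example):

```shell
# Kick off a long self-test in the background; smartctl prints an estimate:
smartctl -t long /dev/sda

# After it finishes, review the results:
smartctl -l selftest /dev/sda   # self-test history (pass/fail, LBA of first error)
smartctl -A /dev/sda            # attribute table: reallocated/pending sector counts
```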
> Given the low number and it showing it corrected that error, and then
> passed a short and long test, is this drive "safe enough" to keep in
> service? I have backups just in case but just curious what others know
> from experience. At least this isn't one of those nasty messages that
> the drive will die within 24 hours. I got one of those ages ago and it
> didn't miss it by much. A little over 30 hours or so later, it was a
> door stop. It would spin but it couldn't even be seen by the BIOS.
> Maybe drives are getting better and SMART is getting better as well.
SMART is a lot better than it was, but remember, it only picks up wear
and tear. Mechanical failure is just as deadly, and usually strikes
out of the blue. I saw some stats somewhere that it's something like
1/3, 2/3: wear and tear picked up by SMART, and mechanical failure
undetectable by SMART. Can't remember which stat was which.
> Thoughts. Replace as soon as drive arrives or wait and see?
If you get a couple of errors, then no more for months, the drive is
probably fine. If you get new errors every time you test, ditch it ASAP.
Either way, make sure it's backed up!
Cheers,
Wol
On Mon, Apr 11, 2022 at 9:27 PM Dale <rdalek1967@gmail.com> wrote:
> Thoughts. Replace as soon as drive arrives or wait and see?

So, first of all just about all my hard drives are in a RAID at this
point, so I have a higher tolerance for issues.
If a drive is under warranty I'll usually try to see if they will RMA
it. More often than not they will, and in that case there is really
no reason not to. I'll do advance shipping and replace the drive
before sending the old one back so that I mostly have redundancy the
whole time.
If it isn't under warranty then I'll scrub it and see what happens.
I'll of course do SMART self-tests, but usually an error like this
won't actually clear until you overwrite the offline sector so that
the drive can reallocate it. A RAID scrub/resilver/etc will overwrite
the sector with the correct contents which will allow this to happen. (Otherwise there is no way for the drive to recover - if it knew what
was stored there it wouldn't have an error in the first place.)
If an error comes back then I'll replace the drive. My drives are
pretty large at this point so I don't like keeping unreliable drives
around. It just increases the risk of double failures, given that a
large hard drive can take more than a day to replace. Write speeds
just don't keep pace with capacities. I do have offline backups but I shudder at the thought of how long one of those would take to restore.
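On mdadm, the scrub he describes is triggered through sysfs (the array name is an example):

```shell
# Ask md to read every sector of the array. Where a member drive returns
# a bad sector, md rewrites it from redundancy, which lets the drive
# reallocate the sector and clear the pending error.
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat    # watch scrub progress
```

With `repair` instead of `check`, md also rewrites mismatched blocks rather than just counting them.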
Rich Freeman wrote:
> On Mon, Apr 11, 2022 at 9:27 PM Dale <rdalek1967@gmail.com> wrote:
>> Thoughts. Replace as soon as drive arrives or wait and see?
> So, first of all just about all my hard drives are in a RAID at this
> point, so I have a higher tolerance for issues.
Sadly, I don't have RAID here but to be honest, I really need to have it given the data and my recent luck with hard drives. Drives used to get dumped because they were just too small to use anymore. Nowadays, they seem to break in some fashion long before their usefulness ends their lives.

I remounted the drives and did a backup. For anyone running up on this, just in case one of the files got corrupted, I used a little trick to see if I can figure out which one may be bad, if any. I took my rsync commands from my little script and ran them one at a time with --dry-run added.
-----Original Message-----
From: Dale <rdalek1967@gmail.com>
Sent: Tuesday, April 12, 2022 10:08 AM
To: gentoo-user@lists.gentoo.org
Subject: Re: [gentoo-user] Hard drive error from SMART
Sadly, I don't have RAID here but to be honest, I really need to have it given the data and my recent luck with hard drives. Drives used to get dumped because they were just too small to use anymore. Nowadays, they seem to break in some fashion long before their usefulness ends their lives.

I remounted the drives and did a backup. For anyone running up on this, just in case one of the files got corrupted, I used a little trick to see if I can figure out which one may be bad if any. I took my rsync commands from my little script and ran them one at a time with --dry-run added. If a file was to be updated on the backup that I hadn't changed or added, I was going to check into it before updating my backups. It could be that the backup file was still good and the file on my drive

Drive isn't under warranty. I may have to start buying new drives from dealers. Sometimes I find drives that are pulled from systems and have very few hours on them. Still, warranty may not last long. Saves a lot of money tho.

USPS claims the drive is on the way. It left a distribution point and should update again when it gets close. First it said Saturday, then it said Friday. I think Friday is about right, but if the wind blows right, maybe Thursday.

I hope I have another port and power cable plug for the swap out. At least now, I can unmount it and swap without a lot of rebooting. Since it's on LVM, that part is easy. Regretfully I have experience on that process. :/

Thanks to all.

Dale

:-) :-)

You can get up to 16X SATA PCI-e cards these days for pretty cheap. So as long as you have the power to run another drive or two there's not much reason not to do RAID on the important stuff. Also, the SATA protocol allows for port expanders, which are also pretty cheap.
One of my favorite things about BTRFS is the data checksums. If the drive returns garbage, it turns into a read error. Also, if you can't do real RAID, but have excess space you can tell it to keep two copies of everything. Doesn't help with total drive failure, but does protect against the occasional failed sector. If you don't mind writes taking twice as long anyway.
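The "two copies" mode mentioned here is btrfs's `dup` profile; converting an existing single-device filesystem might look something like this (the mount point is hypothetical):

```shell
# Rewrite data and metadata so two copies of each chunk are kept
# on the one device:
btrfs balance start -dconvert=dup -mconvert=dup /mnt/storage
btrfs filesystem df /mnt/storage    # should now report "Data, DUP"
```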
LMP
I actually developed a tool for that. It creates and checks md5
checksums recursively and *per directory*. Whenever I copy stuff from somewhere, like a music album, I do an immediate md5 run on that
directory. And when I later copy that stuff around, I simply run the
tool again on the copy (after the FS cache was flushed, for example by unmounting and remounting) to see whether the checksums are still valid.
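The tool itself isn't named here, but the same per-directory idea can be sketched with plain md5sum (the "album" directory below is a throwaway example):

```shell
# Make a throwaway "album" directory and checksum its contents:
album=$(mktemp -d)
echo "track one" > "$album/01.flac"
( cd "$album" && md5sum -- *.flac > Checksums.md5 )

# Later, e.g. after copying the directory elsewhere and dropping caches,
# verify it in place:
( cd "$album" && md5sum -c Checksums.md5 )    # prints "01.flac: OK"
```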
There's also app-crypt/md5deep
Does a number of hashes, is threaded, has options for piecewise hashing and a matching mode for using the hashes to find duplicates. Also a number of input and output filters for those cases where you don't want to hash everything.
On 12/04/2022 18:21, Laurence Perkins wrote:
> You can get up to 16X SATA PCI-e cards these days for pretty cheap.
> So as long as you have the power to run another drive or two there's
> not much reason not to do RAID on the important stuff. Also, the
> SATA protocol allows for port expanders, which are also pretty cheap.
>
> One of my favorite things about BTRFS is the data checksums. If the
> drive returns garbage, it turns into a read error. Also, if you
> can't do real RAID, but have excess space you can tell it to keep two
> copies of everything. Doesn't help with total drive failure, but
> does protect against the occasional failed sector. If you don't mind
> writes taking twice as long anyway.
https://raid.wiki.kernel.org/index.php/Linux_Raid
https://raid.wiki.kernel.org/index.php/System2020
That system in the second link is the system being used to type this
message ...
Cheers,
Wol
On Tue, Apr 12, 2022 at 1:08 PM Dale <rdalek1967@gmail.com> wrote:
> I remounted the drives and did a backup. For anyone running up on this,
> just in case one of the files got corrupted, I used a little trick to
> see if I can figure out which one may be bad if any. I took my rsync
> commands from my little script and ran them one at a time with --dry-run
> added. If a file was to be updated on the backup that I hadn't changed
> or added, I was going to check into it before updating my backups.

Unless you're using the --checksum option on rsync this isn't likely
to be effective. By default rsync only looks at size and mtime, so it
isn't going to back up a file unless you intentionally changed it. If
data was silently corrupted this wouldn't detect a change at all
without the --checksum option.
Ultimately if you care about silent corruptions you're best off using
a solution that actually achieves this. btrfs, zfs, or something
whipped up with dm-integrity would be best. At a file level you could
store multiple files and hashes, or use a solution like PAR2. Plain
mdadm raid1 will fix issues if the drive detects and reports errors
(the drive typically has a checksum to do this, but it is a black box
and may not always work). The other solutions will reliably detect
and possibly recover errors even if the drive fails to detect them (a so-called silent error).
Just about all my linux data these days is on a solution that detects
silent errors - zfs or lizardfs. On ssd-based systems where I don't
want to invest in mirroring I still run zfs to detect errors and just
use frequent backups (ssds are small anyway so they're cheap to
frequently back up, especially if they're on zfs where there are
send-based backup scripts for this, and typically this is for OS
drives where things don't change much anyway).
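A sketch of the zfs pieces mentioned above (pool, dataset, and host names are all hypothetical):

```shell
# Scrub: read everything, verify checksums, repair from redundancy:
zpool scrub tank
zpool status tank     # shows scrub progress and any checksum errors

# Snapshot plus incremental send for the backup side:
zfs snapshot tank/os@today
zfs send -i tank/os@yesterday tank/os@today | ssh backuphost zfs recv backup/os
```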
LVM is good for being able to swap out drives easily but with the modern, huge drives you really want data checksums if you can get them. Otherwise all it takes is a flipped bit somewhere to wreck your data and drive firmware doesn't always notice. I think you can do that with LVM, but I've never looked into it for certain.

I went with a couple of https://www.amazon.com/MZHOU-Profile-Bracket-Support-Converter/dp/B08L7W8QFT/ in a couple different sizes for two of my mass storage systems and they seem to be doing OK.

The difference between the cheap vendors and the expensive vendors these days tends to be quality control. So plug it in, load it up, run it hard for a few hours. If it doesn't die relatively quickly you're usually good.

Especially if you have RAID with checksums it's difficult for a controller to mangle things too badly even if it does have an issue.

Remember: Data does not exist if it doesn't exist in at least three places. So you still want off-site backups in case your house burns down. Especially for irreplaceable things.

If you have friends who also want off-site backups and you leave your machines running all the time then tahoe-lafs is pretty decent. For that matter they don't even have to really be friends, you really only have to be able to trust them to not selfishly hog all the space.

I use BTRFS RAID1 for a lot of stuff. So far it's been pretty good at catching dropped bits and recovering from failures. It has a bit of the RAID issue where a drive could fail while you're doing a recovery since it only guarantees integrity with one dud drive regardless of the number of drives in the pool. But since each chunk is only written to two drives instead of spread across all of them the rebuild time stays relatively short and even if another drive does fail you'll only lose some of the data.

ZFS and similar are arguably better for larger arrays, but are also more hassle to set up.

LMP
Neat setup. I need something similar for a NAS setup thingy. Just got
way too much going on right now.
Dale
:-) :-)
Rich Freeman wrote:
> On Tue, Apr 12, 2022 at 1:08 PM Dale <rdalek1967@gmail.com> wrote:
>> I remounted the drives and did a backup. For anyone running up on this,
>> just in case one of the files got corrupted, I used a little trick to
>> see if I can figure out which one may be bad if any. I took my rsync
>> commands from my little script and ran them one at a time with --dry-run
>> added. If a file was to be updated on the backup that I hadn't changed
>> or added, I was going to check into it before updating my backups.
> Unless you're using the --checksum option on rsync this isn't likely
> to be effective.
My hope was if it was corrupted and something changed then I'd see it in
the list. If nothing changed then rsync wouldn't change anything on the backups either. I'll look into that option tho. May be something for
the future. ;-) I suspect it would slow things down quite a bit tho.
For those people looking at btrfs - note that parity-raid (5 or 6) is not a wise idea at the moment so you don't get two-failure protection ...
Cheers,
Wol
Am Tue, Apr 12, 2022 at 05:03:01PM -0500 schrieb Dale:
> My hope was if it was corrupted and something changed then I'd see it in
> the list. If nothing changed then rsync wouldn't change anything on the
> backups either. I'll look into that option tho. May be something for
> the future. ;-) I suspect it would slow things down quite a bit tho.
The advantage of an integrity scheme (like ZFS or comparing with a checksum file) over your rsync approach is that you only need to read all the datas™
from one drive instead of two. Plus: if rsync actually detects a change, it doesn’t know which of the two drives introduced the error. You need to find
out yourself after the fact (which probably won’t be hard, but still, it’s
one more manual step).
In this case, if something had changed, I'd have no problem manually checking the file to be sure which was good and which was bad. Given the error is recent on my drive, I'd suspect the backup file to still be good and therefore not to be overwritten. I was trying to avoid a bad file replacing a good file on the backup, which then destroys all good files and leaves only bad ones. This is why I like that SMART at least let me know there is a problem.

Sometimes things have to be done manually, which is often the best way. Just depends on the situation I guess.
Am Tue, Apr 12, 2022 at 06:01:11PM -0500 schrieb Dale:
> In this case, if something had changed, I'd have no problem manually
> checking the file to be sure which was good and which was bad.

Consider a big video file, which I know you like to accumulate from youtube and the likes. How do you find out the broken one? By watching it and trying to find the one image or audio frame that is garbled? The drive might return zeros or other garbage (bit flip) instead of actual content without SMART noticing it (uncorrectable error).

> Given the error is recent on my drive, I'd suspect the backups to still be a
> good file. For that reason, I'd suspect the backup file to be good
> therefore not to be overwritten. I was trying to avoid a bad file
> replacing a good file on the backup which then destroys all good files
> and leaves only bad ones. This is why I like that SMART at least let me
> know there is a problem.

I also tend to rely on SMART, but it's not all-knowing and probably not infallible.
Howdy,
I got the drive and pvmove is doing its thing. I would like to unplug
one of the drives and physically move them around without shutting down
my system. Is there a way to tell LVM to disable the drives while I'm
doing this and restart them when done? I found the command vgchange -a
n <name> but I'm not sure if that is correct. Honestly, I want to be
really sure before I unplug things. I assume the "n" changes to "y" to
restart them?
Thanks.
Dale
:-) :-)
P. S. BTW, the drive has passed two new tests with no error. The tests
are slower than usual tho. I'm not sure why.
On Fri, 15 Apr 2022 11:49:21 -0400,
Dale wrote:
> Howdy,
>
> I got the drive and pvmove is doing its thing. I would like to unplug
> one of the drives and physically move them around without shutting down
> my system. Is there a way to tell LVM to disable the drives while I'm
> doing this and restart them when done? I found the command vgchange -a
> n <name> but I'm not sure if that is correct. Honestly, I want to be
> really sure before I unplug things. I assume the "n" changes to "y" to
> restart them?
>
> P. S. BTW, the drive has passed two new tests with no error. The tests
> are slower than usual tho. I'm not sure why tho.

No, you can't do that till the pvmove is over.
John Covici wrote:
> On Fri, 15 Apr 2022 11:49:21 -0400, Dale wrote:
>> Howdy,
>> I got the drive and pvmove is doing its thing. I would like to unplug
>> one of the drives and physically move them around without shutting down
>> my system. Is there a way to tell LVM to disable the drives while I'm
>> doing this and restart them when done? I found the command vgchange -a
>> n <name> but I'm not sure if that is correct. Honestly, I want to be
>> really sure before I unplug things. I assume the "n" changes to "y" to
>> restart them?
> No, you can't do that till the pvmove is over.
Yea. I was planning to wait until pvmove was done. It actually
finished not too long after I sent the message. It was what prompted me
to see if this is possible. I found a page that talks about it but the
info didn't explain it much. I'm pretty sure that is the right command
but given the limited info, I wasn't sure. Reading the man page helped
a little but still wasn't 100% sure then either. Thing is, I only have
to unplug and move one of the two drives on that group.
Sounds like the right command tho. If not, someone speak up.
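For the archives, the deactivate/reactivate cycle being described would look something like this (the volume group name is hypothetical):

```shell
vgchange -a n vg_storage   # deactivate all LVs in the group; safe to unplug
# ...physically move the drives around, then...
vgchange -a y vg_storage   # reactivate the group
lvs vg_storage             # confirm the logical volumes are back
```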
Dale
:-) :-)
Am Fri, Apr 15, 2022 at 10:49:21AM -0500 schrieb Dale:
> Howdy,
> I got the drive and pvmove is doing its thing. I would like to unplug
> one of the drives and physically move them around without shutting down
> my system. Is there a way to tell LVM to disable the drives while I'm
> doing this and restart them when done?

Be aware that SATA hot-plugging must be enabled in the BIOS for each individual SATA port (at least that’s the case on my board). I’m not sure what a difference it actually makes, though.
On Saturday, 16 April 2022 15:59:25 BST Dale wrote:
> Frank Steinmetzger wrote:
>> Am Fri, Apr 15, 2022 at 10:49:21AM -0500 schrieb Dale:
>>> Howdy,
>>> I got the drive and pvmove is doing its thing. I would like to unplug
>>> one of the drives and physically move them around without shutting down
>>> my system. Is there a way to tell LVM to disable the drives while I'm
>>> doing this and restart them when done?
>> Be aware that SATA hot-plugging must be enabled in the BIOS for each
>> individual SATA port (at least that’s the case on my board). I’m not sure
>> what a difference it actually makes, though.
> I enabled that the first time I cut the system on after building it. I
> couldn't think of any reason not to have it enabled really. It would be
> like making USB require rebooting before plugging/unplugging something.
> Certainly better than the old IDE days.
>
> I have googled and can not find a way to reset udev and it naming
> drives. I may have to rework some things since the drive kept the sdk
> instead of switching to sdd when I made the physical change. Thing is,
> I suspect it will when I reboot the next time. It also triggered
> messages from SMART too. It got upset that it couldn't find sdd anymore.

Have a look at this post. It explains why you could end up with a race condition if you set up udev rules to name disks in a different order than what the kernel assigns:

https://www.linuxquestions.org/questions/linux-hardware-18/udev-persistent-disk-name-4175450519/#post4893847
Frank Steinmetzger wrote:
> Am Fri, Apr 15, 2022 at 10:49:21AM -0500 schrieb Dale:
>> Howdy,
>> I got the drive and pvmove is doing its thing. I would like to unplug
>> one of the drives and physically move them around without shutting down
>> my system. Is there a way to tell LVM to disable the drives while I'm
>> doing this and restart them when done?
> Be aware that SATA hot-plugging must be enabled in the BIOS for each
> individual SATA port (at least that’s the case on my board). I’m not sure
> what a difference it actually makes, though.
I enabled that the first time I cut the system on after building it. I couldn't think of any reason not to have it enabled really. It would be
like making USB require rebooting before plugging/unplugging something. Certainly better than the old IDE days.
I have googled and cannot find a way to reset udev and its naming of
drives. I may have to rework some things since the drive kept the sdk
instead of switching to sdd when I made the physical change. Thing is,
I suspect it will when I reboot the next time. It also triggered
messages from SMART too. It got upset that it couldn't find sdd anymore.
Dale
:-) :-)
On Sat, Apr 16, 2022 at 10:59 AM Dale <rdalek1967@gmail.com> wrote:
> I have googled and can not find a way to reset udev and it naming
> drives. I may have to rework some things since the drive kept the sdk
> instead of switching to sdd when I made the physical change. Thing is,
> I suspect it will when I reboot the next time.

IMO it is best to make that not matter. If you're referencing drives
by letter in configuration files, you're just asking for some change
to re-order things and cause problems.
You're using LVM, so all the drives should be assembled based on their embedded metadata. It is fine to reference whatever temporary device
name you're using when running pvmove/pvcreate since that doesn't
really get stored anywhere. If you are directly mounting anything
without using LVM then it is best to use labels/uuids/etc to identify partitions.
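A by-UUID mount would look something like this (the UUID shown is made up):

```shell
# Find the partition's UUID once:
blkid /dev/sdd1

# Then reference it in /etc/fstab so the sdX letter no longer matters:
# UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /data  ext4  defaults  0 2
```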
> It also triggered
> messages from SMART too. It got upset that it couldn't find sdd anymore.

That is typical when hotswapping. I believe smartd only scans drives
at startup, and of course if a drive does go offline it isn't a bad
thing that it is noisy about it. From a quick read of the manpage
SIGHUP might or might not get it to rescan the drives, and if not you
can just restart it. The daemon works by polling so if there are any
pending issues they should still get picked up after restarting the
daemon.
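Poking or restarting the daemon after the swap might look like this (assuming the service is named smartd under Gentoo's OpenRC):

```shell
killall -HUP smartd          # ask smartd to reread its configuration
rc-service smartd restart    # or simply restart it under OpenRC
```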
> You're using LVM, so all the drives should be assembled based on their
> embedded metadata. It is fine to reference whatever temporary device
> name you're using when running pvmove/pvcreate since that doesn't
> really get stored anywhere. If you are directly mounting anything
> without using LVM then it is best to use labels/uuids/etc to identify
> partitions.
I have to use sd** when using cryptsetup to decrypt the drive. I
haven't found an easier way around that yet. My command was
something like cryptsetup open /dev/sdk1 <name> and then it asks for the
password.
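One way around the shifting letters (the device name below is invented) is to key the command to a stable /dev/disk symlink:

```shell
# Same cryptsetup invocation, but via the serial-number-based symlink,
# which doesn't change when the kernel reshuffles sdX letters:
cryptsetup open /dev/disk/by-id/ata-ST8000EXAMPLE_Z840ABCD-part1 crypt8tb
```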
--
Neil Bothwick
If Yoda so strong in force is, why words in right order he cannot put?
My next project, find a good external drive enclosure like the three I
got now. They're no longer available tho. I like them because they have
a fan, an eSATA port and a nifty display to let me know things are
working. Really a good price for the features. I don't like USB
connected drives. Long story.
> If Yoda so strong in force is, why words in right order he cannot
> put?

Vielleicht, weil seine Muttersprache Deutsch ist. :-) (Perhaps because his mother tongue is German.)
On Sat, 16 Apr 2022 12:45:20 -0500, Dale wrote:
>> You're using LVM, so all the drives should be assembled based on their
>> embedded metadata. It is fine to reference whatever temporary device
>> name you're using when running pvmove/pvcreate since that doesn't
>> really get stored anywhere. If you are directly mounting anything
>> without using LVM then it is best to use labels/uuids/etc to identify
>> partitions.
> I have to use sd** when using cryptsetup to decrypt the drive. I
> haven't found a way around that that is easier yet. My command was
> something like cryptsetup open /dev/sdk1 <name> and then it asks for the
> password.

Use /dev/disk/by-partlabel/foo or /dev/disk/by-partuuid/bar.
Am Sat, Apr 16, 2022 at 12:45:20PM -0500 schrieb Dale:
> My next project, find a good external drive enclosure like the three I
> got now. They no longer available tho. I like them because they have a
> fan, a eSATA port and a nifty display to let me know things are
> working. Really a good price for the features. I don't like USB
> connected drives. Long story.

How about a table-top dock?
- no cable salad, caused by each enclosure having its own power supply and
data cable
- disks are used “naked”, so no heat buildup and you are more flexible
Here are some models with eSATA: https://skinflint.co.uk/?cat=hddocks&xf=4426_eSATA
And one of them even has four slots → even fewer cables.
That’s of course if you use the disks intermittently and store them away in between. If you plan on running them for longer durations at a time, it may be better to use a proper enclosure, in order to protect the disks from physical influences (impacts, short-circuits). Also, those SATA connectors are not designed to be connected often. I think I read about 50 cycles somewhere.
On Sat, Apr 16, 2022 at 3:53 PM Dale <rdalek1967@gmail.com> wrote:
> <SNIP>
> Maybe this is a good excuse
> to start working on a NAS. :/

That's my vote. (For the second time)
I'm using a FreeBSD NAS (TrueNAS) but they recently came out with a
Linux version which you might be more comfortable with. If you use a
1Gb/s or higher network connection it's quite fast.
You can also go the Synology route via Amazon. You can get a 2-disk
NAS chassis which does RAID for around $250 last time I looked.
Good luck whatever you do.
Mark
Neil Bothwick wrote:
> Use /dev/disk/by-partlabel/foo or /dev/disk/by-partuuid/bar.
That's even more typing than /dev/sdk. Some things I do easily by using
tab completion and all. When mounting, I let fstab remember the UUID
for it.
It's not like UUIDs are made to remember either.
That's even more typing than /dev/sdk. Some things I do easily by
using tab completion and all. When mounting, I let fstab remember
the UUID for it.
That's what copy/paste is for. How often are you editing your
crypttab anyway? This way when you move drives around they still
work.
What is crypttab? I type in the command manually.
Mark Knecht wrote:
On Sat, Apr 16, 2022 at 3:53 PM Dale <rdalek1967@gmail.com> wrote:
<SNIP>
Maybe this is a good excuse
to start working on a NAS. :/
That's my vote. (For the second time)
I'm using a FreeBSD NAS (TrueNAS) but they recently came out with a
Linux version which you might be more comfortable with. If you use a
1Gb/s or higher network connection it's quite fast.
You can also go the Synology route via Amazon. You can get a 2-disk
NAS chassis which does RAID for around $250 last time I looked.
Good luck whatever you do.
Mark
Other than being another piece of equipment running up a light bill, it
is the best way to deal with this. The way I'm doing it now is a bit of a
struggle at times. I just need to get other things done first, from a
money perspective, which inflation isn't helping with. A trip to the
grocery store is no fun anymore.
One of these days tho. I just gotta do it.
Dale
I was wanting to have a NAS that also puts video on my TV. That way I
can turn off my puter and still watch TV. It would be as much a media
system as a NAS. I have a mobo, ram and I think I have an extra video
card somewhere. I'd need a case, power supply and such. I'd also need
a place to put all this which is going to be interesting. I'd want
plenty of hard drive bays tho. I found a Fractal 804 case that caught
my eye. Can't recall all the details tho.
Still, needs money and right now, I got too many other coals in the
fire. Plus, I'm trying to figure out this crypttab thing. From what
I've read, it is for opening encrypted drives during boot up which is
not really what I want. I can boot and login into my KDE without
anything encrypted being mounted. Kinda like this new setup really.
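For what it's worth, crypttab doesn't have to mean unlocking at boot: man 5 crypttab describes a noauto option that registers the mapping without opening it during startup. A sketch of such an entry, with made-up names (the UUID placeholder stands in for the real partition UUID):

```
# /etc/crypttab: <name>  <device>  <keyfile>  <options>
backup  /dev/disk/by-partuuid/<uuid-of-the-partition>  none  luks,noauto
```

Here "none" for the keyfile means prompt for the passphrase instead of reading a key from disk.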
I'll be so glad when fiber internet gets here. I think I'm going with
the 500Mb/sec plan. Costs about the same as my current 1.5Mb/sec plan.
lol
Dale
On Sat, Apr 16, 2022 at 6:39 PM Dale <rdalek1967@gmail.com> wrote:
Neil Bothwick wrote:
Use /dev/disk/by-partlabel/foo or /dev/disk/by-partuuid/bar.
That's even more typing than /dev/sdk. Some things I do easily by using
tab completion and all. When mounting, I let fstab remember the UUID
for it.
That's what copy/paste is for. How often are you editing your
crypttab anyway? This way when you move drives around they still
work.
When I bought my current TV, I avoided the smart ones. At the time, it
was new technology and people were talking about how buggy it was so I
bought a regular TV. If I had to buy one today, I'd buy a smart one.
They seem to work pretty well now. Nice and stable at least. Still, I
check to make sure whatever I buy is based on Linux as its OS. One can usually check the manual and see the copyright notice in the last few
pages. It mentions the kernel. If it mentions windoze, I move on. LQ
is almost always Linux based.
I'm at the point where I know I need to do this. It's just getting
there. I even thought about putting the OS on a USB stick. After all,
once booted, it won't access the stick very often. I could even load it
into memory at boot up and not need the stick at all once booted. Like
is done with some Gentoo install media.
One of these days.
What is crypttab? I type in the command manually.
Then use a shell alias, even less typing.
I've done a couple basic alias things here but never grasped it enough
to do anything beyond making ls run with -al each time. I think there
is another one I did but it was long ago. I'd have to dig to find it.
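Aliases like that go in ~/.bashrc; defining one is harmless even if it wraps a privileged command, since nothing runs until it's invoked. A sketch with hypothetical names:

```shell
# In ~/.bashrc (or ~/.bash_aliases). Names here are hypothetical.
alias ll='ls -al'   # the ls -al shortcut mentioned above
# Defining this is harmless; cryptsetup only runs when the alias is used:
alias openbak='cryptsetup open /dev/sdk1 backup && mount /dev/mapper/backup /mnt/backup'
```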
#!/bin/sh
cryptsetup whatever
mount whatever
I have to enter a password in the middle of that. I don't know how that would work. As I've said before, my "scripts" are so simple, they may
not even be called scripts. They're just files with commands in them.
If nothing changes when I get around to rebooting, I'll get into this
some more.
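One note on the password in the middle: cryptsetup open blocks and prompts on the terminal itself, so a plain script simply pauses at that line and continues once the passphrase is accepted. A minimal sketch along those lines, with hypothetical device and paths, wrapped in a function so nothing runs until it is called:

```shell
#!/bin/sh
# Hypothetical: /dev/sdk1 is the LUKS partition, "backup" the mapper
# name, /mnt/backup the mount point.
mount_backup() {
    # The script pauses here while cryptsetup prompts for the
    # passphrase; mount only runs if the unlock succeeds.
    cryptsetup open /dev/sdk1 backup &&
        mount /dev/mapper/backup /mnt/backup
}
```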