I have made 1 full partition on each one, and labeled those partitions as
SiPwr_0 and SiPwr_1.
My only question is: will those partition names survive lvcreating an 11T
lvm out of these and 2 more 2T gigastones?
I have not dealt with an lvm in about 15+ years, having tried it once
when it first came out, with a high disaster rating then.
Hi,
On Wed, Feb 14, 2024 at 05:09:02PM -0500, gene heskett wrote:
I have made 1 full partition on each one, and labeled those partitions as
SiPwr_0 and SiPwr_1
Please show us the command you used¹ to do that, so we know what
exactly you are talking about, because as previously discussed
there's a lot of different things that you like to call "partition
labels".
If we take that literally that would be a GPT partition name, but
you've used this same terminology before and meant a filesystem
label.
My only question is: will those partition names survive lvcreating an 11T
lvm out of these and 2 more 2T gigastones?
Assuming you meant partition name the first time as well, nothing
you do other than a disk wipe or re-name should alter those
partition names.
But your chosen partition names don't make a lot of sense to me.
You've picked names based on the type/manufacturer of device so you
may as well have just used the names from /dev/disk/by-id/… which
already have that information and are already never going to change.
I don't know why you want to complicate matters.
If instead you put filesystems on these partitions and labelled
*those*, well, no, LVM goes under filesystems so those filesystems
and their labels (and contents) are not long for this world.
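To make the two kinds of "label" concrete, a minimal sketch (with
/dev/sdX1 standing in for one of your partitions):

lsblk -o NAME,FSTYPE,LABEL,PARTLABEL /dev/sdX1
# LABEL is the filesystem label: it lives inside the filesystem and
# dies with it. PARTLABEL is the GPT partition name: it lives in the
# partition table and survives this:
pvcreate /dev/sdX1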
I have not dealt with an lvm in about 15+ years, having tried it once
when it first came out, with a high disaster rating then.
I hope you are putting a level of redundancy under that LVM or are
using the redundancy features of LVM (which you need to go out of
your way to do). Otherwise by default what you'll have is not
redundant and a device failure will lose at least the contents of
that device, possibly more.
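For instance, a sketch with hypothetical names (vg0 and the devices
are stand-ins, not your exact setup):

pvcreate /dev/sdX1 /dev/sdY1
vgcreate vg0 /dev/sdX1 /dev/sdY1
# The default is a linear LV, with no redundancy at all:
lvcreate -L 100G -n plain vg0
# Redundancy is opt-in, e.g. a mirrored (RAID1) LV across two PVs:
lvcreate --type raid1 -m 1 -L 100G -n mirrored vg0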
Regards,
Andy
¹ and while you are there, maybe a post-it note with "I will show
the exact command I used any time I write to debian-user" stuck to
the top of the display of the screen you use to compose emails
would help, because basically every thread you post here lacks
that information.
On 2/14/24 19:48, Andy Smith wrote:
On Wed, Feb 14, 2024 at 05:09:02PM -0500, gene heskett wrote:
I have made 1 full partition on each one, and labeled those partitions as
SiPwr_0 and SiPwr_1
Please show us the command you used¹ to do that, so we know what
exactly you are talking about, because as previously discussed
there's a lot of different things that you like to call "partition
labels".
This is what gparted calls a "partition label", and it certainly does not
need a 4.5 megabyte camera image to see, or even a 50k screen snap.
Taking this screenshot was a pita, because the gparted window disappears
behind the terminal screen when you click on take another shot, so you
have to quit, then find gparted on the tool bar to bring it back to
the front, then move it and the terminal so it's not totally hidden. Then
rerun spectacle again, waste a click bringing it fwd, then 30 seconds
later the spectacle instructions finally show up, and after 5 minutes of
screwing around, finally get the screen shot attached to prove I'm not
lying.
Will the by-id string fit in the space reserved for a label? That is, IF
there was a connection between the /dev/sdc that udev assigns and
anything in this list:
root@coyote:~# ls /dev/disk/by-id
ata-ATAPI_iHAS424_B_3524253_327133504865
ata-Gigastone_SSD_GST02TBG221146
ata-Gigastone_SSD_GST02TBG221146-part1
ata-Gigastone_SSD_GSTD02TB230102
ata-Gigastone_SSD_GSTD02TB230102-part1
ata-Gigastone_SSD_GSTG02TB230206
ata-Gigastone_SSD_GSTG02TB230206-part1
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T-part1
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T-part2
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T-part3
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E-part1
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E-part2
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E-part3
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part1
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part2
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part3
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W-part1
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W-part2
ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W-part3
ata-Samsung_SSD_870_QVO_1TB_S5RRNF0T201730V
ata-Samsung_SSD_870_QVO_1TB_S5RRNF0T201730V-part1
ata-Samsung_SSD_870_QVO_1TB_S5RRNF0T201730V-part2
ata-Samsung_SSD_870_QVO_1TB_S5RRNF0T201730V-part3
ata-SPCC_Solid_State_Disk_AA231107S304KG00080
ata-SPCC_Solid_State_Disk_AA231107S304KG00080-part1
md-name-_none_:1
md-name-coyote:0
md-name-coyote:0-part1
md-name-coyote:2
md-uuid-3d5a3621:c0e32c8a:e3f7ebb3:318edbfb
md-uuid-3d5a3621:c0e32c8a:e3f7ebb3:318edbfb-part1
md-uuid-57a88605:27f5a773:5be347c1:7c5e7342
md-uuid-bb6e03ce:19d290c8:5171004f:0127a392
usb-SPCC_Sol_id_State_Disk_1234567897E6-0:0
usb-SPCC_Sol_id_State_Disk_1234567897E6-0:0-part1
usb-USB_Mass_Storage_Device_816820130806-0:0
wwn-0x5002538f413394a5
wwn-0x5002538f413394a5-part1
wwn-0x5002538f413394a5-part2
wwn-0x5002538f413394a5-part3
wwn-0x5002538f413394a9
wwn-0x5002538f413394a9-part1
wwn-0x5002538f413394a9-part2
wwn-0x5002538f413394a9-part3
wwn-0x5002538f413394ae
wwn-0x5002538f413394ae-part1
wwn-0x5002538f413394ae-part2
wwn-0x5002538f413394ae-part3
wwn-0x5002538f413394b0
wwn-0x5002538f413394b0-part1
wwn-0x5002538f413394b0-part2
wwn-0x5002538f413394b0-part3
wwn-0x5002538f42205e8e
wwn-0x5002538f42205e8e-part1
wwn-0x5002538f42205e8e-part2
wwn-0x5002538f42205e8e-part3
root@coyote:~#
I dare you to find the disk that udev calls sdc in the above wall of text.
Why can't you understand that I want a unique label for all of this
stuff that is NOT a wall of HEX numbers no one can remember. It's not
mounted, so blkid does NOT see it.
Cheers, Gene Heskett, CET.
On 15/02/2024 08:48, gene heskett wrote:
This is what gparted calls a "partition label", and it certainly does not
need a 4.5 megabyte camera image to see, or even a 50k screen snap.
lsblk --fs -o +PARTLABEL /dev/sdc
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS PARTLABEL
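And since the by-id entries are just symlinks back to the kernel
names, the /dev/sdc mapping Gene asked for can be read straight off
them; a sketch:

ls -l /dev/disk/by-id/ | grep -w sdc
# or going the other way:
udevadm info --query=symlink --name=/dev/sdc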
On Wed, Feb 14, 2024 at 09:56:07PM -0500, gene heskett wrote:
On 2/14/24 19:48, Andy Smith wrote:
I hope you are putting a level of redundancy under that LVM or are using the redundancy features of LVM (which you need to go out of
your way to do). Otherwise by default what you'll have is not
redundant and a device failure will lose at least the contents of
that device, possibly more.
You pique my curiosity because this is going to be my backup system, but
not a syllable about how to do it. You tell me it's fine 3 paragraphs up,
then tell me lvcreate will wipe it out. I'm asking for answers, not more
conundrums.
You've split your reply to my mail across three different emails and
now you're replying to a part about redundancy, but asking questions
about something completely different, all while referring to bits
that are not proximal to where your text is, so it's unclear to me
exactly what you are asking about.
You asked if "labels" would survive their associated partition being
put into LVM.
I said, "yes if you mean partition names, no if you mean filesystem
labels".
To my implied question about your redundancy plans (if any), you
then complain that I have not given you "a syllable about how to do
it". Do *what*? I don't yet know what your plans are in that regard.
If you have questions, ask them.
On Wed, Feb 14, 2024 at 08:48:31PM -0500, gene heskett wrote:
On 2/14/24 19:48, Andy Smith wrote:
Please show us the command you used¹ to do that, so we know what
exactly you are talking about, because as previously discussed
there's a lot of different things that you like to call "partition labels".
This is what gparted calls a "partition label"
Okay, thanks for clarifying. This, or preferably a copy-paste of the
actual parted command session would suffice.
I don't know what the relevance is of the rest of the following
paragraph - your life story is not required and you were not accused
of lying, just asked to clarify.
Do remember that this mailing list does not accept attachments (and
very few mailing lists in general do), so any time you are tempted
to send a photo to a mailing list it is probably an error. We did
not see whatever it was, but it doesn't sound relevant.
Andy Smith <andy@strugglers.net> wrote:
Do remember that this mailing list does not accept attachments (and
very few mailing lists in general do), so any time you are tempted
to send a photo to a mailing list it is probably an error. We did
not see whatever it was, but it doesn't sound relevant.
FWIW, the photo that Gene attached was certainly attached to the mail
that the list sent to me, so I suppose that this list does permit attachments, at least in some circumstances.
On 2/15/24 11:21, Andy Smith wrote:
You asked if "labels" would survive their associated partition being
put into LVM.
I said, "yes if you mean partition names, no if you mean filesystem labels".
I'm still confused, and it is not at all well clarified by looking at
gparted, a shot of which I posted.
From what you posted it now sounds like labels on the ext4 filesystems
that you created.
If you have questions, ask them.
Like which version of a raid is the best at tolerating a failed drive,
which gives the best balance between redundancy and capacity?
Hello,
On Thu, Feb 15, 2024 at 05:32:34PM +0000, debian-user@howorth.org.uk wrote:
Andy Smith <andy@strugglers.net> wrote:
Do remember that this mailing list does not accept attachments (and
very few mailing lists in general do), so any time you are tempted
to send a photo to a mailing list it is probably an error. We did
not see whatever it was, but it doesn't sound relevant.
FWIW, the photo that Gene attached was certainly attached to the mail
that the list sent to me, so I suppose that this list does permit
attachments, at least in some circumstances.
Oh yes you're right, I see it too now I've looked properly!
So now I actually think Gene means a filesystem label?
Sigh, this really does not need to be this difficult.
Anyway I see that the image of gparted says there's an ext4
filesystem there. So, Gene: when you put those partitions into LVM
(when you make them LVM Physical Volumes) the filesystems on them
will be trashed, and so will the filesystem labels.
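(In fact pvcreate will notice the existing filesystem signature and
ask before destroying it; something like:

pvcreate /dev/sdX1
WARNING: ext4 signature detected on /dev/sdX1 ... Wipe it? [y/n]

but answering y is exactly the wipe described above.)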
Thanks,
Andy
On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
On 2/15/24 11:21, Andy Smith wrote:
You asked if "labels" would survive their associated partition being
put into LVM.
I said, "yes if you mean partition names, no if you mean filesystem labels".
I'm still confused, and it is not at all well clarified by looking at
gparted, a shot of which I posted.
This could all be answered easily if you'd just post the copy-paste
of your terminal scrollback for what you actually did. Hopefully you
don't now object to me asking what you meant since apparently even
you do not know if you mean partition names or filesystem labels.
From what you posted it now sounds like labels on the ext4 filesystems
that you created.
Now the question remains: how in hell do I put a label on a drive
such that it does survive making a raid or lvm device with it? To
not have a way to id the drive in slot n of a multislot rack
stops me in my tracks.
Particularly with these gigastones: I have 5 of them, but when all are
plugged in there are only 3, because there are 2 pairs of matching serial
numbers in the by-id output. by-id sees all 5 drives, but udev sees only
the unique serial numbers. gparted can change the device's blkid, getting
a new one from the rng, so while you all think that's the greatest thing
since bottled beer, I know better.
Hi,
On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
On 2/15/24 11:21, Andy Smith wrote:
You asked if "labels" would survive their associated partition beingI'm still confused and it is not all the well clarified by looking at
put into LVM.
I said, "yes if you mean partition names, no if you mean filesystem
labels".
gparted, a shot of which I posted.
This could all be answered easily if you'd just post the copy-paste
of your terminal scrollback for what you actually did. Hopefully you
don't now object to me asking what you meant since apparently even
you do not know if you mean partition names or filesystem labels.
From what you posted it now sounds like labels on the ext4 filesystems
that you created.
What you're trying to do (LVM on MD RAID?) is quite complicated and
you clearly don't have much experience in this area. That's okay but
it does mean that you're likely to make a lot of mistakes with a
thing that holds your data, so you need to be prepared for that.
For example, you mentioned only as an aside that you intended to get
two more drives and put the four of them into an LVM, but you did
not know that this would blow away the filesystems already on the
drives, and that this would not by itself provide you with any
redundancy. So if you hadn't said anything and I hadn't questioned
this, you could well have spent a lot of time creating something
that isn't correct and needs to be torn down again, possibly with
data loss.
Again that's okay — we learn by experimentation — but you're going
to have to prepare yourself for doing this over again many times.
And I also want to reiterate that you're going to have questions,
and that is good, but if we here on this list are not to be driven
insane by the ambiguities and misunderstandings, please, please,
PLEASE post logs of the commands you type on this adventure when you
ask them.
Please.
If you have questions, ask them.
Like which version of a raid is the best at tolerating a failed drive,
which gives the best balance between redundancy and capacity?
This is a complex subject. Before we get into it, what are you
trying to achieve? Like, what is your end goal with these four
drives?
MD RAID isn't the only way to achieve redundancy. You also haven't
explained why you need LVM. Depending on your needs, maybe a
filesystem with redundancy and volume management features in it
would be better. Like btrfs or zfs.
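For example, btrfs can provide both in one step; a sketch with
hypothetical devices:

# Mirror both data and metadata across two drives:
mkfs.btrfs -d raid1 -m raid1 -L backup /dev/sdX1 /dev/sdY1
# Mounting either member (or the label) mounts the whole array:
mount LABEL=backup /mnt/backup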
Given the problems you had with MD RAID in the past I still maintain
that you'd likely be better off just getting a storage appliance of
some kind.
Thanks,
Andy
Hi,
On Thu, Feb 15, 2024 at 03:59:30PM -0500, gene heskett wrote:
Now the question remains: how in hell do I put a label on a drive
such that it does survive making a raid or lvm device with it? To
not have a way to id the drive in slot n of a multislot rack
stops me in my tracks.
Given that an MD RAID array or a LVM Logical Volume may be spread
across many different underlying storage devices, the question
doesn't make sense. Due to the fact that filesystems go on block
devices, and RAID arrays and LVM LVs can be block devices, a
filesystem label in that instance would represent possibly multiple
underlying storage devices. So step back and tell us what you are
actually trying to achieve, rather than insisting on your X solution
to your Y problem.
Suppose you have the MD array /dev/md42. What are you conceptually
wanting to do with that in relation to labels of some kind? What
information is it that you want?
Suppose you have LVM logical volume /dev/myvg/mylv. What are you
conceptually wanting to do with that in relation to labels of some
kind? What information is it that you want?
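If the answer is "which physical disks are behind it", both stacks
will tell you directly; a sketch:

mdadm --detail /dev/md42      # lists member devices and their state
lvs -o +devices myvg/mylv     # shows which PVs back the LV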
Particularly with these gigastones: I have 5 of them, but when all are
plugged in there are only 3, because there are 2 pairs of matching serial
numbers in the by-id output. by-id sees all 5 drives, but udev sees only
the unique serial numbers. gparted can change the device's blkid, getting
a new one from the rng, so while you all think that's the greatest thing
since bottled beer, I know better.
Once you explain what information you're trying to get when you
start with an LVM or MD device, I can probably advise how to get it,
but just to make clear: I don't think it's a good idea to continue
to use such broken devices. We don't need to debate that since I
know you've been posting about that a lot and clearly have decided
to push ahead. I just think you haven't seen the end of the problems
with that issue.
Regards,
Andy
On Thu 15 Feb 2024 at 20:44:52 (+0000), Andy Smith wrote:
On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
On 2/15/24 11:21, Andy Smith wrote:
You asked if "labels" would survive their associated partition beingI'm still confused and it is not all the well clarified by looking at
put into LVM.
I said, "yes if you mean partition names, no if you mean filesystem
labels".
gparted, a shot of which I posted.
This could all be answered easily if you'd just post the copy-paste
of your terminal scrollback for what you actually did. Hopefully you
don't now object to me asking what you meant since apparently even
you do not know if you mean partition names or filesystem labels.
From what you posted it now sounds like labels on the ext4 filesystems
that you created.
Gene effectively shoots himself in the foot by using gparted (GUI)
instead of, say, gdisk where it's easy to paste what was done, or
for someone, say me, to post an example:
# gdisk /dev/sdz
GPT fdisk (gdisk) version 1.0.3
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present
Creating new GPT entries.
Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y
Command (? for help): p
Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
Model: Desktop
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): A1093790-9A1A-4A7E-A807-B9CC6F7CF77E
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 3907029101 sectors (1.8 TiB)
Number Start (sector) End (sector) Size Code Name
Command (? for help): n
Partition number (1-128, default 1):
First sector (34-3907029134, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-3907029134, default = 3907029134) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
Command (? for help): c
Using 1
Enter name: Lulu01
Command (? for help): i
Using 1
Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
Partition unique GUID: 37CF9EDF-C695-428E-9889-2F52C40DFCA5
First sector: 2048 (at 1024.0 KiB)
Last sector: 3907029134 (at 1.8 TiB)
Partition size: 3907027087 sectors (1.8 TiB)
Attribute flags: 0000000000000000
Partition name: 'Lulu01'
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.
#
# gdisk -l /dev/sdz
GPT fdisk (gdisk) version 1.0.3
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
Model: Desktop
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): A1093790-9A1A-4A7E-A807-B9CC6F7CF77E
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 3907029134 1.8 TiB 8300 Lulu01
#
Cheers,
David.
- Use an additional tiny dummy partition in which you can put any info
you like.
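A sketch of that idea with sgdisk (partition number and name are
hypothetical):

# a 1 MiB scratch partition whose GPT name carries your note
sgdisk -n 2:0:+1M -c 2:"slot4-top-giga" /dev/sdX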
On 2/15/24 15:45, Andy Smith wrote:
MD RAID isn't the only way to achieve redundancy. You also haven't
explained why you need LVM. Depending on your needs, maybe a
filesystem with redundancy and volume management features in it
would be better. Like btrfs or zfs.
Maybe I misunderstood the wiki; xfs is stated as not being complete
for linux, and zfs is I think commercial?
Can you update that?
On 2/15/24 16:20, Andy Smith wrote:
Suppose you have the MD array /dev/md42. What are you conceptually
wanting to do with that in relation to labels of some kind? What
information is it that you want?
Suppose you have LVM logical volume /dev/myvg/mylv. What are you
conceptually wanting to do with that in relation to labels of some
kind? What information is it that you want?
I want to know with absolute certainty which of the 4 drives in that
raid10 actually has a belly ache, when it has a belly ache. I can't see
any reason on this ball of rock and water why I should be expected to
replace a drive at a time until the belly ache goes away.
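Both layers can be asked that directly; a sketch (the array name is
hypothetical, the by-id name is from the listing earlier in the
thread):

mdadm --detail /dev/md0    # flags any member that is faulty/removed
# then interrogate the flagged member by its stable name:
smartctl -a /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T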
gene heskett <gheskett@shentel.net> wrote:
On 2/15/24 15:45, Andy Smith wrote:
MD RAID isn't the only way to achieve redundancy. You also haven't
explained why you need LVM. Depending on your needs, maybe a
filesystem with redundancy and volume management features in it
would be better. Like btrfs or zfs.
Maybe I misunderstood the wiki; xfs is stated as not being complete
for linux, and zfs is I think commercial?
Can you update that?
Sorry, which wiki page do you think says XFS is not complete?
I wasn't awake enough to bookmark it. I'm not done with the wiki yet.
On 2/15/24 11:21, Andy Smith wrote:
... redundancy plans ...
Like which version of a raid is the best at tolerating a failed drive,
which gives the best balance between redundancy and capacity?
On 2/15/24 12:59, gene heskett wrote:
... gigastones, I have 5 of them but when all are plugged in there
are only 3 because there are 2 pairs of matching serial numbers ...
I recall 2 pairs of SSD's with matching serial numbers. Please remove
one SSD of each pair so that the remaining SSD's all have unique
serial numbers. Return them for a refund while you still can. If you cannot, put them in another computer or put them on the shelf as
spares.
On 2/15/24 16:20, David Wright wrote:
On Thu 15 Feb 2024 at 20:44:52 (+0000), Andy Smith wrote:
On Thu, Feb 15, 2024 at 03:19:54PM -0500, gene heskett wrote:
On 2/15/24 11:21, Andy Smith wrote:
You asked if "labels" would survive their associated partition being put into LVM.
I said, "yes if you mean partition names, no if you mean filesystem labels".
I'm still confused, and it is not at all well clarified by looking at
gparted, a shot of which I posted.
This could all be answered easily if you'd just post the copy-paste
of your terminal scrollback for what you actually did. Hopefully you don't now object to me asking what you meant since apparently even
you do not know if you mean partition names or filesystem labels.
From what you posted it now sounds like labels on the ext4 filesystems
that you created.
Gene effectively shoots himself in the foot by using gparted (GUI)
instead of, say, gdisk where it's easy to paste what was done, or
for someone, say me, to post an example:
# gdisk -l /dev/sdz
GPT fdisk (gdisk) version 1.0.3
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 3907029168 sectors, 1.8 TiB
Model: Desktop
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): A1093790-9A1A-4A7E-A807-B9CC6F7CF77E
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 3907029134 1.8 TiB 8300 Lulu01
#
And this "partition" name survives?, and can be unique?, and can be
used in a mount cmd? That's how I'll do it then. This if all 3
questions above can be answered with a yes is the answer I've been
trying to squeeze out all along. Thank you.
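For the record, the three yeses look like this in practice; a sketch
(mount point hypothetical, and note it only works while a mountable
filesystem sits directly on that partition):

ls /dev/disk/by-partlabel/
mount /dev/disk/by-partlabel/Lulu01 /mnt/lulu
# or in /etc/fstab:
# PARTLABEL=Lulu01  /mnt/lulu  ext4  defaults  0  2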
One of the 1T samsungs in the md raid10 isn't entirely happy but mdadm
has not fussed about it, and smartctl seems to say it's ok after testing.
Other than that the gui access delay (30+ seconds) problems I have did
NOT go away when I moved /home off the raid to another SSD, so I may
move it back. One of the reasons I am rsync'ing this /home back to it
every other day or so; takes < 5 minutes.
One of the 1T samsungs in the md raid10 isn't entirely happy but mdadm
has not fussed about it, and smartctl seems to say it's ok after testing.
Other than that the gui access delay (30+ seconds) problems I have did
NOT go away when I moved /home off the raid to another SSD, so I may
move it back. One of the reasons I am rsync'ing this /home back to it
every other day or so; takes < 5 minutes.
Please get a small SSD, do a fresh install, and test for the access delay.
If the delay is not present, incrementally add and test applications.
If you encounter the delay, please stop and post the details; console
sessions are best. If not, then connect the disks with /home and test.
If you encounter the delay, then please stop and post the details. If you
do not encounter the delay, then your system is fixed.
Take a Clonezilla image.
On 2/15/24 17:44, gene heskett wrote:
Other than that the gui access delay (30+ seconds) problems I have did
NOT go away when I moved /home off the raid to another SSD [...]
FWIW, my crystal ball says "30s => software timeout rather than hardware problem"
Stefan
On Fri 16 Feb 2024 at 14:48:12 (+0000), Andy Smith wrote:
No, because it's a filesystem label for the ext4 fs created on
/dev/sdz1. If sdz1 is turned into an LVM Physical Volume, there
won't be an ext4 filesystem on it any more. If sdz1 is turned into a
member of an MD array, there won't be an ext4 filesystem on it any
more. The labels go with the filesystem.
It isn't a filesystem LABEL.
You've not yet been clear about what you want, but from what little
information you have provided you've been told multiple times by
multiple people that filesystem labels won't help.
… which would be moot if only Gene could create partition PARTLABELs successfully.
Hello,
On Fri, Feb 16, 2024 at 02:02:59PM -0600, David Wright wrote:
On Fri 16 Feb 2024 at 14:48:12 (+0000), Andy Smith wrote:
No, because it's a filesystem label for the ext4 fs created on
/dev/sdz1. If sdz1 is turned into an LVM Physical Volume, there
won't be an ext4 filesystem on it any more. If sdz1 is turned into a
member of an MD array, there won't be an ext4 filesystem on it any
more. The labels go with the filesystem.
It isn't a filesystem LABEL.
Oh dear, I am lost. I don't use gparted but at least one person in
this thread has said that Gene created a filesystem label not a
partition name, and Gene doesn't know which he created, so I've gone
from guessing partition name to fs label and now back to partition
name again.
I'm totally willing to believe that you know what you've created
there though, so fair enough.
You've not yet been clear about what you want, but from what little
information you have provided you've been told multiple times by
multiple people that filesystem labels won't help.
… which would be moot if only Gene could create partition PARTLABELs
successfully.
Sure, but we still don't know what Gene is trying to do or why
partition names would be useful to him so I am kind of sceptical
that this leads anywhere.
Thanks,
Andy
On 2/16/24 15:47, Stefan Monnier wrote:
One of the 1T samsungs in the md raid10 isn't entirely happy but
mdadm has not fussed about it, and smartctl seems to say it's ok
after testing. Other than that the gui access delay (30+ seconds)
problems I have did NOT go away when I moved /home off the raid
to another SSD, so I may move it back. One of the reasons I am
rsync'ing this /home back to it every other day or so; takes < 5
minutes.
Please get a small SSD, do a fresh install, and test for the
access delay. If the delay is not present, incrementally add and
test applications. If you encounter the delay, please stop and
post the details; console sessions are best. If not, then connect
the disks with /home and test. If you encounter the delay, then
please stop and post the details. If you do not encounter the
delay, then your system is fixed. Take a Clonezilla image.
FWIW, my crystal ball says "30s => software timeout rather than
hardware problem"
Stefan
We are on the same page, but what is causing the timeout?
On Fri, Feb 16, 2024 at 12:12:06PM -0800, David Christensen wrote:
On 2/15/24 17:44, gene heskett wrote:
[...]
Other than that the gui access delay (30+ seconds) problems I have did
NOT go away when I moved /home off the raid to another SSD [...]
I think at this point few are surprised by that. Last round of debugging
we pretty much eliminated disk access as likely cause of those delays.
The most hopeful cause for a candidate, IIRC, was some thingy deep in
the DE trying to access an unavailable resource.
Cheers
On 2/16/24 21:13, Andy Smith wrote:
Hello,
On Fri, Feb 16, 2024 at 02:02:59PM -0600, David Wright wrote:
On Fri 16 Feb 2024 at 14:48:12 (+0000), Andy Smith wrote:
No, because it's a filesystem label for the ext4 fs created on
/dev/sdz1. If sdz1 is turned into an LVM Physical Volume, there
won't be an ext4 filesystem on it any more. If sdz1 is turned into a
member of an MD array, there won't be an ext4 filesystem on it any
more. The labels go with the filesystem.
It isn't a filesystem LABEL.
Oh dear, I am lost. I don't use gparted but at least one person in
this thread has said that Gene created a filesystem label not a
partition name, and Gene doesn't know which he created, so I've gone
from guessing partition name to fs label and now back to partition
name again.
I'm totally willing to believe that you know what you've created
there though, so fair enough.
You've not yet been clear about what you want, but from what little
information you have provided you've been told multiple times by
multiple people that filesystem labels won't help.
… which would be moot if only Gene could create partition PARTLABELs
successfully.
Sure, but we still don't know what Gene is trying to do or why
partition names would be useful to him so I am kind of sceptical
that this leads anywhere.
That, IF the ^%$ drives ever get here; I just looked at the front
deck and it has 2" of fresh white stuff on it.
To describe what I am building, this is a 5 slot bare drive cage. You
could throw tom cats thru it from most angles so I printed pretty sides
for it.
I've printed drawers to fill those slots. The top slot has a bpi-m5 in
it, the bottom slot has a 5 volt 10 amp psu in it. Slot 2 will have 2 of
those nearly 4T SSD's in a 2 drive adapter, with full disk partitions on
them, so obviously I should name the top one "si-pwr-s2t"; the bottom
one then s/b si-pwr-s2b.
slot-3 then s/b si-pwr-s3t and si-pwr-s3b.
slot-4 then is giga-s4t1 and giga-s4t2, ditto for the bottom one, named
giga-s4b1 and giga-s4b2: 1 partition to hold amanda's database and one
to serve as amanda's holding disk.
What's so meaningless to you that you can't see the utility in that?
That has not been explained, so please educate me as to why you think
it's worthless.
Thanks,
Andy
Cheers, Gene Heskett, CET.
On 2/17/24 00:35, tomas@tuxteam.de wrote:
On Fri, Feb 16, 2024 at 12:12:06PM -0800, David Christensen wrote:
On 2/15/24 17:44, gene heskett wrote:
[...]
Other than that the gui access delay (30+ seconds) problems I have did
NOT go away when I moved /home off the raid to another SSD [...]
I think at this point few are surprised by that. Last round of debugging
we pretty much eliminated disk access as likely cause of those delays.
The most hopeful cause for a candidate, IIRC, was some thingy deep in
the DE trying to access an unavailable resource.
Cheers
Is there some way to identify that roadblock?
It sure seems to me there ought to be a way to identify whatever it is
that is causing it.
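One way to go hunting for it; a sketch (the application name is
hypothetical):

# trace the app, timing every syscall, then look for the ~30 s ones
strace -f -T -o /tmp/app.trace some-desktop-app
awk -F'<' '$2+0 > 25' /tmp/app.trace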
Take care, stay warm and well, Tomas.