On 05/07/2021 12:30, Paul wrote:
RobH wrote:
On 04/07/2021 20:53, Bit Twister wrote:
On Sun, 4 Jul 2021 20:23:08 +0100, RobH wrote:
I'm thinking about getting a NUC which has an M.2 drive and space for a separate SSD.
My present setup is a desktop tower, large Fractal Design case, which has a 256GB SSD for the OS and boot, with a 300GB spinning disc for my /home folder. The /home folder is just under 200GB presently.
Is there a way of actually cloning or transferring the /home folder from the spinning disc to an M.2 drive on a NUC?
Personally I use rsync to transfer folders/files/partitions.
For example, as root, where the systems are named tower and nuc.
Examples assume you are on tower.

To push from tower to nuc:
cd /home/RobH
rsync -aAHSXxv --delete $PWD/ $LOGNAME@nuc:$PWD

To pull from nuc to tower:
cd /home/RobH
rsync -aAHSXxv --delete $LOGNAME@nuc:$PWD/ $PWD

Homework: man rsync
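For what it's worth, a cautious first pass is rsync's dry-run flag, which previews either transfer without writing anything. A sketch, using the same push example as above:

# -n (--dry-run) lists what would be transferred/deleted, but copies nothing
rsync -aAHSXxvn --delete $PWD/ $LOGNAME@nuc:$PWD

Drop the n once the listing looks right.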
Thanks, but how would I make the connection between the tower and the NUC? LAN cable, USB, or what else?
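For the rsync route, any link that gives the two machines IP connectivity will do; a LAN cable into the same switch or router is simplest. A quick sanity check, assuming the NUC answers to the hostname nuc and runs an SSH server:

# confirm the NUC is reachable and that SSH logins work
ping -c 3 nuc
ssh $LOGNAME@nuc true && echo "ssh ok"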
If your tower has a spare PCI Express x4 slot or a PCI Express x16 slot, you can use one of these carriers for an M.2. Electrically it's pretty simple, but it's for the PCI Express flavor of M.2. PCI Express slots have 3.3V power and 12V power; if a device needs +5V, then a tiny SMPS has to make +5V from +12V using a buck converter.
https://www.amazon.ca/UGREEN-Adapter-Support-Include-Screwdriver/dp/B07YFW5HBN
*******
There are also USB3.1 enclosures for M.2 that use a controller chip from JMicron or Realtek or VIA or the like. I can't remember who makes the chip, but these have been around for a year or two. They can do up to 1GB/sec (twice the regular USB3 rate).
One problem with these items is what interface they support. This might be the wrong one.
https://www.amazon.ca/USB3-1-Type-C-Enclosure-External-Adapter/dp/B07HC5QQNQ
"Does not support NVME SSD, does not support PCIE SSD, Does not
support
MSATA SSD, just support SATA based SSD."
Maybe this is the correct one.
https://www.amazon.ca/dp/B08C2THR25
"Compatible with M.2 NVMe PCIe M key,PCIe B&M key SSD.
NOT Compatible with M.2 SATA SSDs, M.2 PCIe AHCI SSDs,
M.2 PCIe devices such as WiFi and capture cards,
mSATA SSDs, and non-M.2 form factor SSDs.
Applicable
to sizes 2242 / 2260 / 2280 solid state drivers."
The problem with the enclosure is the possibility of overheating, or of the drive drawing more power than the USB3.1 cable can carry. That could be 5V @ 900mA, or 4.5W minus the power used by the controller chip itself. Since the I/O rate is somewhat limited compared to the flat-out rate of the M.2, the M.2 won't be drawing its full rated power in use; the draw will be somewhat less.
One of the advantages of the carrier-inside-the-PC concept is that there can be a bit of surface airflow for cooling. If you look at the carrier, there's no SMPS for power conversion on it. That implies the M.2 runs off 3.3V or 12V, and 3.3V is the more likely voltage. The USB enclosure, then, would need a 5V-to-3.3V SMPS switcher to make the necessary voltage.
I have no platform to test M.2 here, so have no reason to collect the USB adapters. One guy who was interested never mentioned the topic afterwards, so there's still no feedback from anyone about using them.
With the USB adapter, a UASP mass-storage-style driver should be sufficient to see it. For the PCIe carrier, there would need to be an NVMe driver of some sort in the OS. Windows 10 might install such a driver for you, though I don't know the details of who wrote it (Microsoft?). Linux has such a driver in the kernel. The PCIe carrier card concept does not mean you could boot off the carrier. It means that once an OS has booted off something else in your tower case, it can work on the carrier-carried drive if desired. It's a data-only situation suited to cloning tasks, and it does not require the BIOS to have an NVMe driver in BIOS-level code.
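On Linux, for instance, you can confirm the kernel's nvme driver has bound to a carrier-mounted drive with something like this (a sketch; device names will vary):

# show the NVMe controller and which kernel driver claimed it
lspci -k | grep -iA3 nvme
# list any NVMe block devices the kernel created
lsblk -d -o NAME,SIZE,MODEL | grep nvme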
There was a time when BIOS-level code, if the modules in there did not "recognize" hardware, would go out of its way to thwart usage. To give a grand example: on my stonkin' Optiplex 780 refurb, if you try to put a non-video card into the x16 slot (perfect for that NVMe PCIe carrier card), the BIOS will stop you in your tracks and NOT BOOT until you pull the fucking card out. I could not believe I was seeing 1990s behavior in such a machine, but there you go, mother knows best. There is one recorded instance of that happening on a retail motherboard, but the manufacturer apologized and fixed it in the second BIOS release so the slot would take anything. There is NO reason to be policing slots like that. PCI Express should be like toilet paper: it should just work. The idea is, the user should only have to worry about matching their drive to the carrier, and not whether the PC will "reject" the damn thing. I had tried to put a USB3.1 card in the slot, and it would not boot with that card present. Your tower will be better-behaved than that.
Paul
Thanks Paul.
I'll look for an adaptor on Amazon UK.
My board has 1 x PCIe 3.0 x16, 1 x PCIe 2.0 x16 (x4 mode) and 2 x PCIe 2.0 x1 (auto turn-off if the PCI Express 2.0 x16 slot is occupied).
Would any of those work with an adaptor?
RobH wrote:
My board has 1 x PCIe 3.0 x16, 1 x PCIe 2.0 x16 (x4 mode) and 2 x PCIe 2.0 x1 (auto turn-off if the PCI Express 2.0 x16 slot is occupied).
Would any of those work with an adaptor?
You can leave your video card in the x16 slot and use the x4-wired one for the carrier board if you want. Even the Optiplex would probably approve of a non-video slot being used for the card.
It doesn't matter that the M.2 is PCIe Rev3; if the carrier is plugged into a Rev2 slot, the transfer will still work. Even if the slot is Rev1.1, it should work.
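Once the carrier is seated, lspci can show what link actually got negotiated. A sketch, where 01:00.0 is a made-up device address you'd replace with the one lspci reports for the drive:

# LnkCap = what the device can do, LnkSta = the speed/width actually negotiated
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'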
The requirement is roughly, that the connector fit somewhere on
the motherboard, and that the motherboard doesn't have a warped
sense of humor about what is allowed in the slots.
There were some issues in the past with PCIe Rev1.1 slots on a VIA chipset board, where I think if a PCIe Rev2 video card was plugged in (and it tried to use Rev2 first), the VIA slot would not negotiate and the card would not be connected. It was something like that. But on most modern boards still in use, you're not likely to run into it. I have a retired Core2 board that may be able to demonstrate that bug if required.
Paul
What about this way:
Install a fresh copy of Ubuntu on my SSD, migrate /home to the M.2 drive, then restore a backup. I'm thinking that would give back all the files I have, like Base and Calc etc.
Any pitfalls doing it that way?
RobH wrote:
What about this way:
Install a fresh copy of Ubuntu on my SSD, migrate /home to the M.2 drive, then restore a backup. I'm thinking that would give back all the files I have, like Base and Calc etc.
Any pitfalls doing it that way?
Migrating to an M.2 has never come up here before, so I can't say I've tested such procedures. I don't typically cart /homes around, and while I assume they have version numbers on .config materials and the software supports migration, I haven't experimented with it.
The slash tree has your packages in it. Your home may have
private versions of packages you've unpacked or built. If you've
installed the OS "fresh", then the tree is initially devoid of
all the wonders your previous slash had accumulated.
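One hedge against that, on an Ubuntu-family system, is carrying a package list across. A sketch, assuming apt on both ends:

# on the old install: record the packages you installed by hand
apt-mark showmanual > my-packages.txt
# on the fresh install: queue the same set for installation
xargs sudo apt-get install -y < my-packages.txt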
Generally speaking, moving house is a giant pain in the ass, no matter how you do it. Just cloning the whole thing and snecking it over is one of the easier house-moving methods (for some value of lazy). If you have a Live USB stick, there'd be no problem booting a new NUC and finding a way to bring over the goods. But you need networking standards and network-level tools for the job, unless you insist on in-box cloning with adapters.
I recommend some level of "adapters" when you acquire unique hardware types in the computer room. My USB3-to-SATA cable has been used many, many times here, once SSDs appeared in the room. If I acquired something with PCIe lanes on it, I'd want a portable adapter for it too, but I'd also want to check that the adapter design was relatively bulletproof, so there weren't a lot of surprises.
My tech selections here are based on portability: what things can I be reasonably assured I can use on a second computer, if the first "croaks"? That's why I own adapters. The machine I'm typing on, for example, is acting up, but all the materials on it can easily be moved.
If I stick with FAT32, NTFS, and EXT4 partitions, I can use my Macrium Reflect CD to do backups and restores. I could use a nice GUI, back up the tower, restore to the NUC, done. Then expect the NUC to boot, and work from there (no guarantee it will be flawless, but pretty sweet if it does work). You could use Clonezilla, I suppose, to back up an image, but I've only used Clonezilla a couple of times, and because of the blasted interface, I couldn't tell you now what I was doing. All I remember is a complaint from Clonezilla that it "could not find partclone.dd".
Paul
Paul wrote:
I don't typically cart /homes around, and while I assume they have version numbers on .config materials and the software supports migration, I haven't experimented with it.
Not an issue on Linux. Example: everything on a typical install is on /dev/sda1, and we'll add an sdb1 drive and partition:
1) Add new drive and mount it temporarily
sudo mount /dev/sdb1 /mnt
2) Copy home to new drive preserving permissions
sudo cp -rp /home/* /mnt/
3) Edit fstab
sudo nano /etc/fstab
Append:
# new home
/dev/sdb1 /home ext4 defaults 0 2
Or, if you want to use the UUID, use blkid to find it:
sudo blkid | grep sd
then use the UUID for /dev/sdb1 in fstab as follows:
UUID=whatever-it-was-in-blkid /home ext4 defaults 0 2
4) Rename existing home
sudo mv /home /old-home
5) Make new home mount point
sudo mkdir /home
6) Mount the new home on new drive
sudo mount -a
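A quick check that the mount actually took (a sketch):

# /home should now show as /dev/sdb1, with the expected size and contents
findmnt /home
df -h /home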
Now you are on the new drive, without even rebooting. Check that all is correct.
If you messed up and didn't preserve ownership, the fix with multiple profiles is, for each user:
sudo chown --recursive peter:peter /home/peter
sudo chown --recursive paul:paul /home/paul
sudo chown --recursive mary:mary /home/mary
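If the new system's UIDs don't line up with the old, some files can also end up owned by a numeric ID with no matching account. A sketch to spot those:

# list files whose owner or group no longer maps to a known account
sudo find /home -nouser -o -nogroup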
Cleanup:
sudo umount /mnt
# <fudd>"Be vewy, vewy, kiahful"</fudd>
# step 4 left the old copy at /old-home, not /home/old-home
sudo rm -rf /old-home
Now, for the OP: won't access via the network be much slower than a drive locally on the system, even if the NAS uses an SSD?
On 05-07-2021 23:45, Jonathan N. Little wrote:
2) Copy home to new drive preserving permissions
sudo cp -rp /home/* /mnt/
You might use:
sudo rsync -aAHXSnv --numeric-ids --exclude='.cache/**' /home/ /mnt
which preserves all ownership and permissions (note the n flag makes this a dry run).
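When the dry run looks right, the same command without the n does the real copy, and a recursive diff makes a quick spot check. A sketch, assuming the new partition is still mounted at /mnt:

sudo rsync -aAHXSv --numeric-ids --exclude='.cache/**' /home/ /mnt
# print only the names of files that differ (the excluded .cache dirs will show up)
sudo diff -rq /home /mnt | head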