So Seagate and other makers are getting ready to introduce 20 TB HDDs
to the market. According to Seagate, its fastest drives are capable of
sustained 250 MB/s transfers (if you believe them). Even at that maximum
speed, it would take more than 22 hours to fill such a drive entirely!
Is that too much time, no matter how much capacity you're getting? Is
that basically unusable capacity? I know you can say that a drive that
large would be filled over a number of years, and no one would be
filling it all up in one go.
That's probably true in a home environment, but what about an
enterprise environment? What if that drive were part of a RAID array,
and one of the drives failed and needed to be replaced? In a parity
RAID, a replacement drive has to be rewritten in full, because the
parity data is striped across every drive in the array. Imagine
starting a resync on a replacement drive like that and waiting the
better part of a day for it to finish. That's long enough that it's
conceivable another drive in the array would fail too, before the
first one has had a chance to completely resync. So sure, you can get
that capacity with an HDD, but should you really be storing your data
on something that slow? HDDs can't get much faster.
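For anyone who wants to check these fill times, here is a quick sketch in Python. It assumes decimal units (1 TB = 10^12 bytes) and that the drive sustains Seagate's quoted 250 MB/s peak the whole time; real-world rebuilds are slower, since the array keeps serving I/O during a resync.

```python
# Back-of-the-envelope fill/rebuild time for the drive sizes
# discussed in this thread, at a fixed sustained write speed.

def fill_hours(capacity_tb, speed_mb_s=250):
    """Hours to write a drive end-to-end at a sustained speed."""
    total_bytes = capacity_tb * 10**12          # decimal terabytes
    seconds = total_bytes / (speed_mb_s * 10**6)  # decimal megabytes/s
    return seconds / 3600

for tb in (4, 6, 8, 16, 20):
    print(f"{tb:>2} TB: {fill_hours(tb):4.1f} hours")
```

At 250 MB/s this gives about 4.4 h for 4 TB, 8.9 h for 8 TB, 17.8 h for 16 TB, and 22.2 h for 20 TB, which lines up with the "nearly 18 hours" figure quoted for 16 TB later in the thread.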
20 terrorbites would be an "archive" drive with shingles.
The sensible drives go up to 16 TB? Even that is going to take ages for a scandisk.
On 5/16/2020 5:31 AM, pedro1492@lycos.com wrote:
> 20 terrorbites would be an "archive" drive with shingles.
> The sensible drives go up to 16 TB? Even that is going to take ages
> for a scandisk.

I think even 16 TB is way too large, shingles or not. It would still
take nearly 18 hours.

What size of HDD could a system practically handle right now? I think
perhaps the upper limit is 8 TB; that would take nearly 9 hours to
fill. 6 TB would take 6.5 hours, and 4 TB would take 4.5 hours.
On Sat, 16 May 2020 07:50:32 -0400, Yousuf Khan
<bbbl67@spammenot.yahoo.com> wrote:
> On 5/16/2020 5:31 AM, pedro1492@lycos.com wrote:
>> 20 terrorbites would be an "archive" drive with shingles.
>> The sensible drives go up to 16 TB? Even that is going to take ages
>> for a scandisk.

I haven't done a scandisk in quite a few years, and prior to that it
was another few years since the previous one. It's not something I
worry about, nor do I worry about how long it takes to fill a drive
with data. My primary concerns are how many SATA ports and drive bays
I have on hand. Those are the limiting factors.

> I think even 16 TB is way too large, shingles or not. It would still
> take nearly 18 hours.

We all have different needs. My server has 16 SATA ports and 15 drive
bays, so the OS lives on an SSD that lies on the floor of the case.
The data drives are 4 TB x5 and 2 TB x10, for a raw capacity of 40 TB,
formatted to 36.3 TB. I use DriveBender to pool all of the drives into
a single volume. Windows is happy with that. Since there are no SATA
ports or drive bays available, upgrading for more storage means
replacing one or more of the current drives. External drives aren't a
serious long-term option.
On 5/16/2020 5:02 PM, Mark Perkins wrote:
> On Sat, 16 May 2020 07:50:32 -0400, Yousuf Khan
> <bbbl67@spammenot.yahoo.com> wrote:
>> On 5/16/2020 5:31 AM, pedro1492@lycos.com wrote:
>>> 20 terrorbites would be an "archive" drive with shingles.
>>> The sensible drives go up to 16 TB? Even that is going to take
>>> ages for a scandisk.
> I haven't done a scandisk in quite a few years, and prior to that it
> was another few years since the previous one. It's not something I
> worry about, nor do I worry about how long it takes to fill a drive
> with data. My primary concerns are how many SATA ports and drive
> bays I have on hand. Those are the limiting factors.

Well, nobody does scandisks more than once in several years. I'm sure
Pedro meant that as an extreme example, but it's not unreasonable to
expect to do one occasionally.

>> I think even 16 TB is way too large, shingles or not. It would
>> still take nearly 18 hours.
> We all have different needs. My server has 16 SATA ports and 15
> drive bays, so the OS lives on an SSD that lies on the floor of the
> case. The data drives are 4 TB x5 and 2 TB x10, for a raw capacity
> of 40 TB, formatted to 36.3 TB. I use DriveBender to pool all of the
> drives into a single volume. Windows is happy with that. Since there
> are no SATA ports or drive bays available, upgrading for more
> storage means replacing one or more of the current drives. External
> drives aren't a serious long-term option.

But the point is, neither are internal ones these days, it seems.
Assuming these are mainly used in enterprise settings, they would
likely be part of a RAID array. Now if the RAID array is new and all
of these drives were put in new as part of the initial setup, <snip>

Now, looking up what Drive Bender is, it seems to be a virtual volume
concatenator. So it's not really a RAID: individual drives die, and
only the data on them is lost, unless it's backed up. So even in that
case, if one of these massive drives is part of your DB setup,
replacing that drive will be a major pain in the butt even while
restoring from backups. It really begs the question: how long are you
willing to wait for a drive to get repopulated, knowing that while
this is happening, it's also going to be maxing out the rest of your
system for however many hours the restore operation takes?

My point is that I think people will only be willing to wait a few
hours, perhaps 4 or 5 hours at most, before they say it's not worth
it, in a home environment. In an enterprise environment, that
tolerance may get extended out to 8 or 10 hours. So at some point,
all of this capacity is useless, because it's impractical to manage
at current drive and interface speeds.

If SSDs were cheaper per byte, then even SSDs running on a SATA
interface would still be viable at the same capacities we see HDDs at
right now. So a 16 or 20 TB SSD would be a usable device, but a 16 or
20 TB HDD isn't.