• Ambient temperature control

    From Don Y@21:1/5 to All on Sun Jun 30 18:14:32 2024
    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down? And, is there a sweet spot (as there is a cost to lowering the temperature)?

    Also, is there any advantage to minimizing the hysteresis between
    the ACTUAL operating temperature extremes in such a control strategy
    (given that lower hysteresis usually comes at an increased cost)?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bill Sloman@21:1/5 to Don Y on Mon Jul 1 13:16:54 2024
    On 1/07/2024 11:14 am, Don Y wrote:
    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down?  And, is there a sweet spot (as there is a cost to lowering the temperature)?

    It's generally figured that decay and degeneration proceed more slowly
    at lower temperatures. A popular rule of thumb is that a 10C temperature
    drop halves the rate.

    The Arrhenius equation says it depends on the activation energy of the
    process; a factor of two per 10C would correspond to an activation
    energy of 52,900 J/mol (52.9 kJ/mol), which is pretty ordinary.
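    The rule of thumb can be checked against the Arrhenius equation
    directly. A minimal sketch (the function name and the temperature pair
    are illustrative, not from any cited standard):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def acceleration_factor(ea_j_per_mol, t_low_c, t_high_c):
    """Arrhenius acceleration factor between two temperatures (deg C)."""
    t_low = t_low_c + 273.15    # convert to kelvin
    t_high = t_high_c + 273.15
    return math.exp((ea_j_per_mol / R) * (1.0 / t_low - 1.0 / t_high))

# With Ea = 52.9 kJ/mol, warming from 25C to 35C roughly doubles the rate:
factor = acceleration_factor(52.9e3, 25.0, 35.0)
```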

    Also, is there any advantage to minimizing the hysteresis between
    the ACTUAL operating temperature extremes in such a control strategy
    (given that lower hysteresis usually comes at an increased cost)?

    Only primitive (bang-bang) control schemes have hysteresis. Learn about
    proportional-integral-derivative (PID) control (and its more advanced
    variations).
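    For contrast with a thermostat's deadband, a minimal positional PID
    loop might look like the sketch below (structure and gains are
    illustrative only, and untuned):

```python
class PID:
    """Minimal positional PID controller (illustrative, untuned)."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.dt = dt              # fixed sample period, seconds
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement):
        """Return a drive signal; positive pushes toward the setpoint."""
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

    Because the output varies continuously with the error, a well-tuned
    loop settles at the setpoint instead of limit-cycling between two
    thresholds the way a bang-bang thermostat does.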

    --
    Bill Sloman, Sydney




  • From Martin Brown@21:1/5 to Don Y on Mon Jul 1 12:24:29 2024
    On 01/07/2024 02:14, Don Y wrote:
    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down?  And, is there a sweet spot (as there is a cost to lowering the temperature)?

    There can be for some high performance low level OPamps. Deliberately
    running them as cold as is allowed helps take the LF noise floor down
    and by more than you would predict from Johnson noise. ISTR there was a
    patent for doing this back in the 1980's. Prior to that they tended to
    heat the front end to obtain temperature stability and low drift.

    https://ppubs.uspto.gov/dirsearch-public/print/downloadPdf/4883957

    Made possible with the advent of decent solid state TECs.

    Also, is there any advantage to minimizing the hysteresis between
    the ACTUAL operating temperature extremes in such a control strategy
    (given that lower hysteresis usually comes at an increased cost)?

    Depends how temperature-sensitive the thing is that you are protecting.
    In the example I recall, they were aiming for medium-term-stable 6 sig
    fig measurements with the lowest possible noise.

    --
    Martin Brown

  • From Don Y@21:1/5 to Martin Brown on Mon Jul 1 06:41:26 2024
    On 7/1/2024 4:24 AM, Martin Brown wrote:
    On 01/07/2024 02:14, Don Y wrote:
    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down?  And, is there a sweet spot (as there is a cost to lowering the
    temperature)?

    There can be for some high performance low level OPamps. Deliberately running them as cold as is allowed helps take the LF noise floor down and by more than
    you would predict from Johnson noise. ISTR there was a patent for doing this back in the 1980's. Prior to that they tended to heat the front end to obtain temperature stability and low drift.

    https://ppubs.uspto.gov/dirsearch-public/print/downloadPdf/4883957

    Made possible with the advent of decent solid state TECs.

    We don't design products (industrial/consumer) that are "finicky" -- as
    it leads to higher TCOs. You don't want to need to control the environment *or* have "skilled tradesmen" on staff to maintain/assure correct
    performance.

    The most common example (that I can think of) where temperature is
    controlled FBO the electronics would be datacenters. But, from the
    research I've done, there, they simply set a desired temperature for
    the cold aisle and largely ignore the resulting hot aisle temperature
    (except to ensure it doesn't climb out-of-bounds). I.e., they
    don't close the loop on the hot aisle to control the cold aisle's
    setpoint (cascaded control).

    And, they don't get the cold aisle "as cold as possible", so they
    acknowledge there are diminishing returns in doing so -- likely
    cheaper to just plan on a (potentially) shorter upgrade cycle
    than to waste electricity trying to eke out a bit more life.

    Interestingly, I can't find anything other than "lore" to
    explain why a *particular* cold aisle temperature is chosen.
    Amusing to see how much folks DON'T know about the science
    they apply!

    When I designed my disk sanitizer, I did a fair bit of research
    regarding temperature effects on drives -- because we process a
    shitload (thousands) of *used* drives annually, and you don't want to
    reuse a drive that has an increased chance of failure (based on
    its previous environment, SMART data or observations while exercising
    it). The old "10 degree C" saw proved to be totally inappropriate,
    *there*.

    OTOH, I suspect it *is* worth noting for power supplies (as I
    see most failures being toasted power supplies in otherwise
    "healthy" products). I suspect power *cycling* is a culprit, there
    as I've seen failed solder joints where it looked like repeated
    thermal expansion had led to the failure.
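    That suspicion lines up with the usual fatigue models: solder-joint
    life under thermal cycling is commonly described by a Coffin-Manson
    style power law, with cycles-to-failure proportional to the swing
    raised to a negative exponent. A hedged sketch (the exponent of 2 is a
    commonly quoted ballpark for solder, not a measured value):

```python
def relative_cycle_life(delta_t_small, delta_t_large, exponent=2.0):
    """Coffin-Manson power law: ratio of cycles-to-failure for two
    per-cycle temperature swings (exponent ~2 assumed for solder)."""
    return (delta_t_large / delta_t_small) ** exponent

# Halving the per-cycle temperature swing roughly quadruples the
# expected number of power cycles before a joint cracks:
gain = relative_cycle_life(20.0, 40.0)
```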

    Also, is there any advantage to minimizing the hysteresis between
    the ACTUAL operating temperature extremes in such a control strategy
    (given that lower hysteresis usually comes at an increased cost)?

    Depends how temperature sensitive the thing is that you are protecting. The example I recall they were aiming for medium term stable 6 sig fig measurements
    with the lowest possible noise.

    I've needed to control temperature in applications where it was
    key to the *process* being controlled. E.g., monitoring exhaust
    air temperature to determine the "state" of the bed and a cascade
    loop on the inlet air handler to drive that to a desired state.

    But, there, you have lots of money for the equipment and can buy good/precise/fast control with things like face-and-bypass as
    the primary controlled variable (so the control loop for the
    heater/chiller can be cruder and more energy efficient).

    "In the small", refrigeration is the only practical means of
    lowering ambient temperatures. And, that adds to operating costs.
    If you can tolerate a wider deadband then the cooling cost
    can be lower (e.g., cool to X degrees and let it *soak*, there,
    before letting it warm to Y degrees instead of foolishly
    trying to maintain the environment at some Z>X and <Y).
    As you likely have LESS ability to precisely size the HVAC
    to fit such a small load, deadband becomes a key consequence
    of that selection process.
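    The cool-and-soak strategy above amounts to bang-bang control with a
    deliberately wide deadband. A sketch, with the 10C/50C example
    thresholds baked in as illustrative defaults:

```python
def cooling_command(temp_c, cooling_on, low=10.0, high=50.0):
    """Wide-deadband bang-bang cooling: pull the space down to `low`,
    coast while it soaks and drifts back up, and only restart the
    refrigeration once it reaches `high` again."""
    if temp_c >= high:
        return True    # too warm: start (or keep) cooling
    if temp_c <= low:
        return False   # cold enough: coast
    return cooling_on  # inside the deadband: hold the previous state
```

    The wider the X-to-Y band, the fewer compressor starts per day, at
    the price of a larger temperature excursion on the equipment.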

    [Gotta wonder why data centers in northern latitudes don't
    exploit outside air more aggressively during the winter
    months!]

  • From legg@21:1/5 to blockedofcourse@foo.invalid on Mon Jul 1 10:34:46 2024
    On Sun, 30 Jun 2024 18:14:32 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down? And, is there a sweet spot (as there is a cost to lowering the temperature)?

    If all you're thinking of is MTBF, adding the complexity of an active
    cooling element is a big step in the wrong direction for the system.

    Reducing the thermal impedance of the source, to ambient is the
    usual way to go, when addressing a specific aging factor.

    https://ve3ute.ca/2000a.html

    If you're thinking of performance, It's cheaper and more reliable
    to concentrate on reducing the temperature of the point source, not
    the rest of the planet.

    RL

  • From john larkin @21:1/5 to legg on Mon Jul 1 07:46:52 2024
    On Mon, 01 Jul 2024 10:34:46 -0400, legg <legg@nospam.magma.ca> wrote:

    On Sun, 30 Jun 2024 18:14:32 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down? And, is there a sweet spot (as there is a cost to lowering the temperature)?

    If all you're thinking of is MTBF, adding the complexity of an active
    cooling element is a big step in the wrong direction for the system.

    Reducing the thermal impedance of the source, to ambient is the
    usual way to go, when addressing a specific aging factor.

    https://ve3ute.ca/2000a.html

    If you're thinking of performance, It's cheaper and more reliable
    to concentrate on reducing the temperature of the point source, not
    the rest of the planet.

    RL

    Tubes? The cathodes fail eventually. Reduce filament voltage and
    suffer the reduced gain. Better yet, don't use tubes.

    But for most parts that dissipate power, the big win is to have some
    air flow. A fan can reduce the theta of your parts by 2:1.

    Nowadays, parts are very good, with failure rates in the ballpark of
    one failure per billion hours -- the Bellcore and MIL-HDBK-217 FIT numbers.
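    Those FIT figures translate to MTBF mechanically: one FIT is one
    failure per 10^9 device-hours, and in a series model the part rates
    simply add. A sketch with made-up part counts:

```python
def fit_to_mtbf_hours(fit):
    """Convert a FIT rate (failures per 1e9 device-hours) to MTBF hours."""
    return 1e9 / fit

def system_fit(part_fits):
    """Series reliability model: part failure rates (in FITs) add."""
    return sum(part_fits)

# One 1-FIT part has a calculated MTBF of a billion hours; a board of
# 500 such parts drops to 2 million hours (still over 200 years):
board_mtbf = fit_to_mtbf_hours(system_fit([1.0] * 500))
```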

  • From Phil Hobbs@21:1/5 to Martin Brown on Mon Jul 1 12:24:42 2024
    On 2024-07-01 07:24, Martin Brown wrote:
    On 01/07/2024 02:14, Don Y wrote:
    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down?  And, is there a sweet spot (as there is a cost to lowering the
    temperature)?

    There can be for some high performance low level OPamps. Deliberately
    running them as cold as is allowed helps take the LF noise floor down
    and by more than you would predict from Johnson noise. ISTR there was a patent for doing this back in the 1980's. Prior to that they tended to
    heat the front end to obtain temperature stability and low drift.

    BITD you tended to get popcorn noise from ions migrating around the
    surface and in deposited (rather than thermal) oxide. Cooling helped
    that a lot. Nowadays processes are generally clean enough that you
    don't get a lot of mobile ions.

    https://ppubs.uspto.gov/dirsearch-public/print/downloadPdf/4883957

    Made possible with the advent of decent solid state TECs.

    Also, is there any advantage to minimizing the hysteresis between
    the ACTUAL operating temperature extremes in such a control strategy
    (given that lower hysteresis usually comes at an increased cost)?

    Depends how temperature sensitive the thing is that you are protecting.
    The example I recall they were aiming for medium term stable 6 sig fig measurements with the lowest possible noise.

    You don't want to use a thermostat with TECs anyway--they die very
    rapidly, especially the soft-solder ones (Laird/Melcor).

    Cheers

    Phil Hobbs

    --
    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC / Hobbs ElectroOptics
    Optics, Electro-optics, Photonics, Analog Electronics
    Briarcliff Manor NY 10510

    http://electrooptical.net
    http://hobbs-eo.com

  • From Don Y@21:1/5 to legg on Mon Jul 1 13:44:39 2024
    On 7/1/2024 7:34 AM, legg wrote:
    On Sun, 30 Jun 2024 18:14:32 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down? And, is there a sweet spot (as there is a cost to lowering the
    temperature)?

    If all you're thinking of is MTBF, adding the complexity of an active
    cooling element is a big step in the wrong direction for the system.

    Shifting the reliability burden into a (relatively) low-tech, ubiquitous
    subsystem allows failures to be handled by "non-technical" people.
    It also allows for easier redundancy -- you can add another AHU "in
    parallel" with an existing unit a lot easier than redesigning the
    electronic system to be more reliable over a larger operating range
    of temperatures.

    Reducing the thermal impedance of the source, to ambient is the
    usual way to go, when addressing a specific aging factor.

    If ambient approaches the limits of the design, then what?

    You design something to be able to operate at 50C (on paper).
    It gets deployed *at* 50C. What sort of failure rate do you expect
    at that elevated temperature vs. operating that same piece of kit
    at 30C by introducing active cooling? (the assumption being that said
    cooling can be maintained/repaired by a local run-of-the-mill agency)

    The question tries to address that issue -- and, the consequences of
    how "well" you strive to maintain a "better" operating environment.

    E.g., cooling the environment to 10C and then letting it creep back
    up to 50C before repeating the cycle would be different than keeping
    the device "at" 30C.

    Why set a cold aisle temperature of 20C and not 30C? 40C? Why not
    operate the devices at their specified ambient limits?

    Continuous Operation: 10C to 35C, 10% to 80% relative humidity (RH).
    10% of annual operating hours: 5C to 40C, 5% to 85%RH. 1% of annual
    operating hours: -5C to 45C, 5% to 90%RH.

    I.e., the cited device CAN operate at 45C. But, at what cost
    (reliability)?
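    For a rough answer to "at what cost", the 10C-doubling rule of thumb
    from earlier in the thread can be applied directly (with the caveat,
    raised above, that it may not fit the dominant failure mechanism):

```python
def relative_mtbf(t_cool_c, t_hot_c, doubling_deg_c=10.0):
    """Rule-of-thumb MTBF ratio: life doubles for every `doubling_deg_c`
    drop in ambient. Purely the '10 degree' saw, not a measured law."""
    return 2.0 ** ((t_hot_c - t_cool_c) / doubling_deg_c)

# Per that rule, actively cooling a 50C ambient down to 30C would buy
# a 4x MTBF improvement -- if, and only if, the rule actually applies:
gain = relative_mtbf(30.0, 50.0)
```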

    https://ve3ute.ca/2000a.html

    If you're thinking of performance, It's cheaper and more reliable
    to concentrate on reducing the temperature of the point source, not
    the rest of the planet.

    RL

  • From Bill Sloman@21:1/5 to Phil Hobbs on Tue Jul 2 13:22:13 2024
    On 2/07/2024 2:24 am, Phil Hobbs wrote:
    On 2024-07-01 07:24, Martin Brown wrote:
    On 01/07/2024 02:14, Don Y wrote:
    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down?  And, is there a sweet spot (as there is a cost to lowering the
    temperature)?

    There can be for some high performance low level OPamps. Deliberately
    running them as cold as is allowed helps take the LF noise floor down
    and by more than you would predict from Johnson noise. ISTR there was
    a patent for doing this back in the 1980's. Prior to that they tended
    to heat the front end to obtain temperature stability and low drift.

    BITD you tended to get popcorn noise from ions migrating around the
    surface and in deposited (rather than thermal) oxide.  Cooling helped
    that a lot.  Nowadays processes are generally clean enough that you
    don't get a lot of mobile ions.

    https://ppubs.uspto.gov/dirsearch-public/print/downloadPdf/4883957

    Made possible with the advent of decent solid state TECs.

    Also, is there any advantage to minimizing the hysteresis between
    the ACTUAL operating temperature extremes in such a control strategy
    (given that lower hysteresis usually comes at an increased cost)?

    Depends how temperature sensitive the thing is that you are
    protecting. The example I recall they were aiming for medium term
    stable 6 sig fig measurements with the lowest possible noise.

    You don't want to use a thermostat with TECs anyway--they die very
    rapidly, especially the soft-solder ones (Laird/Melcor).


    Anybody who tries to use bang-bang control with a TEC will run into
    that. TECs are non-linear devices, and work best when the current
    through them doesn't vary much.

    Sloman A.W., Buggs P., Molloy J., and Stewart D. “A
    microcontroller-based driver to stabilise the temperature of an optical
    stage to 1mK in the range 4C to 38C, using a Peltier heat pump and a
    thermistor sensor” Measurement Science and Technology, 7 1653-64 (1996)

    used a TEC, and the product didn't die in the field, at least not over
    the roughly ten years it was on the market. My boss had had a run-in
    with bang-bang control of TECs in another (earlier) product, and that
    hadn't gone well, so we were well aware of the problem (and the paper
    does go into it, briefly).

    --
    Bill Sloman, Sydney


  • From legg@21:1/5 to All on Tue Jul 2 10:30:36 2024
    On Mon, 1 Jul 2024 13:44:39 -0700, Don Y <blockedofcourse@foo.invalid>
    wrote:

    On 7/1/2024 7:34 AM, legg wrote:
    On Sun, 30 Jun 2024 18:14:32 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down? And, is there a sweet spot (as there is a cost to lowering the
    temperature)?

    If all you're thinking of is MTBF, adding the complexity of an active
    cooling element is a big step in the wrong direction for the system.

    Shifting the reliability burden into a (relatively) low-tech, ubiquitous
    subsystem allows failures to be maintained by "non-technical" people.
    It also allows for easier redundancy -- you can add another AHU "in parallel"
    with an existing unit a lot easier than redesigning the electronic
    system to be more reliable over a larger operating range of temperatures.

    Reducing the thermal impedance of the source, to ambient is the
    usual way to go, when addressing a specific aging factor.

    If ambient approaches the limits of the design, then what?

    You design something to be able to operate at 50C (on paper).
    It gets deployed *at* 50C. What sort of failure rate do you expect
    at that elevated temperature vs. operating that same piece of kit
    at 30C by introducing active cooling? (the assumption being that said
    cooling can be maintained/repaired by a local run-of-the-mill agency)

    The question tries to address that issue -- and, the consequences of
    how "well" you strive to maintain a "better" operating environment.

    E.g., cooling the environment to 10C and then letting it creep back
    up to 50C before repeating the cycle would be different than keeping
    the device "at" 30C.

    Why set a cold aisle temperature of 20C and not 30C? 40C? Why not
    operate the devices at their specified ambient limits?

    Continuous Operation: 10C to 35C, 10% to 80% relative humidity (RH).
    10% of annual operating hours: 5C to 40C, 5% to 85%RH. 1% of annual
    operating hours: -5C to 45C, 5% to 90%RH.

    I.e., the cited device CAN operate at 45C. But, at what cost
    (reliability)?

    https://ve3ute.ca/2000a.html

    If you're thinking of performance, It's cheaper and more reliable
    to concentrate on reducing the temperature of the point source, not
    the rest of the planet.

    RL

    What's the mtbf of a fan? a compressor? a pump?
    . . . . or a clamp and a block of aluminum?
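    The question can be made concrete: in a series model, adding a fan (or
    compressor, or pump) means its failure rate adds to the system's. A
    sketch with assumed, illustrative MTBF figures (not from the thread):

```python
def series_mtbf(mtbf_hours):
    """Combined MTBF of independent items in series: failure rates
    (1/MTBF) add, so the weakest item dominates the total."""
    return 1.0 / sum(1.0 / m for m in mtbf_hours)

# Assumed numbers: electronics at 1,000,000 h, a muffin fan at 70,000 h.
# The fan fails first -- but it is the cheap, locally serviceable part:
combined = series_mtbf([1_000_000.0, 70_000.0])
```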

    Ambient and component temperatures are freely obtained and carefully
    controlled elements in mtbf documentation recording methods.

    The former requires $$ equipment.

    RL

  • From legg@21:1/5 to All on Tue Jul 2 10:24:58 2024
    On Mon, 01 Jul 2024 07:46:52 -0700, john larkin
    <jlarkin_highland_tech> wrote:

    On Mon, 01 Jul 2024 10:34:46 -0400, legg <legg@nospam.magma.ca> wrote:

    On Sun, 30 Jun 2024 18:14:32 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down? And, is there a sweet spot (as there is a cost to lowering the temperature)?

    If all you're thinking of is MTBF, adding the complexity of an active
    cooling element is a big step in the wrong direction for the system.

    Reducing the thermal impedance of the source, to ambient is the
    usual way to go, when addressing a specific aging factor.

    https://ve3ute.ca/2000a.html

    If you're thinking of performance, It's cheaper and more reliable
    to concentrate on reducing the temperature of the point source, not
    the rest of the planet.

    RL

    Tubes? The cathodes fail eventually. Reduce filament voltage and
    suffer the reduced gain. Better yet, don't use tubes.

    But for most parts that dissipate power, the big win is to have some
    air flow. A fan can reduce the theta of your parts by 2:1.

    Nowadays, parts are very good, with failure rates in the ballpark of
    one failure per billion hours, the Bellcore and MIL217 FITS numbers.

    This was an example of a demonstrated and documented failure mode
    in a specific component (glass electrolysis) that is/was largely
    ignored by the general user.

    If you know what the specific aging mechanism is that you're
    trying to address, your methods of improving mtbf will be more
    effective.

    RL

  • From legg@21:1/5 to All on Tue Jul 2 10:33:50 2024
    On Mon, 1 Jul 2024 06:41:26 -0700, Don Y <blockedofcourse@foo.invalid>
    wrote:

    On 7/1/2024 4:24 AM, Martin Brown wrote:
    On 01/07/2024 02:14, Don Y wrote:
    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down? And, is there a sweet spot (as there is a cost to lowering the
    temperature)?

    There can be for some high performance low level OPamps. Deliberately running
    them as cold as is allowed helps take the LF noise floor down and by more than
    you would predict from Johnson noise. ISTR there was a patent for doing this back in the 1980's. Prior to that they tended to heat the front end to obtain
    temperature stability and low drift.

    https://ppubs.uspto.gov/dirsearch-public/print/downloadPdf/4883957

    Made possible with the advent of decent solid state TECs.

    We don't design products (industrial/consumer) that are "finicky" -- as
    it leads to higher TCOs. You don't want to need to control the environment
    *or* have "skilled tradesmen" on staff to maintain/assure correct
    performance.

    The most common example (that I can think of) where temperature is
    controlled FBO the electronics would be datacenters. But, from the
    research I've done, there, they simply set a desired temperature for
    the cold aisle and largely ignore the resulting hot aisle temperature
    (except to ensure it doesn't climb out-of-bounds). I.e., they
    don't close the loop on the hot aisle to control the cold aisle's
    setpoint (cascaded control).

    And, they don't get the cold aisle "as cold as possible" so they
    acknowledge there are diminishing returns in doing so -- likely
    cheaper to just plan on a (potentially) shorter upgrade cycle
    than to waste electricity trying to eke out a bit more life.

    Interestingly, I can't find anything other than "lore" to
    explain why a *particular* cold aisle temperature is chosen.
    Amusing to see how much folks DON'T know about the science
    they apply!

    When I designed my disk sanitizer, I did a fair bit of research
    regarding temperature effects on drives -- because we process a
    shitload (thousands) of *used* drives, annually and you don't want to
    reuse a drive that has an increased chance of failure (based on
    its previous environment, SMART data or observations while exercising
    it). The old "10 degree C" saw proved to be totally inappropriate,
    *there*.

    OTOH, I suspect it *is* worth noting for power supplies (as I
    see most failures being toasted power supplies in otherwise
    "healthy" products). I suspect power *cycling* is a culprit, there
    as I've seen failed solder joints where it looked like repeated
    thermal expansion had led to the failure.

    Also, is there any advantage to minimizing the hysteresis between
    the ACTUAL operating temperature extremes in such a control strategy
    (given that lower hysteresis usually comes at an increased cost)?

    Depends how temperature sensitive the thing is that you are protecting. The example I recall they were aiming for medium term stable 6 sig fig measurements
    with the lowest possible noise.

    I've needed to control temperature in applications where it was
    key to the *process* being controlled. E.g., monitoring exhaust
    air temperature to determine the "state" of the bed and a cascade
    loop on the inlet air handler to drive that to a desired state.

    But, there, you have lots of money for the equipment and can buy
    good/precise/fast control with things like face-and-bypass as
    the primary controlled variable (so the control loop for the
    heater/chiller can be cruder and more energy efficient).

    "In the small", refrigeration is the only practical means of
    lowering ambient temperatures. And, that adds to operating costs.
    If you can tolerate a wider deadband then the cooling cost
    can be lower (e.g., cool to X degrees and let it *soak*, there,
    before letting it warm to Y degrees instead of foolishly
    trying to maintain the environment at some Z>X and <Y).
    As you likely have LESS ability to precisely size the HVAC
    to fit such a small load, deadband becomes a key consequence
    of that selection process.

    [Gotta wonder why data centers in northern latitudes don't
    exploit outside air more aggressively during the winter
    months!]

    What are the HVAC costs in data processing and server facilities?

    That's just to maintain ambient <40C.

    RL

  • From Don Y@21:1/5 to legg on Tue Jul 2 08:26:24 2024
    On 7/2/2024 7:30 AM, legg wrote:
    What's the mtbf of a fan? a compressor? a pump?
    . . . . or a clamp and a block of aluminum?

    As long as it isn't significantly worse than the impact of NOT
    having it, you don't care -- because some (relatively unskilled)
    local contractor can fix those things. You don't have to
    hire a skilled member of staff to be on-hand to deal with the
    "more sophisticated" technology's potential failures.

    I'd much rather have an HVAC guy come in and repair the AHU in
    the datacenter -- even if it was an annual event -- than have
    to risk servers crashing or having to be replaced (and the
    data recovered). The former is a "cheap", ubiquitous skillset;
    the latter considerably costlier and critical.

    Ambient and component temperatures are freely obtained and carefully controlled elements in mtbf documentation recording methods.

    The former requires $$ equipment.

    RL

  • From Don Y@21:1/5 to legg on Tue Jul 2 08:21:44 2024
    On 7/2/2024 7:33 AM, legg wrote:
    [Gotta wonder why data centers in northern latitudes don't
    exploit outside air more agressively during the winter
    months!]

    What are the HVAC costs in data processing and server facilities?

    That's just to maintain ambient <40C.

    But datacenters have big heat *generating* loads that they are
    trying to offset. So, you would expect to have sufficient
    cooling (and the costs thereof) to remove that "added" load.

    Imagine setting a device out in the desert.
    Or, *in* the passenger compartment of a parked car.
    Or, in an uninsulated attic.

    The device's dissipation isn't a significant factor in the
    ambient temperature that it experiences -- the "ambient volume"
    around it is sufficiently large that it doesn't add to the
    problem. Adding a *fan* (in the device) won't do squat to
    improve the situation.

    If you're just trying to control the temperature of a component,
    you have different options than if you are trying to control
    the ambient temperature that a "device" experiences.

    If you're trying to ensure the ambient for the device is such
    that it "encourages" reliability, then you have a different
    problem. Think of the environment your KWHr meter experiences;
    the builder chose its location -- likely without any concern
    over sun exposure, etc.

    In the 60's, one could state that you needed a particular
    environment to operate a particular device (e.g., "a computer
    room" for the computer). Nowadays, your device has to tolerate
    the environment (e.g., factory floor -- even in factories that
    have few "organic" occupants that could bias the ambient
    towards a more comfortable level). *Or*, modify the "local"
    environment to a degree that lets it achieve its performance/longevity
    goals.

  • From john larkin @21:1/5 to legg on Tue Jul 2 09:19:45 2024
    On Tue, 02 Jul 2024 10:24:58 -0400, legg <legg@nospam.magma.ca> wrote:

    On Mon, 01 Jul 2024 07:46:52 -0700, john larkin
    <jlarkin_highland_tech> wrote:

    On Mon, 01 Jul 2024 10:34:46 -0400, legg <legg@nospam.magma.ca> wrote:

    On Sun, 30 Jun 2024 18:14:32 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down? And, is there a sweet spot (as there is a cost to lowering the temperature)?

    If all you're thinking of is MTBF, adding the complexity of an active
    cooling element is a big step in the wrong direction for the system.

    Reducing the thermal impedance of the source, to ambient is the
    usual way to go, when addressing a specific aging factor.

    https://ve3ute.ca/2000a.html

    If you're thinking of performance, It's cheaper and more reliable
    to concentrate on reducing the temperature of the point source, not
    the rest of the planet.

    RL

    Tubes? The cathodes fail eventually. Reduce filament voltage and
    suffer the reduced gain. Better yet, don't use tubes.

    But for most parts that dissipate power, the big win is to have some
    air flow. A fan can reduce the theta of your parts by 2:1.

    Nowadays, parts are very good, with failure rates in the ballpark of
    one failure per billion hours, the Bellcore and MIL217 FITS numbers.

    This was an example of a demonstrated and documented failure mode
    in a specific component (glass electrolysis) that is/was largely
    ignored by the general user.

    If you know what the specific aging mechanism is that you're
    trying to address, your methods of improving mtbf will be more
    effective.

    RL

    Given non-junk products from you-know-where, most electronics failures
    are not from classic parts failure. Few real products, in the field,
    get close to the standard-calculated-method MTBF rates. They die from
    bad design, bad packaging and soldering, or external effects like ESD.

    Sometimes one of our customers will ask for a calculated MTBF, so we
    dutifully crank one out. We both know that the number is pretty much
    fantasy.
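    [For anyone who has never had to crank one out: the parts-count method
    in the MIL-HDBK-217 / Bellcore spirit is just summing per-part failure
    rates in FITs (failures per 10^9 hours) and inverting. A rough sketch;
    the part names and FIT values are invented for illustration:]

    ```python
    # Parts-count MTBF for a series system with constant failure rates:
    # total failure rate = sum of per-part FITs; MTBF = 1e9 / total_FITs.
    # FIT values here are made up, not from any handbook table.

    parts_fits = {
        "microcontroller": 10.0,
        "dc_dc_converter": 25.0,
        "electrolytic_cap": 5.0,
        "connector": 2.0,
    }

    total_fits = sum(parts_fits.values())  # 42 FITs for the assembly
    mtbf_hours = 1e9 / total_fits          # ~23.8 million hours

    print(total_fits, mtbf_hours)
    ```

    [Which is exactly why the number reads as fantasy: the model assumes
    constant, independent, intrinsic part failures, and ignores the design,
    soldering and ESD failures that dominate in the field.]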

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From legg@21:1/5 to All on Wed Jul 3 09:05:26 2024
    On Tue, 2 Jul 2024 08:26:24 -0700, Don Y <blockedofcourse@foo.invalid>
    wrote:

    On 7/2/2024 7:30 AM, legg wrote:
    What's the mtbf of a fan? a compressor? a pump?
    . . . . or a clamp and a block of aluminum?

    As long as it isn't significantly worse than the impact of NOT
    having it, you don't care -- because some (relatively unskilled)
    local contractor can fix those things. You don't have to
    hire a skilled member of staff to be on-hand to deal with the
    "more sophisticated" technology's potential failures.

    I'd much rather have an HVAC guy come in and repair the AHU in
    the datacenter -- even if it was an annual event -- than have
    to risk servers crashing or having to be replaced (and the
    data recovered). The former is a "cheap", ubiquitous skillset;
    the latter considerably costlier and critical.

    You know what a brass tack is?

    RL

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From legg@21:1/5 to All on Wed Jul 3 09:03:12 2024
    On Tue, 02 Jul 2024 09:19:45 -0700, john larkin
    <jlarkin_highland_tech> wrote:

    On Tue, 02 Jul 2024 10:24:58 -0400, legg <legg@nospam.magma.ca> wrote:

    On Mon, 01 Jul 2024 07:46:52 -0700, john larkin
    <jlarkin_highland_tech> wrote:

    On Mon, 01 Jul 2024 10:34:46 -0400, legg <legg@nospam.magma.ca> wrote:

    On Sun, 30 Jun 2024 18:14:32 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    Assuming you can keep a device in its "normal operating (temperature)
    range", how advantageous is it (think MTBF) to drive that ambient
    down? And, is there a sweet spot (as there is a cost to lowering the
    temperature)?

    If all you're thinking of is MTBF, adding the complexity of an active
    cooling element is a big step in the wrong direction for the system.

    Reducing the thermal impedance of the source, to ambient is the
    usual way to go, when addressing a specific aging factor.

    https://ve3ute.ca/2000a.html

    If you're thinking of performance, It's cheaper and more reliable
    to concentrate on reducing the temperature of the point source, not
    the rest of the planet.

    RL

    Tubes? The cathodes fail eventually. Reduce filament voltage and
    suffer the reduced gain. Better yet, don't use tubes.

    But for most parts that dissipate power, the big win is to have some
    air flow. A fan can reduce the theta of your parts by 2:1.

    Nowadays, parts are very good, with failure rates in the ballpark of
    one failure per billion hours, the Bellcore and MIL217 FITS numbers.

    This was an example of a demonstrated and documented failure mode
    in a specific component (glass electrolysis) that is/was largely
    ignored by the general user.

    If you know what the specific aging mechanism is that you're
    trying to address, your methods of improving mtbf will be more
    effective.

    RL

    Given non-junk products from you-know-where, most electronics failures
    are not from classic parts failure. Few real products, in the field,
    get close to the standard-calculated-method MTBF rates. They die from
    bad design, bad packaging and soldering, or external effects like ESD.

    Sometimes one of our customers will ask for a calculated MTBF, so we
    dutifully crank one out. We both know that the number is pretty much
    fantasy.

    A standard calculation can be rubbish - often it will be
    deliberately fudged to get an acceptable result - ignoring
    actual temp, stress or mtbf measurements in favor of guesstimates
    or assumptions.

    'You can't handle the truth !'

    An external ESD event is predictable and the strike count can
    be addressed for a specified environment, by built-in design,
    by operator precaution or by environmental proscription.

    RL

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to legg on Wed Jul 3 06:43:40 2024
    On 7/3/2024 6:05 AM, legg wrote:
    On Tue, 2 Jul 2024 08:26:24 -0700, Don Y <blockedofcourse@foo.invalid>
    wrote:

    On 7/2/2024 7:30 AM, legg wrote:
    What's the mtbf of a fan? a compressor? a pump?
    . . . . or a clamp and a block of aluminum?

    As long as it isn't significantly worse than the impact of NOT
    having it, you don't care -- because some (relatively unskilled)
    local contractor can fix those things. You don't have to
    hire a skilled member of staff to be on-hand to deal with the
    "more sophisticated" technology's potential failures.

    I'd much rather have an HVAC guy come in and repair the AHU in
    the datacenter -- even if it was an annual event -- than have
    to risk servers crashing or having to be replaced (and the
    data recovered). The former is a "cheap", ubiquitous skillset;
    the latter considerably costlier and critical.

    You know what a brass tack is?

    Exactly that! You (as an owner of a piece of kit that you RELY on and
    have invested considerable time/monies) don't care if its theoretical
    reliability is lowered; what you care about is how *effectively* reliable
    that device will be. How costly (time/money/inconvenience) is it to
    KEEP it in service?

    This is more than just reliability *or* availability.
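    [To make the reliability/availability distinction concrete: steady-state
    availability folds repair time into the picture, A = MTBF / (MTBF + MTTR).
    A rough sketch, with the MTBF and repair-time figures assumed purely
    for illustration:]

    ```python
    # Steady-state availability: A = MTBF / (MTBF + MTTR).
    # Compares a "fails often, cheap to fix" subsystem against a
    # "rarely fails, slow and costly to recover" one. Numbers are assumed.

    def availability(mtbf_h: float, mttr_h: float) -> float:
        """Fraction of time the item is in service."""
        return mtbf_h / (mtbf_h + mttr_h)

    # AHU: fails about yearly; a local HVAC contractor fixes it in a day.
    a_hvac = availability(mtbf_h=8760.0, mttr_h=24.0)

    # Server replacement with data recovery: rarer, but a week of downtime.
    a_server = availability(mtbf_h=87600.0, mttr_h=168.0)

    print(round(a_hvac, 5), round(a_server, 5))
    ```

    [The point being that a lower-MTBF item serviced by a cheap, ubiquitous
    skillset can still be the better bet once repair cost and downtime are
    counted.]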

    If you had to replace a server because a cooling system outage allowed it
    to experience 50C, you'd likely be significantly inconvenienced.

    If, however, it can continue to operate at 50C -- but with some damage
    that will eventually manifest in a reduced lifetime/reliability -- then
    you can weather the short term "problem" and plan on taking action
    to avoid the anticipated problem -- additional maintenance.

    If it is the nature of your business to replace items regularly,
    then it's likely that your replacement interval has already factored
    into it these types of "disturbances".

    If, OTOH, you don't expect to be replacing (expensive) kit, then
    anything that compromises that assumption wants to be avoided. How
    often do you replace major appliances? HVAC systems? How inexpensive (time/money/inconvenience) would the replacement need to be in order
    for you to tolerate a shorter lifespan?

    Or, how much MORE would you be willing to pay to avoid that
    replacement?

    [There are many devices that I would gladly "pay double" for the
    ASSURANCE (not some legalistic "warranty" but the genuine
    knowledge) that a device *won't* break in a given period of
    time. I.e., the equivalent of having a cold spare on hand -- but
    without the space required to store it or the effort required
    to put it into operation]

    If your products have lifespans on the order of a decade or less,
    (or, if they are inexpensive to buy/replace) then you likely never
    consider these things.

    [Our KWHr meter will be replaced this week. Along with every
    neighbor's. This is the only way the expense of such an activity
    can be reasonably managed -- sending out a linesman to replace ONE
    meter would be extremely costly! But, having a crew step-and-repeat
    down the block is much more manageable. What added feature would
    motivate them to replace them a *second* time while they still
    have serviceable life?]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From legg@21:1/5 to All on Fri Jul 5 08:06:26 2024
    On Wed, 3 Jul 2024 06:43:40 -0700, Don Y <blockedofcourse@foo.invalid>
    wrote:

    On 7/3/2024 6:05 AM, legg wrote:
    On Tue, 2 Jul 2024 08:26:24 -0700, Don Y <blockedofcourse@foo.invalid>
    wrote:

    On 7/2/2024 7:30 AM, legg wrote:
    What's the mtbf of a fan? a compressor? a pump?
    . . . . or a clamp and a block of aluminum?

    As long as it isn't significantly worse than the impact of NOT
    having it, you don't care -- because some (relatively unskilled)
    local contractor can fix those things. You don't have to
    hire a skilled member of staff to be on-hand to deal with the
    "more sophisticated" technology's potential failures.

    I'd much rather have an HVAC guy come in and repair the AHU in
    the datacenter -- even if it was an annual event -- than have
    to risk servers crashing or having to be replaced (and the
    data recovered). The former is a "cheap", ubiquitous skillset;
    the latter considerably costlier and critical.

    You know what a brass tack is?

    Exactly that! You (as an owner of a piece of kit that you RELY on and
    have invested considerable time/monies) don't care if its theoretical
    reliability is lowered; what you care about is how *effectively* reliable
    that device will be. How costly (time/money/inconvenience) is it to
    KEEP it in service?

    This is more than just reliability *or* availability.

    If you had to replace a server because a cooling system outage allowed it
    to experience 50C, you'd likely be significantly inconvenienced.

    If, however, it can continue to operate at 50C -- but with some damage
    that will eventually manifest in a reduced lifetime/reliability -- then
    you can weather the short term "problem" and plan on taking action
    to avoid the anticipated problem -- additional maintenance.

    If it is the nature of your business to replace items regularly,
    then it's likely that your replacement interval has already factored
    into it these types of "disturbances".

    If, OTOH, you don't expect to be replacing (expensive) kit, then
    anything that compromises that assumption wants to be avoided. How
    often do you replace major appliances? HVAC systems? How inexpensive
    (time/money/inconvenience) would the replacement need to be in order
    for you to tolerate a shorter lifespan?

    Or, how much MORE would you be willing to pay to avoid that
    replacement?

    [There are many devices that I would gladly "pay double" for the
    ASSURANCE (not some legalistic "warranty" but the genuine
    knowledge) that a device *won't* break in a given period of
    time. I.e., the equivalent of having a cold spare on hand -- but
    without the space required to store it or the effort required
    to put it into operation]

    If your products have lifespans on the order of a decade or less,
    (or, if they are inexpensive to buy/replace) then you likely never
    consider these things.

    [Our KWHr meter will be replaced this week. Along with every
    neighbor's. This is the only way the expense of such an activity
    can be reasonably managed -- sending out a linesman to replace ONE
    meter would be extremely costly! But, having a crew step-and-repeat
    down the block is much more manageable. What added feature would
    motivate them to replace them a *second* time while they still
    have serviceable life?]

    This is just a space maker.

    RL

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)