• Embedded Linux processors

    From Theo@21:1/5 to All on Mon Oct 24 15:20:26 2022
    I was idly looking to see what was out there in the low end Linux space - something bigger than an ESP32 but more production friendly than a Raspberry Pi. I came across this excellent guide:

    https://jaycarlson.net/embedded-linux/

    He builds dev boards for 10 different chips from 7 vendors, just to see how
    it all goes - both hardware and software. The results are quite
    interesting.

    Any other recommendations for Linux-supporting SoCs that are nice for low volume/hand production?

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Theo on Tue Oct 25 18:17:36 2022
    On 10/24/2022 7:20 AM, Theo wrote:
    I was idly looking to see what was out there in the low end Linux space - something bigger than an ESP32 but more production friendly than a Raspberry Pi. I came across this excellent guide:

    https://jaycarlson.net/embedded-linux/

    He builds dev boards for 10 different chips from 7 vendors, just to see how it all goes - both hardware and software. The results are quite
    interesting.

    Any other recommendations for Linux-supporting SoCs that are nice for low volume/hand production?

    As you've qualified the solution space with "Linux-supporting", I
    assume you mean a Linux port is already available (for at least
    the underlying architecture).

    And, as you've discounted the rPi as "less production friendly", I
    assume you're looking for *components*, not *assemblies*.

    Looking for "low-cost linux boards" could give you an idea as to
    the processors chosen for each. But, they typically are "kitchen sink" approaches to problems.

    I'd, instead, look into the kernel and see if you can do away with
    the PMMU (i.e., get it to work with all memory wired down and no
    swap configured; then, remove the code associated with paging).

    This may make some aspects of the implementation impractical. E.g.,
    my RTOS relies on a PMMU to share data across protection domains,
    do zero copy transfers, etc. But, you may be able to live without
    the things that rely on that mechanism.

    [No idea as I've never looked inside the linux kernel]
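    (Purely as an illustration of my own -- nothing from the kernel itself:
    the "all memory wired down, no swap" half of that idea is already
    expressible from userspace with the standard mlockall() call. The harder
    part is stripping the paging machinery out of the kernel proper.)

    /* Sketch only: pin every current and future page of this process in RAM.
     * May need CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK to succeed. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");          /* couldn't wire the memory down */
            return 1;
        }
        puts("all process memory is now wired down; nothing can be paged out");
        /* ... application proper ... */
        return 0;
    }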

    Some of the older kernel versions (and ports) may give you an insight
    into what can/can't be done.

    This could expand the range of processors/SoCs that you could use
    (though likely require some effort for a port).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Theo on Wed Oct 26 10:06:38 2022
    On 24/10/2022 16:20, Theo wrote:
    I was idly looking to see what was out there in the low end Linux space - something bigger than an ESP32 but more production friendly than a Raspberry Pi. I came across this excellent guide:

    https://jaycarlson.net/embedded-linux/

    He builds dev boards for 10 different chips from 7 vendors, just to see how it all goes - both hardware and software. The results are quite
    interesting.

    Any other recommendations for Linux-supporting SoCs that are nice for low volume/hand production?

    Theo

    The key things to determine are what you consider "production friendly",
    and what you need. You want a module, not a chip. Some modules come
    with pins for connections, others with just solder pads, and some are
    made to fit SO-DIMM sockets or similar connectors. Some modules have
    Ethernet, Wifi, Bluetooth, HDMI, USB, and other high-speed interfaces -
    others have much less. Some have on-board eMMC or other NAND flash,
    others rely on external memory or SD-Cards. Some have their power
    supply handling on board and need just a single supply at 3.3v or 5v,
    others need multiple supplies at different levels with careful bringup.
    Some have long lifetimes and will be available for a decade, others
    are from companies that might be gone next month. Some have excellent
    support from the supplier, some have excellent community support, and
    others have virtually no support at all.

    We don't know anything about the product, its needs, or about what you
    can do yourself and what you need supplied. All I can give you is
    general advice here regarding things to consider. And be wary of trying
    to get minimal cost for the module - it can easily cost you more in the
    long run. (Equally, high price of a module is no guarantee.)

    There are many people making SoC's that can work well with Linux, mostly
    ARM Cortex-A but also some RISC-V now. (There are also PPC, MIPS, and a
    few other cores, but those are in more specialised devices like network
    chips.) There are no SoC's that are remotely suitable for hand production.

    Another thing to consider, of course, is whether a Linux module is what
    you really want. There are microcontrollers that are more powerful than
    ESP32 devices, such as NXP's i.mx RT line (with 500-1000 MHz Cortex-M7
    cores). On the software side, there is Zephyr which sits somewhere
    between FreeRTOS and Linux and might be useful. (I haven't tried Zephyr myself.)
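    (For a rough idea of what Zephyr code looks like -- an untested sketch on
    my part, assuming a recent SDK and the usual printk()/k_msleep() API:)

    /* Untested sketch of a minimal Zephyr application, for flavour only. */
    #include <zephyr/kernel.h>
    #include <zephyr/sys/printk.h>

    int main(void)
    {
        while (1) {
            printk("hello from Zephyr\n");   /* kernel console output */
            k_msleep(1000);                  /* sleep one second */
        }
        return 0;
    }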

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Don Y on Wed Oct 26 10:06:43 2022
    On 26/10/2022 03:17, Don Y wrote:
    On 10/24/2022 7:20 AM, Theo wrote:
    I was idly looking to see what was out there in the low end Linux space -
    something bigger than an ESP32 but more production friendly than a
    Raspberry
    Pi.  I came across this excellent guide:

    https://jaycarlson.net/embedded-linux/

    He builds dev boards for 10 different chips from 7 vendors, just to
    see how
    it all goes - both hardware and software.  The results are quite
    interesting.

    Any other recommendations for Linux-supporting SoCs that are nice for low
    volume/hand production?

    As you've qualified the solution space with "Linux-supporting", I
    assume you mean a Linux port is already available (for at least
    the underlying architecture).

    And, as you've discounted the rPi as "less production friendly", I
    assume you're looking for *components*, not *assemblies*.

    I wouldn't assume that (though the OP will have to clarify). Pi's are
    fine for prototyping, but there are many reasons why they might not be a suitable choice for real products. However, that does not at all
    suggest that it is a good idea to use chips directly rather than modules.

    Unless your production runs are at least 10,000 a time, it is unlikely
    to be cost-effective to use anything other than pre-populated modules. Designing a board for large ball count BGAs, high speed memories, etc.,
    is not quick or cheap, nor is their production.


    Looking for "low-cost linux boards" could give you an idea as to
    the processors chosen for each.  But, they typically are "kitchen sink" approaches to problems.

    I'd, instead, look into the kernel and see if you can do away with
    the PMMU (i.e., get it to work with all memory wired down and no
    swap configured; then, remove the code associated with paging).


    That could have been good advice - twenty years ago.

    Now it is pointless to aim for such a minimal system. The cheapest
    processors with MMU supported by Linux cost a few dollars. The cheapest non-MMU microcontrollers that are capable of supporting Linux are at
    least ten dollars. Swap has always been optional, but working without
    an MMU leads to a lot of complications and restrictions (such as no
    "fork" calls). No one uses non-MMU Linux except for nerdy fun. (And
    fun is /always/ a good reason for doing something.)


    This may make some aspects of the implementation impractical.  E.g.,
    my RTOS relies on a PMMU to share data across protection domains,
    do zero copy transfers, etc.  But, you may be able to live without
    the things that rely on that mechanism.

    [No idea as I've never looked inside the linux kernel]

    Some of the older kernel versions (and ports) may give you an insight
    into what can/can't be done.

    This could expand the range of processors/SoCs that you could use
    (though likely require some effort for a port).


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Theo@21:1/5 to David Brown on Wed Oct 26 10:09:03 2022
    David Brown <david.brown@hesbynett.no> wrote:

    https://jaycarlson.net/embedded-linux/

    The key things to determine are what you consider "production friendly",
    and what you need. You want a module, not a chip. Some modules come
    with pins for connections, others with just solder pads, and some are
    made to fit SO-DIMM sockets or similar connectors. Some modules have Ethernet, Wifi, Bluetooth, HDMI, USB, and other high-speed interfaces - others have much less. Some have on-board eMMC or other NAND flash,
    others rely on external memory or SD-Cards. Some have their power
    supply handling on board and need just a single supply at 3.3v or 5v,
    others need multiple supplies at different levels with careful bringup.
    Some have long lifetimes and will be available for a decade, others
    are from companies that might be gone next month. Some have excellent support from the supplier, some have excellent community support, and
    others have virtually no support at all.

    The above article covers all of those things in a nice way: some parts are
    in 64 pin QFNs, some are in 0.8mm BGA which he reckons is doable to hand
    solder (I haven't tried that...). Some have abandonware software stacks, others are in the mainline Linux tree. etc etc

    We don't know anything about the product, its needs, or about what you
    can do yourself and what you need supplied. All I can give you is
    general advice here regarding things to consider. And be wary of trying
    to get minimal cost for the module - it can easily cost you more in the
    long run. (Equally, high price of a module is no guarantee.)

    I don't have a product :-) But really just making a thought experiment
    about what would happen if I did have a product - let's say an IoT thingy (wifi, display, etc) in the <$100 sticker price, initial volumes let's say hundreds.

    The ESP32s are nice as they're a simple, cheap, wifi module. If you wanted
    to cut costs you could use the bare chip. The Pis aren't: the Zero is a
    nice form factor, but you can't buy it in volume. The regular Pis can't
    really be mounted on a custom PCB if you don't have a large enclosure. The Compute Modules are better, but still larger than an ESP32. However you
    can't really buy any of them at the moment, and if you could they would be quite expensive. The RP2040 is an ok microcontroller but nothing special
    (and wifi is an extra chip). Also none of them have any protection from someone changing or stealing your firmware.

    It is interesting in the above article how much the complexity starts to
    rise once you start going beyond a single chip solution: BGAs, DDR routing, numerous power supplies and sequencing, etc.

    There are many people making SoC's that can work well with Linux, mostly
    ARM Cortex-A but also some RISC-V now. (There are also PPC, MIPS, and a
    few other cores, but those are in more specialised devices like network chips.) There are no SoC's that are remotely suitable for hand production.

    Some of the SIPs and BGAs in the article above are, allegedly. However
    'hand production' is really a proxy for production complexity. If you can build a 4 layer board and hand-mount it, you can build in low-ish volume on
    a relatively cheap pick and place line. If you need a 10 layer board and package-on-package BGA mounting equipment, you can't do that without a much greater volume to amortise the tooling costs.

    Systems on module are a good solution to that but, if some of these SoCs are niche, the modules are even more niche (hard to buy in small quantities, produced by a tiny company, and so on).

    Another thing to consider, of course, is whether a Linux module is what
    you really want. There are microcontrollers that are more powerful than ESP32 devices, such as NXP's i.mx RT line (with 500-1000 MHz Cortex-M7 cores). On the software side, there is Zephyr which sits somewhere
    between FreeRTOS and Linux and might be useful. (I haven't tried Zephyr myself.)

    The iMX RT isn't one I've come across, thanks. That's the kind of thing I'm interested in.

    The software side is one that's frequently neglected: one thing the
    Raspberry Pi folks are really good at is maintaining their software stack.
    A lot of other (Chinese) Linux SoC vendors basically throw it all over the
    wall and let the customers do the maintenance. In some ways it's nice not
    to play in that space. OTOH once you get beyond a certain point it's nice
    to be able to use 'grown up' tools (like a webserver that can easily do TLS, not some stripped down microcontroller TLS stack that only does TLS 1.1 and can't fit any more in RAM, or worse does no TLS at all).
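    (To make that concrete with a sketch of my own -- a TLS *client* rather
    than a server, and assuming stock libcurl, but the point is the same: on a
    Linux-class system the TLS heavy lifting is one library call away.)

    /* Illustrative sketch: a TLS-protected request using a stock library. */
    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *h = curl_easy_init();
        if (!h)
            return 1;
        curl_easy_setopt(h, CURLOPT_URL, "https://example.com/"); /* TLS done by the library */
        CURLcode rc = curl_easy_perform(h);
        if (rc != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));
        curl_easy_cleanup(h);
        curl_global_cleanup();
        return 0;
    }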

    I'm really mainly curious how this middle part of the market goes, and wondering how others feel about it.

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to David Brown on Wed Oct 26 02:34:56 2022
    On 10/26/2022 1:06 AM, David Brown wrote:
    On 26/10/2022 03:17, Don Y wrote:
    On 10/24/2022 7:20 AM, Theo wrote:
    I was idly looking to see what was out there in the low end Linux space - something bigger than an ESP32 but more production friendly than a Raspberry
    Pi.  I came across this excellent guide:

    https://jaycarlson.net/embedded-linux/

    He builds dev boards for 10 different chips from 7 vendors, just to see how it all goes - both hardware and software.  The results are quite
    interesting.

    Any other recommendations for Linux-supporting SoCs that are nice for low volume/hand production?

    As you've qualified the solution space with "Linux-supporting", I
    assume you mean a Linux port is already available (for at least
    the underlying architecture).

    And, as you've discounted the rPi as "less production friendly", I
    assume you're looking for *components*, not *assemblies*.

    I wouldn't assume that (though the OP will have to clarify).  Pi's are fine for
    prototyping, but there are many reasons why they might not be a suitable choice
    for real products.  However, that does not at all suggest that it is a good idea to use chips directly rather than modules.

    Unless your production runs are at least 10,000 a time, it is unlikely to be cost-effective to use anything other than pre-populated modules. Designing a board for large ball count BGAs, high speed memories, etc., is not quick or cheap, nor is their production.

    Did you *read* the article?

    "To this end, I designed a dev board from scratch for each application
    processor reviewed. Well, actually, many dev boards for each processor:
    roughly 25 different designs in total. This allowed me to try out different
    DDR layout and power management strategies — as well as fix some bugs
    along the way."

    Perhaps you've no experience designing (and laying out and prototyping) "modern" parts. It's not rocket science. The days of paying $2K for
    a Leister are ancient history... That was another point of the article.

    Looking for "low-cost linux boards" could give you an idea as to
    the processors chosen for each.  But, they typically are "kitchen sink"
    approaches to problems.

    I'd, instead, look into the kernel and see if you can do away with
    the PMMU (i.e., get it to work with all memory wired down and no
    swap configured; then, remove the code associated with paging).

    That could have been good advice - twenty years ago.

    Now it is pointless to aim for such a minimal system.  The cheapest processors
    with MMU supported by Linux cost a few dollars.

    What do you do when your product *sells* for a few dollars?

    The cheapest non-MMU
    microcontrollers that are capable of supporting Linux are at least ten dollars.

    How do you define "supporting Linux"? I.e., "for which an existing build exists?"

    Most developers are only interested in the API and feature sets that
    they have available to them. If it "looks" like linux, in terms of
    what they can expect it to do for them, they don't likely care about
    the actual implementation.

      Swap has always been optional, but working without an MMU leads to a
    lot of complications and restrictions (such as no "fork" calls).

    Fork needn't "create a copy of the parent process" -- if the
    existing copy of the process can be used without duplication
    (think XIP -- no gobs of RAM into which to copy the new process
    image!). All it need do is create a LOGICALLY new process container
    (which needn't even have "protection" from other processes).

    Fork is probably the *least* valuable use of a PMMU in a system.
    An MMU that gives some (reasonable) control over accesses to
    specific regions IN A UNIFIED ADDRESS SPACE would likely lead
    to more robust code (in and of itself) than supporting a
    classic fork().

      No one uses
    non-MMU Linux except for nerdy fun.  (And fun is /always/ a good reason for doing something.)

    <https://www.kernel.org/doc/html/latest/admin-guide/mm/nommu-mmap.html> <https://www.techonline.com/tech-papers/supporting-linux-without-an-mmu/>

    This may make some aspects of the implementation impractical.  E.g.,
    my RTOS relies on a PMMU to share data across protection domains,
    do zero copy transfers, etc.  But, you may be able to live without
    the things that rely on that mechanism.

    [No idea as I've never looked inside the linux kernel]

    Some of the older kernel versions (and ports) may give you an insight
    into what can/can't be done.

    This could expand the range of processors/SoCs that you could use
    (though likely require some effort for a port).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Theo on Wed Oct 26 02:51:53 2022
    On 10/26/2022 2:09 AM, Theo wrote:
    I don't have a product :-) But really just making a thought experiment
    about what would happen if I did have a product - let's say an IoT thingy (wifi, display, etc) in the <$100 sticker price, initial volumes let's say hundreds.

    But, you're only looking at Linux (or, any "fleshy" OS) because you
    think it will make WiFi, networking, display, etc. "more straight-forward".
    You don't really care, functionally, if it is "Linux" (i.e., a particular kernel) that makes that happen, do you? As long as the API isn't
    too bizarre...

    The problem, I see, is ending up with lots of "features" that you
    don't really need in a given product.

    Do you *really* need a filesystem -- let alone support for a variety
    of them (and a structure that facilitates supporting many even if you
    only use *one*?).

    Do you really need to be able to support multiple network interfaces
    with a stack that is designed to allow "equivalent" interfaces to
    their drivers to slide under it?

    Once your app is up and running, will the page tables EVER change?

    The ESP32s are nice as they're a simple, cheap, wifi module. If you wanted to cut costs you could use the bare chip. The Pis aren't: the Zero is a
    nice form factor, but you can't buy it in volume. The regular Pis can't really be mounted on a custom PCB if you don't have a large enclosure. The Compute Modules are better, but still larger than an ESP32. However you can't really buy any of them at the moment, and if you could they would be quite expensive. The Pi2040 is an ok microcontroller but nothing special (and wifi is an extra chip). Also none of them have any protection from someone changing or stealing your firmware.

    That last isn't as easy to guard against as you might think...

    It is interesting in the above article how much the complexity starts to
    rise once you start going beyond a single chip solution: BGAs, DDR routing, numerous power supplies and sequencing, etc.

    But there's no black magic, there. This is all "common practice", now.
    If you don't have the skills, you develop them (as the author suggests).
    Layout tools do a lot of this for you. And, if you are looking at
    smallish "products", the hairy parts of the design are usually close
    to the CPU and don't extend far into the field.

    Eyesight gets to be a problem, as you get older. Parts are moving
    in the wrong direction (in terms of size!). <frown> But, a mantis
    or stereo-microscope can be a win, there. Or, subbing the fab
    out to a (local!) group that can also handle some of your rework,
    as may later be needed.

    There are many people making SoC's that can work well with Linux, mostly
    ARM Cortex-A but also some RISC-V now. (There are also PPC, MIPS, and a
    few other cores, but those are in more specialised devices like network
    chips.) There are no SoC's that are remotely suitable for hand production.

    Some of the SIPs and BGAs in the article above are, allegedly. However
    'hand production' is really a proxy for production complexity. If you can build a 4 layer board and hand-mount it, you can build in low-ish volume on
    a relatively cheap pick and place line. If you need a 10 layer board and package-on-package BGA mounting equipment, you can't do that without a much greater volume to amortise the tooling costs.

    Or, a firm that has already made that investment.

    I'm always amused at the folks who design products around modules.
    And, then have to design a daughter-card (which may, in fact, be a
    *mother*!) to address the other issues necessary to their design
    (few real world devices mate to pins on 0.1" centers!)

    So, the effort they were trying to avoid becomes essential as
    a consequence of their use of a "module".

    Systems on module are a good solution to that but, if some of these SoCs are niche, the modules are even more niche (hard to buy in small quantities, produced by a tiny company, and so on).

    Exactly. And, you have no say in HOW they are designed, fabricated,
    packaged, etc.

    Another thing to consider, of course, is whether a Linux module is what
    you really want. There are microcontrollers that are more powerful than
    ESP32 devices, such as NXP's i.mx RT line (with 500-1000 MHz Cortex-M7
    cores). On the software side, there is Zephyr which sits somewhere
    between FreeRTOS and Linux and might be useful. (I haven't tried Zephyr
    myself.)

    The iMX RT isn't one I've come across, thanks. That's the kind of thing I'm interested in.

    The software side is one that's frequently neglected: one thing the
    Raspberry Pi folks are really good at is maintaining their software stack.
    A lot of other (Chinese) Linux SoC vendors basically throw it all over the wall and let the customers do the maintenance. In some ways it's nice not
    to play in that space. OTOH once you get beyond a certain point it's nice
    to be able to use 'grown up' tools (like a webserver that can easily do TLS, not some stripped down microcontroller TLS stack that only does TLS 1.1 and can't fit any more in RAM, or worse does no TLS at all).

    I'm really mainly curious how this middle part of the market goes, and wondering how others feel about it.

    If you want to be in a business (regardless of size), you have to invest
    in the tools necessary to make that business work. The tools can be
    physical assets -- or, intellectual skillsets.

    Only you can identify the likely direction your business (products)
    will take. So, only you can decide which "tools" are sensible
    investments.

    [I don't design the molds for my enclosures. I do "CAD-sketches"
    for a guy that refines them for me with the details of the stuff
    that goes into -- and connects to -- each of them. *He* is one
    of my tools.]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Theo on Wed Oct 26 12:42:28 2022
    On 26/10/2022 11:09, Theo wrote:
    David Brown <david.brown@hesbynett.no> wrote:

    https://jaycarlson.net/embedded-linux/

    The key things to determine are what you consider "production friendly",
    and what you need. You want a module, not a chip. Some modules come
    with pins for connections, others with just solder pads, and some are
    made to fit SO-DIMM sockets or similar connectors. Some modules have
    Ethernet, Wifi, Bluetooth, HDMI, USB, and other high-speed interfaces -
    others have much less. Some have on-board eMMC or other NAND flash,
    others rely on external memory or SD-Cards. Some have their power
    supply handling on board and need just a single supply at 3.3v or 5v,
    others need multiple supplies at different levels with careful bringup.
    Some have long lifetimes and will be available for a decade, others
    are from companies that might be gone next month. Some have excellent
    support from the supplier, some have excellent community support, and
    others have virtually no support at all.

    The above article covers all of those things in a nice way: some parts are
    in 64 pin QFNs, some are in 0.8mm BGA which he reckons is doable to hand solder (I haven't tried that...). Some have abandonware software stacks, others are in the mainline Linux tree. etc etc

    If you are doing all this for fun and learning, where your own time is
    free and reliability is not an issue, then you can do some of this by
    hand. If you are trying to make a product to sell to others and turn a
    profit, it's a completely different situation.

    BGA's are okay to place by hand, but getting a good, even soldering
    result with kitchen-top tools is unlikely. At best, you'll get
    something that works for a while - but put it to real use and the voids, half-contacts, partial short-circuits and other flaws will cause
    failures sooner or later as thermal stresses wear them out. And then
    you have the 0.5 mm pitch QFN packages, the 0402 chicken feed
    components, and all the rest of it.

    And if this is professional, don't forget the testing and certification
    you need, depending on where you are selling it - things like EMC
    testing and radio emission regulations. If your home made device has
    Wifi or Bluetooth, and you want to sell it, the certification process
    will cost you hundreds of thousands of dollars (especially since you
    haven't a hope in hell of passing the tests when you do home production).

    But it can certainly be fun as a hobby and to get a better understanding
    about how all this works.


    We don't know anything about the product, its needs, or about what you
    can do yourself and what you need supplied. All I can give you is
    general advice here regarding things to consider. And be wary of trying
    to get minimal cost for the module - it can easily cost you more in the
    long run. (Equally, high price of a module is no guarantee.)

    I don't have a product :-) But really just making a thought experiment
    about what would happen if I did have a product - let's say an IoT thingy (wifi, display, etc) in the <$100 sticker price, initial volumes let's say hundreds.

    The ESP32s are nice as they're a simple, cheap, wifi module.

    Yes - they are often a first choice for when you want Wifi and/or Bluetooth.

    If you wanted
    to cut costs you could use the bare chip.

    No, you can't. You can't design a working Wifi module for the price of
    100 ESP32 modules, assuming you value the hours spent appropriately for
    an electronics engineer. You can't produce a working Wifi module in
    your kitchen or garage, because the required components are too small to
    handle by hand. And that's before you try and certify the thing so that
    it is legal to sell.

    At my company, we have experienced electronics designers with top-class
    design software. We have facilities for high-speed automated production of low and mid volume runs, capable of placing and
    and x-ray inspection systems. We would not consider making a product
    with Wifi using bare chips - we would use ready-made modules. If we
    can't do it, /you/ can't do it.


    The Pis aren't: the Zero is a
    nice form factor, but you can't buy it in volume.

    Of course you can order them in volume. More appropriate, perhaps, are
    the Pi compute modules - which you can also order in volume. You are
    asking for hundreds, while distributors will happily take orders in tens
    of thousands for these.

    However, like almost everything else in the electronics industry these
    days, you'll be hard pushed to find much stock of Pis, or any other
    Linux module, Linux-capable SoC, or the other components involved. So
    if you need something in the short term, take whatever you can find in
    stock.

    Hopefully the current component shortage situation won't last forever,
    and then you'll be able to order Pi Zeros and Pi Compute Modules in
    whatever quantity suits.

    The regular Pis can't
    really be mounted on a custom PCB if you don't have a large enclosure. The Compute Modules are better, but still larger than an ESP32. However you can't really buy any of them at the moment, and if you could they would be quite expensive. The Pi2040 is an ok microcontroller but nothing special (and wifi is an extra chip). Also none of them have any protection from someone changing or stealing your firmware.

    It is interesting in the above article how much the complexity starts to
    rise once you start going beyond a single chip solution: BGAs, DDR routing, numerous power supplies and sequencing, etc.

    Linux systems are /never/ a single chip solution. And yes, it can
    often be the other chips that are the biggest challenges - or their
    supporting small components.


    There are many people making SoC's that can work well with Linux, mostly
    ARM Cortex-A but also some RISC-V now. (There are also PPC, MIPS, and a
    few other cores, but those are in more specialised devices like network
    chips.) There are no SoC's that are remotely suitable for hand production.

    Some of the SIPs and BGAs in the article above are, allegedly. However
    'hand production' is really a proxy for production complexity. If you can build a 4 layer board and hand-mount it, you can build in low-ish volume on
    a relatively cheap pick and place line. If you need a 10 layer board and package-on-package BGA mounting equipment, you can't do that without a much greater volume to amortise the tooling costs.


    That is partly correct, partly misunderstanding.

    The board layer count affects the cost of the pcb itself, and the effort
    (and tools) required for the design. It doesn't affect the board
    manufacturing (you don't make the pcb yourself), although it can limit
    the suppliers that can make it for you.

    If you want to make professional quality boards and sell them, then you
    do not do it with hand mounting - even if some guy on the internet says
    it's possible. If you don't have the volumes involved to have the
    production tools needed for automated pick and place, optical
    inspection, proper solder ovens, etc., outsource the board production.
    There is no shortage of companies who will do this even for runs of
    hundreds of boards - you can choose between more local suppliers that
    will have well-trained staff that will work with you to improve the
    design, all the way to anonymous far-eastern companies that will work
    cheaply and give you exactly what you ask for, mistakes and all.

    There are some kinds of boards that are fine for small scale
    manufacturing with simple machines - Linux boards are not one of them.
    A base board for mounting a Linux module might be a lot more practical
    for your own production.

    Systems on module are a good solution to that but, if some of these SoCs are niche, the modules are even more niche (hard to buy in small quantities, produced by a tiny company, and so on).


    The niche SoCs are not normally on modules. The people who buy a SoC
    with MIPS or PPC cores do so because they are making massive network
    switches, car engine controllers, and the like.

    Another thing to consider, of course, is whether a Linux module is what
    you really want. There are microcontrollers that are more powerful than
    ESP32 devices, such as NXP's i.mx RT line (with 500-1000 MHz Cortex-M7
    cores). On the software side, there is Zephyr which sits somewhere
    between FreeRTOS and Linux and might be useful. (I haven't tried Zephyr
    myself.)

    The iMX RT isn't one I've come across, thanks. That's the kind of thing I'm interested in.

    The more "fun" parts in the family are fair sized BGA's. They are a
    nice group of parts.


    The software side is one that's frequently neglected: one thing the
    Raspberry Pi folks are really good at is maintaining their software stack.
    A lot of other (Chinese) Linux SoC vendors basically throw it all over the wall and let the customers do the maintenance. In some ways it's nice not
    to play in that space. OTOH once you get beyond a certain point it's nice
    to be able to use 'grown up' tools (like a webserver that can easily do TLS, not some stripped down microcontroller TLS stack that only does TLS 1.1 and can't fit any more in RAM, or worse does no TLS at all).


    IMHO the "encrypt everything" movement is a silly idea and a massive
    waste of effort and resources. Sure, you want your bank website traffic
    to use SSL, but it is completely unnecessary for the great majority of
    web traffic.

    But I agree that sometimes it is nice to have plenty of resources in
    your embedded system, whatever you use them for.

    I'm really mainly curious how this middle part of the market goes, and wondering how others feel about it.

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Theo@21:1/5 to Don Y on Wed Oct 26 11:39:02 2022
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/26/2022 2:09 AM, Theo wrote:
    I don't have a product :-) But really just making a thought experiment about what would happen if I did have a product - let's say an IoT thingy (wifi, display, etc) in the <$100 sticker price, initial volumes let's say hundreds.

    But, you're only looking at Linux (or, any "fleshy" OS) because you
    think it will make WiFi, networking, display, etc. "more straight-forward". You don't really care, functionally, if it is "Linux" (i.e., a particular kernel) that makes that happen, do you? As long as the API isn't
    too bizarre...

    The reason people use Linux is for the software stacks. It allows you to
    write in a more friendly language, have better libraries for doing
    complicated things, use existing tooling, not have to worry about boring housekeeping things like the networking (does your thing support IPv6?
    Linux has for decades, does your homebrew embedded RTOS? What about WPA3?). Can you interact securely with whatever cloud service your widget needs to
    do its thing? (especially if that service is not designed specifically for talking to low-end widgets)

    Essentially you trade off ease of software development for hardware
    complexity. If you're playing in the low volume game, development effort
    and time to market is more important than saving cents on production costs.
    If you're selling by the million the tradeoff is different.

    The problem, I see, is ending up with lots of "features" that you
    don't really need in a given product.

    Do you *really* need a filesystem -- let alone support for a variety
    of them (and a structure that facilitates supporting many even if you
    only use *one*?).

    If you want to run <tool> and that needs a filesystem, yes you do. I'm sure you could reimplement it to do without, but that takes effort.

    Do you really need to be able to support multiple network interfaces
    with a stack that is designed to allow "equivalent" interfaces to
    their drivers to slide under it?

    Once your app is up and running, will the page tables EVER change?

    That depends on the app. The point here is to be able to use existing
    software without having to re-engineer it. Once you start re-engineering things, that's where your time goes.

    The ESP32s are nice as they're a simple, cheap, wifi module. If you wanted to cut costs you could use the bare chip. The Pis aren't: the Zero is a nice form factor, but you can't buy it in volume. The regular Pis can't really be mounted on a custom PCB if you don't have a large enclosure. The Compute Modules are better, but still larger than an ESP32. However you can't really buy any of them at the moment, and if you could they would be quite expensive. The Pi2040 is an ok microcontroller but nothing special (and wifi is an extra chip). Also none of them have any protection from someone changing or stealing your firmware.

    That last isn't as easy to guard against as you might think...

    Indeed, which is why microcontrollers have various secure boot and encrypted firmware support.

    (which aren't perfect, but prevent somebody just pulling your flash chip and reading it out)

    It is interesting in the above article how much the complexity starts to rise once you start going beyond a single chip solution: BGAs, DDR routing, numerous power supplies and sequencing, etc.

    But there's no black magic, there. This is all "common practice", now.
    If you don't have the skills, you develop them (as the author suggests). Layout tools do a lot of this for you. And, if you are looking at
    smallish "products", the hairy parts of the design are usually close
    to the CPU and don't extend far into the field.

    Indeed, no black magic, just time and cost. Don't do it if you don't need
    it.

    If you want to be in a business (regardless of size), you have to invest
    in the tools necessary to make that business work. The tools can be
    physical assets -- or, intellectual skillsets.

    Only you can identify the likely direction your business (products)
    will take. So, only you can decide which "tools" are sensible
    investments.

    The thing here is choosing your battles. Spend your time on the things that add value to the product. Don't make life needlessly harder when that's not necessary. Everything *can* be done, but some things shouldn't *need* to be done. If you're in the high-volume game, saving $1m using cheaper parts
    makes sense. If you're in the low-volume game, you might only save $1000
    but spend $10K in time doing so.

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Theo on Wed Oct 26 04:39:02 2022
    On 10/26/2022 3:39 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/26/2022 2:09 AM, Theo wrote:
    I don't have a product :-) But really just making a thought experiment
    about what would happen if I did have a product - let's say an IoT thingy (wifi, display, etc) in the <$100 sticker price, initial volumes let's say hundreds.

    But, you're only looking at Linux (or, any "fleshy" OS) because you
    think it will make WiFi, networking, display, etc. "more straight-forward". You don't really care, functionally, if it is "Linux" (i.e., a particular
    kernel) that makes that happen, do you? As long as the API isn't
    too bizarre...

    The reason people use Linux is for the software stacks.

    But that same argument applies to any device (including microcontrollers).
    It's why manufacturers develop support libraries and the like -- because
    they want to make their products (components) easier to design into a
    final product.

    Would you care if it was ManufacturerOS instead of Linux -- if it
    supported the "mechanisms" that you need? Even if it was "closed"
    source? (are you *really* going to do any kernel hacking?)

    It allows you to
    write in a more friendly language, have better libraries for doing complicated things, use existing tooling, not have to worry about boring housekeeping things like the networking (does your thing support IPv6?
    Linux has for decades, does your homebrew embedded RTOS? What about WPA3?). Can you interact securely with whatever cloud service your widget needs to
    do its thing? (especially if that service is not designed specifically for talking to low-end widgets)

    But you're assuming your product NEEDS those things. I don't have
    a filesystem anywhere in my design. Persistent storage is handled
    by a database -- because storage often wants to be *structured*, not
    a collection of unstructured files that the application has to
    parse (and verify) the structure thereof.

    Because I don't support the notion of a filesystem, everything
    related to that is unnecessary.

    Only devices designed to *be* displays HAVE displays. So, why burden
    other devices with that overhead/complexity?

    Encryption isn't a bolt-on feature but, rather, inherent in all
    comms. It doesn't make sense (in my application) to have comms
    that aren't encrypted!

    OTOH, everything in my world is object-based with fine-grained capabilities governing the actions that can be invoked on specific objects. So, I can
    let you transmit on a serial port but not receive; and someone else
    configure that serial port but never access the content passing through
    it; and someone else...

    (It's likely that your code configures the port in one place but
    doesn't need to access the content, there -- so, why should it be
    ABLE to do so? Likewise, why should something that is interested
    in accessing the content be able to alter the configuration -- likely
    "by accident"?)

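    (A purely hypothetical sketch -- not my actual API -- of the general
    shape: a handle that carries a rights bitmask, checked on every operation.
    In a real system the check would live in the OS and trap on failure.)

    /* Hypothetical illustration of "transmit but not receive" rights. */
    #include <stdint.h>
    #include <stdio.h>

    enum {
        CAP_TX  = 1u << 0,    /* may write to the port      */
        CAP_RX  = 1u << 1,    /* may read from the port     */
        CAP_CFG = 1u << 2,    /* may change baud rate, etc. */
    };

    typedef struct {
        int      port_id;     /* which serial port this handle names      */
        uint32_t rights;      /* capabilities granted to this holder only */
    } cap_handle_t;

    static int port_send(const cap_handle_t *h, const char *buf, size_t len)
    {
        if (!(h->rights & CAP_TX))
            return -1;        /* no transmit capability: reject (or trap) */
        /* ... hand buf/len to the real driver here ... */
        (void)buf; (void)len;
        return 0;
    }

    int main(void)
    {
        cap_handle_t tx_only = { .port_id = 0, .rights = CAP_TX };
        printf("send allowed: %s\n",
               port_send(&tx_only, "hi", 2) == 0 ? "yes" : "no");
        return 0;
    }
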
    I don't need to "bolt this onto" some other implementation (and
    hope there are no cracks through which an exploit/bug can creep
    UNDER that) as it's part of the system's foundation. Because I
    have to tolerate foreign code that could actively try to subvert
    the security of my design (attempting to do something for which
    you don't have a suitable capability traps to the OS -- which,
    by default, kills off your process; you're either a malevolent
    entity or a bug... in either case, no reason to let you continue
    to execute!)

    I have no global namespace (a filesystem typically is the namespace
    for most products). So, task A doesn't even know that object X exists
    (even if the developer of task A is 100.00% sure it does!) and, because
    A doesn't have a name for object X, there is no way it can access it.

    I.e., I build the mechanisms that are appropriate for my product
    and care little about what a "desktop OS" thinks is appropriate.

    Yet, I can still build the common libraries that you're used to seeing
    ATOP my mechanisms. So, when you pepper your code with diagnostic
    "printf()s", they get delivered to an appropriate diagnostic device
    *somewhere* in the system -- the location and implementation of which
    is not important to your code.

    Essentially you trade off ease of software development for hardware complexity. If you're playing in the low volume game, development effort
    and time to market is more important than saving cents on production costs. If you're selling by the million the tradeoff is different.

    It's not just quantities. What would you do if you developed yet another
    "low volume" product -- would you start your quantity count from zero, there?

    Also, there may be other factors at play in your market. E.g., we would
    sell *12* tablet presses in a year. That's TWELVE (no typos). How sloppy could we be with our implementation at that production level? Why not use all OTS assemblies to make life easy?!

    Ah, but OTS assemblies are often designed to fit a variety of applications. Lots of "if (WHATEVER)..." scattered through the codebase as it tries to configure itself for a specific application.

    But, WHATEVER will either be always true or always false (for a given configuration). Meaning, one branch of the code will NEVER be executed.
    In some industries, it wouldn't be allowed to remain in the product
    ("dead code") -- regardless of how few units you made!

    "What's all this filesystem code doing in the kernel? You don't
    HAVE a filesystem in your product!"

    Ooops!

    The problem, I see, is ending up with lots of "features" that you
    don't really need in a given product.

    Do you *really* need a filesystem -- let alone support for a variety
    of them (and a structure that facilitates supporting many even if you
    only use *one*?).

    If you want to run <tool> and that needs a filesystem, yes you do. I'm sure you could reimplement it to do without, but that takes effort.

    That argument can apply to any proposed criteria. The question you
    should ask is: "Do you NEED to run <tool> *in* your product? Or,
    are you just resorting to creeping featurism and running it because
    you *can*?"

    You can run bc(1) on a linux box. Should you offer a calculator
    utility to the product's users just because you *can* do so?
    You can run a web server. Does doing so actually add value -- to
    offset the added complexity and opportunity for bugs?

    I bought a scanner, recently. I wanted a network connection to transport
    the scanned images to a remote host (farther away than a USB cable would tolerate). It's got more *cruft* in it (TELNET, HTTPd, SSH, etc.) than I
    can imagine anyone needing or wanting! A long list of features is also a
    long list of likely *bugs*!

    [E.g., I can do 1200dpi scans using the USB interface but someone forgot to
    add that capability via the network interface! So, the i/f that I most
    desire is crippled -- despite all the extra cruft that they've spent time developing/maintaining!]

    Do you really need to be able to support multiple network interfaces
    with a stack that is designed to allow "equivalent" interfaces to
    their drivers to slide under it?

    Once your app is up and running, will the page tables EVER change?

    That depends on the app.

    As I said, above, "That argument can apply to any proposed criteria."
    Using that sort of logic, one can argue that you should embellish your
    solution with as much cruft as possible -- to cover all bases!

    The point here is to be able to use existing
    software without having to re-engineer it. Once you start re-engineering things, that's where your time goes.

    You're assuming you won't have to understand any of the things that you
    are embracing. Is your product support going to be reliant on linux forums? When a customer calls with a problem, are you going to have to HOPE someone takes an interest in understanding your product, its implementation and
    the expressed problem?

    So, when you've an "upset (not yet "angry") customer, you're going to
    cross your fingers and hope for a solution -- because you don't
    understand the "component" you are using (are you sure it's configured properly?)

    The ESP32s are nice as they're a simple, cheap, wifi module. If you wanted to cut costs you could use the bare chip. The Pis aren't: the Zero is a nice form factor, but you can't buy it in volume. The regular Pis can't really be mounted on a custom PCB if you don't have a large enclosure. The Compute Modules are better, but still larger than an ESP32. However you can't really buy any of them at the moment, and if you could they would be quite expensive. The RP2040 is an ok microcontroller but nothing special (and wifi is an extra chip). Also none of them have any protection from someone changing or stealing your firmware.

    That last isn't as easy to guard against as you might think...

    Indeed, which is why microcontrollers have various secure boot and encrypted firmware support.

    (which aren't perfect, but prevent somebody just pulling your flash chip and reading it out)

    Yes. But, there are often other ways to get at the data.

    If you are small enough (and your products don't have high margins that
    make them attractive to a cloner), you can likely get away with this -- save for the individual "hacker" who takes an interest in your particular
    device.

    [And, of course, said hacker can now disseminate anything he learns
    easily in ways that make it easy for folks to stumble onto his efforts
    with the help of a search engine]

    If your device is simple enough -- and you've not done anything to protect
    it "legally" -- then it's easier for someone to just copy the *notion*
    and not worry about your specific implementation.

    [Ages ago, we manufactured a radar unit for boats. A Japanese company came
    by wanting to sell our units in asia. We were very accommodating. They eventually just *copied* the design and we got nothing out of the deal
    (save for an initial sale of a dozen units). But, in the copying, they
    also made enhancements to our design -- some of which were so obvious,
    in hindsight, that we kicked ourselves for not having thought of them
    in the original design! I.e., in some ways, their version of OUR
    product was better than our own!]

    It is interesting in the above article how much the complexity starts to rise once you start going beyond a single chip solution: BGAs, DDR routing, numerous power supplies and sequencing, etc.

    But there's no black magic, there. This is all "common practice", now.
    If you don't have the skills, you develop them (as the author suggests).
    Layout tools do a lot of this for you. And, if you are looking at
    smallish "products", the hairy parts of the design are usually close
    to the CPU and don't extend far into the field.

    Indeed, no black magic, just time and cost. Don't do it if you don't need it.

    That's true if you look at the effort as a "one off". You wouldn't buy a
    logic analyzer if you were only debugging ONE, relatively simple design.
    OTOH, the time lost debugging that first design WITHOUT a LA could have
    reduced the effective cost of the LA purchased for the *second* design!

    I've found it usually pays to make investments in tools, skills, etc.
    But, because I've known where I wanted my career to go. So, I knew
    that an investment today would pay off in the future by making me better equipped to tackle a future project (that I may already have planned on!)

    OTOH, things that I *know* are one-offs have too high a bar to justify
    any long-term commitments/investments.

    E.g., I'm making a Rube Goldberg-esque kinematic sculpture in the back
    yard. Every piece is hand-made -- because there will only EVER be one
    of these. Why invest in castings if I'm going to only use each once?

    If you want to be in a business (regardless of size), you have to invest
    in the tools necessary to make that business work. The tools can be
    physical assets -- or, intellectual skillsets.

    Only you can identify the likely direction your business (products)
    will take. So, only you can decide which "tools" are sensible
    investments.

    The thing here is choosing your battles. Spend your time on the things that add value to the product. Don't make life needlessly harder when that's not necessary. Everything *can* be done, but some things shouldn't *need* to be done. If you're in the high-volume game, saving $1m using cheaper parts makes sense. If you're in the low-volume game, you might only save $1000
    but spend $10K in time doing so.

    But that requires you to know what your PRODUCTS (plural) are likely to be. Only you can know what your future actions/needs are *likely* to be.

    If I wanted to go in the kinematic sculpture business, I'd be approaching *mine* very differently -- even if it meant mine being more costly and
    taking longer to complete (due to all of the "investments" for future
    efforts). *My* finished result would likely look more "professional"...
    but, it would also look to be just one of N such units!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Don Y on Wed Oct 26 13:25:55 2022
    On 26/10/2022 11:34, Don Y wrote:
    On 10/26/2022 1:06 AM, David Brown wrote:
    On 26/10/2022 03:17, Don Y wrote:
    On 10/24/2022 7:20 AM, Theo wrote:
    I was idly looking to see what was out there in the low end Linux
    space -
    something bigger than an ESP32 but more production friendly than a
    Raspberry
    Pi.  I came across this excellent guide:

    https://jaycarlson.net/embedded-linux/

    He builds dev boards for 10 different chips from 7 vendors, just to
    see how
    it all goes - both hardware and software.  The results are quite
    interesting.

    Any other recommendations for Linux-supporting SoCs that are nice
    for low
    volume/hand production?

    As you've qualified the solution space with "Linux-supporting", I
    assume you mean a Linux port is already available (for at least
    the underlying architecture).

    And, as you've discounted the rPi as "less production friendly", I
    assume you're looking for *components*, not *assemblies*.

    I wouldn't assume that (though the OP will have to clarify).  Pi's are
    fine for prototyping, but there are many reasons why they might not be
    a suitable choice for real products.  However, that does not at all
    suggest that it is a good idea to use chips directly rather than modules.

    Unless your production runs are at least 10,000 a time, it is unlikely
    to be cost-effective to use anything other than pre-populated modules.
    Designing a board for large ball count BGAs, high speed memories,
    etc., is not quick or cheap, nor is their production.

    Did you *read* the article?

    I didn't, no - I was responding to what /you/ wrote in reply to what the
    OP asked. That was the relevant issue. (I've now read the article, and
    it has not changed my opinions significantly.)


       "To this end, I designed a dev board from scratch for each application
       processor reviewed. Well, actually, many dev boards for each processor:
       roughly 25 different designs in total. This allowed me to try out different
       DDR layout and power management strategies — as well as fix some bugs
       along the way."

    Perhaps you've no experience designing (and laying out and prototyping) "modern" parts.  It's not rocket science.  The days of paying $2K for
    a Leister are ancient history...  That was another point of the article.


    I do have experience at it, yes. And it takes knowledge, tools, and
    time. I didn't say the OP could not do it - I don't know his abilities.
    I said it was not cost-effective.

    Looking for "low-cost linux boards" could give you an idea as to
    the processors chosen for each.  But, they typically are "kitchen sink" approaches to problems.

    I'd, instead, look into the kernel and see if you can do away with
    the PMMU (i.e., get it to work with all memory wired down and no
    swap configured; then, remove the code associated with paging).

    That could have been good advice - twenty years ago.

    Now it is pointless to aim for such a minimal system.  The cheapest
    processors with MMU supported by Linux cost a few dollars.

    What do you do when your product *sells* for a few dollars?


    Is that a trick question? You don't use Linux.

    The cheapest non-MMU microcontrollers that are capable of supporting
    Linux are at least ten dollars.

    How do you define "supporting Linux"?  I.e., "for which an existing build exists?"


    Yes, or for which it is practical to make a build that could be used in
    a real system (as distinct from just for fun and bragging rights, such
    as the guy who got Linux "running" on an AVR).

    Most developers are only interested in the API and feature sets that
    they have available to them.  If it "looks" like linux, in terms of
    what they can expect it to do for them, they don't likely care about
    the actual implementation.

    I don't understand what you are trying to say here. Are we to guess
    what /looks/ like Linux, but /isn't/ Linux? You think people who want
    embedded Linux would be happy with a BSD? (Some might, but certainly
    not all.) Or a Windows system with WSL? Or FreeRTOS and LWIP with
    POSIX-style socket APIs?
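
    (By "POSIX-style socket APIs" I mean something like the rough sketch
    below - the server address and port are placeholders.  The same calls
    exist on Linux and, near enough, in lwIP's sockets compatibility
    layer, which is the sense in which an application can "look like"
    Linux without actually running it.)

      /* Minimal TCP client written against nothing but POSIX/BSD socket
       * calls.  The address and port are placeholders.  On Linux this
       * builds as-is; lwIP's sockets layer accepts much the same code. */
      #include <stdio.h>
      #include <unistd.h>
      #include <arpa/inet.h>
      #include <netinet/in.h>
      #include <sys/socket.h>

      int main(void)
      {
          int s = socket(AF_INET, SOCK_STREAM, 0);
          if (s < 0) { perror("socket"); return 1; }

          struct sockaddr_in addr = { 0 };
          addr.sin_family = AF_INET;
          addr.sin_port   = htons(7);               /* echo port, placeholder */
          inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* test address */

          if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
              perror("connect"); close(s); return 1;
          }

          const char msg[] = "hello\n";
          (void)write(s, msg, sizeof msg - 1);      /* send one line */

          char buf[64];
          ssize_t n = read(s, buf, sizeof buf);     /* read the echo back */
          if (n > 0)
              fwrite(buf, 1, (size_t)n, stdout);

          close(s);
          return 0;
      }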


      Swap has always been optional, but working without an MMU leads to a
    lot of complications and restrictions (such as no "fork" calls).

    Fork needn't "create a copy of the parent process" -- if the
    existing copy of the process can be used without duplication
    (think XIP -- no gobs of RAM into which to copy the new process
    image!).  All it need do is create a LOGICALLY new process container
    (which needn't even have "protection" from other processes).

    Fork /always/ has to create a /logical/ copy of the parent process -
    that's what it does. Without an MMU, all /writeable/ memory areas need
    to be duplicated at the fork by full copy, whereas with an MMU the pages
    are marked "copy on write" and only actually duplicated when needed.
    ("fork" existed before MMU processors were used for *nix.)

    In MMU-less Linux, "fork" is simply not supported as it would be too
    inefficient and complicated.  You need to use vfork() then execve(), or
    posix_spawn(), or clone(), with certain restrictions.
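
    The usual replacement pattern looks roughly like this; posix_spawn()
    folds the create-process and exec steps together, so no address-space
    copy is ever needed.  (/bin/ls is just a stand-in for whatever child
    program you actually want to run.)

      /* Spawning a child without fork(), as one would on no-MMU Linux.
       * The same call works on ordinary MMU Linux as well. */
      #include <stdio.h>
      #include <spawn.h>
      #include <sys/types.h>
      #include <sys/wait.h>

      extern char **environ;

      int main(void)
      {
          pid_t pid;
          char *argv[] = { "ls", "-l", "/", NULL };

          int err = posix_spawn(&pid, "/bin/ls", NULL, NULL, argv, environ);
          if (err != 0) {
              fprintf(stderr, "posix_spawn failed: %d\n", err);
              return 1;
          }

          int status;
          waitpid(pid, &status, 0);
          printf("child exited with status %d\n", WEXITSTATUS(status));
          return 0;
      }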


    Fork is probably the *least* valuable use of a PMMU in a system.

    It is one of the biggest headaches when porting real Linux software to
    MMU-less Linux. It has become less of an issue for some software,
    because it has become more common to write programs that can run on
    Windows as well as Linux, and Windows does not support "fork()" either.

    An MMU that gives some (reasonable) control over accesses to
    specific regions IN A UNIFIED ADDRESS SPACE would likely lead
    to more robust code (in and of itself) than supporting a
    classic fork().


    You are talking about an MPU (memory protection unit), not an MMU
    (memory management unit). MPUs are common on 32-bit microcontrollers,
    and let you restrict access to different parts of memory.

    MMUs are used to change the mapping between logical addresses used by
    code, and physical addresses used by the hardware. They provide many
    functions in addition to supporting "fork()", such as giving
    applications a contiguous view of memory despite fragmentation in the
    physical memory, and letting shared libraries have different physical
    and logical addresses.

    An MMU makes life massively simpler, more flexible and more efficient in
    a "big" OS where different programs are loaded and run at different times.

      No one uses non-MMU Linux except for nerdy fun.  (And fun is
    /always/ a good reason for doing something.)

    <https://www.kernel.org/doc/html/latest/admin-guide/mm/nommu-mmap.html>
    <https://www.techonline.com/tech-papers/supporting-linux-without-an-mmu/>

    Yes - people did use it before, and now they don't.  The day it becomes
    inconvenient to continue supporting it in the kernel will be the day it
    gets dropped.


    This may make some aspects of the implementation impractical.  E.g.,
    my RTOS relies on a PMMU to share data across protection domains,
    do zero-copy transfers, etc.  But, you may be able to live without
    the things that rely on that mechanism.

    [No idea as I've never looked inside the linux kernel]

    Some of the older kernel versions (and ports) may give you an insight
    into what can/can't be done.

    This could expand the range of processors/SoCs that you could use
    (though likely require some effort for a port).



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dimiter_Popoff@21:1/5 to David Brown on Wed Oct 26 16:06:45 2022
    On 10/26/2022 13:42, David Brown wrote:
    ...

    IMHO the "encrypt everything" movement is a silly idea and a massive
    waste of effort and resources.  Sure, you want your bank website traffic
    to use SSL, but it is completely unnecessary for the great majority of
    web traffic.

    The "encrypt everything" movement is not just silly, it is *shite*.
    And it is not just about the web, it also goes for mail etc.
    It is OK to have the encryption _capability_ but doing it all over the
    place is just a way to push the sales of more silicon. They used to
    do this by just bloating software so PC-s would become "old" within
    <5 years; now that they have tens of *gigabytes* of RAM they need
    a way to justify selling even more.
    Overall it may not be a bad thing, since this has kept the industry advancing,
    but to those who can see how things work it looks not just silly,
    it looks.... (OK, here comes the Irish/Scottish word again).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to All on Wed Oct 26 12:20:39 2022
    On 10/26/2022 6:06 AM, Dimiter_Popoff wrote:
    On 10/26/2022 13:42, David Brown wrote:
    ...

    IMHO the "encrypt everything" movement is a silly idea and a massive waste of
    effort and resources.  Sure, you want your bank website traffic to use SSL,
    but it is completely unnecessary for the great majority of web traffic.

    The "encrypt everything" movement is not just silly, it is *shite*.
    And it is not just about the web, it also goes for mail etc.
    It is OK to have the encryption _capability_ but doing it all over the
    place is just a way to push the sales of more silicon. They used to
    do this by just bloating software so PC-s would become "old" within
    <5 years; now that they have tens of *gigabytes* of RAM they need
    a way to justify selling even more.

    Digital comms are used for increasingly more purposes.
    Encrypting everything saves you from wondering if something SHOULD
    be encrypted, or not, at a "per communique" level.

    I've received correspondence from financial institutions along
    the lines of:
    "This is to confirm your recent transfer of $X from the
    account ending in 123 to the account ending in 456."
    Yay! You didn't disclose my account numbers. But, my *name*
    is on the email along with the size of the transaction and
    when it occurred!
    "This is to confirm your closing of the accounts ending
    in 123 and 456."
    Even if *I* wanted them to use PEM, there's no way to force
    the issue; my only recourse is to withhold an email address
    (the consequence of that is losing on-line access to my accounts
    via HTTPS) or move to another financial institution.

    [We receive dozens of pieces of printed correspondence each week from
    financial institutions regarding our various accounts.
    Even if just "transaction confirmations", that volume of
    cleartext traffic would leak far too much *personal* information.
    Note that few people choose to receive financial statements
    printed on POSTCARDS (which are less expensive to mail)]

    Should the video feeds (over IP) from the security cameras
    be encrypted? After all, anyone standing in those areas can SEE
    what the cameras are seeing so it's hardly a *secret* that needs
    to be protected! Isn't the camera's purpose as a deterrent?

    What about the MUZAK audio? Clearly anyone within earshot of the
    speakers can hear it... Or, the overhead "paging" system?

    And, obviously no need to encrypt VoIP traffic? Or, command
    and control traffic on the factory floor? What employee would
    willingly eavesdrop OR SUBVERT such traffic?

    Or, the video feed *from* the security office (to know if they're
    actually actively watching the other feeds!). etc.

    Surely, your baby monitor need not be encrypted (?) -- who wants
    to watch a sleeping infant? Or, see who's at your front door?
    Back yard? Determine if anyone is moving around the vicinity of
    your thermostat?

    Who'd want to hack a pacemaker? Or, someone else's car? etc.

    [This is c.a.E, after all!]

    If encryption is the normal means of communication, then the
    consequences of someone making a poor decision (regarding
    whether or not to send something in cleartext) go away.  It's
    one less issue to address in the potential attack surface.
    One less "afterthought" (as security seems to be, in most products)

    [Increasingly, hardware support for encryption is available
    in newer processors -- because of a perceived demand for it!
    Imagine WIRELESS comms where access to the transmission "media"
    is effortless!]

    Overall it may not be a bad thing, since this has kept the industry advancing,
    but to those who can see how things work it looks not just silly,
    it looks.... (OK, here comes the Irish/Scottish word again).

    I suspect the *hacked* pacemaker patient might have a different
    take on it! :>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Don Y on Thu Oct 27 11:28:53 2022
    On 26/10/2022 21:20, Don Y wrote:
    On 10/26/2022 6:06 AM, Dimiter_Popoff wrote:
    On 10/26/2022 13:42, David Brown wrote:
    ...

    IMHO the "encrypt everything" movement is a silly idea and a massive
    waste of effort and resources.  Sure, you want your bank website
    traffic to use SSL, but it is completely unnecessary for the great
    majority of web traffic.

    The "encrypt everything" movement is not just silly, it is *shite*.
    And it is not just about the web, it also goes for mail etc.
    It is OK to have the encryption _capability_ but doing it all over the
    place is just a way to push the sales of more silicon. They used to
    do this by just bloating software so PC-s would become "old" within
    <5 years; now that they have tens of *gigabytes* of RAM they need
    a way to justify selling even more.

    Digital comms are used for increasingly more purposes.
    Encrypting everything saves you from wondering if something SHOULD
    be encrypted, or not, at a "per communique" level.


    That's a reasonable argument, on the surface. But like many such
    simplistic rules, it discourages thinking, knowledge, nuances and
    appropriate usage. It is much like "zero tolerance" rules - they mean
    "zero thought" and often throw out the baby with the bath water.

    Different types of communication or storage have different requirements,
    and the benefits and costs of encryption are correspondingly varied.
    There are /many/ costs to using encryption - not just processor cycles
    or code and ram space. There's complexity in the code and the scope for
    bugs, the near impossibility of debugging or monitoring traffic or
    recovering data in encrypted storage, and the need to handle
    ever-changing standards and expiring keys and certificates.

    And while it might appear that "encrypt everything" means that even
    those that don't really understand the issues will still make "safe"
    systems because they use encryption by default, it is simply not true.
    Those who don't understand the appropriate security needs for a
    particular use-case are unlikely to use /appropriate/ encryption, and
    can easily get it wrong (such as poor handling of the keys). And now
    instead of saying "I don't understand this, I'll ask someone who does",
    they will think "it's all encrypted and therefore secure". They'll
    think their website is safe because it uses TLS, without considering
    that the bad guys can connect on the same encrypted links and hack in
    with the same weak passwords - only now as their traffic is encrypted,
    it's harder to track them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dave Nadler@21:1/5 to Theo on Tue Nov 1 16:59:16 2022
    On 10/24/2022 10:20 AM, Theo wrote:
    I was idly looking to see what was out there in the low end Linux space - something bigger than an ESP32 but more production friendly than a Raspberry Pi. ...

    Any other recommendations for Linux-supporting SoCs that are nice for low volume/hand production?

    Theo

    Here are a few SOMs I've looked at, trying to avoid SoC difficulties:
    https://www.mouser.com/c/?q=QSMP-15
    Some firms I've worked with have been happy with Toradex (for new
    designs, use the Verdin family):
    https://www.toradex.com/computer-on-modules/verdin-arm-family
    Lower end: https://www.digikey.com/en/products/detail/microchip-technology/ATSAMA5D27C-D5M-CUR/7801902

    I guess I'm a wimp, but I really don't want to deal with DDR routing
    and EMC issues for small runs...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)