• How to redirect to non-standard file descriptors?

    From Robert Latest@21:1/5 to All on Sun Oct 3 14:36:02 2021
    Hello,
    I'm trying to write a bash script that expects user input for every file in a directory. Here's what I came up with:

    #!/bin/sh

    touch 1 2 3 4 5 # for testing

    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' what
    echo What $what
    done

    It's kind of obvious why this doesn't work: Both 'read's read from the same stdin, so instead of waiting for user input for file 1, it just reads '2' into 'what' and so on. Obviously the output of 'ls' should go into a different file descriptor than stdout. So I tried this:

    ls 1>&4 | while read -u 4 fn; do

    but it gives the error "bad file descriptor".

    I must say I understand very little of the redirection chapter in "man bash". For instance, I never understood why, when I want to capture command's stderr, I need to do

    command > output.txt 2>&1

    rather than

    command 2>&1 > output.txt

    It doesn't seem logical that a redirection specifier after the target file would influence what goes into that file.

  • From Lew Pitcher@21:1/5 to Robert Latest on Sun Oct 3 14:42:05 2021
    On Sun, 03 Oct 2021 14:36:02 +0000, Robert Latest wrote:

    Hello,
    I'm trying to write a bash script that expects user input for every file in a directory. Here's what I came up with:

    #!/bin/sh

    touch 1 2 3 4 5 # for testing

    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' what
    echo What $what
    done

    It's kind of obvious why this doesn't work: Both 'read's read from the same stdin, so instead of waiting for user input for file 1, it just reads '2' into
    'what' and so on. Obviously the output of 'ls' should go into a different file
    descriptor than stdout.

    Nope.

    Your script fails because both reads read from stdin, and stdin has been redirected to the stdout of the ls(1) command.

    What you /want/ is to have the first read
    read fn
    read from stdin (still connected to the stdout of ls(1) )

    and the second read
    read -p 'Do what:' what
    to read from your terminal

    You do this by redirecting the stdin of the /second/ read so that
    it reads from /dev/tty, as in

    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' what </dev/tty
    echo What $what
    done

    [snip]


    --
    Lew Pitcher
    "In Skills, We Trust"

  • From Ben Bacarisse@21:1/5 to Robert Latest on Sun Oct 3 16:50:08 2021
    Robert Latest <boblatest@yahoo.com> writes:

    I'm trying to write a bash script that expects user input for every file in a directory. Here's what I came up with:

    #!/bin/sh

    touch 1 2 3 4 5 # for testing

    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' what
    echo What $what
    done

    It's kind of obvious why this doesn't work: Both 'read's read from the same stdin, so instead of waiting for user input for file 1, it just reads '2' into
    'what' and so on. Obviously the output of 'ls' should go into a different file
    descriptor than stdout.

    You've had an answer, but I don't think it's ideal. Unix works best by
    using stdin and stdout where possible. One day you'll want the answers
    to come from a file and not the user's tty. (And if you don't someone
    else will!)

    You should simply side-step the two reads by looping over the files
    directly:

    for fn in *; do
    echo File "$fn"
    read -p 'Do what:' what
    echo What "$what"
    done

    Note that I've also quoted all uses of the variables.

    ls 1>&4 | while read -u 4 fn; do

    You may be able to do it with read's -u n argument, but I've never found
    the need to delve into that sort of thing. Simpler is better wherever
    it's possible.

    --
    Ben.

  • From Lew Pitcher@21:1/5 to Ben Bacarisse on Sun Oct 3 16:04:20 2021
    On Sun, 03 Oct 2021 16:50:08 +0100, Ben Bacarisse wrote:

    Robert Latest <boblatest@yahoo.com> writes:

    I'm trying to write a bash script that expects user input for every file in a
    directory. Here's what I came up with:

    #!/bin/sh

    touch 1 2 3 4 5 # for testing

    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' what
    echo What $what
    done

    It's kind of obvious why this doesn't work: Both 'read's read from the same stdin, so instead of waiting for user input for file 1, it just reads '2' into
    'what' and so on. Obviously the output of 'ls' should go into a different file
    descriptor than stdout.

    You've had an answer, but I don't think it's ideal. Unix works best by
    using stdin and stdout where possible. One day you'll want the answers
    to come from a file and not the user's tty. (And if you don't someone
    else will!)

    I bow to your expertise. Your solution covers more situations than mine
    did. I learned something today :-)

    [snip]
    --
    Lew Pitcher
    "In Skills, We Trust"

  • From David W. Hodgins@21:1/5 to Robert Latest on Sun Oct 3 13:37:32 2021
    On Sun, 03 Oct 2021 13:19:42 -0400, Robert Latest <boblatest@yahoo.com> wrote:
    I bow to both of your expertise. I usually do the for loop myself but in this case the input doesn't come from a simple 'ls' but from a more convoluted
    thing which of course could be backquoted and used in 'for', but the filenames
    may have spaces in them and so on... anyway, I used the /dev/tty redirect.

    Then use an array, and for the spaces etc, be sure to always properly quote the variables. For example ...
    IFS=$'\n'
    SFDiskOutput=($($sudocmd $sfdiskcmd \-l \-uS /dev/sd? 2>&1)) # list drives and block sizes
    <snip test for error etc.>
    for SFDiskLine in "${SFDiskOutput[@]}" ; do
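
    Applied to the file-name problem in this thread, a complete, runnable sketch
    of the same array pattern might look like this (plain 'ls' merely stands in
    for the real, more convoluted command that produces the names):

    #!/bin/bash
    IFS=$'\n'      # split the command substitution on newlines only
    set -f         # and suppress pathname expansion of the resulting words
    files=( $(ls) )
    set +f
    unset IFS

    for fn in "${files[@]}"; do      # quoting keeps embedded spaces intact
        echo "File $fn"
        read -r -p 'Do what:' what   # no pipe here, so stdin is still the terminal
        echo "What $what"
    done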

    Regards, Dave Hodgins

    --
    Change dwhodgins@nomail.afraid.org to davidwhodgins@teksavvy.com for
    email replies.

  • From Robert Latest@21:1/5 to Lew Pitcher on Sun Oct 3 17:19:42 2021
    Lew Pitcher wrote:
    On Sun, 03 Oct 2021 16:50:08 +0100, Ben Bacarisse wrote:
    I bow to your expertise. Your solution covers more situations than mine
    did. I learned something today :-)

    Ben, Lew,

    I bow to both of your expertise. I usually do the for loop myself but in this case the input doesn't come from a simple 'ls' but from a more convoluted thing which of course could be backquoted and used in 'for', but the filenames may have spaces in them and so on... anyway, I used the /dev/tty redirect.

    More weird stuff seems to be possible with 'exec'; here's some stuff I found in /sbin:

    /sbin$ grep '>&[3-9]' *
    dhclient-script: exec 0>&9 9>&-
    discover-pkginstall: echo $RET | sed 's/,//g' 1>&8
    mkinitramfs: exec 4>&1 >&3 3>&-
    mkinitramfs: find . 4>&-; echo "ec1=$?;" >&4
    mkinitramfs: echo "ec2=$?;" >&4

    I don't understand it and don't need it at the moment.

  • From Ben Bacarisse@21:1/5 to Robert Latest on Sun Oct 3 21:44:41 2021
    Robert Latest <boblatest@yahoo.com> writes:

    Lew Pitcher wrote:
    On Sun, 03 Oct 2021 16:50:08 +0100, Ben Bacarisse wrote:
    I bow to your expertise. Your solution covers more situations than mine
    did. I learned something today :-)

    Ben, Lew,

    I bow to both of your expertise. I usually do the for loop myself
    but in this case the input doesn't come from a simple 'ls' but from a
    more convoluted thing which of course could be backquoted and used in
    'for', but the filenames may have spaces in them and so on... anyway, I
    used the /dev/tty redirect.

    Presumably the "and so on" does not include newlines because they will
    cause the 'read' method to fail. If that's not a problem you can use

    IFS=$'\n'
    for fn in $(complex command); do ... done

    If you want to manage every single file (ones with newlines included)
    you need something that can use a null character as the separator. For
    example

    find . -maxdepth 1 -print0

    is very much like ls but separates the names with null bytes. If you
    can arrange that sort of input, you can get the names into an array in a
    modern version of bash using

    readarray -d '' files < <(find . -maxdepth 1 -print0)

    The -d '' makes readarray use null as an input separator (something you
    can't do with IFS) and you can then loop over the array

    for fn in "${files[@]}"; do ... done
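
    Put together with the prompting loop from the start of the thread, a
    minimal sketch (assuming bash 4.4 or later for readarray -d, and
    narrowing find to regular files) could be:

    #!/bin/bash
    readarray -d '' -t files < <(find . -maxdepth 1 -type f -print0)

    for fn in "${files[@]}"; do
        echo "File $fn"
        read -r -p 'Do what:' what   # stdin is free, so no /dev/tty trick is needed
        echo "What $what"
    done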

    --
    Ben.

  • From Ben Bacarisse@21:1/5 to oguzismailuysal@gmail.com on Mon Oct 4 02:11:32 2021
    Oğuz <oguzismailuysal@gmail.com> writes:

    On Sunday, October 3, 2021 at 5:36:08 PM UTC+3, Robert Latest wrote:
    Hello,
    I'm trying to write a bash script that expects user input for every file in a
    directory. Here's what I came up with:

    #!/bin/sh

    touch 1 2 3 4 5 # for testing

    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' what
    echo What $what
    done

    It's kind of obvious why this doesn't work: Both 'read's read from the same stdin, so instead of waiting for user input for file 1, it just reads '2' into
    'what' and so on. Obviously the output of 'ls' should go into a different file
    descriptor than stdout.

    No, but the inner read should read from the shell's input:
    {
    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' -u 3 what
    echo What $what
    done
    } 3<&0

    This is a nice solution as it does not prevent redirection of the
    command's input. (The usual caveats about newlines in file names, and
    the 'ls' being, in practice, a more complex command.)
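
    For example, if the loop is saved as a script (call it process.sh; the
    name and the answer words below are purely illustrative), the answers can
    just as well come from a file instead of the terminal:

    printf '%s\n' keep delete keep rename keep > answers.txt
    ./process.sh < answers.txt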

    --
    Ben.

  • From Oğuz@21:1/5 to Robert Latest on Sun Oct 3 17:50:58 2021
    On Sunday, October 3, 2021 at 5:36:08 PM UTC+3, Robert Latest wrote:
    Hello,
    I'm trying to write a bash script that expects user input for every file in a directory. Here's what I came up with:

    #!/bin/sh

    touch 1 2 3 4 5 # for testing

    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' what
    echo What $what
    done

    It's kind of obvious why this doesn't work: Both 'read's read from the same stdin, so instead of waiting for user input for file 1, it just reads '2' into
    'what' and so on. Obviously the output of 'ls' should go into a different file
    descriptor than stdout.

    No, but the inner read should read from the shell's input:
    {
    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' -u 3 what
    echo What $what
    done
    } 3<&0


    but it gives the error "bad file descriptor".

    I must say I understand very little of the redirection chapter in "man bash". For instance, I never understood why, when I want to capture command's stderr,
    I need to do

    command > output.txt 2>&1

    rather than

    command 2>&1 > output.txt

    It doesn't seem logical that a redirection specifier after the target file would influence what goes into that file.

  • From Kenny McCormack@21:1/5 to oguzismailuysal@gmail.com on Mon Oct 4 02:01:00 2021
    In article <25460292-f17f-4c82-b06d-01d30cf2cec0n@googlegroups.com>,
    Oğuz <oguzismailuysal@gmail.com> wrote:
    ...
    No, but the inner read should read from the shell's input:
    {
    ls | while read fn; do
    echo File $fn
    read -p 'Do what:' -u 3 what
    echo What $what
    done
    } 3<&0

    Or, somewhat more simply:

    #!/bin/bash
    exec 3<&0
    seq 1 10 | while read fn; do
    echo File $fn
    read -p 'Do what:' -u3 what
    echo What $what
    done

    Or even:

    #!/bin/bash
    exec 3<&0
    while read fn; do
    echo File $fn
    read -p 'Do what:' -u3 what
    echo What $what
    done < <(seq 1 10)

    Which often works better than the first method...

    But, as noted many times in this thread, if you want to loop on filenames,
    it is better to do:

    for i in FOO*.*;do ...

    --
    To most Christians, the Bible is like a software license. Nobody
    actually reads it. They just scroll to the bottom and click "I agree."

    - author unknown -

  • From William Ahern@21:1/5 to Robert Latest on Mon Oct 4 21:28:12 2021
    Robert Latest <boblatest@yahoo.com> wrote:
    <snip>
    I must say I understand very little of the redirection chapter in "man bash". For instance, I never understood why, when I want to capture command's stderr,
    I need to do

    command > output.txt 2>&1

    rather than

    command 2>&1 > output.txt

    It doesn't seem logical that a redirection specifier after the target file would influence what goes into that file.

    It makes more sense when you understand how redirection is implemented in C:

    2>&1

    evaluates as

    dup2(1, 2);

    and

    >output.txt

    is short-hand for

    1>output.txt

    which evaluates as

    int fd = open("output.txt", O_CREAT|O_RDONLY);
    dup2(fd, 1);

    Now consider that redirection operators, as well as the pipe operator (|), are evaluated left to right, and that after performing dup2(1, 2) there's no way
    to recover the file that descriptor 2 previously referenced. If you want to reference that previous file, you either need to rearrange the order of operations or explicitly copy (dup) the reference to another descriptor
    number.
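
    One classic shell idiom for the second option is to park the old
    descriptor on a spare number first, for example to swap stdout and
    stderr:

    command 3>&1 1>&2 2>&3 3>&-
    # 3>&1  dup2(1, 3)  save a copy of the current stdout
    # 1>&2  dup2(2, 1)  stdout now goes where stderr was going
    # 2>&3  dup2(3, 2)  stderr now goes to the saved stdout
    # 3>&-  close(3)    drop the temporary copy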

    The Unix shell makes more sense when you realize that most seemingly
    abstract features directly map to a small set of otherwise simple syscalls
    such as dup, dup2, fork, waitpid, etc. At least in the context of Unix
    systems, the shell excels at program execution and I/O redirection precisely because it's effectively a literal, in-order evaluation of these syscalls.

    command > output.txt 2>&1

    evaluates much like

    int pid = fork();
    if (pid == 0) { /* child */
        int fd = open("output.txt", O_CREAT|O_TRUNC|O_WRONLY, 0666);
        dup2(fd, 1);
        dup2(1, 2); /* fd:1 is now output.txt, not old stdout */
        execv("command", ...);
        exit(127);
    } else { /* parent */
        int status;
        while (pid != wait(&status)) {
            ...
        }
    }

    Most of the complexity of shell implementations is in string manipulation
    and other bookkeeping that's relatively verbose in C, as well as interactive features like managing the prompt. Early shell implementations, before many
    of those other convenience features were added, were thus exceedingly simple interpreters. Here's the original Bourne shell code that parsed
    and executed a command line:

    https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/xec.c

    Which at cursory inspection seems to evaluate a command as I described,
    except implemented as a loop and switch over the scanner.

  • From Robert Latest@21:1/5 to William Ahern on Tue Oct 5 17:53:40 2021
    William Ahern wrote:

    It makes more sense when you understand how redirection is implemented in C:

    2>&1

    evaluates as

    dup2(1, 2);

    I must admit I've never gotten into the low-level I/O stuff; I've always used the higher-level stdio interface.


    https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/xec.c

    Now that's some weird-looking code. There must be some interesting stuff in sym.h and defs.h.

    Thanks for the good explanation!

  • From Keith Thompson@21:1/5 to William Ahern on Tue Oct 5 12:11:57 2021
    William Ahern <william@25thandClement.com> writes:
    Robert Latest <boblatest@yahoo.com> wrote:
    <snip>
    I must say I understand very little of the redirection chapter in "man bash".
    For instance, I never understood why, when I want to capture command's stderr,
    I need to do

    command > output.txt 2>&1

    rather than

    command 2>&1 > output.txt

    It doesn't seem logical that a redirection specifier after the target file would influence what goes into that file.

    It makes more sense when you understand how redirection is implemented in C:

    [rest of thorough explanation snipped]

    Personally, I find it easier to understand the syntax by thinking of it
    as a left-to-right evaluation of operations, without reference to the
    dup2() calls.

    command # executes command with stdout and stderr inherited from the shell
    > output.txt # redirects stdout to file "output.txt", leaving stderr alone
    2>&1 # redirects stderr to wherever stdout is currently going

    Result: both stdout and stderr are redirected to the file "output.txt".
    (Of course the command doesn't start to execute until all the
    redirections have been done.)

    Compare the other suggested command:

    command # executes command with stdout and stderr inherited from the shell
    &1 # redirects stderr to wherever stdout is currently going
    output.txt # redirects stdout to "output.txt", leaving stderr alone

    Result: stdout goes to "output.txt", but stderr is unchanged.

    I think the key point of confusion (I used to get hung up on this
    myself) is that "2>&1" doesn't say "make stdout and stderr go to the
    same place, and keep them joined over any future redirections". It's a one-time change to stderr.

    If it helps, think of stdin (0), stdout (1), and stderr (2) as
    independent variables, and the redirection operators as assignment
    statements, evaluated left to right. The assignment "2>&1" changes the
    value of fd 2, but does not affect fd 1; it only obtains fd 1's current
    value.
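
    A quick way to see the difference (a sketch; the function and the file
    names are arbitrary):

    emit() { echo out; echo err >&2; }   # one line to stdout, one to stderr

    emit > both.txt 2>&1    # both.txt receives "out" and "err"
    emit 2>&1 > only.txt    # only.txt receives "out"; "err" goes to wherever
                            # stdout pointed before (the terminal, interactively)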

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Helmut Waitzmann@21:1/5 to All on Tue Oct 5 21:55:27 2021
    Robert Latest <boblatest@yahoo.com>:

    I must say I understand very little of the redirection chapter in
    "man bash".

    That's because the bash manual page's purpose is not to teach the unix
    file descriptor redirection mechanism.  The bash manual page assumes
    that the reader is already familiar with unix file descriptors
    and knows what “redirection” of those file descriptors is.

    So, here is a very short introduction to unix file descriptors:


    For accessing files, the kernel maintains two data structures: a
    system-wide table (or array) of opened files, and a per-process
    (i. e. each process has one of its own) table (or array) of file descriptors.  When a process asks the kernel to open a file, for
    example by invoking the “open()” system service, the kernel picks an unused entry of the process' table of file descriptors.  This entry
    will be identified by its position (or index) in that file
    descriptor table: a small number (0, 1, 2, …), a. k. a. the file descriptor number.

    Then the kernel picks an unused entry of the system-wide table of
    open files and records a reference to (for example: the position in
    the system‐wide open‐files table) that entry in the allocated entry
    of the process' table of file descriptors.

    In the allocated entry of the system-wide table of open files, the
    kernel records which file is to be accessed, the access mode (that
    is, whether the file is opened for reading or for writing, if
    writing will overwrite the file or append to it, etc.), the current
    access position in the opened file, and, how many file descriptor
    table entries are associated with this entry of the system-wide
    table of open files (in this case: 1).

    Finally, the kernel returns the index of the allocated entry of the
    process' file descriptor table to the process.  In the bash manual
    page that index is known as file descriptor number.

    For example, the return value of the system call “open()” will
    be such a number.  See the manual page “open(2)”.

    Note:  The process can't (directly) tell the kernel which entry
    of the file descriptor table to use when opening a file.

    But there is a system service that allows a process to tell the
    kernel which entry of the file descriptor table to use: “dup2()”.
    See the manual page “dup2(2)”.  The “dup()” and “dup2()” system services essentially copy one entry of the process' file descriptor
    table to another.


    So, if you tell the shell to start a command by


    command > output.txt

    the shell will first open the file “output.txt” by means of the
    system service “open”, which will (for example) return the file
    descriptor number 42.  Then the shell will tell the kernel by means
    of the system service “dup2” to copy the contents of the entry #42
    of the process' file descriptor table to the entry #1 in the same
    table.  Finally the shell will tell the kernel by means of the
    system service “close” to release (i. e. empty) the entry #42.

    The result of those three system service calls is, that the file
    descriptor #1 will refer to the opened file “output.txt” (rather
    than to the terminal):  When the command writes data to its file
    descriptor #1 (a. k. a. standard output), the data will arrive in
    the file “output.txt”.  That's why this sequence of system service
    calls is often called redirection.  But note:  There is nothing like redirection or forwarding involved in this mechanism.  It's just the
    effect of copying one file descriptor table entry to another.

    Now, if the command is


    command > output.txt 2>&1


    the shell first will do the same and then tell the kernel by means
    of the system service “dup2” to copy the contents of the entry #1 to
    the entry #2.  Now two entries in the process' file descriptor
    table – #1 and #2 – refer to the opened file “output.txt”.

    On the other hand, look, what


    command 2>&1 > output.txt


    would do:  It would copy the entry #1 of the process' file
    descriptor table to the entry #2 and then open the file
    “output.txt”, thus getting the (hypothetical) entry #42, then copy
    that entry #42 to the entry #1.  Of course the entry #2 won't be a
    copy of the entry #42 then, i. e. won't refer to the file
    “output.txt”.

    It doesn't seem logical that a redirection specifier after the
    target file would influence what goes into that file.

    You are fooled by the misnomer “file descriptor redirection”.  Keep in
    mind that

    2>&1

    is essentially not much more than copying entry #1 to entry #2. 
    There is no forwarding involved like “if you want to write to the
    file referred to by the process' file descriptor table entry #2, look
    up the file descriptor table entry #1 and use that instead”.

    See also <https://en.wikipedia.org/wiki/File_descriptor#top>.
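
    The same open/dup2/close sequence can be spelled out with the shell's
    own “exec” (a minimal sketch, meant to be run as a script; the file
    name and the descriptor number 3 are only for illustration):

    exec 3> output.txt    # like open(): “output.txt” ends up on entry #3
    exec 1>&3             # like dup2(3, 1): entry #1 becomes a copy of entry #3
    exec 3>&-             # like close(3): entry #3 is released
    echo 'this ends up in output.txt'   # entry #1 still refers to “output.txt”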

  • From William Ahern@21:1/5 to Robert Latest on Tue Oct 5 15:22:15 2021
    Robert Latest <boblatest@yahoo.com> wrote:
    William Ahern wrote:

    It makes more sense when you understand how redirection is implemented in C:
    2>&1

    evaluates as

    dup2(1, 2);

    I must admit I've never gotten into the low-level I/O stuff; I've always used the higher-level stdio interface.


    https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/xec.c

    Now that's some weird-looking code. There must be some interesting stuff in sym.h and defs.h.

    Steve Bourne fancied Algol syntax and blazed the trail for abusive C preprocessor magic: https://research.swtch.com/shmacro

  • From Dan Espen@21:1/5 to Robert Latest on Tue Oct 5 21:10:59 2021
    Robert Latest <boblatest@yahoo.com> writes:

    William Ahern wrote:

    It makes more sense when you understand how redirection is implemented in C:
    2>&1

    evaluates as

    dup2(1, 2);

    I must admit I've never gotten into the low-level I/O stuff; I've always used the higher-level stdio interface.


    https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/xec.c

    Now that's some weird-looking code. There must be some interesting stuff in sym.h and defs.h.

    I like it. No curly braces except at the function level.
    Somebody messed up SYSTIMES and SYSUMASK.

    --
    Dan Espen

  • From Kenny McCormack@21:1/5 to dan1espen@gmail.com on Wed Oct 6 10:11:57 2021
    In article <sjit33$133$2@dont-email.me>,
    Dan Espen <dan1espen@gmail.com> wrote:
    ...
    https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/xec.c

    Now that's some weird-looking code. There must be some interesting stuff in sym.h and defs.h.

    I like it. No curly braces except at the function level.
    Somebody messed up SYSTIMES and SYSUMASK.

    As another poster noted, the syntax is reminiscent of Algol, but what is
    most notable is that it is the same (more or less) syntax as the shell
    itself. (IF/FI, etc).

    The syntax of the shell itself was, they say, based on Algol.

    --
    The randomly chosen signature file that would have appeared here is more than 4 lines long. As such, it violates one or more Usenet RFCs. In order to remain in compliance with said RFCs, the actual sig can be found at the following URL:
    http://user.xmission.com/~gazelle/Sigs/TedCruz

  • From Geoff Clare@21:1/5 to Keith Thompson on Wed Oct 6 13:40:23 2021
    Keith Thompson wrote:

    command # executes command with stdout and stderr inherited from the shell
    &1 # redirects stderr to wherever stdout is currently going
    output.txt # redirects stdout to "output.txt", leaving stderr alone

    Result: stdout goes to "output.txt", but stderr is unchanged.

    Nit-pick: if stdout and stderr were open to different files before
    the command, stderr is not unchanged; it ends up going to the file
    that stdout was originally going to.

    A typical use of this is to filter stderr:

    some_command 2>&1 >output.txt | some_filter >&2

    The 2>&1 makes stderr go to the pipe.
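
    For instance (make and build.log are only placeholders):

    make 2>&1 > build.log | sed 's/^/stderr: /' >&2

    The regular output lands in build.log, while anything written to stderr
    comes back out on stderr with a marker in front of it.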

    --
    Geoff Clare <netnews@gclare.org.uk>

  • From Keith Thompson@21:1/5 to Geoff Clare on Wed Oct 6 07:11:18 2021
    Geoff Clare <geoff@clare.See-My-Signature.invalid> writes:
    Keith Thompson wrote:

    command # executes command with stdout and stderr inherited from the shell
    &1 # redirects stderr to wherever stdout is currently going
    output.txt # redirects stdout to "output.txt", leaving stderr alone

    Result: stdout goes to "output.txt", but stderr is unchanged.

    Nit-pick: if stdout and stderr were open to different files before
    the command, stderr is not unchanged; it ends up going to the file
    that stdout was originally going to.

    Quite correct. (That's not just a nitpick; I was wrong.) Thanks.

    A typical use of this is to filter stderr:

    some_command 2>&1 >output.txt | some_filter >&2

    The 2>&1 makes stderr go to the pipe.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Helmut Waitzmann@21:1/5 to All on Thu Oct 7 09:35:52 2021
    Geoff Clare <geoff@clare.See-My-Signature.invalid>:

    A typical use of this is to filter stderr:

    some_command 2>&1 >output.txt | some_filter >&2

    The 2>&1 makes stderr go to the pipe.


    It's even possible to filter stdout as well as stderr, each of them
    by a filter of its own, for example:

    {
    { some_command 3>&- 4>&- |
    sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
    } 2>&1 |
    sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
    } 3>&1 4>&2
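
    To try it, a stand-in for some_command that writes one line to each
    stream is enough (the function is purely illustrative):

    some_command() { echo 'hello on stdout'; echo 'hello on stderr' >&2; }

    {
    { some_command 3>&- 4>&- |
    sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
    } 2>&1 |
    sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
    } 3>&1 4>&2

    The first line then arrives on standard output prefixed “stdout: ” and
    the second on standard error prefixed “stderr: ”.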

  • From Ben Bacarisse@21:1/5 to Helmut Waitzmann on Thu Oct 7 23:21:35 2021
    Helmut Waitzmann <nn.throttle@xoxy.net> writes:

    Geoff Clare <geoff@clare.See-My-Signature.invalid>:

    A typical use of this is to filter stderr:

    some_command 2>&1 >output.txt | some_filter >&2

    The 2>&1 makes stderr go to the pipe.


    It's even possible to filter stdout as well as stderr, each of them by a filter of its own, for example:

    {
    { some_command 3>&- 4>&- |
    sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
    } 2>&1 |
    sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
    } 3>&1 4>&2

    Wow. I take my hat off to you. I knew this was possible, I don't think
    I'd be able to work out how. But it is an illustration of design
    failure. This should not be hard.

    (When I need to do this, I used a named pipe.)

    --
    Ben.

  • From Branimir Maksimovic@21:1/5 to Ben Bacarisse on Thu Oct 7 23:32:14 2021
    On 2021-10-07, Ben Bacarisse <ben.usenet@bsb.me.uk> wrote:
    Helmut Waitzmann <nn.throttle@xoxy.net> writes:

    Geoff Clare <geoff@clare.See-My-Signature.invalid>:

    A typical use of this is to filter stderr:

    some_command 2>&1 >output.txt | some_filter >&2

    The 2>&1 makes stderr go to the pipe.


    It's even possible to filter stdout as well as stderr, each of them by a
    filter of its own, for example:

    { { some_command 3>&- 4>&- | sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&- } 2>&1 | sed -e 's/^/stderr: /' 1>&2 4>&- 3>&- } 3>&1 4>&2

    Wow. I take my hat off to you. I knew this was possible, I don't think I'd be able to work out how. But it is an illustration of design failure. This should not be hard.

    (When I need to do this, I used a named pipe.)

    It's easy to read, it's just not common usage :P


    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

  • From Janis Papanagnou@21:1/5 to Branimir Maksimovic on Fri Oct 8 09:17:23 2021
    On 08.10.2021 01:32, Branimir Maksimovic wrote:
    On 2021-10-07, Ben Bacarisse <ben.usenet@bsb.me.uk> wrote:
    Helmut Waitzmann <nn.throttle@xoxy.net> writes:

    Geoff Clare <geoff@clare.See-My-Signature.invalid>:

    A typical use of this is to filter stderr:

    some_command 2>&1 >output.txt | some_filter >&2

    The 2>&1 makes stderr go to the pipe.


    It's even possible to filter stdout as well as stderr, each of them by a filter of its own, for example:

    { { some_command 3>&- 4>&- | sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&- } 2>&1 | sed -e 's/^/stderr: /' 1>&2 4>&- 3>&- } 3>&1 4>&2

    Wow. I take my hat off to you. I knew this was possible, I don't think I'd be able to work out how. But it is an illustration of design failure. This should not be hard.

    (When I need to do this, I used a named pipe.)

    It's easy to read, it's just not common usage :P

    By design the feature operates at a very low abstraction level. Of
    course you can try to reconstruct the individual pieces, redirecting, duplicating, closing, on the various bracketed levels, but it's not
    obvious and it is also error-prone. On the other hand, that allows one to solve
    most (all?) redirection tasks. The above solution is best taken as a fixed
    code pattern so that you don't have to rebuild that cryptic expression
    every time you need it, and an attached code comment helps you remember
    what it does without the need to re-confirm its functionality.

    Janis

  • From Oğuz@21:1/5 to Helmut Waitzmann on Fri Oct 8 03:30:49 2021
    On Thursday, October 7, 2021 at 10:37:48 AM UTC+3, Helmut Waitzmann wrote:
    Geoff Clare <ge...@clare.See-My-Signature.invalid>:
    A typical use of this is to filter stderr:

    some_command 2>&1 >output.txt | some_filter >&2

    The 2>&1 makes stderr go to the pipe.

    It's even possible to filter stdout as well as stderr, each of them
    by a filter of its own, for example:

    {
    { some_command 3>&- 4>&- |
    sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
    } 2>&1 |
    sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
    } 3>&1 4>&2

    How is this any different from

    { { some_command | sed -e 's/^/stdout: /' >&3 2>&4; } 2>&1 | sed -e 's/^/stderr: /' >&2; } 3>&1 4>&2

    ? What is the point of closing 3 and 4?

  • From Helmut Waitzmann@21:1/5 to All on Sun Oct 10 01:56:37 2021
    Oğuz <oguzismailuysal@gmail.com>:
    On Thursday, October 7, 2021 at 10:37:48 AM UTC+3, Helmut Waitzmann wrote:
    Geoff Clare <ge...@clare.See-My-Signature.invalid>:
    A typical use of this is to filter stderr:

    some_command 2>&1 >output.txt | some_filter >&2

    The 2>&1 makes stderr go to the pipe.

    It's even possible to filter stdout as well as stderr, each of
    them by a filter of its own, for example:

    {
    { some_command 3>&- 4>&- |
    sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
    } 2>&1 |
    sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
    } 3>&1 4>&2

    How is this any different from

    { { some_command | sed -e 's/^/stdout: /' >&3 2>&4; } 2>&1 | sed -e 's/^/stderr: /' >&2; } 3>&1 4>&2

    ?

    Using "1>&3" rather than ">&3" makes it for me more explicit which
    file descriptor table entry is copied, but both forms are
    equivalent.

    What is the point of closing 3 and 4?


    File descriptor hygiene.  When not closing 3 and 4, "some_command"
    might[1] be able to write to the file descriptors 3 and 4, thus
    circumventing the filters and causing unfiltered output to the
    invoker's file descriptors 1 and 2.  Depending on the invocation
    environment this might cause a security hole.

    [1] Whether "some_command" actually will be able to write to the
    file descriptors 3 and 4 depends on the implementation of the
    (POSIX) shell:  Iirc POSIX neither mandates nor prohibits the
    setting of the close‐on‐exec flag on the file descriptors 3 and 4 by
    the shell.

    And even if "some_command" is not misbehaving, the unnecessarily
    open file descriptors 3 and 4 will decrease the number of available
    file descriptors for "some_command" and for the "sed" filters.
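
    On Linux the leak is easy to observe by letting the command list its
    own descriptors (a sketch; /proc/self/fd is Linux-specific, and whether
    the descriptors are inherited at all depends on the shell, as noted
    above):

    { ls -l /proc/self/fd ; } 3>&1 4>&2            # 3 and 4 usually show up
    { ls -l /proc/self/fd 3>&- 4>&- ; } 3>&1 4>&2  # closed before ls runs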

  • From Helmut Waitzmann@21:1/5 to All on Tue Oct 12 21:55:44 2021
    Helmut Waitzmann <nn.throttle@xoxy.net>:
    Oğuz <oguzismailuysal@gmail.com>:

    [discussing the following command]

    {
    { some_command 3>&- 4>&- |
    sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
    } 2>&1 |
    sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
    } 3>&1 4>&2

    What is the point of closing 3 and 4?


    File descriptor hygiene.  When not closing 3 and 4, "some_command"


    [might be misbehaving by writing to the file descriptors 3 and 4]


    And even if "some_command" is not misbehaving, the unnecessarily
    open file descriptors 3 and 4 will decrease the number of available
    file descriptors for "some_command" and for the "sed" filters.

    And there is another pitfall with file descriptors 3 and 4 left
    open:  Imagine that "some_command" might eventually put itself into
    the background by closing and reopening file descriptors 0, 1 and 2
    to /dev/null, then forking a child and dying.  As neither the parent
    nor the child knows about the open file descriptors 3 and 4, the
    child will leave them open.  Now, if the output of the command should
    be fed to a pipe, for example, by

    {
    {
    { some_command 3>&- 4>&- |
    sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
    } 2>&1 |
    sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
    } 3>&1 4>&2
    } 2>&1 |
    mailx ...

    "mailx" won't terminate as long as the child of "some_command" is
    alive, because it won't stop waiting for input:  the forked
    child of "some_command" unknowingly holds the write end of the "|
    mailx" pipe open.

  • From Oğuz@21:1/5 to Helmut Waitzmann on Tue Oct 12 22:55:57 2021
    On Tuesday, October 12, 2021 at 10:55:52 PM UTC+3, Helmut Waitzmann wrote:
    [full quote of the previous post snipped]

    Oh. This would never have occurred to me. Thank you.

  • From Robert Latest@21:1/5 to Helmut Waitzmann on Tue Oct 19 17:13:00 2021
    Helmut Waitzmann wrote:
    [full explanation snipped]


    Good explanation, thank you. I'll forget it within days because I'm not actively using it. I hope someone finds it in the future so your effort isn't going to waste!
