Hello,
I'm trying to write a bash script that expects user input for every file in a directory. Here's what I came up with:
#!/bin/sh
touch 1 2 3 4 5 # for testing
ls | while read fn; do
echo File $fn
read -p 'Do what:' what
echo What $what
done
It's kind of obvious why this doesn't work: Both 'read's read from the same stdin, so instead of waiting for user input for file 1, it just reads '2' into
'what' and so on. Obviously the output of 'ls' should go into a different file
descriptor than stdout.
I tried

ls 1>&4 | while read -u 4 fn; do

but it gives the error "bad file descriptor".
Robert Latest <boblatest@yahoo.com> writes:
I'm trying to write a bash script that expects user input for every file in a
directory. Here's what I came up with:
#!/bin/sh
touch 1 2 3 4 5 # for testing
ls | while read fn; do
echo File $fn
read -p 'Do what:' what
echo What $what
done
It's kind of obvious why this doesn't work: Both 'read's read from the same stdin, so instead of waiting for user input for file 1, it just reads '2' into
'what' and so on. Obviously the output of 'ls' should go into a different file
descriptor than stdout.
You've had an answer, but I don't think it's ideal. Unix works best by
using stdin and stdout where possible. One day you'll want the answers
to come from a file and not the user's tty. (And if you don't, someone
else will!)
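Ben's own suggestion isn't quoted above; one shape it could take is this sketch (names are made up for illustration): a glob-based for loop never touches stdin, so the inner read can consume answers from wherever stdin happens to point.

```shell
#!/bin/bash
# A glob loop doesn't consume stdin, so the inner read gets all of it.
ask_per_file() {
    for fn in *; do
        echo "File $fn"
        read -r -p 'Do what:' what   # reads the function's ordinary stdin
        echo "What $what"
    done
}

cd "$(mktemp -d)" && touch 1 2 3     # scratch directory for testing

# Answers can come from the user's terminal or, as here, from a here-doc:
out=$(ask_per_file <<'EOF'
a
b
c
EOF
)
echo "$out"
```

Run interactively, the same function simply prompts on the terminal; run as "script < answers.txt" it reads the answers from a file, which is exactly the flexibility being argued for.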
On Sun, 03 Oct 2021 16:50:08 +0100, Ben Bacarisse wrote:
I bow to your expertise. Your solution covers more situations than mine
did. I learned something today :-)
Lew Pitcher wrote:
On Sun, 03 Oct 2021 16:50:08 +0100, Ben Bacarisse wrote:
I bow to your expertise. Your solution covers more situations than mine
did. I learned something today :-)
Ben, Lew,
I bow to both of your expertise. I usually do the for loop myself,
but in this case the input doesn't come from a simple 'ls' but from a
more convoluted thing, which of course could be backquoted and used in
'for', but the filenames may have spaces in them and so on... anyway, I
used the /dev/tty redirect.
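For the space-safe case, one possible sketch (not necessarily what Robert ended up with): deliver NUL-delimited filenames on descriptor 3 via process substitution, and keep stdin (or /dev/tty) for the answers. The find invocation stands in for the "more convoluted thing".

```shell
#!/bin/bash
cd "$(mktemp -d)" && touch 'plain' 'with space'   # scratch files

out=$(
    while IFS= read -r -d '' fn <&3; do   # NUL-delimited names on fd 3
        echo "File $fn"
        read -r what              # answer from stdin; use </dev/tty to force
        echo "What $what"         # the terminal even inside redirections
    done 3< <(find . -mindepth 1 -maxdepth 1 -print0 | sort -z) <<'EOF'
first
second
EOF
)
echo "$out"
```

Because the names travel NUL-delimited on their own descriptor, spaces and even embedded newlines in filenames survive, and stdin stays free for the user.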
On Sunday, October 3, 2021 at 5:36:08 PM UTC+3, Robert Latest wrote:
Hello,
I'm trying to write a bash script that expects user input for every file in a
directory. Here's what I came up with:
#!/bin/sh
touch 1 2 3 4 5 # for testing
ls | while read fn; do
echo File $fn
read -p 'Do what:' what
echo What $what
done
It's kind of obvious why this doesn't work: Both 'read's read from the same stdin, so instead of waiting for user input for file 1, it just reads '2' into
'what' and so on. Obviously the output of 'ls' should go into a different file
descriptor than stdout.
No, but the inner read should read from the shell's input:
{
ls | while read fn; do
echo File $fn
read -p 'Do what:' -u 3 what
echo What $what
done
} 3<&0
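The construct above can be exercised non-interactively, which also shows what fd 3 ends up being: a copy of whatever the group's stdin was at the time. In this sketch a here-document plays the part of the user, so the redirection order is "<<EOF 3<&0" (stdin is replaced first, then duplicated to fd 3).

```shell
#!/bin/bash
cd "$(mktemp -d)" && touch 1 2 3    # scratch files for testing
out=$(
    {
        ls | while read -r fn; do
            echo "File $fn"
            read -p 'Do what:' -u 3 what   # answers arrive on fd 3
            echo "What $what"
        done
    } <<'EOF' 3<&0
a
b
c
EOF
)
echo "$out"
```

The pipe from ls occupies the loop's stdin, while the inner read pulls one answer per file from fd 3.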
I tried

ls 1>&4 | while read -u 4 fn; do

but it gives the error "bad file descriptor".
I must say I understand very little of the redirection chapter in "man bash". For instance, I never understood why, when I want to capture a command's stderr,
I need to do
command > output.txt 2>&1
rather than
command 2>&1 > output.txt
It doesn't seem logical that a redirection specifier after the target file would influence what goes into that file.
It makes more sense when you understand how redirection is implemented in C:
2>&1
evaluates as
dup2(1, 2);
https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/xec.c
Robert Latest <boblatest@yahoo.com> wrote:
<snip>
I must say I understand very little of the redirection chapter in "man bash".
For instance, I never understood why, when I want to capture command's stderr,
I need to do
command > output.txt 2>&1
rather than
command 2>&1 > output.txt
It doesn't seem logical that a redirection specifier after the target file would influence what goes into that file.
The redirections are processed from left to right:
command > output.txt 2>&1
command # executes command with stdout and stderr inherited from the shell
> output.txt # redirects stdout to file "output.txt", leaving stderr alone
2>&1 # redirects stderr to wherever stdout is currently going
Result: both stdout and stderr go to "output.txt".
William Ahern wrote:
It makes more sense when you understand how redirection is implemented in C:
2>&1
evaluates as
dup2(1, 2);
I must admit I've never gotten into the low-level I/O stuff; I've always used the more high-level stdio interface.
https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/xec.c
Now that's some weird-looking code. There must be some interesting stuff in sym.h and defs.h.
https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/xec.c
Now that's some weird-looking code. There must be some interesting stuff in sym.h and defs.h.
I like it. No curly braces except at the function level.
Somebody messed up SYSTIMES and SYSUMASK.
command 2>&1 > output.txt
command # executes command with stdout and stderr inherited from the shell
2>&1 # redirects stderr to wherever stdout is currently going
> output.txt # redirects stdout to "output.txt", leaving stderr alone
Result: stdout goes to "output.txt", but stderr is unchanged.
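The difference is easy to observe with a stand-in command that writes one line to each stream (the name noisy is made up for the demonstration):

```shell
#!/bin/bash
cd "$(mktemp -d)"
noisy() { echo OUT; echo ERR >&2; }    # one line to stdout, one to stderr

noisy > both.txt 2>&1       # stdout to the file first; stderr then copies
                            # stdout's destination, so both lines land there

captured=$(noisy 2>&1 > only.txt)   # stderr copies the *current* stdout
                                    # (the command substitution) before
                                    # stdout is pointed at the file
echo "$captured"
```

So both.txt holds both lines, only.txt holds just OUT, and ERR is what escapes into captured.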
Keith Thompson wrote:
command # executes command with stdout and stderr inherited from the shell
2>&1 # redirects stderr to wherever stdout is currently going
> output.txt # redirects stdout to "output.txt", leaving stderr alone
Result: stdout goes to "output.txt", but stderr is unchanged.
Nit-pick: if stdout and stderr were open to different files before
the command, stderr is not unchanged; it ends up going to the file
that stdout was originally going to.
A typical use of this is to filter stderr:
some_command 2>&1 >output.txt | some_filter >&2
The 2>&1 makes stderr go to the pipe.
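A concrete instance of Geoff's pattern, with a stand-in generator and filter (names made up); the filter's output is captured here instead of being sent back to stderr with >&2:

```shell
#!/bin/bash
cd "$(mktemp -d)"
noisy() { echo data; echo oops >&2; }   # stand-in for some_command

# stderr is diverted into the pipe and filtered; stdout goes to the file.
filtered=$(noisy 2>&1 >output.txt | sed -e 's/^/stderr: /')
echo "$filtered"
```

Inside the pipeline, noisy's stdout initially points at the pipe; 2>&1 makes stderr point there too, and only then is stdout moved to output.txt, so the filter sees stderr alone.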
Geoff Clare <geoff@clare.See-My-Signature.invalid>:
A typical use of this is to filter stderr:
some_command 2>&1 >output.txt | some_filter >&2
The 2>&1 makes stderr go to the pipe.
It's even possible to filter stdout as well as stderr, each of them by a filter of its own, for example:
{
{ some_command 3>&- 4>&- |
sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
} 2>&1 |
sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
} 3>&1 4>&2
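Substituting a concrete generator for some_command makes the plumbing testable. Both filtered streams are funneled into one capture below (the trailing 2>&1 is added only for the demonstration), so the relative order of the two lines is up to the scheduler:

```shell
#!/bin/bash
noisy() { echo o; echo e >&2; }   # stand-in for some_command

result=$(
    {
        { noisy 3>&- 4>&- |
            sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
        } 2>&1 |
        sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
    } 3>&1 4>&2 2>&1
)
echo "$result"
```

noisy's stdout flows through the first sed and out via fd 3, while its stderr rides the second pipeline through the other sed; fds 3 and 4 are saved copies of the original stdout and stderr.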
Helmut Waitzmann <nn.throttle@xoxy.net> writes:
Geoff Clare <geoff@clare.See-My-Signature.invalid>:
A typical use of this is to filter stderr:
some_command 2>&1 >output.txt | some_filter >&2
The 2>&1 makes stderr go to the pipe.
It's even possible to filter stdout as well as stderr, each of them by a
filter of its own, for example:
{ { some_command 3>&- 4>&- | sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&- ; } 2>&1 | sed -e 's/^/stderr: /' 1>&2 4>&- 3>&- ; } 3>&1 4>&2
Wow. I take my hat off to you. I knew this was possible, but I don't think I'd have been able to work out how. It is an illustration of design failure, though; this should not be hard.
(When I need to do this, I use a named pipe.)
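The named-pipe route Ben mentions might look like this sketch (again with a stand-in generator; file names are made up):

```shell
#!/bin/bash
cd "$(mktemp -d)"
noisy() { echo data; echo oops >&2; }    # stand-in for some_command

mkfifo errpipe
sed -e 's/^/stderr: /' <errpipe >err.txt &   # the stderr filter, reading
                                             # the FIFO in the background
noisy >out.txt 2>errpipe                     # stderr flows into the FIFO
wait                                         # let the filter drain and exit
cat err.txt
cat out.txt
```

The FIFO gives each stream its own plainly-visible channel, at the cost of creating (and cleaning up) a filesystem object.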
On 2021-10-07, Ben Bacarisse <ben.usenet@bsb.me.uk> wrote:
Helmut Waitzmann <nn.throttle@xoxy.net> writes:
Geoff Clare <geoff@clare.See-My-Signature.invalid>:
A typical use of this is to filter stderr:
some_command 2>&1 >output.txt | some_filter >&2
The 2>&1 makes stderr go to the pipe.
It's even possible to filter stdout as well as stderr, each of them by a filter of its own, for example:
{ { some_command 3>&- 4>&- | sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&- ; } 2>&1 | sed -e 's/^/stderr: /' 1>&2 4>&- 3>&- ; } 3>&1 4>&2
Wow. I take my hat off to you. I knew this was possible, I don't think I'd be able to work out how. But it is an illustration of design failure. This should not be hard.
(When I need to do this, I used a named pipe.)

It's easy to read, it's just not common usage :P
On Thursday, October 7, 2021 at 10:37:48 AM UTC+3, Helmut Waitzmann wrote:
Geoff Clare <ge...@clare.See-My-Signature.invalid>:
A typical use of this is to filter stderr:
some_command 2>&1 >output.txt | some_filter >&2
The 2>&1 makes stderr go to the pipe.
It's even possible to filter stdout as well as stderr, each of
them by a filter of its own, for example:
{
{ some_command 3>&- 4>&- |
sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
} 2>&1 |
sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
} 3>&1 4>&2
How is this any different from
{ { some_command | sed -e 's/^/stdout: /' >&3 2>&4; } 2>&1 | sed -e 's/^/stderr: /' >&2; } 3>&1 4>&2
?
What is the point of closing 3 and 4?
Oğuz <oguzismailuysal@gmail.com>:
What is the point of closing 3 and 4?
File descriptor hygiene. When not closing 3 and 4, "some_command"
might misbehave by writing to the file descriptors 3 and 4.
And even if "some_command" is not misbehaving, the unnecessarily
open file descriptors 3 and 4 will decrease the number of available
file descriptors for "some_command" and for the "sed" filters.
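The first point is easy to demonstrate; the child below (a stand-in for a misbehaving some_command) can write to fd 3 if and only if it was left open for it:

```shell
#!/bin/bash
cd "$(mktemp -d)"

# fd 3 left open: the child inherits it and can scribble on it unnoticed.
sh -c 'echo surprise >&3' 3>inherited.txt
cat inherited.txt

# fd 3 explicitly closed with 3>&-: the child's write has nowhere to go.
if sh -c 'echo surprise >&3' 3>closed.txt 3>&- 2>/dev/null; then
    verdict=leaked
else
    verdict=closed      # the redirection inside the child fails
fi
echo "$verdict"
```

With 3>&- in place, the child's attempted redirection to fd 3 fails outright instead of silently corrupting a descriptor the parent cares about.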
Helmut Waitzmann <nn.th...@xoxy.net>:
Oğuz <oguzism...@gmail.com>:
[discussing the following command]
{
{ some_command 3>&- 4>&- |
sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
} 2>&1 |
sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
} 3>&1 4>&2
What is the point of closing 3 and 4?
File descriptor hygiene. When not closing 3 and 4, "some_command"
[might be misbehaving by writing to the file descriptors 3 and 4]
And even if "some_command" is not misbehaving, the unnecessarily
open file descriptors 3 and 4 will decrease the number of available
file descriptors for "some_command" and for the "sed" filters.
And there is another pitfall with file descriptors 3 and 4 left
open: Imagine that "some_command" might eventually put itself into
the background by closing and reopening file descriptors 0, 1 and 2
to /dev/null, then forking a child and dying. As neither the parent
nor the child knows about the open file descriptors 3 and 4, the
child will leave them open. Now, if the output of the command should
be fed to a pipe, for example, by
{
{
{ some_command 3>&- 4>&- |
sed -e 's/^/stdout: /' 1>&3 3>&- 2>&4 4>&-
} 2>&1 |
sed -e 's/^/stderr: /' 1>&2 4>&- 3>&-
} 3>&1 4>&2
} 2>&1 |
mailx ...
"mailx" won't terminate as long as the child of "some_command" is
alive, because it won't stop waiting for input, because the forked
child of "some_command" unknowingly holds the input side of the "|
mailx" FIFO open.
Robert Latest <boblatest@yahoo.com>:
I must say I understand very little of the redirection chapter in
"man bash".
That's because the bash manual page's purpose is not to teach the unix
file descriptor redirection mechanism. The bash manual page assumes
that the reader already is familiar with the unix file descriptors
and knows what “redirection” of those file descriptors is.
So, here is a very short introduction to unix file descriptors:
For accessing files, the kernel maintains two data structures: a
system-wide table (or array) of opened files, and a per-process (i. e. each process has one of its own) table (or array) of file descriptors. When a process asks the kernel to open a file, for
example by invoking the “open()” system service, the kernel picks an unused entry of the process' table of file descriptors. This entry
will be identified by its position (or index) in that file
descriptor table: a small number (0, 1, 2, …), a. k. a. the file descriptor number.
Then the kernel picks an unused entry of the system-wide table of
open files and records a reference to (for example: the position in
the system‐wide open‐files table) that entry in the allocated entry
of the process' table of file descriptors.
In the allocated entry of the system-wide table of open files, the
kernel records, which file is to be accessed, the access mode (that
is, whether the file is opened for reading or for writing, if
writing will overwrite the file or append to it, etc.), the current
access position in the opened file, and, how many file descriptor
table entries are associated with this entry of the system-wide
table of open files (in this case: 1).
Finally, the kernel returns the index of the allocated entry of the
process' file descriptor table to the process. In the bash manual
page that index is known as file descriptor number.
For example, the system call “open()” returns such a number. See
the manual page “open(2)”.
Note: The process can't (directly) tell the kernel, which entry
of the file descriptors table to use, when opening a file.
But there is a system service, which allows a process to tell the
kernel, which entry of the file descriptors table to use: “dup2()”.
See the manual page “dup2(2)”. The “dup()” and “dup2()” system services essentially copy one entry of the process' file descriptor
table to another.
So, if you tell the shell to start a command by
command > output.txt
the shell will first open the file “output.txt” by means of the
system service “open”, which will (for example) return the file descriptor number 42. Then the shell will tell the kernel by means
of the system service “dup2” to copy the contents of the entry #42
of the process' file descriptor table to the entry #1 in the same
table. Finally the shell will tell the kernel by means of the
system service “close” to release (i. e. empty) the entry #42.
The result of those three system service calls is, that the file
descriptor #1 will refer to the opened file “output.txt” (rather
than to the terminal): When the command writes data to its file
descriptor #1 (a. k. a. standard output), the data will arrive in
the file “output.txt”. That's why this sequence of system service
calls is often called redirection. But note: There is nothing like redirection or forwarding involved in this mechanism. It's just the
effect of copying one file descriptor table entry to another.
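This copying can be watched from the shell itself: bash's exec builtin applies redirections to the shell's own descriptor table, which makes the "copying entries" model directly observable, including the shared access position recorded in the system-wide table of open files.

```shell
#!/bin/bash
cd "$(mktemp -d)"

exec 3>log.txt    # open(): a free table entry now refers to log.txt
echo first >&3    # data reaches log.txt through entry #3
exec 4>&3         # like dup2(3, 4): entry #4 becomes a copy of entry #3
echo second >&4   # same open-file entry, same offset: appends, no overwrite
exec 3>&- 4>&-    # release both entries
cat log.txt
```

Because #3 and #4 point at the same system-wide entry, the write through #4 continues at the offset the write through #3 left behind, rather than overwriting it.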
Now, if the command is
command > output.txt 2>&1
the shell first will do the same and then tell the kernel by means
of the system service “dup2” to copy the contents of the entry #1 to
the entry #2. Now two entries in the process' file descriptor
table – #1 and #2 – refer to the opened file “output.txt”.
On the other hand, look, what
command 2>&1 > output.txt
would do: It would copy the entry #1 of the process' file
descriptor table to the entry #2 and then open the file
“output.txt”, thus getting the (hypothetical) entry #42, then copy
that entry #42 to the entry #1. Of course the entry #2 won't be a
copy of the entry #42 then, i. e. won't refer to the file “output.txt”.
It doesn't seem logical that a redirection specifier after the
target file would influence what goes into that file.
You are fooled by the misnomer “file descriptor redirection”. Keep in
mind that
2>&1
is essentially not much more than copying entry #1 to entry #2.
There is no forwarding involved like “if you want to write to the
file referred by the process' file descriptor table entry #2, look
up the file descriptor table entry #1 and use that instead”.
See also <https://en.wikipedia.org/wiki/File_descriptor#top>.