[Note to whoever: this continues a discussion about functional
programming that started on comp.lang.ada but got off-topic for that
group so I've migrated it here. Please don't crosspost unless there's significant Ada content in your post.]
Hadrien Grasland <hadrien.grasland@gmail.com> writes:
For example, if you wanted to explain the factorial to a student who
has never been exposed to the concept, chances are that your first
approach wouldn't be to write on a blackboard
"fact 1 = 1 -- [order of these two lines switched by me -Paul]
fact n = n * fact (n-1)"
But that's the mathematical definition of the factorial--this isn't
grade school. I assume I'm explaining it to a professional trying to do
high-assurance programming, so of course that's the definition I'd use.
Treating the above as Haskell code that purports to compute the
factorial, it's easier to reason about than the imperative loop version:
Let P(n) be the proposition "fact(n) = n!". We want to verify that P(n)
is true for all n >= 1. We prove it by induction. First, P(1) holds,
since fact(1) = 1 = 1!. Second, assume P(n) is true. By the second
defining equation, fact(n+1) = (n+1) * fact(n), and since P(n) holds by
the induction hypothesis, fact(n+1) = (n+1) * n!, which is (n+1)!. That
proves P(n) => P(n+1). Therefore, by induction, P(n) holds for all
n >= 1, QED, so we've verified the code, and this was very easy.
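[For concreteness, the definition above as a small, self-contained Haskell
file; the "check" helper against product [1 .. n] is my addition for
testing, not something from the post:]

  fact :: Integer -> Integer
  fact 1 = 1                    -- base case: P(1), since 1! = 1
  fact n = n * fact (n - 1)     -- step used in proving P(n) => P(n+1)

  -- Compare against an independent definition of n! for small n >= 1.
  check :: Integer -> Bool
  check n = fact n == product [1 .. n]

  main :: IO ()
  main = print (all check [1 .. 10])   -- prints True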
I'm out of my depth here, but I think doing the same thing with
imperative code is much messier: you have to put preconditions and
postconditions around the statements to prove the loop invariant that
you're going through factorials, etc. I guess you can do that with
SPARK but I don't know how it's done.
At the level of everyday coding, though, going from loops to recursion is
nothing to be bothered about, trust me. It's like when people freak out
over whitespace as syntax when they first see Python, or over the
parentheses when they first see Lisp. You get used to it quickly and it
stops mattering. Or anyway, if it's really an obstacle, FP probably
isn't for you.
[...]
In any case, if you are using a machine which can only perform one
binary operation at a time, you will always need an accumulator to
compute the final result.
I would gladly challenge this claim by picking two large groups of
10-year-olds who have never been exposed to imperative programming yet, [...]
I'm writing programs for massively parallel GPU processors, and last
time I checked Blelloch's scan was still one of the most efficient
algorithms
Well, I'm learning OCaml using a pretty nice MOOC which has been set
up by my former university. If you're interested, the address is [...]
In imperative programming, recursion is just one tool in your
toolbox among many others. When you meet a problem which recursion
fits best, such as quicksort or tree traversal, you use recursion.
When you encounter a problem which loops fit best, such as traversing
a data structure while accumulating some intermediary state or
mutating a data structure, you use loops. When neither approach has a
clear advantage, you use loops because they are easier to
understand.
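[For concreteness (this sketch is mine, not from the post), here is the
"traversing a data structure while accumulating some intermediary state"
case in Haskell, with the state threaded through the recursion instead of
mutated in place; the Tree type is just an assumed example:]

  data Tree a = Leaf | Node (Tree a) a (Tree a)

  -- Sum of all labels, carrying the running total explicitly.
  sumTree :: Num a => Tree a -> a
  sumTree t = go t 0
    where
      go Leaf         acc = acc
      go (Node l x r) acc = go r (go l acc + x)   -- accumulate left, node, right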
[...] constructs, like C++ or Python. Do not force developers to use the
functional approach when it is suboptimal, and trust them to pick
the option that fits their problem domain best.
In any case, if you are using a machine which can only perform one
binary operation at a time, you will always need an accumulator to
compute the final result.
I think that's an advanced, low-level concept though. There's tons of
magic in Javascript: JIT compilation, garbage collection, built-in
objects that make RPC calls, etc. But there's zillions of
not-so-experienced Javascript programmers writing crappy web pages all
over the world, who never think about that stuff and have never
heard of an accumulator. A programmer with deep knowledge doing
bleeding edge systems has to understand what the machine is doing, but someone just trying to bang out everyday tasks can usually stay at the
high level. No accumulators.
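[To make "accumulator" concrete, a minimal Haskell sketch of my own, not
from the thread: the same factorial written once as direct recursion and
once with an explicit accumulator, so each step performs a single
multiplication:]

  factDirect :: Integer -> Integer
  factDirect 1 = 1
  factDirect n = n * factDirect (n - 1)

  factAcc :: Integer -> Integer
  factAcc n = go n 1
    where
      go 1 acc = acc                    -- accumulator holds the finished product
      go k acc = go (k - 1) (k * acc)   -- one multiplication per step, tail call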
I'm writing programs for massively parallel GPU processors, and last
time I checked Blelloch's scan was still one of the most efficient algorithms
That's outside my area and maybe a niche, but I know some Haskell and FP
work has been done on this stuff (Data Parallel Haskell is inspired by Blelloch's NESL nested parallel work):
http://conal.net/blog/posts/parallel-tree-scanning-by-composition
http://www.cs.cmu.edu/~rwh/papers/iolambda/short.pdf
https://wiki.haskell.org/GHC/Data_Parallel_Haskell/References
There is also an array library with GPU support:
http://hackage.haskell.org/package/accelerate
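[As a point of reference, my sketch rather than anything from the thread:
the sequential specification of an inclusive plus-scan is just scanl1 in
plain Haskell; Blelloch's algorithm computes the same result in parallel
with O(n) work and O(log n) depth, and libraries such as accelerate expose
parallel scans over arrays:]

  -- Reference semantics only; this is not the parallel GPU implementation.
  inclusiveScan :: [Int] -> [Int]
  inclusiveScan = scanl1 (+)

  -- e.g. inclusiveScan [3,1,4,1,5] == [3,4,8,9,14]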
On 2015-11-29 at 12:12, Hadrien Grasland wrote:
In imperative programming, recursion is just one tool in your
toolbox among many others. When you meet a problem which recursion
fits best, such as quicksort or tree traversal, you use recursion.
When you encounter a problem which loops fit best, such as traversing
a data structure while accumulating some intermediary state or
mutating a data structure, you use loops. When neither approach has a
clear advantage, you use loops because they are easier to
understand.
But couldn't the same point be made when comparing structured control
statements with the goto statement? Some algorithms can be expressed in
a simpler way using labels and goto statements.
For example, the following pseudocode shows how a stream of characters
delimited by a new line character could be processed using structured
control statements:
read (c)
WHILE c # new line DO
  process (c)
  read (c)
END
Using labels and goto, it could be programmed like this:
loop:
  read (c)
  IF c = new line THEN GOTO end
  process (c)
  GOTO loop
end:
At first sight, the second version is simpler and easier to understand:
The read procedure is only called at one position, and it is immediately clear where the value of variable c is modified and used. In the
structured version there are two (redundant) read calls, and variable c
is assigned twice with its value used textually before the second
assignment.
Nevertheless, most developers would agree that the structured version is better because the control flow is visible immediately and there is no
danger of some external code jumping into the loop with unclear
consequences.
Similarly, implementing an algorithm with iteration instead of
recursion could be simpler and clearer at first sight. Nevertheless, it
could actually be better to use the recursive version in a pure
functional programming language, because then one can be sure that
(instances of) variables are assigned only once and there can be no
unexpected behavior caused by uninitialized variables and side effects.
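[To illustrate, a sketch of my own rather than anything from the post: the
same "process characters until a new line" idea in a pure, recursive
Haskell style. The remaining input is an argument instead of a mutated
variable, there is only one place where the next character is examined,
and the process function is left abstract:]

  processLine :: (Char -> a) -> String -> [a]
  processLine process = go
    where
      go []         = []
      go (c : rest)
        | c == '\n' = []                   -- stop at the delimiter
        | otherwise = process c : go rest  -- handle c, recurse on the rest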
[...] constructs, like C++ or Python. Do not force developers to use the
functional approach when it is suboptimal, and trust them to pick
the option that fits their problem domain best.
In my experience, you cannot trust developers to pick the best option:
they will use whatever the programming language offers.