• Re: on distinguishing memoization and dynamic programming

    From Kaz Kylheku@21:1/5 to Julieta Shem on Wed Jan 3 20:06:39 2024
    XPost: comp.programming

    On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    I was trying to distinguish memoization from dynamic programming --- in
    a technical way --- and I failed. Can you write something like a
    mathematical definition of each one?

    Did you check Wikipedia?

    https://en.wikipedia.org/wiki/Dynamic_programming

    Dynamic programming is an "algorithmic paradigm" according to this page;
    a nice term.

    Memoization is a specific algorithmic trick, which is used in some
    solutions that fall into the dynamic programming paradigm.
    (It is used essentially, so that practically useful run-times
    can be achieved: e.g. exponential time knocked down to polynomial.)

    Dynamic programming breaks a larger problem into sub-problems which
    can be solved separately and then integrated to solve the
    larger problem.

    Memoization helps when the recursion leads to overlapping subproblems
    that lead to an exponential explosion if the duplication is not
    identified and suppressed.
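    A small Python sketch (mine, not from the thread) of the two ideas
    together: the same recursive decomposition of Fibonacci, first plain,
    then with memoization suppressing the duplicated subproblems:

```python
from functools import lru_cache

def fib_naive(n):
    # Plain recursive decomposition: exponential time, because the
    # same subproblems are recomputed many times.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same decomposition, but each distinct subproblem is solved once
    # and its result cached, so the run time drops to linear.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025
```

    The decomposition is identical in both functions; only the cache
    distinguishes them, which is exactly the sense in which memoization is
    a trick applied within the paradigm.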

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Julieta Shem@21:1/5 to All on Wed Jan 3 16:53:40 2024
    XPost: comp.programming

    I was trying to distinguish memoization from dynamic programming --- in
    a technical way --- and I failed. Can you write something like a
    mathematical definition of each one?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Julieta Shem@21:1/5 to Kaz Kylheku on Wed Jan 3 17:55:37 2024
    XPost: comp.programming

    Kaz Kylheku <433-929-6894@kylheku.com> writes:

    On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    I was trying to distinguish memoization from dynamic programming --- in
    a technical way --- and I failed. Can you write something like a
    mathematical definition of each one?

    [...]

    Dynamic programming breaks a larger problem into sub-problems which
    can be solved separately and then integrated to solve the
    larger problem.

    I can't distinguish this definition from ``recursive''.

    Memoization helps when the recursion leads to overlapping subproblems
    that lead to an exponential explosion if the duplication is not
    identified and suppressed.

    So it seems to be that memoization is a particular kind of strategy that
    falls in the dynamic programming set of strategies. (Thanks for the
    historical addendum in your other post.)

    Why do they say ``overlapping subproblems'' when it seems that what is
    meant is a duplicate problem? For instance, the interval [0, 10]
    overlaps with the interval [5, 15], but they're not the same. AFAICT,
    memoization is only useful when at least two of the subproblems are
    exactly the same.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Kaz Kylheku on Wed Jan 3 20:16:36 2024
    XPost: comp.programming

    On 2024-01-03, Kaz Kylheku <433-929-6894@kylheku.com> wrote:
    On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    I was trying to distinguish memoization from dynamic programming --- in
    a technical way --- and I failed. Can you write something like a
    mathematical definition of each one?

    Did you check Wikipedia?

    https://en.wikipedia.org/wiki/Dynamic_programming

    Dynamic programming is an "algorithmic paradigm" according to this page;
    a nice term.

    By the way, this "programming" does not refer to writing a computer
    program, but to finding a solution that can be used to schedule
    a program of events.

    That there is a dynamic programming algorithmic paradigm has nothing
    to do with the fact that we write computer programs to make it happen.

    This explains the "programming" term: https://en.wikipedia.org/wiki/Mathematical_optimization#History

    There is another kind of "programming" in mathematical optimization: https://en.wikipedia.org/wiki/Linear_programming

    That one does not have a related algorithmic paradigm; the computer
    version is just number-crunching over the math.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Julieta Shem on Wed Jan 3 22:58:19 2024
    XPost: comp.programming

    On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    Why do they say ``overlapping subproblems'' when it seems that what is
    meant is a duplicate problem? For instance, the interval [0, 10]
    overlaps with the interval [5, 15], but they're not the same. AFAICT,
    memoization is only useful when at least two of the subproblems are
    exactly the same.

    The famous example is Fibonacci. If you calculate fib(7) recursively,
    fib(3) and others will show up more than once in the recursion:

                          fib(7)
                        /        \
                   fib(6)        fib(5)
                  /      \       /    \
             fib(5)   fib(4) fib(4)  fib(3)
            /      \
        fib(4)  fib(3)    ...
    Why is that called overlapping? Because the left subtree fib(6)
    and the right subtree fib(5) are not the same, but they contain
    common content (nodes that are exactly the same, like another copy
    of fib(5), multiple copies of fib(4), and so on).

    It's just in contrast to divide-and-conquer, where the problem
    space is strictly partitioned; no part or sub-part of the
    left tree occurs in the right, or vice versa.

    [0, 10] and [5, 15] overlap, and they have [5, 10] in common.
    If that can be solved as a sub-problem, such that we can solve
    [0, 4], [5, 10] and [11, 15], and put them together,
    that would be better than solving [5, 10] twice and doing
    the same thing.
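    The duplication in that tree can be made visible with a few lines of
    Python (my sketch, not from the thread) that tally how often each
    subproblem is visited by the plain recursion:

```python
def count_calls(n, counter):
    # Tally how many times each subproblem fib(k) is visited by the
    # plain (unmemoized) recursion for fib(n).
    counter[n] = counter.get(n, 0) + 1
    if n >= 2:
        count_calls(n - 1, counter)
        count_calls(n - 2, counter)

counter = {}
count_calls(7, counter)
# fib(5) is visited twice, fib(4) three times, fib(3) five times:
print(counter)
```

    The counts grow following the Fibonacci sequence itself as you move
    down the tree, which is where the exponential blow-up comes from.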

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Julieta Shem@21:1/5 to Kaz Kylheku on Wed Jan 3 20:19:07 2024
    XPost: comp.programming

    Kaz Kylheku <433-929-6894@kylheku.com> writes:

    On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    Why do they say ``overlapping subproblems'' when it seems that what is
    meant is a duplicate problem? For instance, the interval [0, 10]
    overlaps with the interval [5, 15], but they're not the same. AFAICT,
    memoization is only useful when at least two of the subproblems are
    exactly the same.

    The famous example is Fibonacci. If you calculate fib(7) recursively,
    fib(3) and others will show up more than once in the recursion:

                          fib(7)
                        /        \
                   fib(6)        fib(5)
                  /      \       /    \
             fib(5)   fib(4) fib(4)  fib(3)
            /      \
        fib(4)  fib(3)    ...

    Why is that called overlapping? Because the left subtree fib(6)
    and the right subtree fib(5) are not the same, but they contain
    common content (nodes that are exactly the same, like another copy
    of fib(5), multiple copies of fib(4), and so on).

    It's just in contrast to divide-and-conquer, where the problem
    space is strictly partitioned; no part or sub-part of the
    left tree occurs in the right, or vice versa.

    [0, 10] and [5, 15] overlap, and they have [5, 10] in common.
    If that can be solved as a sub-problem, such that we can solve
    [0, 4], [5, 10] and [11, 15], and put them together,
    that would be better than solving [5, 10] twice and doing
    the same thing.

    That's very clear now. Wonderful. Thank you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HenHanna@21:1/5 to Julieta Shem on Tue Jul 23 12:15:50 2024
    XPost: comp.programming, sci.lang

    On 1/3/2024 11:53 AM, Julieta Shem wrote:
    I was trying to distinguish memoization from dynamic programming --- in
    a technical way --- and I failed. Can you write something like a
    mathematical definition of each one?



    Memoization and dynamic programming are both techniques used to improve
    the efficiency of algorithms, but there are some key differences between
    them:

    --------------Memoization:

    Focus: Caching previously computed results
    Approach: Top-down (usually implemented with recursion)
    Applicability: Any function with repeated computations for the same inputs



    Example: Imagine a function that calculates the nth Fibonacci number.
    Recursive calls to calculate smaller Fibonacci numbers happen
    repeatedly. Memoization remembers these calculations and avoids
    redundant computations.





    --------Dynamic Programming:

    Focus: Solving problems with overlapping subproblems iteratively

    Approach: Bottom-up (often uses tables or arrays)

    Applicability: Problems where solutions to subproblems contribute to the solution of larger problems


    Example: Counting the number of ways to climb stairs.
    You can find the number of ways to climb 1 or 2 stairs, and then
    use those to find the number of ways to climb 3 stairs, and so on.
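    The stair-climbing example sketched in Python (my code, not part of
    the original post), filling the table from the base cases upward:

```python
def climb_ways(n):
    # Bottom-up DP: ways[i] is the number of ways to climb i stairs
    # taking steps of 1 or 2; each table entry is computed exactly once.
    if n < 2:
        return max(n, 1)
    ways = [0] * (n + 1)
    ways[0], ways[1] = 1, 1
    for i in range(2, n + 1):
        ways[i] = ways[i - 1] + ways[i - 2]
    return ways[n]

print(climb_ways(4))  # 5: 1+1+1+1, 1+1+2, 1+2+1, 2+1+1, 2+2
```

    Note there is no cache lookup keyed by arguments here; the iteration
    order guarantees every subproblem is already solved when needed.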


    The Relationship:

    Memoization can be considered a tool used within dynamic programming.


    Dynamic programming doesn't necessarily require memoization, it can
    solve problems bottom-up directly.
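    As a sketch of that point (my code, not from the post): the bottom-up
    Fibonacci needs no cache keyed by arguments at all; two running
    variables suffice, because each subproblem result is used only by the
    next two steps:

```python
def fib_bottom_up(n):
    # Dynamic programming without memoization: iterate from the base
    # cases upward, keeping only the last two subproblem results.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_bottom_up(50))  # 12586269025
```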



    Here's an analogy:

    Think of memoization as a to-do list app. You write down tasks you've
    already completed to avoid doing them again.

    Dynamic programming is like a recipe. You break down a complex
    dish into smaller steps, ensuring you only perform each step once.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Julieta Shem@21:1/5 to HenHanna on Tue Jul 23 21:06:15 2024
    XPost: comp.programming, sci.lang

    Follow-up to comp.programming only.

    HenHanna <HenHanna@devnull.tb> writes:

    On 1/3/2024 11:53 AM, Julieta Shem wrote:
    I was trying to distinguish memoization from dynamic programming --- in
    a technical way --- and I failed. Can you write something like a
    mathematical definition of each one?



    Memoization and dynamic programming are both techniques used to
    improve the efficiency of algorithms, but there are some key
    differences between them:

    --------------Memoization:

    Focus: Caching previously computed results
    Approach: Top-down (usually implemented with recursion)
    Applicability: Any function with repeated computations for the same inputs



    Example: Imagine a function that calculates the nth Fibonacci
    number. Recursive calls to calculate smaller Fibonacci numbers happen
    repeatedly. Memoization remembers these calculations and avoids
    redundant computations.





    --------Dynamic Programming:

    Focus: Solving problems with overlapping subproblems iteratively

    Approach: Bottom-up (often uses tables or arrays)

    Applicability: Problems where solutions to subproblems contribute to
    the solution of larger problems


    Example: Counting the number of ways to climb stairs.
    You can find the number of ways to climb 1 or 2 stairs, and
    then use those to find the number of ways to climb 3 stairs,
    and so on.


    The Relationship:

    Memoization can be considered a tool used within dynamic programming.


    Dynamic programming doesn't necessarily require memoization, it can
    solve problems bottom-up directly.



    Here's an analogy:

    Think of memoization as a to-do list app. You write down tasks you've
    already completed to avoid doing them again.

    Dynamic programming is like a recipe. You break down a
    complex dish into smaller steps, ensuring you only perform each step
    once.

    It does match my understanding of dynamic programming that it's usually
    the term used when the speedups are computed inside the procedure being
    optimized, while memoization and caching can be done with complete
    disregard for the procedure's internal details.

    So, the top-down-versus-bottom-up comparison is pretty interesting, but
    it also seems to imply a certain distinction based on perspective.
    Science is usually trying to find things that are there no matter from
    where you look.

    (*) What's the point of this discussion?

    Understanding. If there is a very clear distinction and I can't see,
    then my understanding can be greater. Some things are just vague and
    that's not really a problem. For example, what is an operating system?
    Very difficult to define with arbitrary precision, but the very people
    who study them don't have any problems with that lack of precision. Is
    a high-precision distinction between memoization and dynamic programming
    very difficult to get to? It's not clear to me. But it's clear that
    there are contexts in which we clearly use one word and not the other.

    ``Cache'' is another word. Every time we memoize a function, we're
    using a cache for it. So why call it memoization? Fruits sometimes go
    by various names, likely because various peoples named them
    independently; by contrast, ``chocolate'' is nearly the same word in
    every culture I've seen it, perhaps because each culture imported it
    from the same source.

    Perhaps that's the case with cache and memoization. It is true that
    ``dynamic programming'' was coined by a researcher trying to get money
    for his project. The project had to use buzz words that would attract
    the money. In other words, maybe ``cache'' would replace it just fine.

    If these paragraphs are missing out something deeply important about
    dynamic programming, then I would like to know.

    Thanks for the analogy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HenHanna@21:1/5 to All on Wed Jul 24 01:04:01 2024
    XPost: comp.programming, sci.lang


    Here's an analogy:

    Think of memoization as a to-do list app. You write down tasks you've
    already completed to avoid doing them again.

    Dynamic programming is like a recipe. You break down a
    complex dish into smaller steps, ensuring you only perform each step
    once.


    ----------- Not a great Analogy....

    DP is just a broader term... one of the methods used is Memoization.





    Thanks for the analogy.


    DP = “recursion + memoization” is better.




    i have just watched most of it (20 min)... Very good!

    https://www.youtube.com/watch?v=Hdr64lKQ3e4

    19:40

    Mastering Dynamic Programming - How to solve any interview problem (Part 1)

    Tech With Nikola -- 45.2K subscribers


    636,861 views ( Aug 19, 2023 )

    🎬 Mastering Dynamic Programming: An Introduction 🎬

    Are you ready to unravel the secrets of dynamic programming?
    🤔 Dive into the world of efficient problem-solving with this
    comprehensive introduction to dynamic programming. Whether you're a
    budding programmer or an algorithm aficionado, this video is your
    gateway to understanding the magic behind DP.

    🔑 Key Takeaways:

    📌 Demystifying the concept of dynamic programming.
    📌 Understanding the core principles behind dynamic programming.
    📌 Unleashing the power of recursion and memoization.
    📌 Step-by-step breakdown of dynamic programming problem-solving.

    Dynamic programming is like a puzzle-solving technique, and this video
    is your ultimate guide to fitting the pieces together. Get ready to
    elevate your coding skills and witness the art of optimization in action.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)