• FFT Question

    From Tom Killwhang@21:1/5 to All on Tue Sep 1 22:00:03 2020
    Why is it that in older textbooks the DFT or FFT is defined as having a scaling of 1/N for the direct FFT, whereas in newer ones it doesn't? If we take the DFT of, say, 1,1,1,1 the answer is 4, but surely an answer of 1
    makes more sense, as it is the average value of 4 ones.

    I know people say - "Oh we can scale it later", but it doesn't make mathematical sense. The other point is that the Fourier transform doesn't apply to a step function anyway so why does the DFT?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Marcel Mueller@21:1/5 to All on Wed Sep 2 09:01:51 2020
    Am 02.09.20 um 07:00 schrieb Tom Killwhang:
    Why is it that in older textbooks the DFT or FFT is defined as having a scaling of 1/N for the direct FFT, whereas in newer ones it doesn't? If we take the DFT of, say, 1,1,1,1 the answer is 4, but surely an answer of 1
    makes more sense, as it is the average value of 4 ones.

    I know people say - "Oh we can scale it later", but it doesn't make mathematical sense. The other point is that the Fourier transform doesn't apply to a step function anyway so why does the DFT?

    I think this is for technical reasons. If you program an FFT with
    twiddle factors taken from the unit circle, you get a total scale
    factor of N for a full turn around. Typically it is less work to
    compensate for this factor somewhere else.

    Also take into account the scale factor of the inverse FFT. If you
    scale the forward transform by 1/N, you must *not* scale the inverse
    FFT to get the original values back. If you want to operate
    symmetrically, you need to scale both by 1/sqrt(N), which is even
    less intuitive. But this keeps the RMS energy constant.
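
    Marcel's point about where the factor of N can go is easy to see
    numerically. Below is a minimal sketch in plain Python (a naive O(N^2)
    DFT, not an FFT; the helper names `dft`/`idft` are mine, not from any
    library) showing the three conventions on the 1,1,1,1 example:

```python
import cmath

def dft(x, scale):
    """Naive O(N^2) DFT; `scale` multiplies every output bin."""
    N = len(x)
    return [scale * sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                        for n in range(N))
            for k in range(N)]

def idft(X, scale):
    """Naive inverse DFT; `scale` multiplies every output sample."""
    N = len(X)
    return [scale * sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                        for k in range(N))
            for n in range(N)]

x = [1, 1, 1, 1]
N = len(x)

# Convention 1: unscaled forward (FFTW-style).  DC bin is N = 4,
# and the inverse must carry the 1/N to round-trip.
X = dft(x, 1.0)
x_back = idft(X, 1.0 / N)

# Convention 2: 1/N on the forward.  DC bin is the mean, 1, and
# the inverse must NOT be scaled.
X_mean = dft(x, 1.0 / N)

# Convention 3: symmetric 1/sqrt(N) on both.  Unitary, so it
# preserves the sum of squares (the "RMS energy" above).
X_sym = dft(x, 1.0 / N ** 0.5)
```

    With no forward scaling the DC bin is N = 4 and the inverse carries
    the 1/N; with 1/N on the forward the DC bin is the mean, 1; with
    1/sqrt(N) on both, sum|X|^2 equals sum|x|^2.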


    Marcel

  • From Phil Hobbs@21:1/5 to Marcel Mueller on Thu Sep 3 18:47:32 2020
    On 2020-09-02 03:01, Marcel Mueller wrote:
    Am 02.09.20 um 07:00 schrieb Tom Killwhang:
    Why is it that in older textbooks the DFT or FFT is defined as having
    a scaling of 1/N for the direct FFT, whereas in newer ones it doesn't?
    If we take the DFT of, say, 1,1,1,1 the answer is 4, but surely an
    answer of 1 makes more sense, as it is the average value of 4 ones.

    I know people say - "Oh we can scale it later", but it doesn't make
    mathematical sense.  The other point is that the Fourier transform
    doesn't apply to a step function anyway so why does the DFT?

    I think this is for technical reasons. If you program an FFT with
    twiddle factors taken from the unit circle, you get a total scale
    factor of N for a full turn around. Typically it is less work to
    compensate for this factor somewhere else.

    Also take into account the scale factor of the inverse FFT. If you
    scale the forward transform by 1/N, you must *not* scale the inverse
    FFT to get the original values back. If you want to operate
    symmetrically, you need to scale both by 1/sqrt(N), which is even
    less intuitive. But this keeps the RMS energy constant.

    The question of how to normalize FFTs goes back way before computers
    were even invented, on account of that vexing factor of 2*pi.

    The EE contingent tends to define the continuous-time FT with the 2*pi
    factors in the exponent, i.e.

    G(f) = F(g) = integral[-inf, inf] exp(-j 2 pi f t) g(t) dt

    and

    g(t) = Finv(G) = integral[-inf, inf] exp(+j 2 pi f t) G(f) df.

    The physics and math contingents tend to use omega = 2 pi f as the
    independent frequency variable, so that the 2*pi winds up as a
    multiplicative constant outside the integral, either asymmetrically as
    1/(2 pi) on the inverse transform, or symmetrically, as
    in 1/sqrt(2 pi) on both forward and reverse. The main effect is on the statement of FT theorems.

    IMNSHO putting the 2*pi in the exponent is a huge win, in both the
    continuous and discrete cases.

    Cheers

    Phil Hobbs

    --
    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC / Hobbs ElectroOptics
    Optics, Electro-optics, Photonics, Analog Electronics
    Briarcliff Manor NY 10510

    http://electrooptical.net
    http://hobbs-eo.com

  • From Les Cargill@21:1/5 to Tom Killwhang on Fri Sep 4 21:19:20 2020
    Tom Killwhang wrote:
    Why is it that in older textbooks the DFT or FFT is defined as having
    a scaling of 1/N for the direct FFT, whereas in newer ones it doesn't?
    If we take the DFT of, say, 1,1,1,1 the answer is 4, but surely an
    answer of 1 makes more sense, as it is the average value of 4 ones.

    I know people say - "Oh we can scale it later", but it doesn't make mathematical sense. The other point is that the Fourier transform
    doesn't apply to a step function anyway so why does the DFT?


    A lot of implementations don't scale the output, most prominently
    FFTW. Some do. It's more or less left up to the implementation. I
    believe the argument for not doing it is something like avoiding
    roundoff error.

    And I'd call a step function (all 1's) a pretty good test case for an
    FFT. There's not much point in running an FFT on something you already
    know is a step function, but it's a quick sanity check of an
    implementation.
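
    That sanity check is easy to script. A sketch in plain Python (a naive
    unscaled DFT, my own helper, not FFTW): an all-ones input of length N
    should put N in bin 0 and numerically zero everywhere else.

```python
import cmath

def dft(x):
    """Naive unscaled O(N^2) DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# All-ones input: everything lands in bin 0 (which should read N),
# and every other bin should be zero to within rounding.
X = dft([1.0] * 8)
dc = X[0]
leakage = max(abs(v) for v in X[1:])
```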

    --
    Les Cargill

  • From gah4@u.washington.edu@21:1/5 to Phil Hobbs on Wed Sep 9 16:12:21 2020
    On Thursday, September 3, 2020 at 3:47:41 PM UTC-7, Phil Hobbs wrote:

    (snip)

    The question of how to normalize FFTs goes back way before computers
    were even invented, on account of that vexing factor of 2*pi.


    (snip of EE contingent)

    The physics and math contingents tend to use omega = 2 pi f as the
    independent frequency variable, so that the 2*pi winds up as a
    multiplicative constant outside the integral, either asymmetrically as
    1/(2 pi) on the inverse transform, or symmetrically, as
    in 1/sqrt(2 pi) on both forward and reverse. The main effect is on the
    statement of FT theorems.

    IMNSHO putting the 2*pi in the exponent is a huge win, in both the
    continuous and discrete cases

    Not only in the Fourier transform, but in all the rest of the equations,
    omega works out much better than f. It is only convenient to use f
    when you are actually counting frequencies. (That is, no-one makes
    frequency counters to output in omega.) There are some who use
    wavelength over 2pi, but that is rare. Using omega and k, you get
    exp(i.k.x - i.omega.t) for propagating waves. (Where k is the
    wave number or wave vector, magnitude 2 pi over wavelength.)

    It is, then, convenient for the Fourier transform to put the 1/(2pi)
    along with the d omega.

    In the case of a fixed-point FFT, it is best to carry enough bits that
    nothing overflows, and not to divide by N until the end, if that is
    needed at all.
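
    The defer-the-divide idea can be sketched like this (the Q15 format,
    the 8-sample "step" input, and the bit widths below are illustrative
    assumptions, not from the post):

```python
# DC (bin 0) accumulation in Q15 fixed point, keeping the accumulator
# wide and dividing by N only once at the end.

samples_q15 = [16384, 16384, 16384, 16384, 0, 0, 0, 0]  # 0.5, then 0
N = len(samples_q15)

# Bin 0 accumulates up to N * 2^15, i.e. log2(N) = 3 guard bits more
# than a 16-bit word holds.  Python ints simply grow; a real DSP
# would widen the accumulator rather than rescale at every stage.
acc = sum(samples_q15)        # 65536: already too big for int16

# One divide at the end, and only if the 1/N convention is wanted.
dc_scaled = acc // N          # 8192, i.e. 0.25 in Q15: the mean
```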

    In floating point with a radix other than 2, it is a little complicated
    where to put the divides, and note that bits can be lost when dividing
    by two. (Specifically, IBM used radix 16 for many years and still
    supports it, along with radix 2 more recently and radix 10 on newer
    machines. Yes, some machines support all three.)

    Step functions in continuous Fourier transforms are not rare, but
    a little complicated. Delta functions are not unusual. But for DFT
    (defined for periodic band-limited signals) a step function is
    not all that special. Especially for small N, where it isn't all
    that steep. And note that this all applies for DST or DCT, too.

  • From Phil Hobbs@21:1/5 to ga...@u.washington.edu on Wed Sep 9 19:40:34 2020
    On 2020-09-09 19:12, ga...@u.washington.edu wrote:
    On Thursday, September 3, 2020 at 3:47:41 PM UTC-7, Phil Hobbs wrote:

    (snip)

    The question of how to normalize FFTs goes back way before computers
    were even invented, on account of that vexing factor of 2*pi.


    (snip of EE contingent)

    The physics and math contingents tend to use omega = 2 pi f as the
    independent frequency variable, so that the 2*pi winds up as a
    multiplicative constant outside the integral, either asymmetrically as
    1/(2 pi) on the inverse transform, or symmetrically, as
    in 1/sqrt(2 pi) on both forward and reverse. The main effect is on the
    statement of FT theorems.

    IMNSHO putting the 2*pi in the exponent is a huge win, in both the
    continuous and discrete cases

    Not only in the Fourier transform, but in all the rest of the equations, omega works out much better than f. It is only convenient to use f
    when you are actually counting frequencies. (That is, no-one makes
    frequency counters to output in omega.) There are some who use
    wavelength over 2pi, but that is rare. Using omega and k, you get
    exp(i.k.x - i.omega.t) for propagating waves. (Where k is the
    wave number or wave vector, magnitude 2 pi over wavelength.)

    Sure, when the spatial transform is in view as well, I use exp(i(omega t
    - k dot x)) too. (I'm a physicist, after all.) Some years back I wrote
    a scalable clusterized FDTD simulator, and the far-field and
    order-source calculations are all done in the (k, omega) basis.

    But in a DSP context, putting the 2*pi in the exponent is almost always
    a win IMO.

    Cheers

    Phil Hobbs

    --
    Dr Philip C D Hobbs
    Principal Consultant
    ElectroOptical Innovations LLC / Hobbs ElectroOptics
    Optics, Electro-optics, Photonics, Analog Electronics
    Briarcliff Manor NY 10510

    http://electrooptical.net
    http://hobbs-eo.com

  • From Tom Killwhang@21:1/5 to ga...@u.washington.edu on Wed Sep 9 20:09:11 2020
    I noticed another problem that arises from this. The units don't match
    up when you use Parseval's theorem and the FFT. Look at the equation
    near the bottom of this page, which uses the FFT for Parseval's theorem:
    https://en.wikipedia.org/wiki/Parseval%27s_theorem
    I did a simulation with white noise through a bandpass filter.

    Now I found the periodogram, and this gives me power vs. frequency.
    When you sum them up you do indeed get the same answer - but the
    expression on the left of the equation is the sum of squares of a
    random variable, and that is not power. The power is the variance
    after you divide by N. Hence both sides need to be divided by N;
    otherwise we are equating energies and not powers. I always understood
    variance (with no DC) to be average power, not energy.
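
    The bookkeeping can be checked numerically. A sketch in plain Python
    (naive unscaled DFT, my own helper): with that convention, Parseval
    reads sum|x|^2 = (1/N) sum|X|^2, so both sides are energies, and
    dividing both by N gives average power - the variance, for
    zero-mean data.

```python
import cmath
import random

def dft(x):
    """Naive unscaled O(N^2) DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

random.seed(0)
N = 64
x = [random.gauss(0.0, 1.0) for _ in range(N)]  # zero-mean-ish noise
X = dft(x)

energy_time = sum(v * v for v in x)            # sum of squares
energy_freq = sum(abs(V) ** 2 for V in X) / N  # Parseval, unscaled DFT

# Dividing the energy by N gives average power: the variance,
# once any mean (DC) has been removed.
avg_power = energy_time / N
```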

    Thx



    On Thursday, September 10, 2020 at 11:12:25 AM UTC+12, ga...@u.washington.edu wrote:
    On Thursday, September 3, 2020 at 3:47:41 PM UTC-7, Phil Hobbs wrote:

    (snip)
    The question of how to normalize FFTs goes back way before computers
    were even invented, on account of that vexing factor of 2*pi.
    (snip of EE contingent)
    The physics and math contingents tend to use omega = 2 pi f as the
    independent frequency variable, so that the 2*pi winds up as a
    multiplicative constant outside the integral, either asymmetrically as
    1/(2 pi) on the inverse transform, or symmetrically, as
    in 1/sqrt(2 pi) on both forward and reverse. The main effect is on the
    statement of FT theorems.

    IMNSHO putting the 2*pi in the exponent is a huge win, in both the continuous and discrete cases
    Not only in the Fourier transform, but in all the rest of the equations, omega works out much better than f. It is only convenient to use f
    when you are actually counting frequencies. (That is, no-one makes
    frequency counters to output in omega.) There are some who use
    wavelength over 2pi, but that is rare. Using omega and k, you get
    exp(i.k.x - i.omega.t) for propagating waves. (Where k is the
    wave number or wave vector, magnitude 2 pi over wavelength.)

    It is, then, convenient for the Fourier transform to put the 1/(2pi)
    along with the d omega.

    In the case of FFT in fixed point, best is to do all with enough bits so
    as not to overflow and not divide by N until the end, if it is needed.

    In floating point with radix other than 2, it is a little complicated
    where to put the divide, but note that bits can be lost when dividing
    by two. (Specifically, IBM used radix 16 for many years, and still
    supports it, along with 2 a little recently, and 10 on newer machines.
    Yes,
    some machines support all three.)

    Step functions in continuous Fourier transforms are not rare, but
    a little complicated. Delta functions are not unusual. But for DFT
    (defined for periodic band-limited signals) a step function is
    not all that special. Especially for small N, where it isn't all
    that steep. And note that this all applies for DST or DCT, too.

  • From gah4@u.washington.edu@21:1/5 to gyans...@gmail.com on Thu Sep 10 17:01:39 2020
    On Wednesday, September 9, 2020 at 8:09:15 PM UTC-7, gyans...@gmail.com wrote:
    I noticed another problem that arises from this. The units don't match up when you
    use Parseval's theorem and the FFT. Look at the equation near the bottom that uses the FFT for Parseval's theorem
    https://en.wikipedia.org/wiki/Parseval%27s_theorem
    I did a simulation with white noise through a bandpass filter.

    Now I found the periodogram and this gives me power vs frequency. when you sum them
    up you do indeed get the same answer - but the expression on the left of the equation
    is the sum of squares of a random variable and that is not power.
    The power is variance after you divide by N. Hence both sides need to be divided by N
    otherwise we are equating energies and not power.
    I always understood variance (with no dc) is average power and not energy.

    Seems like a usual problem when working with discrete signals.

    There should be a factor of the sampling period to make it look like an
    integral, so that would make it energy instead of power. But yes, it is
    usual to factor out the (presumed constant and known) sampling rate.

    If you consider the earlier integral on the page, it is over one period
    of a periodic function. That normalizes out the length (time) of the
    period.

    It seems no worse than many mathematics formulas that get units
    completely wrong - that is, for example, adding quantities with
    different units.

    When you use them, you have to put back any normalizations that they
    left out, such as the sampling rate. Conveniently, in many cases you
    can ignore the sampling rate once you have factored it out, for example
    when designing filters.
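
    The put-the-sampling-period-back point can be sketched numerically
    (the signal and sample rate below are made up for illustration): the
    raw sum of squares is dimensionless until you multiply by dt, which
    turns it into an energy, and dividing by the record length gives
    average power.

```python
import math

# One second of a 10 Hz unit-amplitude sine, sampled at 1 kHz.
fs = 1000.0
dt = 1.0 / fs
n = int(1.0 * fs)
x = [math.sin(2 * math.pi * 10 * i * dt) for i in range(n)]

sum_sq = sum(v * v for v in x)   # dimensionless: n/2 = 500
energy = sum_sq * dt             # approximates the integral: 0.5
power = energy / 1.0             # average over the 1 s record: 0.5
```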
