• Optical Pendulum

    From Francois LE COAT@21:1/5 to All on Sun Oct 24 19:15:11 2021
    Hi,

    Do you know anything about the "Optical Pendulum" experiment?

    <https://www.youtube.com/watch?v=cDJZVWEvhrc>

    A camera is suspended from a cable, and an image is shot at the rest
    position. Then you push the pendulum so that the camera oscillates,
    and new images are acquired while the pendulum moves.

    The goal is to evaluate the eight parameters that determine the
    position of the camera relative to the rest position. Because the
    pendulum oscillates, each parameter traces a pseudo-sinusoidal curve
    over time.

    The eight parameters define the perspective transform that maps one
    image to another: translations <Tx,Ty,Tz>, rotations <Rx,Ry,Rz> and
    two perspective parameters <Sx,Sy>.

    That is what we can see in the above video: each image, with the
    corresponding perspective transform parameters, compared to the rest
    position.
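
    For illustration, here is a minimal sketch (not the exact program
    behind the video) of how those eight degrees of freedom can be
    estimated with OpenCV; the file names and the ORB feature settings
    are assumptions:

        # Sketch: estimate the 3x3 perspective transform (a homography,
        # eight degrees of freedom) between the rest image and one
        # image taken while the pendulum swings.
        import cv2
        import numpy as np

        rest   = cv2.imread("rest.png", cv2.IMREAD_GRAYSCALE)    # assumed names
        moving = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

        # Detect and match features between the two views.
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(rest, None)
        k2, d2 = orb.detectAndCompute(moving, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)

        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # H is 3x3 with the lower-right entry fixed to 1: eight free
        # numbers, which the experiment interprets as <Tx,Ty,Tz>,
        # <Rx,Ry,Rz> and <Sx,Sy>.
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        print(H)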

    Best regards,

    --
    Dr. François LE COAT
    CNRS - Paris - France
    <https://hebergement.universite-paris-saclay.fr/lecoat>

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Mon Nov 15 19:30:01 2021
    Hi,

    The goal is to measure a global movement as it is observed by the
    camera. There are devices that determine position, such as GPS (the
    Global Positioning System). We can measure inclination with a
    gyrometer, acceleration with an accelerometer, and speed with an
    odometer. The goal here is to measure all of this from the image,
    with a camera.

    Why?

    For example, when we send robots to the planet Mars (Perseverance and
    Ingenuity recently), we want to pilot them with the means at our
    disposal... On Earth there is a positioning system, GPS, which works
    with a network of satellites, but on Mars it does not exist. To
    navigate on Mars, we find our way with a camera, and to do that you
    have to measure the movement of the camera. That is the goal of our
    experiment: measuring the movement of the camera. The robots that
    move on Mars have navigation cameras; these are their eyes, and they
    are as effective as a GPS.

    Best regards,

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Tue Apr 12 16:06:46 2022
    Hi,

    I made a new video demonstration with the optical pendulum experiment:

    <https://www.youtube.com/watch?v=PXbWNW7duCY>

    We can see the image taken at the pendulum's rest position, then each
    of the images acquired while it oscillates. We see the perspective
    transformation between each image and the rest image, in the image
    plane, i.e. in two dimensions. Then, using the parameters obtained in
    2D from the transformation, a virtual camera moves in 3D, rendered
    with the Persistence of Vision software. It illustrates how the
    parameters can be used in 3D: in translation <Tx,Ty,Tz>, in rotation
    <Rx,Ry,Rz> and in perspective <Sx,Sy>. The question is to determine,
    from the images alone, the movement of the camera in space. The
    movement in space between two images is completely described by the
    eight parameters. POV-Ray is well suited to representing the
    trajectory in 3D, because it is free image-synthesis software. Of
    course, all these computations are not yet done at video rate. It
    will probably be necessary to design hardware acceleration to obtain
    a smoother video...
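
    As an illustration of that last step, here is a small sketch that
    writes one POV-Ray frame from measured parameters. The scene
    contents, the units and the sample values are assumptions, and
    <Sx,Sy> is omitted; the real rendering pipeline is not shown in
    this thread:

        # Sketch: emit a POV-Ray scene placing the virtual camera with
        # the measured parameters of one frame (illustrative only).

        POV_TEMPLATE = """\
        #version 3.7;
        global_settings {{ assumed_gamma 1.0 }}
        background {{ color rgb <0.1, 0.1, 0.1> }}
        // A reference object so the camera motion is visible.
        box {{ <-1,-1,-1>, <1,1,1> pigment {{ color rgb <0.9,0.6,0.2> }} }}
        light_source {{ <10, 10, -10> color rgb 1 }}
        camera {{
          location  <0, 0, -5>
          rotate    <{rx}, {ry}, {rz}>   // measured rotations <Rx,Ry,Rz>
          translate <{tx}, {ty}, {tz}>   // measured translations <Tx,Ty,Tz>
        }}
        """

        def write_frame(path, rx, ry, rz, tx, ty, tz):
            """Write one .pov file for one measured pendulum position."""
            with open(path, "w") as f:
                f.write(POV_TEMPLATE.format(rx=rx, ry=ry, rz=rz,
                                            tx=tx, ty=ty, tz=tz))

        # One frame of a small oscillation (made-up values), rendered with:
        #   povray +Iframe000.pov +W640 +H480
        write_frame("frame000.pov", 2.0, -1.5, 0.3, 0.05, -0.02, 0.0)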

    Best regards,

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Tue May 3 13:45:02 2022
    Hi,

    A new, slightly smoother video of the Optical Pendulum was made,
    dissociating the acquisitions from the parameters' computation...

    <https://www.youtube.com/watch?v=N2SQStXsz6U>

    It may help understanding: a sequence of 50 images is first acquired
    and then processed sequentially, so you can better perceive the
    camera-pendulum's oscillation.
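
    A minimal sketch of that decoupling, assuming an OpenCV-accessible
    camera on device 0 (the real acquisition code is not shown in this
    thread, and the per-image processing step is only a placeholder):

        # Sketch: first acquire the whole 50-image sequence, then
        # process it off-line, frame by frame.
        import cv2

        N_FRAMES = 50
        cap = cv2.VideoCapture(0)            # assumed camera device
        frames = []
        while len(frames) < N_FRAMES:
            ok, frame = cap.read()
            if ok:
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        cap.release()

        rest = frames[0]                     # first image taken at rest
        for i, img in enumerate(frames[1:], start=1):
            # Placeholder for the real per-image homography estimation:
            # here we only report how different each frame is from rest.
            diff = cv2.absdiff(rest, img).mean()
            print("frame %02d: mean difference %.2f" % (i, diff))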

    Best regards,

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Fri May 6 12:45:07 2022
    Hi,

    Here is the "projective transform" that I have been writing about...

    <https://www.youtube.com/watch?v=mnei7j-KRu8>

    It includes eight parameters (Rx,Ry,Rz,Tx,Ty,Tz,Sx,Sy), which are
    all present in POV-Ray. I use it to represent the motion of cameras.
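
    For reference, the generic planar projective transform is a 3x3
    matrix with eight free entries once the lower-right one is fixed
    to 1. The sketch below shows that structure and how image points
    are mapped; the sample numbers, and how exactly the entries relate
    to <Tx,Ty,Tz>, <Rx,Ry,Rz> and <Sx,Sy> in the experiment, are only
    illustrative:

        # Sketch: the generic 8-parameter projective transform.
        #
        #   [x']   [h11 h12 h13] [x]
        #   [y'] ~ [h21 h22 h23] [y]     (lower-right entry fixed to 1,
        #   [w ]   [ g1  g2   1] [1]      so eight free parameters)
        import numpy as np

        def apply_projective(H, pts):
            """Map Nx2 image points through a 3x3 projective matrix H."""
            pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous
            mapped = pts_h @ H.T
            return mapped[:, :2] / mapped[:, 2:3]             # divide by w

        H = np.array([[1.01, -0.02,  3.0],   # rotation/scale block and Tx
                      [0.02,  0.99, -1.5],   #                        and Ty
                      [1e-4, -5e-5,  1.0]])  # perspective row: Sx, Sy
        corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], float)
        print(apply_projective(H, corners))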

    Best regards,

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Tue Mar 21 15:30:04 2023
    Hi,

    A Web page was made to illustrate the "optical pendulum" experiment:

    <https://hebergement.universite-paris-saclay.fr/lecoat/demoweb/optical_pendulum.html>

    We determine the translation, rotation and perspective
    transformations. On this Web page you can see the pendulum swinging
    live... It is not really fast for the moment, but we are trying to
    accelerate it :-)

    Best regards,

  • From Francois LE COAT@21:1/5 to Francois LE COAT on Tue Nov 21 11:00:05 2023
    Hi,

    Here is the optical pendulum experiment at its true rhythm on a Dell
    Precision T3400 computer with an Intel Core 2 Quad Q6600 (2.4 GHz,
    FSB 1066 MHz, 8 MB L2 cache, four cores)...

    <https://www.youtube.com/watch?v=3HnVTz1BPsU>

    This machine hosts GNU/Linux Mageia 8 in its 32-bit version, and it
    is pushed to its maximum performance thanks to multi-processing and
    load balancing across the four cores. The computation of the eight
    movement parameters takes on the order of one second. Hardware
    acceleration of the algorithmic processing is envisaged.
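
    A rough sketch of how such load balancing can look with Python's
    standard multiprocessing module; the real program and its language
    are not shown in this thread, and the file names and the worker
    body are placeholders:

        # Sketch: spread the per-image parameter estimation over four
        # worker processes, one per core of the Q6600.
        from multiprocessing import Pool

        def estimate_parameters(args):
            index, rest_path, frame_path = args
            # Placeholder for the real computation: the feature matching
            # and homography fit sketched earlier in the thread.
            return index, "parameters for " + frame_path

        if __name__ == "__main__":
            rest = "rest.png"                             # assumed names
            frames = ["frame%03d.png" % i for i in range(50)]
            jobs = [(i, rest, f) for i, f in enumerate(frames)]
            with Pool(processes=4) as pool:               # four cores
                for index, result in pool.map(estimate_parameters, jobs):
                    print(index, result)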

    Best regards,
