• Converting from MKV to M4V reduces file size. What is being dropped?

    From David@21:1/5 to All on Tue Aug 30 13:21:35 2022
    Just a side note from my conversion testing, but Handbrake seems to reduce
    the size of the file considerably (about 5 times?).

    Is this to be expected?

    Just putting this here as a reminder for further research.

    Cheers



    Dave R


    --
    AMD FX-6300 in GA-990X-Gaming SLI-CF running Windows 7 Pro x64

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jim Lesurf@21:1/5 to David on Tue Aug 30 15:18:52 2022
    In article <jn6h6vFt1vU5@mid.individual.net>,
    David <wibble@btinternet.com> wrote:
    Just a side note from my conversion testing, but Handbrake seems to reduce the size of the file considerably (about 5 times?).

    Is this to be expected?

    Just putting this here as a reminder for further research.

    What does ffprobe tell you about the content of the source file and the
    Handbrake output file?

    If Handbrake is transcoding it may be re-compressing differently, and
    possibly discarding data. This will depend entirely on your settings and
    the source file details. The result may or may not be a noticeable change
    in the appearance (or sound), depending on the details.
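    For instance, `ffprobe -v quiet -print_format json -show_streams file`
    gives per-stream details. A minimal sketch of summarising that output
    follows; the sample JSON is illustrative, not taken from a real file:

```python
import json

def summarize_streams(probe_output: str) -> list[str]:
    """Summarize the streams section of `ffprobe -print_format json
    -show_streams <file>` output: codec, resolution, and bit rate."""
    streams = json.loads(probe_output).get("streams", [])
    lines = []
    for s in streams:
        desc = f"{s.get('codec_type')}: {s.get('codec_name')}"
        if s.get("codec_type") == "video":
            desc += f" {s.get('width')}x{s.get('height')}"
        if "bit_rate" in s:  # ffprobe reports bit_rate as a string, in bit/s
            desc += f" @ {int(s['bit_rate']) // 1000} kbit/s"
        lines.append(desc)
    return lines

# Illustrative sample only -- run ffprobe on your own files:
#   ffprobe -v quiet -print_format json -show_streams input.mkv
sample = '''{"streams": [
  {"codec_type": "video", "codec_name": "h264",
   "width": 1920, "height": 1080, "bit_rate": "8000000"},
  {"codec_type": "audio", "codec_name": "dts", "bit_rate": "1536000"}
]}'''
for line in summarize_streams(sample):
    print(line)
```

    Comparing the two summaries shows at a glance whether the codec,
    resolution, or bit rate changed between source and output.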

    Jim

    --
    Please use the address on the audiomisc page if you wish to email me. Electronics https://www.st-andrews.ac.uk/~www_pa/Scots_Guide/intro/electron.htm
    biog http://jcgl.orpheusweb.co.uk/history/ups_and_downs.html
    Audio Misc http://www.audiomisc.co.uk/index.html

  • From Brian Gaff@21:1/5 to Jim Lesurf on Wed Aug 31 12:01:39 2022
    People tell me it has to do with how the areas that do not change between
    frames are encoded. Whether a sighted person will notice is very
    dependent on whether the brain papers over the mistakes. After all, the
    working eye itself has a lot of processing before it can become the image
    you actually see, or think you see. A lot of it is inferred from the bit
    the macula has seen being added to the lower-definition parts of what you
    are seeing.
    Brian

    --

    --:
    This newsgroup posting comes to you directly from...
    The Sofa of Brian Gaff...
    briang1@blueyonder.co.uk
    Blind user, so no pictures please
    Note this Signature is meaningless.!
    "Jim Lesurf" <noise@audiomisc.co.uk> wrote in message news:5a201c77b3noise@audiomisc.co.uk...


  • From David Woolley@21:1/5 to All on Wed Aug 31 12:49:26 2022
    On 31/08/2022 12:33, NY wrote:
    Video compression algorithms work by transmitting a "key frame" every so often (typically every 10-15 frames) which is a full-detail frame
    (subject to lossy JPEG-

    I think 10-15 frames is rather shorter than is used in practice. A
    suggested key-frame interval for a video server was 2 seconds, and up to
    4.

    Also you didn't mention motion compensation, which is why old parts of
    the picture can end up moving around the screen for some time (until the
    next successful key frame reception), when a picture starts to break up.
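    Motion compensation can be sketched in one dimension (real codecs search
    2-D macroblocks; this toy just finds the shift at which a block of the
    current frame best matches the previous one, so only a motion vector and
    a small residual need be coded):

```python
def best_motion_vector(prev, cur_block, pos, search=2):
    """Return (offset, error): the shift within +/-`search` of `pos` at
    which `cur_block` best matches the previous frame, scored by sum of
    absolute differences."""
    n, best = len(cur_block), None
    for off in range(-search, search + 1):
        start = pos + off
        if start < 0 or start + n > len(prev):
            continue
        sad = sum(abs(a - b) for a, b in zip(prev[start:start + n], cur_block))
        if best is None or sad < best[1]:
            best = (off, sad)
    return best

prev_frame = [0, 0, 5, 9, 5, 0, 0, 0]
cur_frame  = [0, 0, 0, 5, 9, 5, 0, 0]  # same feature, shifted right by one
vec, err = best_motion_vector(prev_frame, cur_frame[3:6], 3)
print(vec, err)  # -> -1 0  (copy the block from one sample earlier)
```

    When the decoder keeps copying such blocks from a stale reference (after
    a transmission error), that is exactly the "old parts of the picture
    moving around" effect described above.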

  • From NY@21:1/5 to Brian Gaff on Wed Aug 31 12:33:28 2022
    "Brian Gaff" <brian1gaff@gmail.com> wrote in message news:tenf2m$1psru$1@dont-email.me...
    People tell me it has to do with how the areas that do not change between frames are encoded. Whether a sighted person will notice is very dependent
    on whether the brain papers over the mistakes. After all the working eye itself has a lot of processing before it can become the image you actually see, or think you see. A lot of it is inferred from the bit the macular
    has seen being added to the lower definition parts of what you are
    seeing.

    Video compression algorithms work by transmitting a "key frame" every so
    often (typically every 10-15 frames), which is a full-detail frame
    (subject to lossy JPEG-type compression). This is followed by a series of
    difference frames (differences between the current frame and the last key
    frame). For a scene that is fairly static with just a small amount of
    movement, this means that the difference frames are very small. The
    encoder has to be able to compare successive source frames and insert a
    new key frame, even if it is sooner than the normal 10-15 frames, if the
    scene changes dramatically (eg at a shot change). This avoids having to
    transmit huge difference frames between the current frame and a key frame
    that is dramatically different.

    It works well, but it can lead to macro-blocks (large squares, typically
    16 pixels square) on parts of a frame that change very rapidly - eg if
    the camera is panning or the subject moves across the frame very quickly.
    Ideally the encoder would allocate a higher bit rate (ie a larger
    difference frame) if there is large movement, but sometimes the encoder
    is restricted to a maximum bit rate.

    If you record a programme that has a lot of camera-panning or other movement and play back those sequences frame by frame you can see a lot of macro-blocking.

    Another possible artefact is detail that goes missing on fairly plain backgrounds when there is a lot of movement: the classic one is a football match where there is detail in the grass which can degenerate into a featureless green mass if the bits that are needed to reproduce the grass detail are suddenly needed when many of the players move in the frame.

    As with any lossy compression, the art is in choosing a bitrate which only removes details that a normal viewer would not notice, while not reducing
    the bitrate to the extent that the picture looks overcompressed - blocky or lacking in detail. In general, a bitrate is often chosen which removes just
    a bit too much detail, so the artefacts are just visible - or am I being cynical?
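    The key-frame / difference-frame scheme described above can be sketched
    in toy form (the GOP length, scene-cut threshold and "pixel" lists are
    illustrative, not how any real codec represents frames):

```python
def encode(frames, gop=10, scene_cut=0.5):
    """Toy encoder: store a full key frame every `gop` frames, or when the
    scene changes dramatically; otherwise store only the pixels that
    changed since the last key frame."""
    out, last_key = [], None
    for i, frame in enumerate(frames):
        diff = ([(j, v) for j, v in enumerate(frame) if v != last_key[j]]
                if last_key is not None else [])
        if last_key is None or i % gop == 0 or len(diff) / len(frame) > scene_cut:
            out.append(("key", frame))
            last_key = frame
        else:
            out.append(("diff", diff))
    return out

def n_keys(coded):
    return sum(1 for kind, _ in coded if kind == "key")

# A mostly static 8-"pixel" scene: only pixel 0 flickers.
static = [[t % 2, 5, 5, 5, 5, 5, 5, 5] for t in range(10)]
# Five dark frames, then a hard cut to five bright frames.
cut = [[0] * 8] * 5 + [[9] * 8] * 5
print(n_keys(encode(static)))        # periodic key frame only: 1
print(n_keys(encode(cut, gop=100)))  # plus a forced key frame at the cut: 2
```

    The static scene needs only one key frame plus tiny difference frames,
    while the shot change forces an extra key frame early - exactly the
    behaviour described above.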



    The processing of the eye / brain is incredible. It is hard to believe that
    the image that the eye sends has a hole in it, which corresponds to the
    blind spot where the optic nerve enters the eye, and the eye compensates for this by constantly moving slightly so as to fill in the blind spot - and yet the brain filters out that movement, even in cases of people with nystagmus where the movements are so large that other people can see the person's eyes moving around. Flautist/flutist James Galway is an example of this: in close-ups you could see his eyes dancing around.

  • From Java Jive@21:1/5 to All on Wed Aug 31 12:52:59 2022
    On 31/08/2022 12:33, NY wrote:
    As with any lossy compression, the art is in choosing a bitrate which
    only removes details that a normal viewer would not notice, while not reducing the bitrate to the extent that the picture looks overcompressed
    - blocky or lacking in detail. In general, a bitrate is often chosen
    which removes just a bit too much detail, so the artefacts are just
    visible - or am I being cynical?

    No, I think if anything you're understating it, the compression
    artefacts often piss me off.

    Thanks for the explanation, BTW. Even though it was pretty much what I
    had inferred, my knowledge of video compression is a bit vague, so an
    explanation such as the above is helpful.

    --

    Fake news kills!

    I may be contacted via the contact address given on my website:
    www.macfh.co.uk

  • From NY@21:1/5 to David on Wed Aug 31 15:49:28 2022
    On 30/08/2022 14:21, David wrote:
    Just a side note from my conversion testing, but Handbrake seems to reduce the size of the file considerably (about 5 times?).

    Is this to be expected?

    Just putting this here as a reminder for further research.

    Most video formats involve lossy compression: compressing each frame to
    the extent that some information (hopefully not noticeable detail) is lost.

    Reducing the file size to a fifth of its previous size seems excessive,
    though. Does the picture quality of the M4V file look similar to that of
    the original MKV file, or does it display as blocks on parts of the
    video where there is a lot of movement?

    Some compression algorithms are more efficient than others. The H.264
    compression used on HD TV is more efficient than the older MPEG-2
    compression used on SD TV, so the same picture quality can be transmitted
    in smaller files. It's possible that the conversion from MKV to M4V is
    converting from a less efficient coding algorithm to a more efficient
    one. (Strictly, MKV and M4V are container formats; it is the codec
    inside, and its settings, that change when Handbrake transcodes.)

    Most conversion programs allow you to configure the amount of
    compression by altering the bitrate of the resulting video. If the
    pictures subjectively look worse, you might try a higher bitrate. If
    they still look as good, just be thankful that you've got something for
    nothing ;-)
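    Back-of-envelope arithmetic shows how a roughly 5x shrink can fall out of
    bitrates alone. The figures below are illustrative (an ~8 Mbit/s HD
    source with DTS audio re-encoded at typical H.264 + AAC rates), not
    measured from any real file:

```python
def file_size_mb(video_kbps, audio_kbps, minutes):
    """Approximate file size from stream bitrates: bits = rate * duration."""
    total_bits = (video_kbps + audio_kbps) * 1000 * minutes * 60
    return total_bits / 8 / 1_000_000  # bits -> bytes -> MB

# Hypothetical 45-minute programme:
src = file_size_mb(8000, 1536, 45)  # e.g. 8 Mbit/s video + 1.5 Mbit/s DTS
out = file_size_mb(1800, 160, 45)   # e.g. a re-encode at 1.8 Mbit/s + AAC
print(round(src), round(out), round(src / out, 1))  # -> 3218 662 4.9
```

    So a size ratio of "about 5 times" needs nothing more exotic than a
    lower target bitrate; whether the quality survives is the question to
    check by eye.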

  • From Brian Gaff@21:1/5 to me@privacy.invalid on Thu Sep 1 10:18:20 2022
    Yes, I get that even though these days I see nothing. The brain does not
    realise it's not seeing and tries to move the eyes all the time,
    resulting in fatigue and disorientation.
    Brian

    --

    --:
    This newsgroup posting comes to you directly from...
    The Sofa of Brian Gaff...
    briang1@blueyonder.co.uk
    Blind user, so no pictures please
    Note this Signature is meaningless.!
    "NY" <me@privacy.invalid> wrote in message
    news:tengu5$1q2ql$1@dont-email.me...

  • From Brian Gregory@21:1/5 to David Woolley on Sat Sep 10 22:39:50 2022
    On 31/08/2022 12:49, David Woolley wrote:
    On 31/08/2022 12:33, NY wrote:
    Video compression algorithms work by transmitting a "key frame" every
    so often (typically every 10-15 frames) which is a full-detail frame
    (subject to lossy JPEG-

    I think 10-15 is rather shorter than used in practice.  A suggested
    value for a video server was 2 seconds, and up to 4.


    I think it depends.
    For television having to wait up to 4 seconds after changing channel
    before the picture appears would be annoying.
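    The arithmetic behind that annoyance: if key frames arrive every G
    seconds and you tune in at a uniformly random instant, the decoder waits
    on average G/2 seconds (worst case G) before it can show a clean picture:

```python
def zap_delay(gop_seconds):
    """Average and worst-case wait for the next key frame when tuning in
    at a uniformly random instant within a GOP."""
    return gop_seconds / 2, gop_seconds

for g in (0.5, 2.0, 4.0):
    avg, worst = zap_delay(g)
    print(f"GOP {g}s: average wait {avg}s, worst {worst}s")
```

    Broadcast encoders therefore trade channel-change responsiveness against
    the bitrate saved by longer gaps between key frames.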

    --
    Brian Gregory (in England).
