• Irfanview color depth

    From Peter@21:1/5 to All on Wed Nov 29 14:54:27 2023
    XPost: rec.photo.digital, alt.comp.freeware

    In Irfanview, you can Decrease Color Depth to any level but how can you
    either increase it or figure out what the current color depth is set to?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter@21:1/5 to All on Wed Nov 29 15:01:24 2023
    XPost: rec.photo.digital, alt.comp.freeware

    In Irfanview, you can Decrease Color Depth to any level but how can you
    either increase it or figure out what the current color depth is set to?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Shinji Ikari@21:1/5 to Peter on Wed Nov 29 21:15:00 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Hello

    Peter <confused@nospam.net> schrieb

In Irfanview, you can Decrease Color Depth to any level but how can you
either increase it or figure out what the current color depth is set to?

    Did you try pressing the key: i (= Information)?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Newyana2@21:1/5 to Peter on Wed Nov 29 19:14:23 2023
    XPost: rec.photo.digital, alt.comp.freeware

    "Peter" <confused@nospam.net> wrote

    | In Irfanview, you can Decrease Color Depth to any level but how can you
    | either increase it or figure out what the current color depth is set to?

    If you haven't changed anything then you'll mostly know
    by file type. JPGs will be 24-bit, GIFs are 8-bit, PNGs
    32-bit. 24-bit is actually the max in standard usage.
    The extra 8 bits in PNGs are for transparency values.
BMPs can be 1-bit to 32-bit, but most people don't
    see BMPs much these days.

    Do you know what color depth is? A monitor these days
    displays 24-bit color, which means 256 different hues of
    red, green and blue. 0-0-0 is black. 255-255-255 is white.
    (Unless you're on a Mac, in which case I think it's 18-bit
    color. Which means it can only display 64 color gradients for
    R, G and B -- it's missing 192 hues out of 256, so it dithers
    pixels to the nearest color.)

    Raster image file formats store those values as numbers
    representing a pixel grid. They're all bitmap when they
    display, but each format stores the data differently.

    Why would you want to change color depth? If you want
    to do something like save a JPG as GIF then you'll lose a lot
    of the color data. If you do the reverse you won't get more
    colors. The original GIF color table entries are all that you'll
    see unless you then edit the image in 24-bit.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter@21:1/5 to Shinji Ikari on Thu Nov 30 01:41:22 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Shinji Ikari <shinji@gmx.net> wrote:
    Did you try pressing the key: i (= Information)?

The Irfanview "i" key brings up a table showing the compression, original
size, current size, print size from DPI, original colors, current colors,
number of unique colors (with "auto count" being checked, whatever that
means).

    It looks like the color depth is the "current colors" but how do you get
    rid of the other color-related entropy bits when you reduce color depth?

    (Sorry for duplicate posts - it's what Eternal September sometimes does.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter@21:1/5 to Newyana2@invalid.nospam on Thu Nov 30 01:34:16 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Newyana2 <Newyana2@invalid.nospam> wrote:
    Do you know what color depth is?

Not really. That's why I asked the question: to figure out how modifying
the image in various ways to increase entropy affects the color depth.

    Why would you want to change color depth?

    To increase entropy.

    If you want
    to do something like save a JPG as GIF then you'll lose a lot
    of the color data.

    Does saving JPG to GIF remove unique camera sensor imperfections?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul@21:1/5 to All on Wed Nov 29 22:28:16 2023
    XPost: rec.photo.digital, alt.comp.freeware

    On 11/29/2023 7:14 PM, Newyana2 wrote:
    "Peter" <confused@nospam.net> wrote

    | In Irfanview, you can Decrease Color Depth to any level but how can you
    | either increase it or figure out what the current color depth is set to?

    If you haven't changed anything then you'll mostly know
    by file type. JPGs will be 24-bit, GIFs are 8-bit, PNGs
    32-bit. 24-bit is actually the max in standard usage.
    The extra 8 bits in PNGs are for transparency values.
BMPs can be 1-bit to 32-bit, but most people don't
    see BMPs much these days.

    Do you know what color depth is? A monitor these days
    displays 24-bit color, which means 256 different hues of
    red, green and blue. 0-0-0 is black. 255-255-255 is white.
    (Unless you're on a Mac, in which case I think it's 18-bit
    color. Which means it can only display 64 color gradients for
    R, G and B -- it's missing 192 hues out of 256, so it dithers
    pixels to the nearest color.)

    Raster image file formats store those values as numbers
    representing a pixel grid. They're all bitmap when they
    display, but each format stores the data differently.

    Why would you want to change color depth? If you want
    to do something like save a JPG as GIF then you'll lose a lot
    of the color data. If you do the reverse you won't get more
    colors. The original GIF color table entries are all that you'll
    see unless you then edit the image in 24-bit.

    JPG is subject to rounding errors. These spray into the
    color space.

    For example, open a GIF (8 bit, indexed), now save as JPG,
    then open in Irfanview. Do "Information". It counts the colors
    for you. There could be 10,000 colors in there. Now, if
    you plot the "locus" of the colors, you'll find colors very
    close to the main 256 colors, all closely clustered around their
    parent dot.

    Now, ask a tool to do a color space reduction. Ask to go from
    24 bit RGB (full color space) to 8-bit indexed. All of the
    colors which are nearest their parent dot, are converted to
    the parent dot value. Now, the image has 256 colors again,
    and they're indexes into a 24 bit color table.

    If you then ask to save the image as a GIF, the save should go
    very fast, because the colorspace is now pretty close to what
    GIF wants.
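Paul's "snap to the parent dot" step can be sketched in a few lines of Python. This is a hypothetical brute-force version (real quantizers use octrees or median-cut palettes), but it shows the nearest-color mapping he describes:

```python
def nearest(palette, pixel):
    """Map one RGB pixel to the closest palette entry (squared Euclidean distance)."""
    return min(palette, key=lambda entry: sum((a - b) ** 2 for a, b in zip(entry, pixel)))

# JPEG rounding sprayed (130, 130, 130) near the "parent dot" (128, 128, 128);
# a color-space reduction snaps it back onto the parent.
palette = [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
assert nearest(palette, (130, 130, 130)) == (128, 128, 128)
assert nearest(palette, (250, 255, 248)) == (255, 255, 255)
```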

    *******

    Increasing the color space, that operation "does nothing".
    What started as a 0xaa 0xaa 0xaa pixel remains the same color.
    If you change to an HDR (High Dynamic Range) pixel with
    (3) ten bit values, then the lower two bits will be 00.

    It's when you do subsequent math on the representation, that
    some of the bit values may move around. Depending on what
    you're doing.

    *******

    When you save to PNG, one of the options is to control
    the number of bits used. You can use 8 bits per color.
    You can use 2 bits per color. This is all part of the zillion
    compression options when making a PNG.

    And none of these transformations are of particular interest
    to rec.photo.digital , as they're struggling to preserve what
    they've got. Only web people or cartoon people, revel in
    color space transformations for size reduction or other
    purposes. If you do color space reductions, you can get
    banding on the screen when you look at the result. A photographer
    does not want banding on a screen, or in print.

From what I can see of Irfanview, it does not look like you
can raise the color space higher than the screen capability.
    My screen isn't HDR10, so the color space apparently
    has no need to go above 24-bit RGB.

    The versions of Photoshop I've got, from years ago, their
    internal representation adjusts for the expected dynamics
    of any math. When you average two pictures, (A+B)/2,
    it uses 9 bits per pixel when doing the math. Averaging
two pictures is a way of reducing sensor noise. And it
only works if you shoot two identical pictures with a
tripod, and nothing in the scene is moving. I made a user
    manual once, with static photos, and used that technique
    to clean up the poor sensor noise. The scene was illuminated,
    but the sensor used was a joke. But you do the best with
    what you've got.
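The (A+B)/2 averaging can be sketched as below. This is a toy version over flat lists of 8-bit channel values; Python's integers absorb the intermediate 9-bit sum that Photoshop's widened internal representation handled explicitly:

```python
def average_frames(a, b):
    """Average two equal-size frames channel-by-channel.

    The sum x + y needs 9 bits for 8-bit inputs before the divide,
    which is why Photoshop widened its internal representation."""
    return [(x + y) // 2 for x, y in zip(a, b)]

# Two tripod shots of the same scene differ only by sensor noise;
# averaging keeps the signal and reduces the noise amplitude.
frame_a = [100, 200, 50, 255]
frame_b = [102, 198, 48, 255]
assert average_frames(frame_a, frame_b) == [101, 199, 49, 255]
```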

There are a number of other wasteful tools that use way too
many bits when doing math (like three floats per pixel), and these
serve to gobble down RAM when processing images. The approach
Photoshop was using at the time was pretty optimal.

    Paul

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From JJ@21:1/5 to Peter on Thu Nov 30 18:58:19 2023
    XPost: rec.photo.digital, alt.comp.freeware

    On Wed, 29 Nov 2023 15:01:24 +0000, Peter wrote:
    In Irfanview, ...
    ... figure out what the current color depth is set to?

    Look at the first column of the application window statusbar. It should
    display something like e.g.:

    1280 x 720 x 24 BPP

    That "BPP" is an acronym for Bits Per Pixel. i.e. the color depth.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Newyana2@21:1/5 to Peter on Thu Nov 30 09:10:23 2023
    XPost: rec.photo.digital, alt.comp.freeware

    "Peter" <confused@nospam.net> wrote

    | > Why would you want to change color depth?
    |
    | To increase entropy.
    |
    I looked that up. I don't find any reference to entropy
    in graphics. So I'm not sure what you mean.

    | > If you want
    | > to do something like save a JPG as GIF then you'll lose a lot
    | > of the color data.
    |
    | Does saving JPG to GIF remove unique camera sensor imperfections?

    It will be "dithered" to nearest colors, based on one
    of a number of dithering approaches. For example, if
    you have a gradient of greens containing 372 hues, it
    might get converted to a field of green and white dots.
    A GIF can only use 256 colors, so most JPGs would be
    severely degraded when saved as GIF, because a JPG
    can use 16 million colors.
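As a rough illustration of what dithering does (not IrfanView's algorithm, and simplified to one dimension), here is a minimal error-diffusion dither in Python: each sample is snapped to the nearest allowed level and the rounding error is pushed onto the next sample, so the average intensity survives even when only two levels remain:

```python
def diffuse_1d(row, levels):
    """1-D error-diffusion dither of 8-bit samples down to `levels` values."""
    step = 255 / (levels - 1)
    out, err = [], 0.0
    for v in row:
        q = min(255.0, max(0.0, round((v + err) / step) * step))
        out.append(int(q))
        err = (v + err) - q          # carry the rounding error forward
    return out

row = [128] * 8                       # mid grey
dithered = diffuse_1d(row, 2)         # only pure black and white allowed
assert set(dithered) <= {0, 255}      # alternating dots...
assert abs(sum(dithered) / len(dithered) - 128) < 16   # ...that average to grey
```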

    It helps to understand the basic system of raster images.
    In ALL cases they represent pixel grids with numeric RGB
    values. That is, all formats store data to light pixels on a
    screen with varying intensities of red, green and blue. It's
    always a grid, always rectangular. Any image that's not a
    rectangle (like an icon or PNG) is still a rectangular bitmap,
but the image format is storing transparency values to be
    applied when the image is rendered onscreen. (That's why
    displays are often "32-bit". 3 bytes for RGB values and one
    byte to indicate transparency.)

    The grid of stored pixel values, indicating color intensity of
    RGB, is arranged in rows, usually starting at top left. That's
    a bitmap. All raster images are bitmaps but can be stored
    in different file formats. JPG is arguably a very poor format
    because it loses color data, but it's popular because it makes
    the smallest files and there are no royalties on the format.
    So it's ideal for online. (It's used in cameras for that reason.
    For people sending birthday party pictures in email, quality
    is not a big factor.)

    Once you open an image in a graphic editor you're dealing
    with the bitmap, so whatever you alter from there will affect
    the image saved as a different file. In the case of JPG, it
    compresses the image by eliminating contiguous colors in
    imperceptible ways. That's why a bad quality JPG looks like
    an image comprised of blocks. The reduction of colors allows
    for the data to be stored more compactly, but loses detail.
    That data is lost for good.

    So if you have a JPG saved at, say, 92 compression (it's
    1 to 100. Top quality can be either the high or low number,
    depending on the software), then if you open that in an editor
    and resave it at 87 compression, there should be no noticeable
    difference, but some byte values will be changed. (I like Paint
    Shop Pro 5 because it chops off the EXIF data altogether. I
    find it creepy to have buried data in a file. It's a privacy
    problem.)

    So if it were me I'd try resaving the JPG at different
    compression, convert both of those to BMP, then compare in
    a hex editor to see what you have. If you convert to GIF
    you'll ruin the image because it has to dither to a max of 256
    colors, while the JPG could have 100,000 colors. So forget GIF.

    If you open a JPG in a hex editor it won't be very informative.
    It's like looking at a ZIP file. You only see a bloated header and
    the compressed state of the data.

    If you open and resave as a BMP then you
    have the direct data. The first 54 bytes of the BMP file will
    contain values indicating color depth, width/height, etc. The
    rest is simply the straight grid values. So for a typical 24-bit
BMP image, the first 3 bytes will be the BGR values (blue, green,
red byte order) for the first stored pixel (BMP rows are normally
stored bottom-up, so that's the bottom-left pixel). Example: Bright
sky blue is zero red, half green intensity, and full blue intensity.
As a long integer value that's 16744448. As bytes it's 255-128-0 or
0-128-255. That can also be written as hex: FF 80 00. If you save a
BMP file which is only that color then you'll see 54 bytes of file
header followed by a repeating pattern of FF 80 00.
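The layout described above can be checked with a short Python sketch that builds a minimal 1x1 24-bit BMP by hand. The 54 bytes are the standard Windows BITMAPFILEHEADER (14) plus BITMAPINFOHEADER (40); rows are padded to 4-byte multiples:

```python
import struct

def make_bmp_1x1(r, g, b):
    """Build a minimal uncompressed 24-bit BMP holding a single pixel."""
    row = bytes([b, g, r]) + b"\x00"          # pixels stored B, G, R; row padded to 4 bytes
    info = struct.pack("<IiiHHIIiiII",        # BITMAPINFOHEADER, 40 bytes
                       40, 1, 1, 1, 24, 0, len(row), 2835, 2835, 0, 0)
    file_hdr = struct.pack("<2sIHHI",         # BITMAPFILEHEADER, 14 bytes
                           b"BM", 14 + 40 + len(row), 0, 0, 14 + 40)
    return file_hdr + info + row

bmp = make_bmp_1x1(0, 128, 255)               # bright sky blue: R=0, G=128, B=255
assert len(bmp) == 58                         # 54-byte header + one padded row
assert bmp[54:57] == bytes([255, 128, 0])     # pixel bytes on disk: FF 80 00 (B, G, R)
assert (255 << 16) | (128 << 8) | 0 == 16744448   # the long-integer value quoted above
```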

    All raster images work that way. All raster images are bitmaps
    in different packaging. A JPG is also a bitmap, but when you
    increase compression you'll reduce colors. So if you have, say, a
    photo of sky with pixels like 255 80 00 243 75 22 241 83 02 those
    three pixels might get dithered to 3 pixel values of 243 75 22. Your
    eye won't see the difference, but the 3 pixels' values can be more
    easily compressed.

    So you could try that. Check compression level, open the file,
    resave at different compression, open both files and resave as
    BMPs. Open both BMPs in a hex editor and see how they compare.

    I can't tell you anything about camera sensors. I don't know about
    that. But however they work, it still has to boil down to 24-bit
    RGB if you have a JPG. So any tracks left by the camera would
    have to be in patterns of pixel values.

    I hope that makes sense. It sounds complicated, but it's actually
very simple once you get how it works. All raster images are grids
    of pixel RGB values as numbers. It all comes down to numbers,
    just as any file does.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter@21:1/5 to Newyana2@invalid.nospam on Thu Nov 30 18:23:40 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Newyana2 <Newyana2@invalid.nospam> wrote:
    | To increase entropy.
    |
    I looked that up. I don't find any reference to entropy
    in graphics. So I'm not sure what you mean.

Thanks for making the effort to understand the reason for caring about
entropy in terms of digital forensics of images posted to online sources.

Entropy is a common fundamental technical term for levels of "disorder."
Images: https://duckduckgo.com/?q=camera+fingerprinting+%2Bentropy
    Smartphones: https://duckduckgo.com/?q=smartphone+sensor+fingerprinting+%2Bentropy
    Browsers: https://duckduckgo.com/?q=browser+fingerprinting+%2Bentropy
    Digits: https://duckduckgo.com/?q=fbi+fingerprinting+%2Bentropy

    I'm using the term the way they use it to uniquely identify every image
    posted to the Internet that came from any particular unique camera sensor.

For example, this article on "Smartphone Camera Identification" discusses
entropy 25 times and camera 114 times (almost a 1:4 ratio), where they
said "In this work, we follow an identification methodology for smartphone
camera sensors... Our analysis showed that the blue channel provided the
best separation..."
https://www.mdpi.com/1099-4300/24/8/1158/html
https://mdpi-res.com/d_attachment/entropy/entropy-24-01158/article_deploy/entropy-24-01158-v3.pdf

And this paper titled "Mobile Device Identification via Sensor
Fingerprinting" uses the words entropy:camera in a 3:2 ratio, where they
conclude "We show that the entropy from sensor fingerprinting is sufficient
to uniquely identify a device."
https://www.arxiv-vanity.com/papers/1408.1416/

The problem of entropy in terms of posting images to social media is
described in this paper on "Robustness of digital camera identification"
https://link.springer.com/article/10.1007/s11042-021-11129-y
where they start off with "One of the problem in digital forensics is the
issue of identification of digital cameras based on images. This aspect has
been attractive in recent years due to popularity of social media platforms
like Facebook, Twitter etc., where lots of photographs are shared."

Given this conclusion from the following paper on image fingerprinting,
"No Two Digital Cameras Are the Same: Fingerprinting Via Sensor Noise"
https://33bits.wordpress.com/2011/09/19/digital-camera-fingerprinting/
"Camera fingerprinting can be used on the one hand for detecting forgeries
(e.g., photoshopped images), and to aid criminal investigations by
determining who (or rather, which camera) might have taken a picture. On
the other hand, it could potentially also be used for unmasking individuals
who wish to disseminate photos anonymously online."

Let's make up a scenario where it might matter (please don't shoot the
example; try to understand the problem it is illustrating).

    a. You are brought up Christian & you post to your local church site
    b. You are an employee & you post images to your employer web site
    c. You have political aspirations & you post images to your party web site
    d. You are LGBTQ+ & you post images to your favorite LGBTQ+ web site

    Do you want all those online photos to uniquely identify your camera?

    If you want
    to do something like save a JPG as GIF then you'll lose a lot
    of the color data.
    |
    | Does saving JPG to GIF remove unique camera sensor imperfections?

    It will be "dithered" to nearest colors, based on one
    of a number of dithering approaches. For example, if
    you have a gradient of greens containing 372 hues, it
    might get converted to a field of green and white dots.
    A GIF can only use 256 colors, so most JPGs would be
    severely degraded when saved as GIF, because a JPG
    can use 16 million colors.

According to one of the papers above, the "blue" channel is the easiest to
fingerprint (although some papers indicated it was the "green").

    This JPG-to-GIF dithering might therefore help in increasing entropy.

    It helps to understand the basic system of raster images.

What I do not understand, and which is important, is how the camera's
sensor imperfections show up in the camera's resulting output raster images.

    In ALL cases they represent pixel grids with numeric RGB
    values. That is, all formats store data to light pixels on a
    screen with varying intensities of red, green and blue. It's
    always a grid, always rectangular. Any image that's not a
    rectangle (like an icon or PNG) is still a rectangular bitmap,
    but the image format is storing transparency values to be
    applied when the image is rendered onscreen. (That's why
    displays are often "32-bit". 3 bytes for RGB values and one
    byte to indicate transparency.)

    It would seem that the fewest bits used (which show the image with just
    enough clarity to be useful) would be the best to increase entropy.

    The grid of stored pixel values, indicating color intensity of
    RGB, is arranged in rows, usually starting at top left. That's
    a bitmap. All raster images are bitmaps but can be stored
    in different file formats. JPG is arguably a very poor format
    because it loses color data, but it's popular because it makes
    the smallest files and there are no royalties on the format.
    So it's ideal for online. (It's used in cameras for that reason.
    For people sending birthday party pictures in email, quality
    is not a big factor.)

For the reasons you stated, most images online are JPG, so that's what I'm
trying to increase the entropy of. If an automatic JPG->GIF->JPG operation
on all uploaded files increases that entropy, then it's probably a good
technique I can use to hinder fingerprinting.
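As a crude stand-in for GIF's 256-color quantization, the effect on the unique-color count can be sketched by truncating the low bits of each channel. This is posterization, not IrfanView's palette-based reduction, but it shows the same direction of change:

```python
def posterize(pixels, bits):
    """Keep only the top `bits` of each 8-bit channel (crude color reduction)."""
    mask = 0x100 - (1 << (8 - bits))
    return [tuple(c & mask for c in px) for px in pixels]

# A synthetic gradient with 4096 distinct colors...
pixels = [(r, g, 7) for r in range(0, 256, 4) for g in range(0, 256, 4)]
assert len(set(pixels)) == 4096
# ...collapses to 256 after keeping 4 bits per channel (16 reds x 16 greens).
assert len(set(posterize(pixels, 4))) == 256
```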

    Once you open an image in a graphic editor you're dealing
    with the bitmap, so whatever you alter from there will affect
    the image saved as a different file. In the case of JPG, it
    compresses the image by eliminating contiguous colors in
    imperceptible ways. That's why a bad quality JPG looks like
    an image comprised of blocks. The reduction of colors allows
    for the data to be stored more compactly, but loses detail.
    That data is lost for good.

    The ultimate web site might perform that image alteration also.

    I don't know what they do with the images though so I don't have control
    over whether they increase the entropy further or leave it alone.

    So if you have a JPG saved at, say, 92 compression (it's
    1 to 100. Top quality can be either the high or low number,
    depending on the software), then if you open that in an editor
    and resave it at 87 compression, there should be no noticeable
    difference, but some byte values will be changed. (I like Paint
    Shop Pro 5 because it chops off the EXIF data altogether. I
    find it creepy to have buried data in a file. It's a privacy
    problem.)

It would be nice to know how much JPEG compression alone increases (or
decreases) entropy. I would assume it increases entropy.

But I have no idea if it's a lot or only a little. That they can uniquely
identify cameras from compressed images hints that it's only a little.

    So if it were me I'd try resaving the JPG at different
    compression, convert both of those to BMP, then compare in
    a hex editor to see what you have. If you convert to GIF
    you'll ruin the image because it has to dither to a max of 256
    colors, while the JPG could have 100,000 colors. So forget GIF.

    I'll try the JPG->GIF->JPG method to see if it "ruins" the image.

    If you open a JPG in a hex editor it won't be very informative.
    It's like looking at a ZIP file. You only see a bloated header and
    the compressed state of the data.

    Yes but that data is very informative when it uniquely identifies your
    camera out of pictures scattered across web sites on the Internet.

    If you open and resave as a BMP then you
    have the direct data. The first 54 bytes of the BMP file will
    contain values indicating color depth, width/height, etc. The
    rest is simply the straight grid values. So for a typical 24-bit
BMP image, the first 3 bytes will be the BGR values (blue, green,
red byte order) for the first stored pixel (BMP rows are normally
stored bottom-up, so that's the bottom-left pixel). Example: Bright
sky blue is zero red, half green intensity, and full blue intensity.
As a long integer value that's 16744448. As bytes it's 255-128-0 or
0-128-255. That can also be written as hex: FF 80 00. If you save a
BMP file which is only that color then you'll see 54 bytes of file
header followed by a repeating pattern of FF 80 00.

The less repeatable (more random) each image's bitmapped digital result on
the online storage medium is, the better for increasing entropy.

    All raster images work that way. All raster images are bitmaps
    in different packaging. A JPG is also a bitmap, but when you
    increase compression you'll reduce colors. So if you have, say, a
    photo of sky with pixels like 255 80 00 243 75 22 241 83 02 those
    three pixels might get dithered to 3 pixel values of 243 75 22. Your
    eye won't see the difference, but the 3 pixels' values can be more
    easily compressed.

    I like that increasing compression reduces colors. What you want to do, I
    would think, is paper over the camera sensor imperfections in the output.

    So you could try that. Check compression level, open the file,
    resave at different compression, open both files and resave as
    BMPs. Open both BMPs in a hex editor and see how they compare.

    What I'll test is JPG->GIF->JPG and JPG->BMP->JPG to see which gives the
    best results for an online upload - but which do you think introduces the
    most entropy?

    I can't tell you anything about camera sensors. I don't know about
    that. But however they work, it still has to boil down to 24-bit
    RGB if you have a JPG. So any tracks left by the camera would
    have to be in patterns of pixel values.

The articles I pointed to at the beginning of this response show that what
digital forensics targets are the camera sensor's unique imperfections.

    I hope that makes sense. It sounds complicated, but it's actually
very simple once you get how it works. All raster images are grids
    of pixel RGB values as numbers. It all comes down to numbers,
    just as any file does.

    Thank you for all your helpful information. The goal is to introduce "just enough" entropy so that all your images aren't uniquely traced to you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Newyana2@21:1/5 to Peter on Thu Nov 30 16:25:42 2023
    XPost: rec.photo.digital, alt.comp.freeware

    "Peter" <confused@nospam.net> wrote

    | Thank you for all your helpful information. The goal is to introduce "just
    | enough" entropy so that all your images aren't uniquely traced to you.

    Interesting issue. I'd never heard of it. I wasn't up to fully
    reading all those articles, but I have the basic idea. Whether
    it will ever be a way to ID the picture taker from an image
    is questionable. It seems a bit like doing a DNA test on
    someone wearing a name tag. The site already knows who's
    uploading in most cases, and cameras are leaking data that allows
    for ID. But I can see how this could someday be an issue.

    As for introducing entropy, I don't know. There's no sense
    having images that are ruined, so you don't want to change
    too much. And this doesn't look like entropy to me. Rather, they're
looking for identifiable patterns of distortion. What would that be?
    Maybe the sensor never reports certain hue values? I don't know.
    I think you'll just have to test, after coming up with some way
    to gauge how unique the ID traces are. You don't know if your
    method is helping unless you know what to look for.

    JPG -> GIF -> JPG will ruin nearly all images.
    Any JPG resaving will change the image because it's
    a lossy format. But other formats are not lossy. So it's
    in the JPG resaving that you'll get the most change.

    The BMP saving would only be for inspecting byte changes
    for pixel values. There's no other advantage to BMP. The
    image displayed is already a BMP, anyway. It's the display
    of what's called a DIB -- device independent bitmap. That
    is, just the pixel bytes.

    So resave your JPG, stripping the header and reducing
    quality slightly. Then see what you have by saving that
    as a BMP. But I have no idea how you'll assess the uniqueness.
    If you had some formula for that you could probably
    automate it by processing the byte value patterns. But
    that means getting some source code for the software that
will supposedly do the job. In other words, if you build up
    an ID for your camera from multiple images then you can
    test your alteration against that, but without having that,
    I don't know how you'll identify exactly what bytes in the
    image are giving you away, and why.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter@21:1/5 to Newyana2@invalid.nospam on Fri Dec 1 02:53:20 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Newyana2 <Newyana2@invalid.nospam> wrote:
    Interesting issue. I'd never heard of it.

    Remember a decade ago you could post a picture & nobody would be able to
    find that you posted it in another place?

    Now they can. And it's just going to get worse with compute power.

    I wasn't up to fully
    reading all those articles, but I have the basic idea. Whether
    it will ever be a way to ID the picture taker from an image
    is questionable. It seems a bit like doing a DNA test on
    someone wearing a name tag. The site already knows who's
    uploading in most cases, and cameras are leaking data that allows
    for ID. But I can see how this could someday be an issue.

    The more powerful computers get and the cheaper mass storage is, the more
    it will be feasible for the powers that be to uniquely identify all the
    images ever posted to a variety of diverse locations to each camera.

    As for introducing entropy, I don't know. There's no sense
    having images that are ruined, so you don't want to change
    too much. And this doesn't look like entropy to me. Rather, they're
looking for identifiable patterns of distortion. What would that be?

    I agree they seem to be looking for identifiable pixel flaws in the sensor output which show up in the same spots on every image that you upload.

    Maybe the sensor never reports certain hue values? I don't know.
    I think you'll just have to test, after coming up with some way
    to gauge how unique the ID traces are. You don't know if your
    method is helping unless you know what to look for.

    You can move the spots around by cropping & tilting but some spots will
    remain unless you know exactly where all are so as to crop them all out.

    JPG -> GIF -> JPG will ruin nearly all images.

    I just tried it on a bunch of images and they're not ruined for the purpose
    of uploading them to a typical web site (this isn't professional stuff).

    Luckily Irfanview can batch convert so I just ran the "b" command a couple
    of times to convert from JPEG to GIF and back to JPEG. The quality was ok.

    Any JPG resaving will change the image because it's
    a lossy format. But other formats are not lossy. So it's
    in the JPG resaving that you'll get the most change.

Thanks for that advice. In the JPG->GIF->JPG round trip, the original photo
had 219908 unique colors, while the final uploaded photo had 80855 unique
colors.

    Original (as reported by Irfanview "i" command):
    Original colors = 16.7 Million (24 BitsPerPixel)
    Current colors = 16.7 Million (24 BitsPerPixel)
    Number of unique colors = 232666

    GIF (as reported by Irfanview "i" command):
    Original colors = 256 (8 BitsPerPixel)
Current colors = 256 (8 BitsPerPixel)
    Number of unique colors = 256

    Uploaded (as reported by Irfanview "i" command):
    Original colors = 16.7 Million (24 BitsPerPixel)
    Current colors = 16.7 Million (24 BitsPerPixel)
    Number of unique colors = 80801

It dropped the number of unique colors to roughly a third.

    Interestingly, running Irfanview autoadjust colors didn't change much.
    Original colors = 16.7 Million (24 BitsPerPixel)
    Current colors = 16.7 Million (24 BitsPerPixel)
    Number of unique colors = 80375

    But oh what a difference in unique colors Irfanview sharpen did!
    Original colors = 16.7 Million (24 BitsPerPixel)
    Current colors = 16.7 Million (24 BitsPerPixel)
    Number of unique colors = 169885

    Why would a sharpen add so many unique colors to the image?

    Running an Irfanview "Effects -> Blur" didn't change all that much.
    Original colors = 16.7 Million (24 BitsPerPixel)
    Current colors = 16.7 Million (24 BitsPerPixel)
    Number of unique colors = 146437

    But of course, I have no way of knowing if this papers over unique camera
    sensor flaws that were in the original image that moved forward throughout.

    The BMP saving would only be for inspecting byte changes
    for pixel values. There's no other advantage to BMP. The
    image displayed is already a BMP, anyway. It's the display
    of what's called a DIB -- device independent bitmap. That
    is, just the pixel bytes.

    Since the DISPLAY is a BMP, would you think it useful to paper over unique
    camera sensor flaws by snapping a screenshot and replacing it with that?

    So resave your JPG, stripping the header and reducing
    quality slightly. Then see what you have by saving that
    as a BMP. But I have no idea how you'll assess the uniqueness.

    I think we need to fundamentally do a few things, but I'm not sure.

    We need to paper over the unique flaws with blurring somehow.
    And maybe we need to paper over them with color changes somehow.
    And maybe we can move them around by cropping & tilting the images.

    If you had some formula for that you could probably
    automate it by processing the byte value patterns. But
    that means getting some source code for the software that
    will supposedly do the job.

    They generally test it with a black photo but I think they only do that so
    as to have consistent input for their algorithms to assess sensor flaws.

    In other words, if you build up
    an ID for your camera from multiple images then you can
    test your alteration against that, but without having that,
    I don't know how you'll identify exactly what bytes in the
    image are giving you away, and why.

    Yup. I know that they're looking for unique flaws, but I don't know how to
    run the Irfanview "i" command to find the entropy of any given image file.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Newyana2@21:1/5 to Peter on Thu Nov 30 22:47:25 2023
    XPost: rec.photo.digital, alt.comp.freeware

    "Peter" <confused@nospam.net> wrote

    | > JPG -> GIF -> JPG will ruin nearly all images.
    |
    | I just tried it on a bunch of images and they're not ruined for the
    | purpose of uploading them to a typical web site (this isn't professional
    | stuff).
    |

    I tried a few myself. Surprisingly they look fine!
    One is a photo of a woodcock sitting amongst
    greenery. Lots of hues. Yet the GIF looks OK. And
    as you noted, the color count increases when it's
    converted back. Somehow the JPG conversion brings
    back some sharpness.

    | Why would a sharpen add so many unique colors to the image?
    |
    Sharpen highlights the difference along edges. If you
    take a simple image -- a single color background with
    a different color rectangle in the middle, say, a sharpen
    will add 1 or more lines of new colors where the 2 colors
    meet. In a photo you're doing that with a slightly different
    hue for each pixel comparison.
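    That edge effect is easy to demonstrate with a sketch in Python/Pillow
    (Pillow's SHARPEN kernel stands in for Irfanview's sharpen, which may use
    different coefficients): two flat mid-gray regions start with exactly 2
    colors, and sharpening mints new hues along the boundary.

    ```python
    from PIL import Image, ImageFilter

    # Two flat mid-tone regions: exactly two colors in the whole image.
    img = Image.new("RGB", (64, 64), (100, 100, 100))
    img.paste((150, 150, 150), (16, 16, 48, 48))

    def unique_colors(im):
        return len(im.getcolors(maxcolors=im.width * im.height))

    print(unique_colors(img))  # 2

    # The sharpen kernel exaggerates the difference across the edge,
    # producing in-between and overshoot values that didn't exist before.
    sharpened = img.filter(ImageFilter.SHARPEN)
    print(unique_colors(sharpened))  # more than 2
    ```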

    | But of course, I have no way of knowing if this papers over unique camera
    | sensor flaws that were in the original image that moved forward
    throughout.
    |

    No. You should have virtually all new pixels, but I don't
    know how their method works.

    | Since the DISPLAY is a BMP, would you think it useful to paper over unique
    | camera sensor flaws by snapping a screenshot and replacing it with that?
    | >

    A screenshot of the desktop? That should give you
    just the same bitmap. A screenshot will just send you the
    byte values being sent to the graphic hardware.

    | I think we need to fundamentally do a few things, but I'm not sure.
    |

    Maybe put it in a locked box, seal that with wax, then
    bury it under your garage floor. :)

    | We need to paper over the unique flaws with blurring somehow.

    Are they flaws? Could they be something like particular
    hues that can never show up? I don't know. Without knowing
    the method of inspection you're in the dark. But I'm guessing
    your method will have changed the precise hue values of nearly
    every pixel. So... pretty much a new image that just happens
    to look the same to your eye.

  • From Peter@21:1/5 to Newyana2@invalid.nospam on Fri Dec 1 23:26:19 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Newyana2 <Newyana2@invalid.nospam> wrote
    I tried a few myself.

    Thank you for trying that out as whatever we come up with to increase
    entropy and paper over unique camera sensor flaws must be easy to do.

    It should probably be all done inside of Irfanview, if that's possible.

    One way we can probably check our entropy is to pick a photo from the
    Internet and then run our commands on that, & then upload it somewhere.

    A few weeks later it should make it into the reverse-image-search engines.

    Surprisingly they look fine!

    I agree. For an upload, they work well (bearing in mind I also resize).

    I like that the Irfanview JPG->GIF adds disorder (in the form of collapsing
    the unique colors) and then the Irfanview batch reconversion to JPG adds
    them back - so the powers that be might not notice it had been done.

    One is a photo of a woodcock sitting amongst greenery. Lots of hues.

    Your idea of JPG->GIF->JPG was wonderful, which I had never thought of
    before you mentioned it in a couple of posts ago where the conversion needs
    to be quick & easy and all done inside of one program, Irfanview.

    To that end, I was re-testing your JPG->GIF->JPG ideas this morning (with
    more consistent settings, like no compression, cropping, tilting &
    sharpening) when I realized there may be two easy ways in Irfanview to
    accomplish that intermediate GIF conversion of 200K colors to 256 colors.

    One way is the way we've been discussing, which is a batch convert of JPG
    to GIF to JPG with a resharpen added to greatly increase the number of
    colors to a naturally unsuspicious level for typical uploaded JPG images.

    But the other way "might" be to just keep it in JPG but reduce the colors
    via Irfanview -> Image -> Reduce color depth -> 256 colors (8 BPP)
    (which is what I had been doing that kicked off this original question).

    [1] Original photo
    Compression: JPEG, quality: 96, subsampling ON (2x2)
    Original size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Current size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Print size (from DPI): 141.1 x 105.8 cm; 55.56 x 41.67 inches
    Original colors: 16,7 Million (24 BitsPerPixel)
    Current colors: 16,7 Million (24 BitsPerPixel)
    Number of unique colors: 219908
    Disk size: 4.48 MB (4,693,172 Bytes)

    [2] JPG->GIF (using default Irfanview "b" settings)
    Compression: GIF - LZW
    Original size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Current size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Print size (from DPI): 141.1 x 105.8 cm; 55.56 x 41.67 inches
    Original colors: 256 (8 BitsPerPixel)
    Current colors: 256 (8 BitsPerPixel)
    Number of unique colors: 256
    Disk size: 7.06 MB (7,401,292 Bytes)

    GIF->JPG (using default Irfanview "b" settings)
    Compression: JPEG, quality: 100, subsampling ON (2x2)
    Original size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Current size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Print size (from DPI): 141.1 x 105.8 cm; 55.56 x 41.67 inches
    Original colors: 16,7 Million (24 BitsPerPixel)
    Current colors: 16,7 Million (24 BitsPerPixel)
    Number of unique colors: 98371
    Disk size: 9.17 MB (9,615,126 Bytes)

    [3] Reduce colors using Image -> Reduce color depth -> 256 colors (8 BPP)
    Compression: JPEG, quality: 96, subsampling ON (2x2)
    Original size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Current size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Print size (from DPI): 141.1 x 105.8 cm; 55.56 x 41.67 inches
    Original colors: 16,7 Million (24 BitsPerPixel)
    Current colors: 256 (8 BitsPerPixel)
    Number of unique colors: 256
    Disk size: 4.48 MB (4,693,172 Bytes)

    At first I was worried that the Original:Current colors would be a tip off
    to anyone looking at the entropy - but they seem to sync up with a save.

    [3] Reduce colors to 256 & then save the JPEG & re-open that saved JPEG.
    Compression: JPEG, quality: 100, subsampling ON (2x2)
    Original size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Current size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Print size (from DPI): 141.1 x 105.8 cm; 55.56 x 41.67 inches
    Original colors: 16,7 Million (24 BitsPerPixel)
    Current colors: 16,7 Million (24 BitsPerPixel)
    Number of unique colors: 98371
    Disk size: 9.17 MB (9,615,126 Bytes)

    Whoa. That's odd. Very strange. The number of unique colors is exactly the
    same for both methods but the file size is double in the JPG->GIF->JPG
    method.

    That is, the JPG->GIF->JPG resulted in exactly 98371 unique colors.
    But twice the original file size.

    And the JPG->256->JPG also resulted in exactly 98371 unique colors.
    But the same file size as the original was.

    Yet the GIF looks OK.

    The goal is for the powers that be to not realize it was done.

    Whether I move the unique computer-perceptible flaws (by tilting & cropping
    for example) or I paper them over (with blurs & color conversions),
    whatever obfuscation techniques we use must have an end result JPG that
    appears to be a normal upload (just like those from everyone else).

    The only thing left now is to reduce the size as most uploads are
    not the full-size image - but some reduced-size image.

    [4] Reducing that last JPG->256->JPG with File->Save 80% compression alone:
    Compression: JPEG, quality: 90, subsampling ON (2x2)
    Original size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Current size: 4000 x 3000 Pixels (12.00 MPixels) (4:3)
    Print size (from DPI): 141.1 x 105.8 cm; 55.56 x 41.67 inches
    Original colors: 16,7 Million (24 BitsPerPixel)
    Current colors: 16,7 Million (24 BitsPerPixel)
    Number of unique colors: 207697
    Disk size: 3.13 MB (3,285,403 Bytes)

    Which is interesting because we're now pretty much looking at an image
    whose specs above are similar to the original image's: around 200K unique
    colors and a file size of around 3 MB.

    And as you noted, the color count increases when it's
    converted back. Somehow the JPG conversion brings
    back some sharpness.

    The (JPG->GIF->JPG vs JPG-256-JPG) methods seem to have similar results.

    I guess that means if I'm working on a hundred images, I'll use the batch
    conversion of JPG->GIF->JPG (and batch delete the intermediate GIF).

    But if I'm working on only one image, then I'll just employ the
    JPG->256->JPG method (instead of creating the intermediate GIF file).
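    For what it's worth, both routes can be mimicked in a few lines of Python
    with Pillow (a sketch only; Irfanview's quantizer and JPEG encoder will
    differ in detail, and the gradient below stands in for a real photo):

    ```python
    from PIL import Image
    import io

    def unique_colors(im):
        return len(im.getcolors(maxcolors=im.width * im.height))

    def roundtrip(im, fmt, **opts):
        # Save to an in-memory buffer and reopen: same effect as
        # saving to disk and loading the file again.
        buf = io.BytesIO()
        im.save(buf, fmt, **opts)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    # Synthetic stand-in for a photo.
    src = Image.new("RGB", (256, 256))
    src.putdata([(x, y, (x * y) % 256) for y in range(256) for x in range(256)])

    # Route 1: JPG -> GIF -> JPG. Saving to GIF forces a 256-color palette.
    gif_stage = roundtrip(src, "GIF")
    via_gif = roundtrip(gif_stage, "JPEG", quality=100)

    # Route 2: reduce to 256 colors in place, then save as JPG.
    via_256 = roundtrip(src.quantize(colors=256).convert("RGB"),
                        "JPEG", quality=100)

    print(unique_colors(src), unique_colors(gif_stage),
          unique_colors(via_gif), unique_colors(via_256))
    ```

    The GIF stage caps the count at 256 either way; the final JPEG save is
    what regrows the color count, since its DCT rounding creates new
    intermediate hues.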


    | Why would a sharpen add so many unique colors to the image?
    |
    Sharpen highlights the difference along edges. If you
    take a simple image -- a single color background with
    a different color rectangle in the middle, say, a sharpen
    will add 1 or more lines of new colors where the 2 colors
    meet. In a photo you're doing that with a slightly different
    hue for each pixel comparison.

    Thanks for explaining why sharpen adds colors by changing the pixels along
    the edges of the objects in the image. I don't know what a "resampling" is,
    but most images uploaded to the Internet are likely resized, where
    Irfanview turns on resampling automatically with resize.

    To check the effect on entropy that resizing has, I just ran this test
    on that last JPG-256-JPG->90% compressed image to gauge the effect.

    [5a] Resize/Resample [4] -> 800x600 (no apply resharpen)(no compression)
    Compression: JPEG, quality: 100, subsampling ON (2x2)
    Original size: 800 x 600 Pixels (4:3)
    Current size: 800 x 600 Pixels (4:3)
    Print size (from DPI): 28.2 x 21.2 cm; 11.11 x 8.33 inches
    Original colors: 16,7 Million (24 BitsPerPixel)
    Current colors: 16,7 Million (24 BitsPerPixel)
    Number of unique colors: 70046
    Disk size: 340.72 KB (348,902 Bytes)

    That's a reasonable size for a typical Internet upload but let me see what
    the difference would be had I done the same steps but with resampling.

    [5b] Resize/Resample [4] -> 800x600 (yes apply resharpen)(no compression)
    Compression: JPEG, quality: 100, subsampling ON (2x2)
    Original size: 800 x 600 Pixels (4:3)
    Current size: 800 x 600 Pixels (4:3)
    Print size (from DPI): 28.2 x 21.2 cm; 11.11 x 8.33 inches
    Original colors: 16,7 Million (24 BitsPerPixel)
    Current colors: 16,7 Million (24 BitsPerPixel)
    Number of unique colors: 73459
    Disk size: 386.02 KB (395,283 Bytes)

    Which means the "Apply sharpen after Resample" doesn't seem to change much.

    BTW, it must be important to apply the sharpen after the resample because
    it's the default for Irfanview.
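    On the "what is resampling" question: resampling means each output pixel
    is computed as a weighted blend of several input pixels, instead of just
    picking the nearest existing one, so a resample alone already rewrites
    nearly every pixel value. A minimal Python/Pillow sketch (Lanczos is one
    common resampling filter; Irfanview's default may differ):

    ```python
    from PIL import Image

    # Stand-in for a photo.
    img = Image.new("RGB", (400, 300))
    img.putdata([((x + y) % 256, x % 256, y % 256)
                 for y in range(300) for x in range(400)])

    # Plain resize without resampling: nearest-neighbor only copies
    # existing pixels, so no new colors can appear.
    nearest = img.resize((80, 60), Image.NEAREST)

    # Resampling (Lanczos here) interpolates between neighbors,
    # so almost every output pixel is a newly computed hue.
    lanczos = img.resize((80, 60), Image.LANCZOS)

    print(nearest.size, lanczos.size)  # (80, 60) (80, 60)
    ```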

    | But of course, I have no way of knowing if this papers over unique camera
    | sensor flaws that were in the original image that moved forward
    | throughout.
    No. You should have virtually all new pixels, but I don't
    know how their method works.

    I also do NOT know how their method works (for example, if you flip an
    image horizontally, does their method still work?) but from what I've
    gleaned over the years, each sensor has unique flaws that they can detect.

    If we assume that flaw is a "hole", for example, and if we assume that
    hole (for argument's sake) is dead center in the middle of the image, then that
    one pixel in the middle of the image will be "00000000" (for example).

    I can move that flaw to some other spot by tilting and cropping but
    (unless cropped out) that flaw would still exist with tilting and cropping
    alone.

    I'd have to run another step to paper over that flaw entirely so that it's
    no longer 00000000. What Irfanview function do you think can do that best?
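    One candidate is a median filter rather than a blur (whether Irfanview
    exposes one directly I'd have to check in its Effects menu; the sketch
    below uses Python/Pillow just to show the principle):

    ```python
    from PIL import Image, ImageFilter

    # Flat gray patch with one simulated "dead" (black) sensor pixel.
    img = Image.new("RGB", (9, 9), (120, 120, 120))
    img.putpixel((4, 4), (0, 0, 0))

    # A 3x3 median filter replaces each pixel with the median of its
    # neighborhood; a single outlier can never be the median, so the
    # hole disappears instead of being smeared around like a blur does.
    fixed = img.filter(ImageFilter.MedianFilter(size=3))
    print(fixed.getpixel((4, 4)))  # (120, 120, 120)
    ```

    That is why median filtering is the classic tool for isolated stuck or
    dead pixels, while a Gaussian blur merely spreads the defect out.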

    | Since the DISPLAY is a BMP, would you think it useful to paper over unique
    | camera sensor flaws by snapping a screenshot and replacing it with that?


    A screenshot of the desktop? That should give you
    just the same bitmap.

    Oh no!

    I didn't realize that a screenshot gets me the exact same bitmap.
    Are you sure?

    The reason I ask is not every screen has the same resolution as the image.
    So how can it be that they're both exactly the same?

    If the image has resolution of, for example, 2X the screen, then how can
    the result be the same since the screen itself can't resolve more than 1X?

    I was hoping a screenshot would give me the unique flaws in my screen,
    instead of the unique flaws in my camera since the resolution is different.

    I should say I don't really understand resolution or DPI so I might be
    confusing the two because they both seem to be the same thing to me.

    A screenshot will just send you the
    byte values being sent to the graphic hardware.

    This confuses me because I don't know enough about digital images to
    understand it.

    I'm trying to understand why a screenshot won't help by reducing uniqueness
    (as it always seemed to me that it is no longer the original image).

    If, for example, there's a "hole" in the center of my camera sensor such
    that the middle pixel is "0000000" on the actual image, is a screenshot of
    that original image (saved to a new file) also going to have that hole?

    | I think we need to fundamentally do a few things, but I'm not sure.

    Maybe put it in a locked box, seal that with wax, then
    bury it under your garage floor. :)

    The first step in protection is simply knowing what that first step is.
    Those who don't know about this issue can't even protect against it.
    At least we can try. They can't. So we're ahead of them.

    | We need to paper over the unique flaws with blurring somehow.

    Are they flaws? Could they be something like particular
    hues that can never show up? I don't know.

    I also do not know. But I'm aware they're always writing papers on new
    techniques which work better, so I think the uniqueness of the camera
    sensor is what they detect - but they do so via different methods.

    I think they're at the point today where any image that isn't modified by
    the user can definitely be traced to the exact camera that took it.

    That alone is a certainty. What I do not know is how much (or how little)
    of a modification does one need to balance the effort against the results.

    Given every picture that's uploaded has to be modified anyway (for sheer
    size alone if for no other reason), it's easy enough to add more steps.

    without knowing
    the method of inspection you're in the dark. But I'm guessing
    your method will have changed the precise hue values of nearly
    every pixel. So... pretty much a new image that just happens
    to look the same to your eye.

    I use a combination of steps (most of which have been mentioned but not all
    of them) where it's all done in Irfanview (for convenience) and where you
    have helped me a lot to understand what Irfanview can do for papering
    over flaws.

    One test I can try is to find a high-resolution photo on the net, and
    then modify that and post it back & see, if in a few months, my modification
    shows up when I do a search on the reverse image search engines.

    I could do two images, one as a control (which is not modified), and
    another image from the same source (which is modified).

    But that wouldn't test forensics so much as how good the image search
    engine is (where cropping & flipping foils many image search engines).

  • From Newyana2@21:1/5 to Peter on Sat Dec 2 08:54:00 2023
    XPost: rec.photo.digital, alt.comp.freeware

    "Peter" <confused@nospam.net> wrote

    | One way we can probably check our entropy is to pick a photo from the
    | Internet and then run our commands on that, & then upload it somewhere.
    |

    Recall that this is your project, not ours. :) I rarely take
    photos and even more rarely upload them.

    | > A screenshot of the desktop? That should give you
    | > just the same bitmap.
    |
    | Oh no!
    |
    | I didn't realize that a screenshot gets me the exact same bitmap.
    | Are you sure?
    |
    | The reason I ask is not every screen has the same resolution as the image.
    | So how can it be that they're both exactly the same?
    |

    Resolution is meaningless for a DIB. I-V may say the image
    is 300 dpi, for example, but that only applies when it's printed
    at 300 dpi. Inches are an abstraction on a computer screen.

    The typical display setting is 96 dpi, but that depends on your
    screen size setting. If you have an image 200x200 and it looks,
    say, 2" square on your monitor at 1920 wide, if you then change
    your display to something like 800x600 then your image will be
    more than 4" wide.

    The graphics driver sends data for 200x200 pixels to the screen.
    The image data is not changed. If you watch a movie with John
    Wayne on a 17" TV or a 50" TV, how wide is John Wayne? (Sounds
    like a good koan.) Regardless of the TV, the film projected to show
    the movie is not changed.

    | If, for example, there's a "hole" in the center of my camera sensor such
    | that the middle pixel is "0000000" on the actual image, is a screenshot of
    | that original image (saved to a new file) also going to have that hole?
    |

    Yes, of course. If you take a photo of a tree it's not going
    to turn into a car just because you intended to take a photo
    of a car. The whole thing is byte data. If there's a flaw causing
    a black pixel then there's a black pixel.

    If you take RAW photos then you can change them quite a bit
    in saving to JPG because there's a lot more data there. The color
    space is larger starting out. But even then, a black dot in the middle
    is still part of the image. Will it be dithered out in reducing?
    Probably, but I can't say for sure.

    You have to get used to there being no absolute truth when
    it comes to computer graphics. It's all about creating images out
    of dots that each represent hues in a limited color spectrum. They're
    not the real hues based on reflected light. They're an approximation
    of the range of colors the human eye sees. The color sensor is biased.
    For example, a bumblebee might see bright blue stripes on a magenta
    flower. Maybe it won't see the magenta. I don't know. So what color
    is the flower? Black with blue stripes, or magenta? The actual light
    reflected cannot be fully perceived by either the bumblebee or a
    human. So that's just an abstraction for practical purposes. The
    color sensor in your camera will be designed to record colors in
    a range that you can see. That data is then further reduced by
    converting it to a numeric value in a limited range. There's no flower
    in your photo. There's only a long string of bytes that represent
    RGB values for dots on a grid. Depending on your monitor, display
    driver, eyesight, etc, you'll see a facsimile of that flower on your
    screen. Even on the same computer I see different graphics if I
    boot Windows vs Linux. Yet the byte data is the same.

  • From Peter@21:1/5 to Newyana2@invalid.nospam on Sat Dec 2 15:57:33 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Newyana2 <Newyana2@invalid.nospam> wrote:
    | One way we can probably check our entropy is to pick a photo from the
    | Internet and then run our commands on that, & then upload it somewhere.
    |

    Recall that this is your project, not ours. :) I rarely take
    photos and even more rarely upload them.

    Oh. No problem. I didn't mean at all to sign you up. It's my crusade.

    I had meant the "we" as in a royal we (as in anyone), and not as in you and
    me.

    My observation was that if we (anyone) can't even fool
    reverse-image-search engines, though, then we (anyone) didn't do it right
    because they're easy
    to fool.

    A screenshot of the desktop? That should give you
    just the same bitmap.
    |
    | I didn't realize that a screenshot gets me the exact same bitmap.
    | Are you sure?
    |
    | The reason I ask is not every screen has the same resolution as the image.
    | So how can it be that they're both exactly the same?
    |

    Resolution is meaningless for a DIB. I-V may say the image
    is 300 dpi, for example, but that only applies when it's printed
    at 300 dpi. Inches are an abstraction on a computer screen.

    Thanks for letting me know that resolution is meaningless for a digital
    image in terms of our (the royal our) goal of protecting against being
    identified by our camera sensor pixel imperfections.

    The typical display setting is 96 dpi, but that depends on your
    screen size setting. If you have an image 200x200 and it looks,
    say, 2" square on your monitor at 1920 wide, if you then change
    your display to something like 800x600 then your image will be
    more than 4" wide.

    That's what confuses me because, in that scenario you describe above, my
    question is if I screenshot the whole screen (and crop to the 2" image)
    when the image is 2" square versus if I screenshot the whole screen (and
    crop to the 4" image), you seem to have been saying the resulting two
    images are EXACTLY the same.

    Are they?

    It's super important to nail down the answer to that question!!!!!!
    If you answer no other question ever again, that's the most important.

    The graphics driver sends data for 200x200 pixels to the screen.
    The image data is not changed. If you watch a movie with John
    Wayne on a 17" TV or a 50" TV, how wide is John Wayne? (Sounds
    like a good koan.) Regardless of the TV, the film projected to show
    the movie is not changed.

    I didn't realize that the image will be the same when you do this, but can
    you simply confirm with a yes or no given this scenario below.

    [1] Your starting point is an image from your camera
    [2] You screenshot it on Display1 and save the full-screen results
    (and then you crop away the extraneous Windows blue desktop background)
    [3] Same thing on Display2.

    Are you saying that all three images will be exactly the same pixels?
    How can that be given what is saved is always smaller in file size than
    the original?

    This is the most important question for us (anyone) to understand I think.

    | If, for example, there's a "hole" in the center of my camera sensor such
    | that the middle pixel is "0000000" on the actual image, is a screenshot of
    | that original image (saved to a new file) also going to have that hole?

    Yes, of course. If you take a photo of a tree it's not going
    to turn into a car just because you intended to take a photo
    of a car. The whole thing is byte data. If there's a flaw causing
    a black pixel then there's a black pixel.

    That's not good if the screenshot reproduces the image perfectly.
    But it can't because the screenshot isn't even the same file size.
    Something is missing in my understanding.

    If you take RAW photos then you can change them quite a bit
    in saving to JPG because there's a lot more data there. The color
    space is larger starting out. But even then, a black dot in the middle
    is still part of the image. Will it be dithered out in reducing?
    Probably, but I can't say for sure.

    If that's the case, I think tilting & cropping & flipping (if possible)
    will just move the black dot. Maybe the Irfanview blurring may help paper
    over the black dot.

    A far deeper question (for another time) may be which blurring technique
    is most effective for our (anyone's) purposes, where that's for a later
    discussion as it could get too technical for me really quickly (gaussian,
    bokeh, quantized, etc) as I see those options in Windows image editors.

    You have to get used to there being no absolute truth when
    it comes to computer graphics. It's all about creating images out
    of dots that each represent hues in a limited color spectrum. They're
    not the real hues based on reflected light. They're an approximation
    of the range of colors the human eye sees. The color sensor is biased.
    For example, a bumblebee might see bright blue stripes on a magenta
    flower. Maybe it won't see the magenta. I don't know. So what color
    is the flower? Black with blue stripes, or magenta? The actual light
    reflected cannot be fully perceived by either the bumblebee or a
    human.

    What you're trying to tell me, I think, is that the colors I see or that
    the monitor sees isn't what colors are in the original image.

    Do you think Irfanview "auto adjust colors" will help?

    Or does that too do nothing to the saved image's pixel values?

    So that's just an abstraction for practical purposes. The
    color sensor in your camera will be designed to record colors in
    a range that you can see. That data is then further reduced by
    converting it to a numeric value in a limited range. There's no flower
    in your photo. There's only a long string of bytes that represent
    RGB values for dots on a grid. Depending on your monitor, display
    driver, eyesight, etc, you'll see a facsimile of that flower on your
    screen. Even on the same computer I see different graphics if I
    boot Windows vs Linux. Yet the byte data is the same.

    Thanks for all your help as all I'm trying to do is solve the problem.

    The main question to pin down the answer to is if the screenshot of the
    image is the exact same pixel values, why is the screenshot a different
    size than the original image?

  • From Newyana2@21:1/5 to Peter on Sat Dec 2 19:37:45 2023
    XPost: rec.photo.digital, alt.comp.freeware

    "Peter" <confused@nospam.net> wrote

    | That's what confuses me because, in that scenario you describe above, my
    | question is if I screenshot the whole screen (and crop to the 2" image)
    | when the image is 2" square versus if I screenshot the whole screen (and
    | crop to the 4" image), you seem to have been saying the resulting two
    | images are EXACTLY the same.
    |
    | Are they?
    |
    | It's super important to nail down the answer to that question!!!!!!
    | If you answer no other question ever again, that's the most important.
    |

    It's the same 200x200 image either way. Only the
    display is different.


    | > The graphics driver sends data for 200x200 pixels to the screen.
    | > The image data is not changed. If you watch a movie with John
    | > Wayne on a 17" TV or a 50" TV, how wide is John Wayne? (Sounds
    | > like a good koan.) Regardless of the TV, the film projected to show
    | > the movie is not changed.
    |
    | I didn't realize that the image will be the same when you do this, but can
    | you simply confirm with a yes or no given this scenario below.
    |
    | [1] Your starting point is an image from your camera
    | [2] You screenshot it on Display1 and save the full-screen results
    | (and then you crop away the extraneous Windows blue desktop background)
    | [3] Same thing on Display2.
    |
    | Are you saying that all three images will be exactly the same pixels?
    | How can that be given what is saved is always smaller in file size than
    | the original?
    |
    They should be, if you save it to BMP. If you save to JPG
    then you're changing the image.

    | What you're trying to tell me, I think, is that the colors I see or that
    | the monitor sees isn't what colors are in the original image.
    |
    There's no absolute color. The image saves byte
    values that represent RGB. Try making a very simple BMP
    and open the file in a hex editor. After the header bytes
    you can see the bytes representing color. For example, if
    you save an image of pure blue then the bytes will appear
    as FF 00 00 FF 00 00 FF 00 00 and so on.
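    That's easy to verify programmatically, too. A sketch in Python with the
    Pillow library (note the byte order: a 24-bit BMP stores channels as
    blue-green-red, which is why pure blue reads FF 00 00):

    ```python
    from PIL import Image
    import io

    # Save a tiny pure-blue image as an uncompressed 24-bit BMP.
    buf = io.BytesIO()
    Image.new("RGB", (3, 1), (0, 0, 255)).save(buf, "BMP")
    data = buf.getvalue()

    # The pixel-array offset is stored at bytes 10..13 of the file header;
    # for a plain 24-bit BMP it is 54 (14-byte file header + 40-byte
    # info header), and the pixels follow in BGR order.
    offset = int.from_bytes(data[10:14], "little")
    print(offset)                            # 54
    print(data[offset:offset + 9].hex(" "))  # ff 00 00 ff 00 00 ff 00 00
    ```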

    Now what if you send that to a B/W TV screen? It will
    be gray. If you have eye problems so that you can't see
    blue then you'll perhaps see green. I don't know.

    | The main question to pin down the answer to is if the screenshot of the
    | image is the exact same pixel values, why is the screenshot a different
    | size than the original image?

    I explained that above. The screenshot is the same size
    in terms of pixels. How those pixels are displayed is a different
    issue. If you watch a movie on TV or in a theater it's a different
    size, right? It's not different images. It's different display size.

  • From Peter@21:1/5 to Newyana2@invalid.nospam on Sun Dec 3 02:03:14 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Newyana2 <Newyana2@invalid.nospam> wrote:
    | The main question to pin down the answer to is if the screenshot of the
    | image is the exact same pixel values, why is the screenshot a different
    | size than the original image?

    I explained that above. The screenshot is the same size
    in terms of pixels. How those pixels are displayed is a different
    issue. If you watch a movie on TV or in a theater it's a different
    size, right? It's not different images. It's different display size.

    I appreciate narrowing down the questions to the single most important one.

    In my tests, a screenshot is completely different than the original image.
    https://i.postimg.cc/05J1xDrG/woodcock1.jpg
    https://i.postimg.cc/vHFFmT6J/woodcock2.jpg
    Yet you keep saying the screenshot results in the same as the original.

    You know more than I do so you're likely right - but what am I doing wrong?
    So I just ran this repeatable experiment which I'd ask you to test out.

    1. I arbitrarily picked this image of a woodcock sitting amongst greenery.
    https://appvoices.org/images/uploads/2016/04/woodcock.jpg

    Let's never edit or re-save that image ever again.
    So I made a copy called "woodcock1.jpg" instead.

    Compression: JPEG, quality: 94, subsampling OFF
    Original size: 4200 x 3080 Pixels (12.94 MPixels) (1.36)
    Current size: 4200 x 3080 Pixels (12.94 MPixels) (1.36)
    Original colors: 16,7 Million (24 BitsPerPixel)
    Current colors: 16,7 Million (24 BitsPerPixel)
    Number of unique colors: 329007
    Disk size: 3.74 MB (3,925,976 Bytes)

    2. In Irfanview, I display that woodcock1.jpg image on my first display.
    In Irfanview, I adjust the window borders to about 4 inches square.
    In Irfanview I fit that woodcock1.jpg image to that 4x4 inch window by
    using Irfanview -> View -> Display options -> Fit images to window

    This is what that looks like on the first of the two screens.
    https://i.postimg.cc/4yNbvf6w/fourinchesonscreen.jpg

    Then I press the keyboard "printscreen" button.
    This actually prints both screens (I don't know how to print just one).
    I paste those results back over the Irfanview image.
    And I crop out both of the Windows blue backgrounds.
    Until I am back to just the image without the Irfanview window borders.
    I save that resulting Irfanview image as woodcock2.jpg (no compression).

    Compression: JPEG, quality: 100, subsampling ON (2x2)
    Original size: 314 x 230 Pixels (1.36)
    Current size: 314 x 230 Pixels (1.36)
    Original colors: 16,7 Million (24 BitsPerPixel)
    Current colors: 16,7 Million (24 BitsPerPixel)
    Number of unique colors: 32347
    Disk size: 73.89 KB (75,661 Bytes)

    The two images (woodcock1.jpg & woodcock2.jpg) are completely different
    even though the second image is just a screenshot of the first image.

    The reason I'm confused when you say the screenshot is the same bitmap as
    the image is that there's no way those two files are close in any way.

    I must not be understanding what you mean then when you say that the
    screenshot of the image will just get me back to the bits in the image.

    Why are my screenshot results completely different from the original then?
    https://appvoices.org/images/uploads/2016/04/woodcock.jpg
    https://i.postimg.cc/05J1xDrG/woodcock1.jpg
    https://i.postimg.cc/4yNbvf6w/fourinchesonscreen.jpg
    https://i.postimg.cc/vHFFmT6J/woodcock2.jpg

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Newyana2@21:1/5 to Peter on Sat Dec 2 22:53:56 2023
    XPost: rec.photo.digital, alt.comp.freeware

    "Peter" <confused@nospam.net> wrote

    | Why are my screenshot results completely different from the original then?
    | https://appvoices.org/images/uploads/2016/04/woodcock.jpg
    | https://i.postimg.cc/05J1xDrG/woodcock1.jpg
    | https://i.postimg.cc/4yNbvf6w/fourinchesonscreen.jpg
    | https://i.postimg.cc/vHFFmT6J/woodcock2.jpg

    As I said, if you save to JPG it will be different. Even
    zero compression still compresses and will involve data loss.
    But isn't this a sidetrack? It's not relevant to use
    screenshots. The point is just to find any way to change
    the pixels, then find software to test whether it worked.
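    One way to test whether it worked, sketched here with plain lists of
    RGB tuples rather than any particular imaging library (how you get the
    pixel lists out of the two files is up to whatever software you use):

```python
def pixels_changed(before, after):
    """Count how many pixel positions differ between two
    equal-sized images given as flat lists of RGB tuples."""
    if len(before) != len(after):
        raise ValueError("images must have the same pixel count")
    return sum(1 for a, b in zip(before, after) if a != b)

original = [(10, 20, 30)] * 8
edited = list(original)
edited[3] = (10, 20, 31)  # a one-bit tweak in the blue channel
print(pixels_changed(original, edited))  # → 1
```

    If the count is zero, the edit changed nothing at the pixel level,
    whatever the file sizes happen to be.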

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter@21:1/5 to Newyana2@invalid.nospam on Sun Dec 3 15:41:59 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Newyana2 <Newyana2@invalid.nospam> wrote:
    | Why are my screenshot results completely different from the original then?
    | https://appvoices.org/images/uploads/2016/04/woodcock.jpg
    | https://i.postimg.cc/05J1xDrG/woodcock1.jpg
    | https://i.postimg.cc/4yNbvf6w/fourinchesonscreen.jpg
    | https://i.postimg.cc/vHFFmT6J/woodcock2.jpg

    As I said, if you save to JPG it will be different.

    Thanks for trying to understand. I'm fully aware that JPG is lossy
    compression, so every save at less than full quality loses bits.

    But I wasn't aware that a save with no compression also loses bits.

    But that's a lot of bits to lose in a single save with no compression!
    https://i.postimg.cc/05J1xDrG/woodcock1.jpg = Disk size: 3.74 MB
    https://i.postimg.cc/vHFFmT6J/woodcock2.jpg = Disk size: 73.89 KB
    Especially as the only thing in between was a single screenshot & crop.

    Even zero compression still compresses and will involve data loss.

    Because it is so easy to do, and seemingly it increases entropy a lot,
    the important question to answer is what does a screenshot actually do.

    If a screenshot is so faithful to the bits, why is simply running a single screenshot & cropping back to the image & saving once losing so many bits?

    Even the number of unique colors is hugely different in the screenshot.
    woodcock1.jpg = Disk size: 3.74 MB, 329007 unique colors
    woodcock2.jpg = Disk size: 73.89 KB, 32347 unique colors

    But isn't this a sidetrack? It's not relevant to use screenshots.

    It may be wrong to use a screenshot, in which case it's irrelevant, but a
    screenshot is admittedly very easy and it drastically changes the image.

    The image is so drastically changed that it could maybe be the best method.

    The point is just to find any way to change
    the pixels, then find software to test whether it worked.

    Today I use a combination of methods (most mentioned already) to move all sensor flaws to both a different X location and to a different Y location.

    But of course, moving the flaw (for example, flipping or rotating) will not change the relationship between any two flaws so papering over is needed.

    I use a variety of methods to paper over flaws (blur & sharpen & reduce
    colors for example - which is what sparked this conversation initially).

    But what's most important to narrow down the answer to (because it seems so functionally powerful and it's very easy to do) is what a screenshot does.

    If a screenshot is so faithful to the original, why can't I reproduce that?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul@21:1/5 to Peter on Sun Dec 3 14:13:07 2023
    XPost: rec.photo.digital, alt.comp.freeware

    On 12/3/2023 10:41 AM, Peter wrote:
    [snip]

    If a screenshot is so faithful to the original, why can't I reproduce that?


    The original image is 4200x3080.

    To present that on a 1920x1080 screen, means processing
    a clump of pixels and "summarizing" that content, with
    a new synthetic pixel. Depending on how many pixels are
    being averaged, that's going to "damp" the camera sensor
    signature a bit.
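    A minimal sketch of that "summarizing", assuming a plain box filter
    over grayscale values (a real display scaler may weight the clump
    differently, but the damping effect is the same idea):

```python
def box_downscale(pixels, w, h, factor):
    """Average each factor x factor clump of grayscale pixels
    into one synthetic output pixel (a simple box filter)."""
    out_w, out_h = w // factor, h // factor
    out = []
    for oy in range(out_h):
        for ox in range(out_w):
            clump = [pixels[(oy * factor + dy) * w + (ox * factor + dx)]
                     for dy in range(factor) for dx in range(factor)]
            out.append(sum(clump) // len(clump))
    return out, out_w, out_h

# A lone "hot pixel" (value 255) in a dark 4x4 image...
src = [0] * 16
src[5] = 255
small, _, _ = box_downscale(src, 4, 4, 2)
print(small)  # → [63, 0, 0, 0]: the flaw's amplitude is damped 4x
```

    Going from 4200x3080 down to roughly 314x230 averages clumps of
    well over a hundred source pixels per output pixel, so a single-pixel
    sensor flaw is attenuated far more than in this toy example.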

    Paul

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rocco portelli@21:1/5 to Peter on Sun Dec 3 14:44:30 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Peter <confused@nospam.net> wrote:

    Because it is so easy to do, and seemingly it increases entropy a lot,
    the important question to answer is what does a screenshot actually do.

    I wonder if that screenshot & crop is the same as a resize to 4x4 inches?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter@21:1/5 to Paul on Sun Dec 3 19:42:08 2023
    XPost: rec.photo.digital, alt.comp.freeware

    Paul <nospam@needed.invalid> wrote:
    If a screenshot is so faithful to the original, why can't I reproduce that?

    The original image is 4200x3080.

    To present that on a 1920x1080 screen, means processing
    a clump of pixels and "summarizing" that content, with
    a new synthetic pixel. Depending on how many pixels are
    being averaged, that's going to "damp" the camera sensor
    signature a bit.

    That's why I didn't understand the claim that a screenshot is exactly the
    same pixels. They're not even close in my tests.

    While it's confusing, because people who know more than I do said the
    screenshot is exactly the same, if it does "summarize the content",
    that's an almost perfect way to get rid of explicit flaws, I would think.

    Do you see where I'm going?

    If the goal is to get rid of specific camera sensor flaws (which we have no idea what they are), then it seems that "processing a clump of pixels" as
    if they were a single pixel, is an extremely useful step for that purpose.

    Especially as it takes no more effort or special tools than the pressing of
    a keyboard button and saving the results.

    Does my logic make sense to you given the goal is obfuscating sensor flaws?
    Or am I missing something?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nic@21:1/5 to Paul on Sun Dec 3 14:51:27 2023
    XPost: rec.photo.digital, alt.comp.freeware

    On 12/3/23 2:13 PM, Paul wrote:
    [snip]

    The original image is 4200x3080.

    To present that on a 1920x1080 screen, means processing
    a clump of pixels and "summarizing" that content, with
    a new synthetic pixel. Depending on how many pixels are
    being averaged, that's going to "damp" the camera sensor
    signature a bit.

    Paul
    What happens when Grayscale is used, from camera to jpg or bmp?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul@21:1/5 to Peter on Sun Dec 3 16:51:17 2023
    XPost: rec.photo.digital, alt.comp.freeware

    On 12/3/2023 2:42 PM, Peter wrote:
    [snip]

    Does my logic make sense to you given the goal is obfuscating sensor flaws? Or am I missing something?


    It attenuates the amplitude of the information.

    A question would be, is the effect different to
    start with, with a sensor having fewer pixels (larger area) ?
    Or a sensor manufactured a different way ?
    Like say, CCD versus CMOS, or the cheap kind used
    in webcams, versus other camera types ?

    Paul

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul@21:1/5 to Nic on Sun Dec 3 17:29:46 2023
    XPost: rec.photo.digital, alt.comp.freeware

    On 12/3/2023 2:51 PM, Nic wrote:

    What happens when Grayscale is used, from camera to jpg or bmp?

    JPG processing tends to spray into the colorspace.

    You can study that, by "counting colors" with Irfanview,
    and comparing the number of colors in the source picture,
    versus the number in the output.
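    That color-counting check can be sketched in a few lines, assuming the
    pixels are available as a flat list of (R, G, B) tuples (extracting
    them from a file is up to whatever imaging library you use):

```python
def count_unique_colors(pixels):
    """Count distinct (R, G, B) tuples, like the 'unique colors'
    figure in Irfanview's image information dialog."""
    return len(set(pixels))

# A flat gray source vs. one with JPEG-style "spray" around it:
gray = [(128, 128, 128)] * 100
sprayed = [(128 + (i % 3) - 1, 128, 128) for i in range(100)]
print(count_unique_colors(gray), count_unique_colors(sprayed))  # → 1 3
```

    A grayscale source run through JPG typically comes back with many more
    unique colors than it went in with, which this check makes visible.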

    BMP is transparent, up to the limits of its 24-bit carriage.
    Even with indexed color, you can in some of the cases,
    convert that to 24 bit, without affecting anything.
    Modern hardware, can create more bits than you know
    what to do with.

    PNG has more options for carriage, and can shrink some
    special-case source images, significantly.

    There is a tendency to abuse JPG, and use it for things
    it isn't optimal for. For photography, its sins may be
    a worthwhile tradeoff, if storage space somewhere is at
    a premium. Making it compress cartoon cels isn't a good
    usage of it. (Cartoons might work better as a GIF.)

    Paul

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)