In Irfanview, you can Decrease Color Depth to any level but how can you
either increase it or figure out what the current color depth is set to?
Did you try pressing the key: i (= Information)?
"Peter" <confused@nospam.net> wrote
| In Irfanview, you can Decrease Color Depth to any level but how can you
| either increase it or figure out what the current color depth is set to?
If you haven't changed anything then you'll mostly know
by file type. JPGs will be 24-bit, GIFs are 8-bit, PNGs
32-bit. 24-bit is actually the max in standard usage.
The extra 8 bits in PNGs are for transparency values.
BMPs can be 1-bit to 32-bit, but most people don't
see BMPs much these days.
Do you know what color depth is? A monitor these days
displays 24-bit color, which means 256 levels each of
red, green and blue. 0-0-0 is black. 255-255-255 is white.
(Unless you have one of the cheaper LCD panels, which are
really 18-bit color. That means only 64 levels each for
R, G and B -- it's missing 192 levels out of 256, so it dithers
pixels to the nearest color.)
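If it helps to see the arithmetic, here's a little Python sketch (mine, not from the thread) of what "24-bit" actually means: 8 bits per channel, packed into one number.

```python
# Sketch of 24-bit color: 8 bits (0-255) per channel for red, green
# and blue, so 256**3 possible colors in total.

def pack_rgb(r, g, b):
    """Pack three 0-255 channel values into one 24-bit integer."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Split a 24-bit integer back into (r, g, b)."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

total_colors = 256 ** 3          # 16,777,216 colors
black = pack_rgb(0, 0, 0)        # 0
white = pack_rgb(255, 255, 255)  # 0xFFFFFF
```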
Raster image file formats store those values as numbers
representing a pixel grid. They're all bitmap when they
display, but each format stores the data differently.
Why would you want to change color depth? If you want
to do something like save a JPG as GIF then you'll lose a lot
of the color data. If you do the reverse you won't get more
colors. The original GIF color table entries are all that you'll
see unless you then edit the image in 24-bit.
| In Irfanview, ...
| ... figure out what the current color depth is set to?
| To increase entropy.
|
I looked that up. I don't find any reference to entropy
in graphics. So I'm not sure what you mean.
| If you want
| to do something like save a JPG as GIF then you'll lose a lot
| of the color data.
| Does saving JPG to GIF remove unique camera sensor imperfections?
It will be "dithered" to nearest colors, based on one
of a number of dithering approaches. For example, if
you have a gradient of greens containing 372 hues, it
might get converted to a field of green and white dots.
A GIF can only use 256 colors, so most JPGs would be
severely degraded when saved as GIF, because a JPG
can use 16 million colors.
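To make the "snap to a limited palette" idea concrete, here's a toy Python sketch. It's not how a real GIF encoder works (those build an adaptive 256-entry table from the image and usually dither); it just shows the nearest-color snap against a fixed made-up palette.

```python
# Toy palette quantization: each pixel is replaced by the closest
# entry in a small fixed color table (squared RGB distance).

def nearest(color, palette):
    """Return the palette entry closest to the given (r, g, b) color."""
    r, g, b = color
    return min(palette, key=lambda p: (p[0] - r) ** 2 +
                                      (p[1] - g) ** 2 +
                                      (p[2] - b) ** 2)

palette = [(0, 0, 0), (255, 255, 255), (0, 128, 0), (0, 255, 0)]
pixels = [(3, 10, 2), (10, 140, 5), (250, 248, 251)]
quantized = [nearest(p, palette) for p in pixels]
# A 372-hue green gradient would collapse onto the few green entries.
```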
It helps to understand the basic system of raster images.
In ALL cases they represent pixel grids with numeric RGB
values. That is, all formats store data to light pixels on a
screen with varying intensities of red, green and blue. It's
always a grid, always rectangular. Any image that's not a
rectangle (like an icon or PNG) is still a rectangular bitmap,
but the image format is storing transparency values to be
applied when the image is rendered onscreen. (That's why
displays are often "32-bit". 3 bytes for RGB values and one
byte to indicate transparency.)
The grid of stored pixel values, indicating color intensity of
RGB, is arranged in rows, usually starting at top left. That's
a bitmap. All raster images are bitmaps but can be stored
in different file formats. JPG is arguably a very poor format
because it loses color data, but it's popular because it makes
the smallest files and there are no royalties on the format.
So it's ideal for online. (It's used in cameras for that reason.
For people sending birthday party pictures in email, quality
is not a big factor.)
Once you open an image in a graphic editor you're dealing
with the bitmap, so whatever you alter from there will affect
the image saved as a different file. In the case of JPG, it
compresses the image by smoothing out runs of similar colors in
ways meant to be imperceptible. That's why a bad quality JPG looks like
an image comprised of blocks. The reduction of colors allows
for the data to be stored more compactly, but loses detail.
That data is lost for good.
So if you have a JPG saved at, say, 92 compression (it's
1 to 100. Top quality can be either the high or low number,
depending on the software), then if you open that in an editor
and resave it at 87 compression, there should be no noticeable
difference, but some byte values will be changed. (I like Paint
Shop Pro 5 because it chops off the EXIF data altogether. I
find it creepy to have buried data in a file. It's a privacy
problem.)
So if it were me I'd try resaving the JPG at different
compression, convert both of those to BMP, then compare in
a hex editor to see what you have. If you convert to GIF
you'll ruin the image because it has to dither to a max of 256
colors, while the JPG could have 100,000 colors. So forget GIF.
If you open a JPG in a hex editor it won't be very informative.
It's like looking at a ZIP file. You only see a bloated header and
the compressed state of the data.
If you open and resave as a BMP then you
have the direct data. The first 54 bytes of the BMP file will
contain values indicating color depth, width/height, etc. The
rest is simply the straight grid values. So for a typical 24-bit
BMP image, the next 3 bytes after the header will be the pixel's
channel values in BGR order (rows are usually stored bottom-up, so
that's the bottom left pixel). Example: Bright sky blue is zero red,
half green intensity, and full blue intensity. As a long integer
value that's 16744448. As bytes it's 255-128-0 or 0-128-255.
That can also be written as hex: FF 80 00. If you save a BMP
file which is only that color then you'll see 54 bytes of file
header followed by a repeating pattern of FF 80 00.
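That layout is easy to verify in code. Here's a short Python sketch (illustrative, mine) that builds a minimal 2x2 BMP of exactly that color in memory, then reads the header fields back the way you'd eyeball them in a hex editor:

```python
import struct

# Build a minimal 2x2 24-bit BMP in memory. Pixels are stored BGR,
# so RGB 0-128-255 becomes the bytes FF 80 00.
width, height, bpp = 2, 2, 24
row = bytes([0xFF, 0x80, 0x00] * width)
row += b"\x00" * ((4 - len(row) % 4) % 4)     # rows are padded to 4 bytes
pixel_data = row * height

file_header = struct.pack("<2sIHHI", b"BM", 54 + len(pixel_data), 0, 0, 54)
info_header = struct.pack("<IiiHHIIiiII", 40, width, height, 1, bpp,
                          0, len(pixel_data), 2835, 2835, 0, 0)
bmp = file_header + info_header + pixel_data

# Parse the 54-byte header: width at offset 18, height at 22, depth at 28.
w = struct.unpack_from("<i", bmp, 18)[0]
h = struct.unpack_from("<i", bmp, 22)[0]
depth = struct.unpack_from("<H", bmp, 28)[0]
first_pixel = bmp[54:57]                       # FF 80 00
```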
All raster images work that way. All raster images are bitmaps
in different packaging. A JPG is also a bitmap, but when you
increase compression you'll reduce colors. So if you have, say, a
photo of sky with pixels like 255-80-0, 243-75-22 and 241-83-2, those
three pixels might all get dithered to the single value 243-75-22. Your
eye won't see the difference, but the 3 pixels' values can be more
easily compressed.
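You can measure that effect directly. This Python sketch (mine, and hedged: JPEG really uses DCT quantization, not zlib -- zlib just makes the repetition point measurable) compares how well noisy pixels compress versus the same number of bytes nudged to one repeated value:

```python
import random
import zlib

# Pixels that all differ slightly barely compress; pixels nudged to
# the same value compress dramatically.
random.seed(0)
varied = bytes(random.randrange(256) for _ in range(9000))   # noisy data
uniform = bytes([243, 75, 22] * 3000)                        # one repeated pixel

varied_size = len(zlib.compress(varied))
uniform_size = len(zlib.compress(uniform))
# uniform_size comes out far smaller than varied_size
```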
So you could try that. Check compression level, open the file,
resave at different compression, open both files and resave as
BMPs. Open both BMPs in a hex editor and see how they compare.
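If you want to automate that hex-editor comparison, a few lines of Python will count differing byte positions between the two pixel dumps (a sketch, assuming the two BMPs have the same dimensions):

```python
# Count how many byte positions differ between two equal-size buffers,
# e.g. the pixel data of two BMPs made from differently-saved JPGs.

def count_diffs(a: bytes, b: bytes) -> int:
    return sum(x != y for x, y in zip(a, b))

before = bytes([255, 128, 0, 10, 10, 10])
after = bytes([255, 127, 1, 10, 10, 10])    # two channels nudged by a resave
changed = count_diffs(before, after)         # 2
```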
I can't tell you anything about camera sensors. I don't know about
that. But however they work, it still has to boil down to 24-bit
RGB if you have a JPG. So any tracks left by the camera would
have to be in patterns of pixel values.
I hope that makes sense. It sounds complicated, but it's actually
very simple once you get how it works. All raster images are grids
of pixel RGB values as numbers. It all comes down to numbers,
just as any file does.
Interesting issue. I'd never heard of it.
I wasn't up to fully
reading all those articles, but I have the basic idea. Whether
it will ever be a way to ID the picture taker from an image
is questionable. It seems a bit like doing a DNA test on
someone wearing a name tag. The site already knows who's
uploading in most cases, and cameras are leaking data that allows
for ID. But I can see how this could someday be an issue.
As for introducing entropy, I don't know. There's no sense
having images that are ruined, so you don't want to change
too much. And this doesn't look like entropy to me. Rather, they're
looking for identifiable patterns of distortion. What would that be?
Maybe the sensor never reports certain hue values? I don't know.
I think you'll just have to test, after coming up with some way
to gauge how unique the ID traces are. You don't know if your
method is helping unless you know what to look for.
JPG -> GIF -> JPG will ruin nearly all images.
Any JPG resaving will change the image because it's
a lossy format. But other formats are not lossy. So it's
in the JPG resaving that you'll get the most change.
The BMP saving would only be for inspecting byte changes
for pixel values. There's no other advantage to BMP. The
image displayed is already a BMP, anyway. It's the display
of what's called a DIB -- device independent bitmap. That
is, just the pixel bytes.
So resave your JPG, stripping the header and reducing
quality slightly. Then see what you have by saving that
as a BMP. But I have no idea how you'll assess the uniqueness.
If you had some formula for that you could probably
automate it by processing the byte value patterns. But
that means getting some source code for the software that
will supposedly do the job.
In other words, if you build up
an ID for your camera from multiple images then you can
test your alteration against that, but without having that,
I don't know how you'll identify exactly what bytes in the
image are giving you away, and why.
I tried a few myself.
Surprisingly they look fine!
One is a photo of a woodcock sitting amongst greenery. Lots of hues.
Yet the GIF looks OK.
And as you noted, the color count increases when it's
converted back. Somehow the JPG conversion brings
back some sharpness.
| Why would a sharpen add so many unique colors to the image?
|
Sharpen highlights the difference along edges. If you
take a simple image -- a single color background with
a different color rectangle in the middle, say, a sharpen
will add 1 or more lines of new colors where the 2 colors
meet. In a photo you're doing that with a slightly different
hue for each pixel comparison.
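Here's a toy 1-D version of that in Python (mine, just to show the overshoot): a simple sharpen of "2x the center minus the neighbor average" mints values at an edge that were never in the input.

```python
# Toy 1-D sharpen: overshoot at edges produces brand-new values,
# which is why sharpening a photo adds so many unique colors.

def sharpen(pixels):
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        v = 2 * pixels[i] - (pixels[i - 1] + pixels[i + 1]) // 2
        out[i] = max(0, min(255, v))       # clamp to the 0-255 range
    return out

edge = [10, 10, 10, 200, 200, 200]         # a hard two-color edge
result = sharpen(edge)                     # new values appear at the edge
```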
| But of course, I have no way of knowing if this papers over unique camera
| sensor flaws that were in the original image that moved forward
| throughout.
No. You should have virtually all new pixels, but I don't
know how their method works.
| Since the DISPLAY is a BMP, would you think it useful to paper over unique
| camera sensor flaws by snapping a screenshot and replacing it with that?
A screenshot of the desktop? That should give you
just the same bitmap.
A screenshot will just send you the
byte values being sent to the graphic hardware.
| I think we need to fundamentally do a few things, but I'm not sure.
Maybe put it in a locked box, seal that with wax, then
bury it under your garage floor. :)
| We need to paper over the unique flaws with blurring somehow.
Are they flaws? Could they be something like particular
hues that can never show up? I don't know.
Without knowing
the method of inspection you're in the dark. But I'm guessing
your method will have changed the precise hue values of nearly
every pixel. So... pretty much a new image that just happens
to look the same to your eye.
| One way we can probably check our entropy is to pick a photo from the
| Internet and then run our commands on that, & then upload it somewhere.
|
Recall that this is your project, not ours. :) I rarely take
photos and even more rarely upload them.
| A screenshot of the desktop? That should give you
| just the same bitmap.
| I didn't realize that a screenshot gets me the exact same bitmap.
| Are you sure?
|
| The reason I ask is not every screen has the same resolution as the image.
| So how can it be that they're both exactly the same?
|
Resolution is meaningless for a DIB. IrfanView may say the image
is 300 dpi, for example, but that only applies when it's printed
at 300 dpi. Inches are an abstraction on a computer screen.
The typical display setting is 96 dpi, but that depends on your
screen size setting. If you have an image 200x200 and it looks,
say, 2" square on your monitor at 1920 wide, if you then change
your display to something like 800x600 then your image will be
more than 4" wide.
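The arithmetic behind that is simple enough to sketch in Python (mine; the 20-inch screen width is a made-up assumption just to make the numbers work out):

```python
# How many physical inches a 200-pixel image covers depends on the
# effective pixels-per-inch, which drops when the same monitor is
# driven at a lower resolution.

def apparent_inches(image_px, screen_px, screen_inches=20.0):
    ppi = screen_px / screen_inches      # effective pixels per inch
    return image_px / ppi

at_1920 = apparent_inches(200, 1920)     # about 2.08 inches
at_800 = apparent_inches(200, 800)       # 5.0 inches
```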
The graphics driver sends data for 200x200 pixels to the screen.
The image data is not changed. If you watch a movie with John
Wayne on a 17" TV or a 50" TV, how wide is John Wayne? (Sounds
like a good koan.) Regardless of the TV, the film projected to show
the movie is not changed.
| If, for example, there's a "hole" in the center of my camera sensor such
| that the middle pixel is "0000000" on the actual image, is a screenshot of
| that original image (saved to a new file) also going to have that hole?
Yes, of course. If you take a photo of a tree it's not going
to turn into a car just because you intended to take a photo
of a car. The whole thing is byte data. If there's a flaw causing
a black pixel then there's a black pixel.
If you take RAW photos then you can change them quite a bit
in saving to JPG because there's a lot more data there. The color
space is larger starting out. But even then, a black dot in the middle
is still part of the image. Will it be dithered out in reducing?
Probably, but I can't say for sure.
You have to get used to there being no absolute truth when
it comes to computer graphics. It's all about creating images out
of dots that each represent hues in a limited color spectrum. They're
not the real hues based on reflected light. They're an approximation
of the range of colors the human eye sees. The color sensor is biased.
For example, a bumblebee might see bright blue stripes on a magenta
flower. Maybe it won't see the magenta. I don't know. So what color
is the flower? Black with blue stripes, or magenta? The actual light
reflected cannot be fully perceived by either the bumblebee or a human.
So that's just an abstraction for practical purposes. The
color sensor in your camera will be designed to record colors in
a range that you can see. That data is then further reduced by
converting it to a numeric value in a limited range. There's no flower
in your photo. There's only a long string of bytes that represent
RGB values for dots on a grid. Depending on your monitor, display
driver, eyesight, etc, you'll see a facsimile of that flower on your
screen. Even on the same computer I see different graphics if I
boot Windows vs Linux. Yet the byte data is the same.
| The main question to pin down the answer to is if the screenshot of the
| image is the exact same pixel values, why is the screenshot a different
| size than the original image?
I explained that above. The screenshot is the same size
in terms of pixels. How those pixels are displayed is a different
issue. If you watch a movie on TV or in a theater it's a different
size, right? It's not different images. It's different display size.
| Why are my screenshot results completely different from the original then?
| https://appvoices.org/images/uploads/2016/04/woodcock.jpg
| https://i.postimg.cc/05J1xDrG/woodcock1.jpg
| https://i.postimg.cc/4yNbvf6w/fourinchesonscreen.jpg
| https://i.postimg.cc/vHFFmT6J/woodcock2.jpg
As I said, if you save to JPG it will be different.
Even zero compression still compresses and will involve data loss.
But isn't this a sidetrack? It's not relevant to use screenshots.
The point is just to find any way to change
the pixels, then find software to test whether it worked.
Newyana2 <Newyana2@invalid.nospam> wrote:
| Why are my screenshot results completely different from the original then?
| https://appvoices.org/images/uploads/2016/04/woodcock.jpg
| https://i.postimg.cc/05J1xDrG/woodcock1.jpg
| https://i.postimg.cc/4yNbvf6w/fourinchesonscreen.jpg
| https://i.postimg.cc/vHFFmT6J/woodcock2.jpg
| As I said, if you save to JPG it will be different.
Thanks for trying to understand. I'm fully aware that JPG is lossy
compression, so every save at less than zero compression loses bits.
But I wasn't aware that a save with no compression also loses bits.
And that's a lot of bits to lose in a single save with no compression!
https://i.postimg.cc/05J1xDrG/woodcock1.jpg = Disk size: 3.74 MB
https://i.postimg.cc/vHFFmT6J/woodcock2.jpg = Disk size: 73.89 KB
Especially as the only thing in between was a single screenshot & crop.
| Even zero compression still compresses and will involve data loss.
Because it is so easy to do, and seemingly it increases entropy a lot,
the important question to answer is what does a screenshot actually do.
If a screenshot is so faithful to the bits, why does simply running a
single screenshot & cropping back to the image & saving once lose so
many bits?
Even the number of unique colors is hugely different in the screenshot.
woodcock1.jpg = Disk size: 3.74 MB, 329007 unique colors
woodcock2.jpg = Disk size: 73.89 KB, 32347 unique colors
| But isn't this a sidetrack? It's not relevant to use screenshots.
It may be wrong to use a screenshot and then it's irrelevant, but using a screenshot is admittedly very easy and it drastically changes the image.
The image is so drastically changed that it could maybe be the best method.
| The point is just to find any way to change
| the pixels, then find software to test whether it worked.
Today I use a combination of methods (most mentioned already) to move all sensor flaws to both a different X location and to a different Y location.
But of course, moving the flaw (for example, flipping or rotating) will not change the relationship between any two flaws so papering over is needed.
I use a variety of methods to paper over flaws (blur & sharpen & reduce colors for example - which is what sparked this conversation initially).
But what's most important to narrow down (because it seems so
functionally powerful and it's very easy to do) is what a screenshot
actually does.
If a screenshot is so faithful to the original, why can't I reproduce that?
| Because it is so easy to do, and seemingly it increases entropy a lot,
| the important question to answer is what does a screenshot actually do.
| If a screenshot is so faithful to the original, why can't I reproduce that?
The original image is 4200x3080.
To present that on a 1920x1080 screen means processing
a clump of pixels and "summarizing" that content, with
a new synthetic pixel. Depending on how many pixels are
being averaged, that's going to "damp" the camera sensor
signature a bit.
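That averaging idea can be shown in miniature. Here's a Python sketch (mine; real scalers use fancier filters than a plain 2x2 box average, so treat this as a guess at the principle): each 2x2 clump of source pixels becomes one synthetic pixel, which dilutes a single-pixel sensor quirk.

```python
# Box-filter downscale: average non-overlapping 2x2 blocks of a grid
# of gray values into one pixel each. A lone "dead pixel" gets smeared
# into its neighbors' average.

def downscale_2x(grid):
    out = []
    for y in range(0, len(grid), 2):
        row = []
        for x in range(0, len(grid[0]), 2):
            block = (grid[y][x] + grid[y][x + 1] +
                     grid[y + 1][x] + grid[y + 1][x + 1])
            row.append(block // 4)
        out.append(row)
    return out

src = [[100, 100, 200, 200],
       [100,   0, 200, 200],   # the lone 0 is a "dead pixel" flaw
       [ 50,  50,  50,  50],
       [ 50,  50,  50,  50]]
small = downscale_2x(src)       # the flaw is diluted into an average
```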
On 12/3/2023 10:41 AM, Peter wrote:
| What happens when Grayscale is used, from camera to jpg or bmp?
Paul
Paul <nospam@needed.invalid> wrote:
| If a screenshot is so faithful to the original, why can't I reproduce that?
|
| The original image is 4200x3080.
| To present that on a 1920x1080 screen, means processing
| a clump of pixels and "summarizing" that content, with
| a new synthetic pixel. Depending on how many pixels are
| being averaged, that's going to "damp" the camera sensor
| signature a bit.
That's why I didn't understand the earlier claim that a screenshot is
exactly the same pixels. They're not even close in my tests.
It's confusing, because people who know more than I do said the
screenshot is exactly the same. But if it really does "summarize the
content", that's an almost perfect way to get rid of explicit flaws, I
would think.
Do you see where I'm going?
If the goal is to get rid of specific camera sensor flaws (which we have no idea what they are), then it seems that "processing a clump of pixels" as
if they were a single pixel, is an extremely useful step for that purpose.
Especially as it takes no more effort or special tools than the pressing
of a keyboard button and saving the results.
Does my logic make sense to you given the goal is obfuscating sensor flaws? Or am I missing something?
What happens when Grayscale is used, from camera to jpg or bmp?