I often need to pass an array of bits to a function, or to serialize one. If there are just a few (8-16 bits), I use a standard array, one byte per bit:
uint8_t my_short_array_of_bits[16];
void abits_set_bit(uint8_t *array, size_t size, uint8_t nbit);
uint8_t abits_get_bit(uint8_t *array, size_t size, uint8_t nbit);
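A minimal sketch of those two accessors, assuming the byte-per-bit layout above (the bounds handling is illustrative):

#include <stddef.h>
#include <stdint.h>

void abits_set_bit(uint8_t *array, size_t size, uint8_t nbit) {
    if (nbit < size)            /* silently ignore out-of-range bits */
        array[nbit] = 1;
}

uint8_t abits_get_bit(uint8_t *array, size_t size, uint8_t nbit) {
    return (nbit < size) ? array[nbit] : 0;
}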
Sometimes the number of bits is much greater, maybe 100 or 200.
If I were developing on a desktop/server machine, I would keep using an array of uint8_t, but I'm developing on embedded platforms with limited resources.
So I started using a uint32_t for up to 32 bits. It is very convenient to test, set and clear a bit. I often need to set the N lowest bits, or to get the lowest set bit (__builtin_ctz).
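For example (a sketch; note that (1u << n) - 1 is undefined for n >= 32, and __builtin_ctz(0) is undefined -- both are GCC/Clang specifics):

#include <stdint.h>

static inline uint32_t set_low_bits32(unsigned n) {
    return (n >= 32) ? 0xFFFFFFFFu : ((1u << n) - 1u);
}

void example(void) {
    uint32_t bits = 0;

    bits |=  (1u << 5);                 /* set bit 5   */
    bits &= ~(1u << 5);                 /* clear bit 5 */
    int set = (bits >> 5) & 1;          /* test bit 5  */

    bits = set_low_bits32(10);          /* 0x000003FF, the 10 lowest bits */
    int lowest = __builtin_ctz(bits);   /* 0, index of the lowest set bit */

    (void)set; (void)lowest;
}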
For a recent project I needed to increase the number of bits to 60, so the first idea was to use uint64_t. However, I'm thinking of generalizing to a larger number of bits, because tomorrow someone will ask to increase the length again, to 100 or 200.
Do you have any suggestions? In the same project I might need both an array of 100 bits and an array of 200 bits. I would prefer a typedef for each type, for example:
typedef uint8_t inputs_t[100];
uint8_t inputs_get_bit(inputs_t inputs, unsigned int nbit);
void inputs_set_bit(...);
void inputs_reset_bit(...);
void inputs_toggle_bit(...);
int inputs_ctz(inputs_t inputs);
void inputs_set_low_bits(inputs_t inputs, unsigned int nbits);
typedef uint8_t outputs_t[200];
uint8_t outputs_get_bit(outputs_t outputs, unsigned int nbit);
...
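Of the operations above, inputs_ctz is the least obvious. A sketch of how it might be built on __builtin_ctz, assuming a packed layout (8 bits per byte, least-significant bit first; returns -1 when no bit is set):

#include <stdint.h>

static int abits_ctz(const uint8_t *array, unsigned nbytes) {
    for (unsigned i = 0; i < nbytes; i++) {
        if (array[i] != 0)
            return (int)(i * 8 + __builtin_ctz(array[i]));
    }
    return -1;   /* every bit clear */
}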
Maybe I can create a .h file with preprocessor macros that automatically define static inline functions for many array types. Something like:
#define ABITS_TYPE_NAME inputs
#define ABITS_LENGTH 100
#include "abits.h"
--- abits.h ---
#define ABITS_TYPE CONCAT(ABITS_TYPE_NAME, _t)
typedef uint8_t ABITS_TYPE[ABITS_LENGTH];
#define ABITS_GET_FUNC CONCAT(ABITS_TYPE_NAME, _get_bit)
static inline uint8_t
ABITS_GET_FUNC(ABITS_TYPE ABITS_TYPE_NAME, unsigned int nbits) {
...
}
...
--- end of abits.h ---
I know I can try to write this .h file myself, but maybe there is some public-domain code ready to use.
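For what it's worth, a sketch of how such a header could look (illustrative and untested, not an existing library). Note two deliberate choices: it packs 8 bits per byte -- (ABITS_LENGTH + 7) / 8 bytes of storage -- rather than one byte per bit, and CONCAT uses the usual two-level paste so its arguments are macro-expanded before token pasting:

--- abits.h (sketch) ---
#include <stdint.h>

#ifndef ABITS_TYPE_NAME
#error "define ABITS_TYPE_NAME and ABITS_LENGTH before including abits.h"
#endif

#define ABITS_CONCAT2(a, b) a##b
#define ABITS_CONCAT(a, b)  ABITS_CONCAT2(a, b)

#define ABITS_TYPE ABITS_CONCAT(ABITS_TYPE_NAME, _t)

typedef uint8_t ABITS_TYPE[(ABITS_LENGTH + 7) / 8];

static inline uint8_t
ABITS_CONCAT(ABITS_TYPE_NAME, _get_bit)(const uint8_t *a, unsigned nbit) {
    return (a[nbit / 8] >> (nbit % 8)) & 1u;
}

static inline void
ABITS_CONCAT(ABITS_TYPE_NAME, _set_bit)(uint8_t *a, unsigned nbit) {
    a[nbit / 8] |= (uint8_t)(1u << (nbit % 8));
}

static inline void
ABITS_CONCAT(ABITS_TYPE_NAME, _reset_bit)(uint8_t *a, unsigned nbit) {
    a[nbit / 8] &= (uint8_t)~(1u << (nbit % 8));
}

/* allow re-inclusion for the next type */
#undef ABITS_TYPE
#undef ABITS_TYPE_NAME
#undef ABITS_LENGTH
--- end of abits.h (sketch) ---

Each #include then generates one set of names: inputs_t / inputs_get_bit() / ..., outputs_t / outputs_get_bit() / ..., and so on.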
On Fri, 21 May 2021 16:05:32 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 5/20/2021 12:22 AM, pozz wrote:
I often need to pass an array of bits to a function, or to serialize one. If there are just a few (8-16 bits), I use a standard array, one byte per bit:
uint8_t my_short_array_of_bits[16];
void abits_set_bit(uint8_t *array, size_t size, uint8_t nbit);
uint8_t abits_get_bit(uint8_t *array, size_t size, uint8_t nbit);
Why would you want to allocate a BYTE (uint8_t) for EACH BIT?
8 bits would fit into:
uint8_t my_short_array_of_bits[1];
while 16 would fit in:
uint8_t my_short_array_of_bits[2];
generally:
uint8_t my_short_array_of_bits[(NUMBITS+7)/8];
Don, isn't this kind of like bit-banding on an ARM Cortex-M3, M4, M7, etc.? Except I ~think~ ARM uses more than 1 byte for each bit?
I would prefer not to use 1 byte to represent 1 bit, but I have seen it done before for true/false flags, I think back in the 8-bit days.
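(For reference: on Cortex-M3/M4 parts that implement bit-banding, each bit of the SRAM bit-band region at 0x20000000 is aliased to its own 32-bit word, so a "bit" costs 4 bytes of address space, though not of RAM. A sketch, illustrative only and device-specific:)

#include <stdint.h>

#define BITBAND_SRAM_BASE   0x20000000UL
#define BITBAND_ALIAS_BASE  0x22000000UL

/* address of the alias word that maps bit `bit` of the byte at `addr` */
#define BITBAND_ALIAS(addr, bit) \
    ((volatile uint32_t *)(BITBAND_ALIAS_BASE \
        + (((uintptr_t)(addr) - BITBAND_SRAM_BASE) * 32u) \
        + ((bit) * 4u)))

/* usage: *BITBAND_ALIAS(&flags, 3) = 1;  -- writes only bit 3 of flags */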
On 5/20/2021 12:22 AM, pozz wrote:
I often need to pass an array of bits to a function, or to serialize one. If there are just a few (8-16 bits), I use a standard array, one byte per bit:
uint8_t my_short_array_of_bits[16];
void abits_set_bit(uint8_t *array, size_t size, uint8_t nbit);
uint8_t abits_get_bit(uint8_t *array, size_t size, uint8_t nbit);
Why would you want to allocate a BYTE (uint8_t) for EACH BIT?
8 bits would fit into:
uint8_t my_short_array_of_bits[1];
while 16 would fit in:
uint8_t my_short_array_of_bits[2];
generally:
uint8_t my_short_array_of_bits[(NUMBITS+7)/8];
Sometimes the number of bits is much greater, maybe 100 or 200.
uint8_t my_short_array_of_bits[(100+7)/8];
uint8_t my_short_array_of_bits[(200+7)/8];
If I were developing on a desktop/server machine, I would keep using an array of uint8_t, but I'm developing on embedded platforms with limited resources.
So?
So I started using a uint32_t for up to 32 bits. It is very convenient to test, set and clear a bit. I often need to set the N lowest bits, or to get the lowest set bit (__builtin_ctz).
For a recent project I needed to increase the number of bits to 60, so the first idea was to use uint64_t. However, I'm thinking of generalizing to a larger number of bits, because tomorrow someone will ask to increase the length again, to 100 or 200.
Do you have any suggestions? In the same project I might need both an array of 100 bits and an array of 200 bits. I would prefer a typedef for each type, for example:
Ideally, you would conditionally pick your underlying implementation (i.e., whether to use uint8 vs uint32 vs uint64) based on whatever the native hardware best supports.
Using a wider base type can be less efficient on targets that have narrower natural data sizes. E.g., support for a uint32 on an 8-bit processor may ADD code where a "more natural" 8-bit type would better fit with the instruction set/architecture.
E.g., I use BigRationals in some of my math. These are pairs of BigIntegers (which, in turn, are implemented as arrays of some convenient *base* data type wide enough to support the value(s) they are being asked to represent).
So, if the value is currently 0x123456789, you'd need at least 33 bits to represent the value. On an architecture where bytes are the nominal data type, this would require an array of 5 bytes; on a 16-bit architecture, it would require 3 words; on a 32-bit architecture, 2 longs.
In my case, the size of the array is hidden from the type; a BigInteger value of 0x123456789 would consume fewer resources than one having the value 0x112233445566778899 -- but both would still be treated as "BigIntegers".
If you want to create stronger typing, with a type for each possible bit-array size, then you'll need to "templatize" the routines that process those data types, as a bit_array_16_t is not type-compatible with a bit_array_18_t -- even though the underlying implementations are strikingly similar.
OTOH, if you just treat them all as "bit_array_t" -- and assume responsibility for manually ensuring that you've specified the correct size for each argument processed -- then you can share a parameterized (instead of templatized) implementation.
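A sketch of that parameterized flavor -- one shared implementation, with the caller passing the size (names illustrative; packed 8 bits per byte):

#include <stddef.h>
#include <stdint.h>

static inline int ba_get(const uint8_t *a, size_t nbits, size_t i) {
    return (i < nbits) ? ((a[i / 8] >> (i % 8)) & 1) : -1;  /* -1: out of range */
}

static inline void ba_set(uint8_t *a, size_t nbits, size_t i) {
    if (i < nbits)
        a[i / 8] |= (uint8_t)(1u << (i % 8));
}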
typedef uint8_t inputs_t[100];
uint8_t inputs_get_bit(inputs_t inputs, unsigned int nbit);
void inputs_set_bit(...);
void inputs_reset_bit(...);
void inputs_toggle_bit(...);
int inputs_ctz(inputs_t inputs);
void inputs_set_low_bits(inputs_t inputs, unsigned int nbits);
typedef uint8_t outputs_t[200];
uint8_t outputs_get_bit(outputs_t outputs, unsigned int nbit);
...
Maybe I can create a .h file with preprocessor macros that automatically define static inline functions for many array types. Something like:
#define ABITS_TYPE_NAME inputs
#define ABITS_LENGTH 100
#include "abits.h"
--- abits.h ---
#define ABITS_TYPE CONCAT(ABITS_TYPE_NAME, _t)
typedef uint8_t ABITS_TYPE[ABITS_LENGTH];
#define ABITS_GET_FUNC CONCAT(ABITS_TYPE_NAME, _get_bit)
static inline uint8_t
ABITS_GET_FUNC(ABITS_TYPE ABITS_TYPE_NAME, unsigned int nbits) {
...
}
...
--- end of abits.h ---
I know I can try to write this .h file myself, but maybe there is some public-domain code ready to use.
On 5/22/2021 12:50 AM, Don Y wrote:
int
bitcount(uint32_t value) {
     value = value               - ((value >> 0001) & 0x55555555);
     value = (value & 0x33333333) + ((value >> 0010) & 0x33333333);
     value = (value + (value >> 0100)) & 0x0F0F0F0F;
     return (value * 0x01010101) >> (0011 * 01000);
}
Ugh! Sorry, I'm mixing language characteristics. In GNU C, the
constants that appear to be "octal" (but are intended to be binary)
would be prefaced with "0b". In Limbo, you'd use the "2r" prefix.
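(For reference, the intended constants decode as 0b0001 = 1, 0b0010 = 2, 0b0100 = 4 and 0b0011 * 0b1000 = 24, so the routine in plain portable C is:)

#include <stdint.h>

int
bitcount(uint32_t value) {
    value = value - ((value >> 1) & 0x55555555);                /* 2-bit sums */
    value = (value & 0x33333333) + ((value >> 2) & 0x33333333); /* 4-bit sums */
    value = (value + (value >> 4)) & 0x0F0F0F0F;                /* 8-bit sums */
    return (value * 0x01010101) >> 24;   /* fold byte sums into the top byte */
}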
On 5/22/2021 11:03, Don Y wrote:
Ugh! Sorry, I'm mixing language characteristics. In GNU C, the
constants that appear to be "octal" (but are intended to be binary)
would be prefaced with "0b". In Limbo, you'd use the "2r" prefix.
To make the thread somewhat less dull, let me state (to the delight of David :-) that true big-endian (POWER-like) bit numbering shows up as the correct way to do bitmaps. Bit 0 is the leftmost bit in the byte at the lowest offset, etc. Nicely ordered left to right...
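A sketch of that MSB-0 numbering -- bit 0 as the most significant bit of byte 0, reading left to right (names illustrative):

#include <stdint.h>

static inline unsigned
get_bit_msb0(const uint8_t *array, unsigned nbit) {
    return (array[nbit / 8] >> (7u - (nbit % 8u))) & 1u;
}

static inline void
set_bit_msb0(uint8_t *array, unsigned nbit) {
    array[nbit / 8] |= (uint8_t)(1u << (7u - (nbit % 8u)));
}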
On 22/05/2021 01:05, Don Y wrote:
On 5/20/2021 12:22 AM, pozz wrote:
I often need to pass an array of bits to a function, or to serialize one. If there are just a few (8-16 bits), I use a standard array, one byte per bit:
uint8_t my_short_array_of_bits[16];
void abits_set_bit(uint8_t *array, size_t size, uint8_t nbit);
uint8_t abits_get_bit(uint8_t *array, size_t size, uint8_t nbit);
Why would you want to allocate a BYTE (uint8_t) for EACH BIT?
8 bits would fit into:
uint8_t my_short_array_of_bits[1];
while 16 would fit in:
uint8_t my_short_array_of_bits[2];
generally:
uint8_t my_short_array_of_bits[(NUMBITS+7)/8];
Because there are just a few bits, so I don't lose much space, and loops over the bits are very fast.
Of course, I make this decision when the bits are completely independent data and I'm not interested in classical "bit operations".
For example, when you have 10 boolean options for your software: why should you pack those options into a bitmask if they are unrelated?
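(Illustrative comparison -- the option indices and names are made up:)

#include <stdbool.h>
#include <stdint.h>

bool options[10];       /* one byte per option: options[3] = true;          */
uint16_t option_mask;   /* packed alternative:  option_mask |= 1u << 3;     */
                        /* saves a few bytes, costs a shift/mask per access */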
Do you have any suggestions? In the same project I might need both an array of 100 bits and an array of 200 bits. I would prefer a typedef for each type, for example:
Ideally, you would conditionally pick your underlying implementation
(i.e., whether to use uint8 vs uint32 vs uint64) based on whatever the
native hardware best supports.
Yes, but my question was about a "generic" approach that can be used whether the bits number 32, 64 or 256. In the latter case, arrays are necessary.
Using a wider base type can be less efficient on targets that have narrower natural data sizes. E.g., support for a uint32 on an 8-bit processor may ADD code where a "more natural" 8-bit type would better fit with the instruction set/architecture.
Yes, something like that. I was only asking whether there is public-domain code that takes this kind of approach.
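One way to pick the base word per platform, along the lines Don Y suggests (a sketch; __AVR__ is avr-gcc's predefined macro, the other names are illustrative):

#include <stdint.h>

#if defined(__AVR__)              /* 8-bit target: stick to bytes */
typedef uint8_t  abits_word_t;
#define ABITS_WORD_BITS 8
#else                             /* default: 32-bit words */
typedef uint32_t abits_word_t;
#define ABITS_WORD_BITS 32
#endif

/* storage for N bits, rounded up to whole words */
#define ABITS_WORDS(n) (((n) + ABITS_WORD_BITS - 1) / ABITS_WORD_BITS)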
OTOH, if you just treat them all as "bit_array_t" -- and assume responsibility for manually ensuring that you've specified the correct size for each argument processed -- then you can share a parameterized (instead of templatized) implementation.
Yes, this is another solution.
result_t
test_bit(
    uint bit,          // [0,size)
    BASE_T *array,     // size elements, one per bit
    uint size          // (0,UINT_MAX]
) {
    ASSERT( size > 0 );    // makes no sense to have zero or fewer bits!

    // ensure the referenced bit resides within the structure
    // (bit is unsigned, so only the upper bound needs checking)
    if ( bit >= size ) {
        return ERROR;
    }

    if ( array[bit] ) {
        return (1);
    } else {
        return (0);
    }
}

For a packed array -- BASE_BITS bits per BASE_T element, (size + BASE_BITS - 1) / BASE_BITS elements in all -- the test becomes:

    if ( array[bit / BASE_BITS] & ((BASE_T)1 << (bit % BASE_BITS)) ) {
        return (1);
    } else {
        return (0);
    }
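(Illustrative instantiation of the placeholders above -- the thread does not fix them:)

#include <stdint.h>

typedef uint32_t BASE_T;
#define BASE_BITS 32

BASE_T flags[(200 + BASE_BITS - 1) / BASE_BITS];   /* 200 bits -> 7 words */
/* test_bit(bit, flags, 200) then returns 1/0, or ERROR if bit >= 200 */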