• Re: Should polysign numbers encompass linear algebra?

    From Timothy Golden@21:1/5 to Timothy Golden on Wed Oct 18 07:53:26 2023
    On Tuesday, June 13, 2023 at 5:45:21 PM UTC-4, Timothy Golden wrote:
    On Sunday, June 4, 2023 at 12:38:51 PM UTC-4, Timothy Golden wrote:
    Sun 04 Jun 2023 12:27:35 PM EDT
    As a basis we have the geometry already specified by:
    ( 1, 1, 1, ... ) = 0.
    These values, however, are not real values since they lack sign; their position is their sign. They typically are continuous, and a general value could read:
    ( x0, x1, x2, ... )
    The geometry is already established here, and in this regard these numbers are superior to the Cartesian version, which an ordinary linear algebra basis presumes. However the orthogonality of these values in say the three-form is not present. Already
    we see this from the fact that:
    ( a, a, a ) = 0.
    which in polysign reads:
    * a - a + a = 0
    or with the zero sign:
    @ a - a + a = 0
    In the three-form the signs are modulo-three behaved, and so the @, mnemonically close to a zero is the same as *, which mnemonically takes three strokes to draw, as zero is three modulo three. And of course this works out with the reals actually
    taking two components which behave:
    ( a, a ) = 0,
    which encompasses the fact that
    + a - a = 0,
    and so this detail of polysign generating the reals this way... including their geometry... via the same laws that beget the complex numbers in their new three-signed suit, and go on coherently upward into P4, P5, and so forth, do develop general
    dimensional algebra without any consideration of linear algebra, which in its own terms is general dimensional, though it lacks an arithmetic product, which would clearly be a fundamental contribution of polysign; that product being the real product in
    the two-form, the complex product in the three-form, and so on upward (and indeed downward onto P1), in general dimensional terms.

    The geometry of these algebraically behaved systems is related to the simplex. It is merely the rays from the center of a simplex outward to its vertices. You should delete the frame of the simplex and see simply n unit rays which when followed under
    ordinary vector thought will lead to where you started. This language is confused by the Cartesian assumption and the real valued assumption, which definitely are not in the spirit of polysign. In effect, the usage of the ordered pair (a0,a1) and its n-
    ary form (a0,a1,a2,...an) are in a new sense here yet that new sense is upheld by one very simple law (1,1,...1)=0.

    The conundrum that should be perceived by an apt reader here is to breach the Cartesian product and its meaning from this new position, possibly yielding a new interpretation via the prioritization of the ray as fundamental over the line. This is to
    say that the P2 form, which is in use within linear algebra is of a nonfundamental nature.

    Let's for a moment informally treat little 'r' to mean the ray, and big 'R' to mean the line as normally intended within the real valued basis. We see that:
    P2: r x r <=> R
    P3: r x r x r <=> R X R
    P4: r x r x r x r <=> R X R X R
    ...
    where these little 'x' belie my own usage of the ordered series above, and the big 'X' are the usual Cartesian product.
    but you see we've generated a conflict in that we can now as well write:
    R <=> rxr
    R X R <=> rxr X rxr
    R X R X R <=> rxr X rxr X rxr
    ...
    This is normally felt as people describe one of the key descriptions of physical space as having six directions: up, down, left, right, forward, and backward; whereas in P4 physical space takes a description with just four directions, which are rays
    emanating from the center of a tetrahedron outward to its vertices. The inverse directions of these rays are simply composed as the sum of the other rays:
    ( 1, 0, 0, 0 ) @ ( 0, 1, 1, 1 ) = 0
    where '@' is the universal zero sign which replaces the '+' sign, which as sign two does not hold as a neutral sign in general, though it does happen to work in P2.

    Confusing rxrxrxr with rxrXrxrXrxr is not going to be well received. As to fundamental status and what ought to be in the basis of a linear algebra basis, it could be said that there is a new law in town:
    ( 1, 1, 1 ) = 0

    The conflict can be amplified a bit by receding to ordinary set theory, where a sensible Cartesian product might be felt as considering materials such as screws, bolts, nuts, washers, etc. composed of mild steel, stainless steel, brass, aluminum,
    plastic, etc. Here we see a sensible Cartesian product of actually crossing two unique types and accessing a final resultant. In this regard we have a clean decomposition but for a few outliers, such as a mild steel lockwasher of a certain thin gauge
    which will not function properly, but really it is the concept that matters. What we see is that the sensibility of this material Cartesian product is not at all what we witness of the geometry of RxR, which is two copies of an identical substrate. It
    may be possible to cross thread a nut onto a bolt, but to claim the two as one is merely a sum operation; not a product. Yet that sum operation over in the realm of the real value will merely yield a singular real value, and so this confusion is profuse.

    Now, returning to the ray, which under polysign takes fundamental status, and returning as well to the ordered series, what we see is that within polysign the ordered series usage is in fact a sum of types. On one hand you could claim that they are
    identical types since they are all rays, but you see their position in the series belays their uniqueness. When I write:
    ( 1, 2, 3 )
    I have specified a concrete P3 value which is equivalent to:
    @ 1 - 2 + 3
    or, if the zero sign is still too difficult to appreciate:
    - 2 + 3 * 1
    but you see the signs are modulo three in P3 so these two expressions are the same, and further there are many other such equivalent expressions:
    ( 2, 3, 4 ), ( 5, 6, 7 ), ...
    are all equivalent and if there were one to settle on it would be what we call the reduced form:
    ( 0, 1, 2 )
    or
    - 1 + 2
    and this is what we normally do already in P2, where we carry around just one of two components, because this is the simple way. If one were to receive a bad mark in algebra for giving the teacher an answer such as
    -1.23 + 3.21
    on a test, and here I do mean a P2 value, which is an ordinary real value, well, all that the teacher can fault the student for is not fully reducing, eh? the fact remains that this student is closer to the generalization of sign and all that follows
    from its consideration than the priors of four hundred years or so.
    Poly Dot( Poly & a, Poly & b )
    { // This is Yannis Picart's algorithm as emailed to me on 3/16/2013
    // 2023/06/13tpg: this is yielding zero, which for the moment is a mystery. Why it's taken me this long to investigate this is a mystery as well. The dot product as a real (P2) value is part of the conundrum.

    Poly r( 2 );
    r.Zero();
    if( a.n != b.n )
    {
    r.error.bits.Dimension = 1;
    r.error.bits.Product = 1;
    throw r;
    }
    double f = (float)(a.n - 1); // factor
    for( int i = 0; i < a.n; i++ )
    { for( int j = 0; j < a.n; j++ )
    { if( i == j ) r[0] += a[i] * b[i];
    else r[1] += ( a[i] * b[i] ) / f;
    }
    }
    return r;
    }
    It turns out this algorithm yields zero. The dot product is a peculiar thing from the perspective of polysign in that it is a P2 type value which is returned from two Pn types.

    void TestDotProduct( )
    {
    NameFunction();
    for( int n = 1; n < 10; n++ )
    {
    Poly z1(n), z2(n);
    z1.Random();
    z2.Random();
    Poly z3(2);
    z3 = Dot( z1,z2 );
    z3.Reduce();
    Cartesian c1(z1), c2(z2);
    double cDot = c1 * c2;
    cout << "z1:"<<z1<<" Dot z2:"<<z2 << "\n";
    cout << "cDot:"<< cDot << " z3:"<<z3<<"\n";
    }

    }
    z1:[P3 0.251886580924, 0.590696297022, 0 ] Dot z2:[P3 0.756469769422, 0.67034787733, 0 ]
    cDot:0.278668829419 z3:[P2 1.11022302463e-16, 0 ]
    z1:[P4 0.0338247540805, 0.159448474416, 0, 0.0611660105543 ] Dot z2:[P4 0.11747262167, 0.00468378329977, 0.0472328170563, 0 ]
    cDot:-0.00807268206416 z3:[P2 8.67361737988e-19, 0 ]
    z1:[P5 0.343000725819, 0, 0.129471115487, 0.272784249205, 0.173279418936 ] Dot z2:[P5 0.475414701736, 0.319349932431, 0.277043473373, 0.937308722388, 0 ]
    cDot:0.106913426528 z3:[P2 0, 5.55111512313e-17 ]
    z1:[P6 0.329268427144, 0.658183311873, 0.158139963071, 0.822682938644, 0, 0.0935249682581 ] Dot z2:[P6 0.568415303235, 0, 0.588695427627, 0.305360455762, 0.0109099359397, 0.0819612454196 ]
    cDot:0.00560443798571 z3:[P2 0, 1.11022302463e-16 ]
    z1:[P7 1.03621363044, 0.391606202748, 0.0525701435182, 0, 0.521171734744, 0.0512894984347, 0.259892320492 ] Dot z2:[P7 0.661380869398, 0.608007411219, 0, 0.284197497792, 0.112954068287, 0.51365004764, 0.255865665279 ]
    cDot:0.315337263298 z3:[P2 0, 2.22044604925e-16 ]


    To what degree is linear algebra built around the dot product? Yes.
    Why a P2 value rules the day: Not in polysign.
    Is this to say that the title of this thread is doomed?
    Is it the cast that we are after and is it a general dimensional business? Getting closer and not really working very hard on it.
    Best outcome will be to yield the P2 dot product, yet won't there as well be a P1 version?
    To suppose already that the P1 version will have a horizon: whereas in the P2 version as a vector is pointing away from the referent A, so that A.B would be negative in the P2 case, would the P1 version simply be zero? This essentially forms a horizon of
    positive values. It happens that they do disappear at perpendicular... well, perhaps I am suffering Cartesian thinking here. That vectors are rays is back-feeding a bit in a ray based system.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)