Matrices resemble labeled directed graphs. Each has a combinatorial part, determined by its underlying directed graph, and an algebraic part, determined by the matrix semiring its entries live in. This post briefly discusses the relational aspects of matrix algebra.
Definition. Let A be a semiring with matrix semiring Mat_n(A), and let T_2 denote the two-element Boolean semiring. The underlying relation R(M) of a matrix M \in Mat_n(A) is the adjacency matrix with a one at every position where M has a non-zero entry and a zero otherwise.
R : Mat_n(A) \to Mat_n(T_2)
The underlying relation R(M) of a matrix M can be compared with other relations, because relations form an ordered algebraic structure. This leads to the following inequality.
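As a quick sketch of the definition above (Python, with plain nested lists standing in for matrices; the function name is illustrative):

```python
def underlying_relation(M):
    """Map a matrix over any semiring to its Boolean adjacency matrix:
    entry 1 wherever M is non-zero, entry 0 otherwise."""
    return [[1 if entry != 0 else 0 for entry in row] for row in M]

M = [[2, 0],
     [5, 3]]
print(underlying_relation(M))  # [[1, 0], [1, 1]]
```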
Theorem. Let M, N be matrices in the semiring Mat_n(A). Then
R(M\circ N) \subseteq R(M) \circ R(N)
Proof. Let (i,j) be an index of the matrix M \circ N. If (i,j) \in R(M \circ N), then (M \circ N)_{(i,j)} \not= 0. This entry can be written as a dot product u \cdot v, where u is the ith row of M and v is the jth column of N.

u \cdot v = \sum_{k=1}^{n} u_k v_k

Suppose that u \cdot v \not= 0. Then there exists at least one index k such that u_k v_k \not= 0, so that u_k and v_k are both non-zero. If that were not the case, then u \cdot v would have to be zero. To see this, notice that zero is an annihilator of every element in a semiring, so if at each index at least one of u_k, v_k is zero, then \forall k : u_k v_k = 0.

By substitution, if \forall k : u_k v_k = 0 then u \cdot v = \sum_{k=1}^n 0, which equals zero because the sum of any number of zeros is zero. It follows that there is at least one index at which both vectors are non-zero. Due to this common entry, R(u) \cdot R(v) = 1, because all the dot product of T_2 vectors needs in order to equal one is a common non-zero entry at some index.

Since R(u) \cdot R(v) = 1, we have (R(M) \circ R(N))_{(i,j)} = 1, which implies that (i,j) \in R(M) \circ R(N). Finally, since (i,j) \in R(M \circ N) \Rightarrow (i,j) \in R(M) \circ R(N) for all (i,j), we conclude that R(M \circ N) \subseteq R(M) \circ R(N). \square
In a semiring without additive inverses or zero divisors, such as \mathbb{N}, a sum of products cannot vanish unless every term does, so R(M \circ N) = R(M) \circ R(N) and R becomes a homomorphism. This relationship allows us to form subalgebras of matrix semirings from subalgebras of the relation algebra. In order to elaborate on this we must first establish the following lemma.
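The following sketch (Python; all helper names are illustrative) contrasts the two cases: over \mathbb{Z} cancellation can kill an entry of the product, making the containment strict, while over \mathbb{N} equality holds.

```python
def underlying_relation(M):
    return [[1 if e != 0 else 0 for e in row] for row in M]

def mat_mul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bool_compose(A, B):
    """Composition in Mat_n(T_2): OR of ANDs along the middle index."""
    n = len(A)
    return [[1 if any(A[i][k] and B[k][j] for k in range(n)) else 0
             for j in range(n)] for i in range(n)]

# Over Z, cancellation makes (M∘N)_{(0,0)} = 1 - 1 = 0, so the
# containment R(M∘N) ⊆ R(M)∘R(N) is strict here:
M = [[1, 1], [0, 0]]
N = [[1, 0], [-1, 0]]
print(underlying_relation(mat_mul(M, N)))   # [[0, 0], [0, 0]]
print(bool_compose(underlying_relation(M),
                   underlying_relation(N))) # [[1, 0], [0, 0]]

# Over N (no additive inverses, no zero divisors), R is a homomorphism:
P = [[1, 1], [0, 0]]
Q = [[1, 0], [1, 0]]
assert underlying_relation(mat_mul(P, Q)) == \
       bool_compose(underlying_relation(P), underlying_relation(Q))
```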
Lemma. Let R be a relation. Then the power set \wp(R) is a multiplicative set (closed under composition) iff R is transitive.
Proof. Let R be a relation. In order for \wp(R) to be composition closed, we must have \{E_1\} \circ \{E_2\} \subseteq R for all edges E_1, E_2 \in R. This implies R^2 \subseteq R, so that R is a transitive relation. Conversely, if R is a transitive relation and A, B \subseteq R, then A \circ B consists of the composites E_1 \circ E_2 with E_1 \in A and E_2 \in B, and all such composites are in R since R is transitive. Thus \wp(R) is a multiplicative set. \square
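The lemma can be sketched concretely with relations as sets of pairs (Python; the function names are illustrative):

```python
def compose(A, B):
    """Relation composition: (i,j) in A∘B iff there is a k with
    (i,k) in A and (k,j) in B."""
    return {(i, j) for (i, k1) in A for (k2, j) in B if k1 == k2}

def is_transitive(R):
    return compose(R, R) <= R

# A transitive relation on {0, 1, 2}: the strict order i < j.
R = {(0, 1), (0, 2), (1, 2)}
assert is_transitive(R)

# Any two subsets of R compose into a subset of R,
# so ℘(R) is closed under composition.
A = {(0, 1)}
B = {(1, 2)}
assert compose(A, B) <= R  # compose(A, B) is {(0, 2)}
```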
We can use this to form subalgebras of matrix semirings. In particular, if Mat_n(A) is a matrix semiring, then the subset of Mat_n(A) consisting of matrices whose non-zero entries are all in a transitive relation T forms a subalgebra. In short, this allows us to form subalgebras of matrices from ordering relations.
Theorem. Let A be a ring or a semiring. Then the restriction S of the matrix algebra Mat_n(A) to those matrices whose non-zero entries are all in a transitive relation T forms a subalgebra.
Proof. This proof splits into additive and multiplicative components.
(1) Addition is defined componentwise, so any restriction of Mat_n(A) to a set of allowed non-zero entries is an additive submonoid. Negation is also defined componentwise, so when A is a ring the restriction S is also an additive subgroup.
(2) Let M, N have their non-zero entries in T. Because \wp(T) is composition closed by the previous lemma, we have R(M) \circ R(N) \subseteq T, and by the inequality theorem R(M \circ N) \subseteq R(M) \circ R(N) \subseteq T. This means that the non-zero entries in M \circ N form a subset of T, which implies that M \circ N \in S, so that S is multiplicatively closed. \square
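A minimal check of the theorem, assuming the total order i \leq j so that S is the upper triangular matrices (Python; helper names are illustrative):

```python
def mat_mul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def supported_on(M, T):
    """True when every non-zero entry of M sits at a position in the relation T."""
    n = len(M)
    return all(M[i][j] == 0 or (i, j) in T for i in range(n) for j in range(n))

n = 3
T = {(i, j) for i in range(n) for j in range(n) if i <= j}  # transitive: i ≤ j

M = [[1, 2, 3], [0, 4, 5], [0, 0, 6]]
N = [[7, 0, 1], [0, 8, 2], [0, 0, 9]]
assert supported_on(M, T) and supported_on(N, T)
assert supported_on(mat_mul(M, N), T)  # the product is again upper triangular
```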
It is worth pointing out that these restrictions don't necessarily have a multiplicative identity. This often happens in ring theory; for example, ideals need not contain the multiplicative identity.
Example 1. Diagonal matrices are determined by the antichain i = j.

Example 2. Upper triangular matrices are determined by the total order i \leq j.

Example 3. Lower triangular matrices are determined by the total order j \leq i.

Example 4. Upper triangular matrices with zero main diagonal are determined by the strict total order i \lt j, and lower triangular matrices with zero main diagonal are determined by the strict total order j \lt i.

Example 5. Take the adjacency matrix of any transitive relation; the matrices whose non-zero entries lie in it form a subalgebra.
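As a sketch of Example 5 with a transitive relation that is not a triangular pattern (Python; helper names are illustrative): an equivalence relation is transitive, and its adjacency pattern carves out the block-diagonal matrices.

```python
def mat_mul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def supported_on(M, T):
    n = len(M)
    return all(M[i][j] == 0 or (i, j) in T for i in range(n) for j in range(n))

# The equivalence relation with classes {0, 1} and {2} is transitive,
# and matrices supported on it are exactly the block-diagonal ones.
T = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}

M = [[1, 2, 0], [3, 4, 0], [0, 0, 5]]
N = [[6, 7, 0], [8, 9, 0], [0, 0, 10]]
assert supported_on(M, T) and supported_on(N, T)
assert supported_on(mat_mul(M, N), T)  # the product stays block diagonal
```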