# Book: LieTheory

Universal hyperbolic geometry: Relationships with Lie theory.

• Perpendicularity and the Lie bracket.
• Orthocenters and the Jacobi identity.
• Universal circle (the wall between inside and outside - is it the wall between propagation forwards and reflection back?) and ...?
• Ortho-axis - most important line - and ...?

Think about spectral graph theory. Is it relevant for Dynkin diagrams?

References

Videos to study

VGTU

Short texts Classical group: Discussion of forms

Online books

Print books

Lectures

Notes

Root systems

I am studying the possible root systems. They seem to describe the ways of relating two dimensions. A dimension can be considered the simplest root system, with 2 roots, opposite to each other.

Instead of roots, I think we should think of oriented dimensions acting as oriented mirrors, o-mirrors. And we can consider the Weyl chamber with regard to which all such o-mirrors are oriented. Each dimension is perhaps a slot defined by the Lie algebra for a continuous parameter in a Lie group. Thus the Lie algebra explains how the parameters in a Lie group are related with each other.

Each node in a Dynkin diagram is an o-mirror but also each edge in the Dynkin diagram is an o-mirror across which one node is reflected to another. The reflection may be two-directional in the case of a single bond (120 degrees) or one-directional in the case of a double bond (135 degrees) or triple bond (150 degrees).

The Cartan matrix gives the freedom of the root b in the direction of root a, that is, the reflective distance between -b and b in terms of a. This distance is positive (2) in the case of a itself, because -a + 2a = a. We can think of this as the measure of increasing freedom inherent in each independent dimension. Otherwise, for b it is negative: either -1, -2, or -3. We can think of this as the measure of decreasing freedom because it is the link that relates two dimensions. (Why is this more negative as the span increases?) The determinant of the Cartan matrix needs to be positive because it is a positive definite matrix. This means that the positive freedom (increasing slack) has to be greater than the negative freedom (decreasing slack). This is why the maximum is -3, and -4 is ruled out because the determinant would be 2*2 - 1*4 = 0.
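This exclusion can be checked with a quick sketch in Python (my own illustration, using the 2x2 generalized Cartan matrix [[2, -1], [-m, 2]] for a bond of multiplicity m):

```python
# Determinant of the 2x2 Cartan matrix [[2, -1], [-m, 2]],
# where m is the bond multiplicity between the two roots.
def cartan_det_2x2(m):
    return 2 * 2 - (-1) * (-m)  # = 4 - m

# Single, double, triple bonds keep the determinant positive;
# a quadruple bond drives it to 0 = 2*2 - 1*4, so it is ruled out.
for m in [1, 2, 3, 4]:
    print(m, cartan_det_2x2(m))
```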

The relationship between two fundamental roots defining two dimensions is given by the number of times one root can be added to the other and still be a root. If the angle is 90 degrees - 0 times, 120 degrees - 1 time, 135 degrees - 2 times, 150 degrees - 3 times. Thus if two dimensions are not directly connected, then they can be considered separated by 90 degrees, but otherwise there are three ways they may be connected.
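This table of angles can be checked numerically: for simple roots at angle theta, the product of their two Cartan integers is 4cos²(theta), which reproduces the counts 0, 1, 2, 3 (a sketch of my own):

```python
import math

# For two simple roots at angle theta, the product of their Cartan
# integers is 4*cos(theta)^2, matching the counts above.
for deg, expected in [(90, 0), (120, 1), (135, 2), (150, 3)]:
    n = round(4 * math.cos(math.radians(deg)) ** 2)
    assert n == expected
    print(deg, "degrees ->", n)
```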

Two dimensions may not be directly connected but yet be indirectly connected by way of another dimension. The principal way this occurs is through another dimension that is 120 degrees separated from one root and likewise 120 degrees separated from the other root. The other two roots can then be separated from each other by 90 degrees. We may imagine that 120 degrees is the square root of 90 degrees, in this sense. That is why this is not possible for other degrees, because it needs to be symmetrical going in both directions. Thus this is the way that we can have unbounded chains of dimensions, with adjacent dimensions separated by 120 degrees, and all other dimensions separated by 90 degrees. I can try to visualize such a chain by relabeling dimensions, that is, "forgetting" old dimensions and considering them as new dimensions.

So this chain defines the A-series. We can think of it as transmitting a signal. If we start with one-dimension at one end, then the A-series can end in four ways at the other end. These are the classical Lie groups. Also, the one-dimension can lead to a double bond which must end symmetrically, yielding F4. Otherwise, there may be three series coming together into one dimension. These may describe a "coincidence" of three signals coming together. The possible lengths of the series are determined by the equation 1/p + 1/q + 1/r > 1, yielding the D and E series. Finally, two dimensions may be related by a triple bond, yielding G2.
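The inequality 1/p + 1/q + 1/r > 1 can be enumerated directly. A small Python sketch (bounding the longest branch at 60, since the (2, 2, n) family is infinite):

```python
from fractions import Fraction

# Branch lengths (p <= q <= r), all >= 2, with 1/p + 1/q + 1/r > 1.
solutions = []
for p in range(2, 7):
    for q in range(p, 7):
        for r in range(q, 60):
            if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) > 1:
                solutions.append((p, q, r))

# The (2, 2, r) family goes on forever (the D series), while the only
# other solutions are (2, 3, 3), (2, 3, 4), (2, 3, 5): E6, E7, E8.
exceptional = [s for s in solutions if s[:2] != (2, 2)]
print(exceptional)  # -> [(2, 3, 3), (2, 3, 4), (2, 3, 5)]
```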

Dynkin diagrams not allowed are those for which the Cartan matrix of a Dynkin subdiagram has determinant zero. This is to say that the Cartan matrix needs to be invertible. I think that if the Cartan matrix for a subdiagram is singular, then so is the one for the whole diagram. One test, in particular, is whether the rows are linearly independent; they are not in the case of a cycle (add all rows and you get 0). The determinant {$d_{n}$} of the Cartan matrix for An (the chain) is given by the recursion formula {$d_{n+1}=2d_{n}-d_{n-1}$}, which is to say, they obey the arithmetic mean {$d_{n}=(d_{n-1}+d_{n+1})/2$}. The initial conditions for An are {$d_{1}=2$} and {$d_{2}=3$}, so the determinants continue increasing by 1: the determinant of An is n+1. The same recursion formula holds if we start our chain with initial conditions 2 and 2, as for Bn, and then the determinant of Bn stays at 2 for all n. Similarly, the determinant of Cn is 2 for all n, and the determinant of Dn is 4 for all n. Now if we have a B or C type ending at the other end of the chain, then the recursion formula switches to {$d_{z+1}=2d_{z}-2d_{z-1}$}, which yields 0 for B, C type endings. Similarly, if we have a D type ending at the other end of the chain, then the recursion formula switches to {$d_{z+1}=4d_{z}-4d_{z-1}$}, which yields 0.
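These determinant values are easy to confirm numerically. A sketch of my own, assuming NumPy, building the A-chain Cartan matrix and a B-type variant with a double bond at one end:

```python
import numpy as np

def cartan_A(n):
    """Cartan matrix of the chain A_n: 2 on the diagonal, -1 on either side."""
    M = 2 * np.eye(n, dtype=int)
    for i in range(n - 1):
        M[i, i + 1] = M[i + 1, i] = -1
    return M

def cartan_B(n):
    """Same chain, but with a double bond at one end."""
    M = cartan_A(n)
    M[n - 2, n - 1] = -2
    return M

# det A_n = n + 1, following the recursion d_{n+1} = 2 d_n - d_{n-1};
# det B_n stays at 2 for every n.
for n in range(2, 8):
    print(n, round(np.linalg.det(cartan_A(n))), round(np.linalg.det(cartan_B(n))))
```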

If we build up a Dynkin diagram (and Cartan matrix), then a single edge applies the recursion above - if the last two states differ by +1, then so will the next one, and if the last two states are equal, then likewise the next state will be equal. A double edge, if applied as above, will yield 0 if the two states are equal.

For groups the important point is that, where RT is R transpose, R*RT = 1 iff the inner product (and lengths) is preserved. And this is the "shortcut" that makes it possible to take the inverse in the group. So the possible Lie groups are given by the possible inner products.
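A quick numerical check of this point with a rotation matrix (my own sketch, assuming NumPy):

```python
import numpy as np

# R * R^T = I is the same as preserving the inner product:
# (Rx) . (Ry) = x^T R^T R y = x . y, so lengths and angles survive.
theta = 1.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R @ R.T, np.eye(2))  # the transpose is the inverse

x = np.array([1.0, 2.0])
y = np.array([-0.5, 3.0])
assert np.isclose((R @ x) @ (R @ y), x @ y)  # inner product preserved
```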

The Cartan matrix M is DS, where S is positive definite because the simple roots span all of Euclidean space. This is not the case when we have a cycle or other configuration where the determinant is 0 and so the whole space is not spanned. By continuity, looking at the product of the eigenvalues: if one eigenvalue is negative, then in order for it to become positive it would have to pass through zero, which means that the eigenvectors could not have spanned the whole space. (?)

The constraints on A-cycles and A-loops are logical but the constraining equations allowing G2 are geometrical based on the possible legs for a right triangle.

Symplectic algebras involve pairs of dimensions - perhaps because the extra root is longer (its squared length is twice that of the others). Whereas the even orthogonal algebras

A classical group is a group that preserves a bilinear or sesquilinear form on finite-dimensional vector spaces over R, C or H. A form {$\varphi: V \times V \rightarrow F$} on some finite-dimensional right vector space over F = R, C, or H is

• bilinear if: {$\varphi(x\alpha, y\beta) = \alpha\varphi(x, y)\beta, \quad \forall x,y \in V, \forall \alpha,\beta \in F$}
• sesquilinear if: {$\varphi(x\alpha, y\beta) = \bar{\alpha}\varphi(x, y)\beta, \quad \forall x,y \in V, \forall \alpha,\beta \in F$}
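For instance, the standard Hermitian form on C² is sesquilinear in this sense (a check of my own in Python):

```python
# Standard Hermitian form phi(x, y) = sum of conj(x_i) * y_i on C^2:
# a scalar comes out of the first slot conjugated, out of the second plainly.
def phi(x, y):
    return sum(xi.conjugate() * yi for xi, yi in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]
alpha, beta = 2 + 1j, 1 - 3j

lhs = phi([xi * alpha for xi in x], [yi * beta for yi in y])
rhs = alpha.conjugate() * phi(x, y) * beta
assert abs(lhs - rhs) < 1e-12
```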

Metric preserving groups. Groups preserving:

• bilinear symmetric metrics are called orthogonal
• bilinear antisymmetric metrics are called symplectic
• sesquilinear metrics are called unitary
• special orthogonal are groups preserving bilinear symmetric metrics and also volumes
• special symplectic are groups preserving bilinear antisymmetric metrics and also volumes
• unitary-symplectic groups are the intersection of U(2N) and Sp(2N) and are isomorphic to unitary groups over a quaternion space: USp(2N) ~ U(N,H)

Sum of epsilon-i equals zero because of the trace. Note that the sum changes size as i grows but it's still zero. It is shifting like the center of the simplex.

Fundamental roots are raising and lowering operators for a root lattice. (You stay within the lattice.)

If you start with the "lowest weight" combination, then you can get Zero and all of the negative roots as well.

There is a lowest maximal weight from which we can go back (negatively) to each fundamental root. (?) So a root system is simply the components of the lowest maximal weight (the opposite of zero). This means we have a finite closed system (a building block) for the lattice (similarly to the cube for the usual grid, or other polytopes). When is that true? How does that extend? In each root direction? Bond strength shows the amount of raising and lowering that is possible.

Lattice generated by independent vectors. Not interesting. All the same, get a cube. It's interesting when we get repetition. Lattice generated by independent vectors embedded in a smaller space is more interesting. This is the fundamental task of geometry and likewise of neurons, to economize. How small can the space be? And there is an underlying space epsilon-1, epsilon-2... and how are these spaces related? Measure how much compression there is. They are and must be integers. Consider if we have n*alpha + m*beta = 0, what does that say about theta?

Roots ei - ej are the perspective that opposites coexist. Whereas ei or ei+ej mean it's all the same? And are the classical groups given by a foursome of representations for relating these two perspectives?

Lie Bracket:

• Relies on the fact that summing over permutations of 1 yields 0. [x,x]=0
• Summing over permutations of 2 yields 0. [x,y]+[y,x]=0
• Summing over permutations of 3 yields 0. [x,[y,z]] + [y,[z,x]] + [z,[x,y]]=0

That's true: writing out [x,y]=xy-yx and summing, you get a positive and a negative term for each permutation. But it is also true in the brackets directly, permuting cyclically. What would it look like to sum over permutations of 4?
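This is easy to verify for matrices, where the bracket is the commutator (a sketch assuming NumPy):

```python
import numpy as np

# With [x, y] = xy - yx, the Jacobi sum expands into a positive and a
# negative copy of each of the six products xyz, ..., zyx, so it vanishes.
def bracket(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
assert np.allclose(jacobi, 0)

# Anti-commutativity likewise: [x, y] + [y, x] = 0.
assert np.allclose(bracket(x, y) + bracket(y, x), 0)
```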

Lie groups and Lie algebras

• ways of breaking up an identity into two elements that are inverses of each other
• orthogonal: symmetric transposes of each other
• unitary: conjugate transposes of each other
• symplectic: antisymmetric transposes of each other
• two out of three property: at the level of forms, this can be seen by decomposing a Hermitian form into its real and imaginary parts: the real part is symmetric (orthogonal), and the imaginary part is skew-symmetric (symplectic), and these are related by the complex structure (which is the compatibility).

Today I watched video "J2 Unitary Groups" from doctorphys series on "Theoretical Physics" at YouTube. https://www.youtube.com/playlist?list=PL54DF0652B30D99A4 I appreciate that he actually calculates some small, concrete examples, which is what I try to do, too. And then he made an extra remark which made things click for me. He explained for a particular matrix that when you take the inverse, it's just like reversing the direction of the angle.

So I want to apply that insight and write up my thoughts on making sense of the classical Lie groups. I will start by describing what they are.

First, I will explain what a "group" is. In math, a group is at work/play whenever we think/talk about actions. For example, imagine a drawing in the plane. Let's establish some point in the drawing where we pin it down to the plane. Then we can rotate that picture by X degrees. Let's say, for the sake of concreteness, that X is an integer from 0 to 359. Then the rotations are "actions" in that they can be:

• added: rotations by X and by Y can be added to get a rotation by X+Y
• it's associative (you can insert parentheses as you prefer): rotating by ((X+Y)+Z) = rotating by (X + (Y+Z))
• there's an action which does nothing, leaves things be, namely the "identity" action 0
• you can undo each action. Rotation by X (say 40 degrees) can be undone by some other rotation (320 degrees = -40 degrees).

We call this a "group" of actions (elements, operators, etc.)

You can have subgroups. So if we restrict ourselves to rotations by multiples of 6 degrees (6, 12, 18...) we will have the 60 rotations we need for the minute hand of an old-fashioned, pre-digital clock. If we restrict ourselves to rotations by multiples of 30 degrees (30, 60, 90...) we will have the 12 rotations we need for the hour hand of that same clock. If we restrict to rotations by multiples of 90 degrees (0, 90, 180, 270), then we have the 4 rotations which would keep our drawing unchanged if it were a square. This is called a "symmetry group", but the others are symmetry groups, too, for the right objects. If we restrict to rotations by multiples of 72 degrees, then we have the 5 rotations that would keep a pentagon unchanged. This last subgroup is special because it doesn't have any subgroups besides the trivial ones, because 5 is a prime number. Such subgroups are valued as building blocks for more complicated groups.
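These subgroup facts can be checked in a few lines of Python (my own sketch, representing a rotation by its angle mod 360):

```python
# The five rotations that preserve a pentagon: multiples of 72 degrees.
pentagon = {(72 * k) % 360 for k in range(5)}
assert pentagon == {0, 72, 144, 216, 288}

# Closure: composing any two rotations in the set stays in the set.
assert all((a + b) % 360 in pentagon for a in pentagon for b in pentagon)

# Inverses: every rotation in the set can be undone within the set.
assert all((360 - a) % 360 in pentagon for a in pentagon)
```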

All of these groups are "commutative" because rotating by X and by Y is the same as rotating by Y and then by X. But there are groups which are not commutative. Let's take the 4 rotations (0, 90, 180, 270) of the square in the plane and let's add a reflection R that flips the square over on that plane. Then it turns out that we have a group of 8 actions and we have, for example, that 90 + R does not equal R + 90. You can imagine non-commutativity is typical when you play with a Rubik's cube (the order of the actions matters).
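The 90-degree rotation and the reflection can be written as 2x2 matrices, and then the failure of commutativity is a one-line check (my own sketch, assuming NumPy):

```python
import numpy as np

rot90 = np.array([[0, -1], [1, 0]])    # counterclockwise quarter turn
reflect = np.array([[1, 0], [0, -1]])  # flip across the horizontal axis

# "Rotate then reflect" differs from "reflect then rotate".
assert not np.array_equal(reflect @ rot90, rot90 @ reflect)
```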

All of these groups are finite. But we can also have infinite groups. Imagine if we rotated by any real number of degrees. These rotations happen to also be continuous, which is not trivial to make rigorous, but I think here it basically boils down to the fact that we can make infinitesimal, that is, itsy bitsy rotations, as small as we want. So then we have a Lie group. You can imagine that Lie groups are important in physics because we live in a world where actions can be subtle and continuous.

The "building blocks" of the Lie groups have been classified. There are four families of groups, An, Bn, Cn, Dn, where n is any natural number. These are called the classical Lie groups. There are also five exceptional Lie groups. I would like to intuitively, qualitatively understand the essence of those classical groups, so I could feel what makes them different and what they share in common. I have failed to find any such exposition.

But I think I'm getting closer. These classical Lie groups are known as the unitary (An), orthogonal (odd and even, Bn and Dn) and symplectic (Cn). These can all be thought of as groups of matrices. Each matrix can be thought of as an action, as a rule which tells you how to break up one vector into components, modify those components, and then output a new vector. These rules can be composed just like actions. In general, matrices and matrix multiplication are used to describe explicitly a group's actions and how they are composed. This is called a group representation. It's a bit like writing down the multiplication table of a group. Mathematicians study the restrictions on the types of tables possible and can determine from that the nature of the group, for example, how it breaks down into subgroups.

The numbers in these matrices can be complex numbers. Now for me the key distinction seems to be that in each group there is a special way to imagine how an action is undone. In other words, there is a special relation between a matrix and its inverse. You undo an action:

• in a unitary group, by taking the conjugate transpose of its matrix.
• in an orthogonal group, by taking the symmetric transpose of its matrix.
• in a symplectic group, by taking the anti-symmetric transpose of its matrix.

What is a transpose? An NxN matrix is a set of rules which takes a column vector (of N components) as input and outputs a new column vector (of N components). The transpose is the same set of rules, just reorganized so that the input is a row vector and the output is a row vector. That will be important if we think in terms of tensors, where the row vectors and the column vectors are the extremes of top-down thinking and bottom-up thinking in building a coordinate space. So this is what the classical Lie groups all have in common.
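For the unitary case, the remark about reversing the angle can be checked directly (my sketch, assuming NumPy):

```python
import numpy as np

# Rotation by theta: unitary, so its conjugate transpose is its inverse,
# and it coincides with the rotation by -theta.
theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(U.conj().T @ U, np.eye(2))  # undoes the action

U_back = np.array([[np.cos(-theta), -np.sin(-theta)],
                   [np.sin(-theta),  np.cos(-theta)]])
assert np.allclose(U.conj().T, U_back)  # conjugate transpose = reversed angle
```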

Where they differ is on how they modify the transpose so that the group's action is undone.

Joe, Kirby,

I'm thinking that this distinction between ratios and products comes up as the distinction between "contravariants" and "covariants". And I'm imagining that a (p,q) tensor tells you that p dimensions are to be understood in terms of "division" (contravariants, as with vectors) and q dimensions are to be understood in terms of "multiplication" (covariants, as with covectors - reflections). I'm still trying to figure it out. For example, if you have an answer A = 5 / 7 then on the one hand you are dividing, and in fact, the denominator 1/7 is the "unit", that is, the "denominated" whereas the 5 is the amount, the "numerated". If you want the fraction to stay the same then you have to multiply the top and bottom by the same. I mean to say that I don't understand but I think that tensors are relevant to this question.

The link between difference/sum and ratio/product is given by the exponential/logarithm function. In particular, the Lie group G and the Lie algebra A are related by:

{$e^{A} = G$}

So this is a key equation for relating the discrete world (Lie algebra A) and the continuous world (Lie group G). Addition/subtraction in the discrete world is matched by multiplication/division in the continuous world.

The equation above involves matrices. In general, there is the very meaningful "polar decomposition" of matrices:

{$M = PU = Pe^{iH}$}, analogous to polar coordinates for a complex number: {$z = Re^{it}$}

P is a positive semi-definite Hermitian matrix, which means that all of its eigenvalues are nonnegative real numbers, which means that the effect of the matrix P is simply to distort the lengths of vectors in various directions (which is analogous to R).

U is a unitary matrix, which means that it preserves lengths; it may be a rotation, for example, as generated by the Hermitian matrix H, whose eigenvalues are real.
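Both pieces of the decomposition can be checked numerically, assuming SciPy is available for `polar` and `expm` (a sketch of my own):

```python
import numpy as np
from scipy.linalg import expm, polar

# Left polar decomposition M = P U: P stretches, U rotates,
# analogous to z = R e^{it} for a complex number.
M = np.array([[2.0, 1.0], [0.5, 3.0]])
U, P = polar(M, side='left')  # SciPy returns (u, p) with M = p @ u

assert np.allclose(P @ U, M)
assert np.allclose(U @ U.conj().T, np.eye(2))  # U is unitary
assert np.all(np.linalg.eigvalsh(P) > 0)       # P is positive definite here

# And e^{iH} for Hermitian H is unitary, tying the algebra to the group.
H = np.array([[0.0, 0.3], [0.3, 0.0]])
V = expm(1j * H)
assert np.allclose(V @ V.conj().T, np.eye(2))
```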

Well, for Lie groups (continuous groups) to exist their actions (their elements) need to have counteractions, that is, inverses. And when those actions are described as matrices, it turns out that there can't be any radial component P. That is, the matrix can't stretch vectors bigger or smaller. Otherwise, apparently, the action would rip the group apart, it would not be continuous. All that can exist is the angular component. In other words, the volumes (bound by a set of vectors) have to be preserved. These volumes are given by the determinant, which I think detects what is "inside" the volume and what is "outside" of it. The determinant has to be nonzero (so that the volume doesn't collapse, and thus the matrix is reversible), but also it has to have absolute value 1 (so that there is no stretching bigger or smaller). We can also relate this to Cramer's rule for calculating the inverse of the matrix, where the denominator is the determinant, and thus in our case there is no denominator to speak of.

What this means is that for Lie groups there is always a "short cut" for calculating the inverse of the action. In the case of the circle group, for example, it means that an action doesn't have to be thought of as a big matrix that needs to be inverted. Instead, in that case we can think of the action as rotating by an angle, and the inverse is simply rotating back. Thus these "short cuts" are given by the adjoint matrix. For example, for unitary matrices the short cut for calculating the inverses is to take the conjugate transpose.

So now I'm trying to understand what "short cuts" are allowed. That apparently classifies the Lie groups. The way that classification is made is instead to look at the Lie algebras. Instead of looking at multiplication (in Lie groups) we look at addition (in Lie algebras). The addition is I think described by crystallographic lattices, which is where the tetrahedral vs. Euclidean geometries come up, for example. And so it is possible to calculate the limited possibilities for the geometry. So I will try to figure that out and report back.

A related way to understand this is to look at the "normal forms" preserved by the Lie groups. I suppose this means that each Lie group preserves not only the lengths (and volumes) but something more precise. There aren't many possibilities, though.

Notes

Jacobi identity - constraint on nonassociativity - arises from commutator

anti-commutativity - arises from commutator - relates duality of order of elements with a duality of positive and negative signs

Downloaded from http://www.ms.lt/sodas/Book/LieTheory
Page last changed December 9, 2018, at 21:49