This is the talk page for discussing improvements to the Tensor article. This is not a forum for general discussion of the subject of the article.
Referring to orthonormal basis as frame
I see my edit changing "from one orthonormal basis (called a frame) to another" to "from one orthonormal basis to another" in the Spinors section was reverted without comment by Tito Omburo. Is there any need to refer to frames in the physics sense in this section? "Frames" have an entirely distinct linear algebra meaning (and this is primarily a linear algebra article). You could make the exact same point in fewer words and without the ambiguity by just saying "basis", right? — Preceding unsigned comment added by 2600:4040:2255:4300:7426:36C9:973E:3FC5 (talk) 16:04, 9 January 2025 (UTC)
- I reverted the edit because the section still uses the word frame throughout, but without clarifying that in this section "frame" means "orthonormal basis". Tito Omburo (talk) 17:38, 9 January 2025 (UTC)
- Yes, sorry, I missed that at first. This is my first Wikipedia edit. Would it be alright with you if I took a stab at rewriting the spinor section so as not to mention frames at all? 2600:4040:2254:3D00:3119:AB26:2A63:C265 (talk) 03:54, 10 January 2025 (UTC)
- What would you call the "space of frames"? Tito Omburo (talk) 10:21, 10 January 2025 (UTC)
- The more I think about this, I'm not even sure it makes sense to have two paragraphs about spinors on this page. We're essentially saying "Here's something that would transform like a tensor *if* orientation didn't matter which it *actually* does." But if you do think the tensor page should explain the basic motivation for spinors, perhaps something like: "Physical coordinate systems are often expressed in terms of orthonormal bases. Changes of orthonormal bases obtained from rotations transform the same way as tensors under rotation; however, not all possible orthonormal bases can be reached by rotation (because the orthogonal group representing all possible rotations has two connected components). Spinors and spin groups use a double cover of the orthogonal group to allow transformations between bases that can't be considered orientation-preserving rotations." 2600:4040:2257:3600:8546:6D3:38D8:37BD (talk) 19:34, 10 January 2025 (UTC)
- Darnit, that parenthetical should say "all changes of basis", not "all rotations". The whole point is that it includes things a simple rotation won't get you to. 2600:4040:2257:3600:8546:6D3:38D8:37BD (talk) 19:40, 10 January 2025 (UTC)
- Except this explanation is wrong. Spinors exist not because the orthogonal group is not connected, but because the special orthogonal group is not simply connected. Tito Omburo (talk) 19:44, 10 January 2025 (UTC)
- I believe you're mistaken. The orthogonal group has two connected components, one corresponding to orientation-preserving rotations (matrices with determinant 1) and one corresponding to orientation-flipping transformations (matrices with determinant -1). The special orthogonal group throws out the orientation-flipping ones and is simply connected. A quick Google search should lead you to multiple proofs of this fact. I was also wondering if there is a reason to explain spinors on this page at all. 2600:4040:2258:5B00:6C83:5698:CE47:BB6 (talk) 20:19, 10 January 2025 (UTC)
- No, the special orthogonal group is not simply connected. That's in some sense the whole point of spinors. Tito Omburo (talk) 20:57, 10 January 2025 (UTC)
- Yes, that is my understanding too. The special orthogonal group SO(n) has non-trivial fundamental group. Namely, its fundamental group is the group of two elements. The spin group is the two-to-one cover of SO(n) that corresponds to this fundamental group, according to covering space theory. Mgnbar (talk) 21:03, 10 January 2025 (UTC)
- It is also true that the orthogonal group O(n) has two connected components, one of which is the special orthogonal group SO(n). There is a two-to-one covering map from O(n) to SO(n). (In fact there are infinitely many such maps. I'm not aware of a canonical one.) But that's not what spin is about. Mgnbar (talk) 21:03, 10 January 2025 (UTC)
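For reference, the standard group-theoretic facts invoked in this thread can be summarized as follows (n ≥ 3 is assumed throughout; the case n = 2 differs, since π₁(SO(2)) ≅ ℤ):

```latex
% O(n) has two connected components; SO(n) is the identity component:
\[
  \mathrm{O}(n) \;=\; \mathrm{SO}(n) \,\sqcup\, \{A \in \mathrm{O}(n) : \det A = -1\}
\]
% SO(n) is connected but NOT simply connected for n >= 3:
\[
  \pi_1\bigl(\mathrm{SO}(n)\bigr) \;\cong\; \mathbb{Z}/2\mathbb{Z}
\]
% Spin(n) is the connected double cover corresponding to that fundamental group:
\[
  1 \longrightarrow \mathbb{Z}/2\mathbb{Z} \longrightarrow \mathrm{Spin}(n)
    \longrightarrow \mathrm{SO}(n) \longrightarrow 1
\]
```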
- @Tito Omburo and Mgnbar: This really belongs in Tensor fields. A basis pertains to a vector space, while a frame pertains to a coordinate patch in a differentiable manifold. The physics literature often refers to spinor fields as spinors, tensor fields as tensors and vector fields as vectors, but this article is supposed to be about the algebraic entities, not the corresponding fiber bundles on manifolds. -- Shmuel (Seymour J.) Metz Username:Chatul (talk)
- Well, I've seen "frame" refer either to a basis of the tangent space at a point or a section of the frame bundle, in both mathematical and physical contexts. In particular, Cartan famously studied "moving frames", which wouldn't be a sensible combination of words unless a frame was fixed and not a field. The main thing is consistency in usage in the article, and if, instead of saying "orthonormal basis", we can get away with using the simpler term "frame", which also borrows from the physical idea "frame of reference" that we are trying to make accessible, all the better. (Incidentally, I would use the term "frame field" to refer to the field case. See, e.g. Michor Topics in differential geometry. ) Tito Omburo (talk) 12:00, 14 July 2025 (UTC)
First image and caption don't match
The image at the top of the article and its caption are mismatched in several fairly confusing ways. First, the unit vectors e are not shown in the picture. Second, the output vector T is not shown in the picture. The reader is left trying to guess which e goes where, and why there are three output vectors when it says a second-order tensor has only one. Also, the σ coefficients aren't mentioned in the caption at all. 2600:8800:1180:25:AD40:E4C1:631E:5E07 (talk) 21:58, 25 April 2024 (UTC)
- Surely the e_i are represented by the locations at which T is evaluated, and the outputs are shown in a bluish color? --JBL (talk) 23:26, 25 April 2024 (UTC)
- The anonymous poster has a point. One can guess where the e_i are, but one should not have to. More importantly, T^(e_i) is prominent in the figure and missing from the caption. In fact, is it true that T^(e_i) = T(e_i)? Or is the figure reserving T^(e_i) for the traction vectors that the stress tensor produces? If that's the case, then is the caption wrong? It's all a bit of a mess. Mgnbar (talk) 00:22, 26 April 2024 (UTC)
- It is helpful to compare with an earlier revision of the caption, although this doesn't fully resolve the difficulty. Tito Omburo (talk) 13:14, 26 April 2024 (UTC)
- It doesn't help that everything is a subscript, preventing use of the Einstein summation convention. I would guess that σ_ij = e_i ⋅ T(e_j), the projection of T(e_j) on e_i. Things would have been clearer had the caption spelled things out, used T(e_j) and not introduced an apparently extraneous additional name. I agree with Tito Omburo that the earlier version is clearer, although I find the nomenclature awkward. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:33, 26 April 2024 (UTC) -- Revised 19:04, 26 April 2024 (UTC)
- The current caption does describe what/where the e_i are (they are the normals to the faces of the cube). I agree about the sigmas (I mean, I know what must have been intended, but it's not good that it's entirely implicit). --JBL (talk) 18:16, 26 April 2024 (UTC)
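For reference, the relationship the caption is presumably trying to convey, in the usual Cauchy stress-tensor notation (the placement of the indices i and j here is an assumption, since conventions vary between sources and the figure's own convention is exactly what is in question):

```latex
% The stress tensor T sends a unit normal e_j to the traction vector T(e_j);
% the components sigma_ij are the coordinates of T(e_j) in the basis e_1, e_2, e_3:
\[
  \mathbf{T}(\mathbf{e}_j) \;=\; \sum_{i=1}^{3} \sigma_{ij}\,\mathbf{e}_i,
  \qquad
  \sigma_{ij} \;=\; \mathbf{e}_i \cdot \mathbf{T}(\mathbf{e}_j).
\]
```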
Machine learning still unclear
@Tito Omburo: A recent edit, permalink/1300282879, removed a {{clarify|text=the dimension of the spaces along the different axes}} template without actually clarifying the text in question. An axis is indexed by a scalar, e.g., real number, complex number. Changing axis to slot without defining slots leaves the dimension of the spaces belonging to the different slots just as murky as before. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 11:57, 15 July 2025 (UTC)
- I never understood the clarification that was offered. Can we please discuss it relative to the mathematical meaning of tensor?
- Suppose that we have an n-dimensional vector space V, and we choose a basis for it. Then a linear transformation from V to itself is an element of V ⊗ V*, and it can be written as an n × n matrix M. The entries of M are denoted M_ij (or maybe one of the indices is superscripted).
- Are the indices i and j indexing the "axes"/"slots", one of which is (for V) and one of which is (for V*)? Is that the meaning of "axis"/"slot"?
- If so, then an axis is indexed by an integer between 1 and n — not by a general real or complex number. Please help me understand. Mgnbar (talk) 12:53, 15 July 2025 (UTC)
- Actually in machine learning, a tensor is just a multidimensional array; the axes are the different dimensions of the tensor (just like a rectangular matrix has horizontal and vertical axes). Thus a tensor belongs to a tensor product M_1 ⊗ ⋯ ⊗ M_k, where each factor is a module over a "ring" (of floating point numbers, usually), each having a fixed basis. The axes refer to the factors M_i, and they generally have different dimensions. Tito Omburo (talk) 13:01, 15 July 2025 (UTC)
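As a concrete illustration of the machine-learning usage described just above, a minimal NumPy sketch (the shape (2, 3, 4) is arbitrary, chosen only to show that the axes can have different dimensions):

```python
import numpy as np

# In machine-learning libraries a "tensor" is just a multidimensional array.
# This one has three axes, of (deliberately different) dimensions 2, 3 and 4.
t = np.arange(24, dtype=np.float64).reshape(2, 3, 4)

print(t.ndim)      # 3         -- the number of axes
print(t.shape)     # (2, 3, 4) -- the dimension along each axis
print(t[1, 2, 3])  # 23.0      -- one index per axis picks out a single entry
```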
- I think that your description agrees with mine (where they overlap). But then I don't understand how Chatul says that the axes are indexed by real or complex numbers. Mgnbar (talk) 14:38, 15 July 2025 (UTC)
- I misread the text. Not enough coffee. :-(
- BTW, are Tito Omburo's scare quotes around ring because FP arithmetic violates the distributive and associative properties? -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:23, 15 July 2025 (UTC)
- Indeed! Tito Omburo (talk) 16:58, 15 July 2025 (UTC)
- Thanks for clarifying, both of you. Happy travels. Mgnbar (talk) 17:10, 15 July 2025 (UTC)
- Thanks everyone, from me as well. Always good to make progress! Tito Omburo (talk) 19:11, 15 July 2025 (UTC)
New figure and total order
The new figure added today really leans into the notion that there is a total "order" to tensors — as in, a (p, q)-tensor is a tensor of total order p+q. So, for example, there is no distinction between (1, 1)-tensors (linear transformations) and (0, 2)-tensors (bilinear pairings). Is this what we want? Mgnbar (talk) 00:16, 7 October 2025 (UTC)
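For readers following along, the distinction being asked about can be written out explicitly (here V is a finite-dimensional real vector space and T^p_q(V) denotes the space of (p, q)-tensors on V):

```latex
% Both types have total order p + q = 2, but they live in different spaces:
\[
  T^1_1(V) \;=\; V \otimes V^* \;\cong\; \operatorname{Hom}(V, V)
  \qquad\text{(linear transformations)},
\]
\[
  T^0_2(V) \;=\; V^* \otimes V^* \;\cong\; \{\text{bilinear maps } V \times V \to \mathbb{R}\}
  \qquad\text{(bilinear pairings)}.
\]
```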
to be confused with
This article starts with a confusing statement:
- This article is about tensors on a single vector space and is not to be confused with Vector field or Tensor field.
We've come here to learn about "tensor", but the first thing we read is an admonition not to be so stupid. Do not be confused about these other things you don't understand, but rest assured, because the thing you came to read about is "on" another thing you don't understand and maybe that is a good thing or maybe not??
In my opinion this line should be removed. If the goal is to help experts jump onward to the fields, then
fills the bill without the confusion.
Personally I'm in between: I know about tensors and fields generally but the difference between a "tensor on a single vector space" and a "tensor field" is opaque to me. The only sentence in the article about single vector space is the unsourced:
- However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above.
This means nothing to me, which is fine but the line I am complaining about assumes that every reader will understand "on a single vector space". Johnjbarton (talk) 17:33, 17 December 2025 (UTC)
- I don't have a strong opinion about your proposal, but here is some background.
- In some subjects, such as physics, "tensor" often means "tensor field" (I guess because almost every quantity is a field anyway). So there is a danger that readers coming from the physics literature will arrive here looking for tensor fields and not finding them.
- If you start with a tensor field on a space X, and you evaluate it at a single point x of X, then you get a tensor on the tangent space to X at x. This is the relationship between a "tensor on a single vector space" and a "tensor field".
- Sorry if I'm telling you things you already know. :) Mgnbar (talk) 17:53, 17 December 2025 (UTC)
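A minimal sketch of that relationship, with all names invented here purely for illustration: a (0, 2)-tensor field on R² assigns a 2 × 2 matrix to every point, and evaluating it at one point yields a single tensor of the kind this article discusses.

```python
import numpy as np

def metric_field(x, y):
    """A toy (0, 2)-tensor field on R^2: one symmetric 2x2 matrix per point."""
    return np.array([[1.0 + x**2, x * y],
                     [x * y,      1.0 + y**2]])

# Evaluating the field at the single point (x, y) = (0.5, -1.0) gives one tensor
# on a single (tangent) vector space -- the kind of object this article is about.
g_at_point = metric_field(0.5, -1.0)
print(g_at_point)
```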
- Hatnotes like this are incredibly common when there are topics with easily confusable names: WP:Hatnote. I don't see anything problematic with the current one. --JBL (talk) 18:28, 17 December 2025 (UTC)
- I think mentioning tensor field in the Hatnote section is a good idea, which addresses the issue of tensor being used as shorthand for tensor field (though I've not seen that myself). My point is that the current hatnote fails in this mission by saying too much. It attempts to jam a distinction into one sentence, which fails to help exactly those readers who might benefit from the hatnote. Such readers as myself, who are unclear on the mathematical details of similar ideas of "space" and "field", as in "vector space" and "tensor field", are more confused than helped.
- Or maybe it does succeed. My first reaction was to stop reading: I guess this article isn't for people like me. Johnjbarton (talk) 19:15, 17 December 2025 (UTC)
- It seems that you find the current wording very off-putting. :) Perhaps we can tweak it to sound more neutral: "This article is about tensors on a single vector space. For continuously varying families of tensors, see Vector field or Tensor field." Any better? Mgnbar (talk) 20:54, 17 December 2025 (UTC)
- Yes, a bit better. Perhaps you can indulge me: what would a "scalar on a single vector space" mean? My brain can only come up with scalar field. Why would anyone be interested in a non-varying family of tensors? (A non-varying family of scalars sounds like a constant to me.) As far as I can tell neither the vector space nor non-varying aspects of tensors are covered in the article, despite this claim at the outset. Should we assume that all readers understand "on a single vector space"? If I understood a bit more maybe I could offer another suggestion. Johnjbarton (talk) 21:41, 17 December 2025 (UTC)
- Yes, I am happy to try to explain. A "scalar on a single vector space" is a scalar. Yes, a tensor on a single vector space is constant, in that it is not varying with respect to anything else. In the USA, at least, it is common for undergraduate math students to take at least one, and often two, courses on this concept alone. Those courses are called "linear algebra". Any better? Mgnbar (talk) 21:50, 17 December 2025 (UTC)
- OK great. So why don't we just say:
- For continuously varying families of tensors, see Vector field or Tensor field.
- The first part of your version, "This article is about tensors on a single vector space." says one thing we know (article about tensors) and one thing foreign to me and not even discussed in the article (on a single vector space). For me personally something like
- This article is about algebraic objects that generalize the concepts of scalars and vectors. For continuously varying families of tensors, see Vector field or Tensor field.
- would be even better. I think this achieves the goal of the hatnote using words more likely to be understood by readers who understand linear algebra and wishing to know about tensors. Johnjbarton (talk) 22:47, 17 December 2025 (UTC)
- I'm okay with either one. JBL, do you have an opinion? Mgnbar (talk) 02:14, 18 December 2025 (UTC)
- I think your rephrase seems fine (or there are probably other alternative ways of saying this). The current language is kind of confusing and awkward. –jacobolus (t) 02:24, 18 December 2025 (UTC)
- Thanks to everyone. I put an alternative hatnote to try. Johnjbarton (talk) 04:22, 18 December 2025 (UTC)
- The new version seems fine. --JBL (talk) 21:01, 18 December 2025 (UTC)