Talk:Matrix multiplication


This article is just plain wrong

This article is based on the false assertion that a matrix is an "array of numbers". In its simplest case, a matrix IS a rectangular array of numbers, characterized by the number of its rows and columns, each of which is a positive integer. A matrix is always rectangular, but it is NOT always an array of numbers. It can be an array of functions, sets, vectors, tensors, and most other mathematical objects, including matrices. Matrix multiplication is generally (as far as I know) defined as the direct generalization of the binary operation on numerical matrices, so it makes sense to explain it in those terms, but to claim it is only about determining a numerical product array is just plain wrong. 71.28.55.221 (talk) 18:55, 24 October 2016 (UTC)

The article says here that the entries do not have to be numbers; they could be functions or matrices, as you point out. The definition of matrix multiplication as given in this article is correct. If you are talking about the lead,
"In mathematics, matrix multiplication is a binary operation that takes a pair of matrices, and produces another matrix. Numbers such as the real or complex numbers can be multiplied according to elementary arithmetic. On the other hand, matrices are arrays of numbers, so there is no unique way to define "the" multiplication of matrices."
...well... this is just a way to start from something the reader will most likely know before reading this article (multiplication of ordinary real numbers, possibly complex numbers). The rule for matrix multiplication does not change if the entries are functions or anything else. So what is your point? What does
"Matrix multiplication is generally (as far as I know) defined as the direct generalization of the binary operation on numerical matrices"
mean? MŜc2ħεИτlk 19:47, 24 October 2016 (UTC)
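As a small aside illustrating that point, here is a minimal sketch (SymPy is assumed here purely for illustration; it is not being proposed for the article) showing that the same row-by-column rule applies unchanged when the entries are polynomials rather than plain numbers:

```python
# The entries below are polynomials, not numbers, yet the product
# is still computed entrywise as sum_k A[i,k]*B[k,j].
from sympy import symbols, Matrix

x = symbols('x')
A = Matrix([[x, 1],
            [0, x]])
B = Matrix([[x,     0],
            [x + 1, 1]])

# Top-left entry: x*x + 1*(x + 1) = x**2 + x + 1, exactly the usual rule.
print(A * B)
```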

I think that 71.* has a point and that the lead paragraph of the article is terrible. Actually it would make more sense (but not enough sense) if "matrices are arrays of numbers" was replaced by "matrices are not necessarily arrays of numbers". McKay (talk) 02:26, 25 October 2016 (UTC)

It is certainly true that the entries of matrices are not always numbers. I made a try at this; see what you think. LouScheffer (talk) 12:22, 25 October 2016 (UTC)
Looks better, but if that was the only point, then it's a bit overkill to claim the entire lead paragraph or even the entire article is "plain wrong"... MŜc2ħεИτlk 17:34, 25 October 2016 (UTC)
But in this edit, what is "each of which which makes sense in different contexts" supposed to mean? There are different definitions for multiplying matrices. It is unclear how they each "make sense in different contexts". One matrix multiplication may be useful for one purpose, another for another purpose, etc., so it would seem their usefulness, not "sense", is clearer. MŜc2ħεИτlk 17:41, 25 October 2016 (UTC)
I would favor emphasizing the standard product first and only mentioning other types of matrix product later (say, in the second paragraph). That would correspond to how the body of the article is organized. Also, even though linear transformations are mentioned in the third paragraph, it is not mentioned that when two linear transformations are represented by matrices, the composition of the transformations is represented by the product of the matrices. This is surely the whole point of the standard matrix product. McKay (talk) 03:10, 26 October 2016 (UTC)
Makes sense to put the most common definition first, and explain why it is important. I tried this, feel free to change/comment. LouScheffer (talk) 03:44, 26 October 2016 (UTC)
It's better. Now, what about the very first sentence: "In mathematics, matrix multiplication refers to any operation associated with the multiplication of matrix elements."? I don't think it makes sense. (1) It doesn't say that there are two matrices involved. (2) It isn't true that any such operation is called "matrix multiplication"; only some of them are. Also, the parts of the third sentence of the first paragraph which refer to matrix size seem to just confuse the issue. I'm trying an entirely new paragraph; what do you think? McKay (talk) 01:04, 28 October 2016 (UTC)
Lead is looking better, good work from both of you. MŜc2ħεИτlk 08:18, 29 October 2016 (UTC)

Scope?

If this article is to be about binary operations on two matrices to form a third, then scalar multiplication of matrices by a number should be excluded. Shall we transfer that section of this article to the scalar multiplication article? The "see also" section of this article can link there. It would prevent straining the lead between scalar multiplication and binary operations.

It could also make sense to have this article concentrate on the usual definition of the matrix product, with links to the other products (Hadamard product (matrices), Kronecker product etc.), and the Frobenius product split off to its own article (Frobenius product (matrices)) to be expanded later. MŜc2ħεИτlk 08:18, 29 October 2016 (UTC)

I totally agree with restricting this article to the product of two matrices to form a third. The fact that it is not clear, when one refers to "multiplication of a matrix/matrices", which multiplication operation is being referred to does not mean that a single article should try to cover each of the myriad possibilities, as per WP:NAD. —Quondum 15:32, 2 November 2016 (UTC)
Moved the scalar multiplication content over. It remains to decide if other matrix multiplications stay in this article or are removed altogether. MŜc2ħεИτlk 09:55, 3 November 2016 (UTC)
IMO the "other" matrix multiplications should only be mentioned in the "See also" section. The Frobenius inner product does not seem to have an article of its own yet, but the material should ideally not be lost. The last two paragraphs of the lead do not belong there (strained, as you say, and conventions for an article should be explicated in the body, not the lead). However, there seems to be a tendency to duplicate material in many articles (which I dislike if there is a clear place it "belongs"), so there may be resistance. —Quondum 15:49, 3 November 2016 (UTC)

The article is now about the usual matrix product. MŜc2ħεИτlk 10:28, 6 November 2016 (UTC)

Which is as it should be. Nice to see articles being shaped into what they should be. —Quondum 15:32, 6 November 2016 (UTC)

Needs early section on motivation

As someone mentioned in an earlier post, this article needs an early example (probably in a new section right after the lead, or in a new paragraph at the end of the lead) of why one would want to multiply matrices, in order to provide some context. Solving Ax=b occurs to me, but that requires using an inverse matrix, which would be too much for someone who is just beginning with matrices. Is there a simpler way to motivate matrix multiplication? Loraof (talk) 23:32, 22 December 2016 (UTC)

There is an applications section way down deep in the article, but it uses terminology that will be beyond the understanding of a beginner. It's good to have that section as is and where it is, but a more elementary example needs to be near the beginning. Loraof (talk) 23:39, 22 December 2016 (UTC)

Are you talking about Matrix multiplication#Examples of matrix products? If so, adding a motivation section, when the lead already says linear transformations are one reason for the definition, will create duplication.
What could be done is a heuristic construction: start from the products of a row and column, then a square matrix and column, etc., at each stage explaining the purposes or examples (which the section already does). It would amount to putting Matrix multiplication#Examples of matrix products before the general definition. MŜc2ħεИτlk 10:36, 23 December 2016 (UTC)
No, Matrix multiplication#Examples of matrix products just shows how to multiply matrices together. It does not say why we might want to do so. And while both that section and the lead mention the use in linear transformations, they don't say why we would want to do that. Loraof (talk) 14:47, 23 December 2016 (UTC)
Except the section does say why we want to do so. Please read what is under each example. MŜc2ħεИτlk 10:02, 24 December 2016 (UTC)

Example

In the Dutch Wikipedia I found this example. I translated it, but it needs the hand of a native speaker.

Example

A company sells cement, chalk and plaster in bags weighing 25, 10, and 5 kg respectively. Four construction firms, Arson, Build, Construct and Demolish, regularly buy these products from this company. The number of bags the clients buy in a specific year may be arranged in a 4×3 matrix A, with columns for the products and rows representing the clients:

We see, for instance, that a_{32} = 12, indicating that client Construct has bought 12 bags of chalk that year.

A bag of cement costs GBP 12, a bag of chalk GBP 9 and a bag of plaster GBP 8. The 3×2 matrix B shows the prices (first column, in GBP) and weights (second column, in kg per bag) of the three products:

B = \begin{pmatrix} 12 & 25 \\ 9 & 10 \\ 8 & 5 \end{pmatrix}

To find the total amount firm Arson has spent that year, we calculate:

a_{11} b_{11} + a_{12} b_{21} + a_{13} b_{31} = (AB)_{11},

in which we recognize the first row of the matrix A (Arson) and the first column of the matrix B (prices).

The total weight of the products bought by Arson is calculated in a similar manner:

a_{11} b_{12} + a_{12} b_{22} + a_{13} b_{32} = (AB)_{12},

in which we now recognize the first row of the matrix A (Arson) and the second column of the matrix B (weights).

We can make similar calculations for the other clients. Together, these entries form the matrix AB, the matrix product of the matrices A and B.

Nijdam (talk) 11:55, 24 December 2016 (UTC)
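To make the structure of the example concrete, here is a minimal sketch in Python. Only the matrix B (prices and weights) is taken from the text above; the purchase counts in A are invented placeholders, apart from Construct's 12 bags of chalk, since the Dutch article's actual figures are not reproduced here:

```python
# Rows of A: Arson, Build, Construct, Demolish; columns: cement, chalk, plaster.
# All values except the 12 bags of chalk for Construct are made up.
A = [[10,  5,  2],
     [ 4,  8,  6],
     [ 7, 12,  0],
     [ 3,  1,  9]]

# Rows of B: cement, chalk, plaster; columns: price (GBP), weight (kg per bag).
B = [[12, 25],
     [ 9, 10],
     [ 8,  5]]

# (AB)[i][j] = sum_k A[i][k] * B[k][j]: row i of A times column j of B.
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
      for i in range(4)]

print(AB[0][0])  # total amount Arson spent that year (GBP)
print(AB[0][1])  # total weight of Arson's purchases (kg)
```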

Years ago there was probably an example like this. I prefer the generalized examples in Matrix multiplication#Examples of matrix products, but if anyone wants a real-life example then it could be added. MŜc2ħεИτlk 10:03, 24 December 2016 (UTC)

Indeed, it was me who proposed it. Nijdam (talk) 11:55, 24 December 2016 (UTC)

I don't often edit Wikipedia articles, so it's your call whether you want to insert this figure

Hold your left and right hands as shown. This will give you muscle memory on how to multiply matrices. A related rule is that the "inner" subscripts are summed, while the outer ones are not: for 3×3 matrices, if a·b = c, then
a_{m1} b_{1n} + a_{m2} b_{2n} + a_{m3} b_{3n} = c_{mn}

Use it if you think it makes the article better. --Guy vandegrift (talk) 18:42, 8 February 2017 (UTC)
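A minimal sketch of the same index rule in plain Python (no libraries), just to spell out which subscript gets summed:

```python
# c[m][n] = a[m][0]*b[0][n] + a[m][1]*b[1][n] + a[m][2]*b[2][n]:
# the "inner" index is summed away, the outer indices m and n survive.
def matmul3(a, b):
    return [[sum(a[m][k] * b[k][n] for k in range(3)) for n in range(3)]
            for m in range(3)]

a = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
identity = [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]]
print(matmul3(a, identity))  # multiplying by the identity returns a unchanged
```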

Note also that the muscle memory of your left and right hands reminds you that the left and right indices refer to the first and second index respectively: rows and columns (from left to right). Advice offered by someone who might have a marginal case of dyslexia. --Guy vandegrift (talk) 16:12, 10 February 2017 (UTC)

Pedantic question about the associative property

In the article, the product of a row vector with a column vector is used as an example of a matrix product (also later, in the section The inner and outer products). Is that correct? Don't we get a problem with the associative property?

Say A is a 1×3 matrix, B is a 3×1 matrix and C is a 5×5 matrix. In that case (AB)C is defined, but A(BC) isn't. Am I correct to say the associative property doesn't hold when the matrices are actually vectors? 79.179.82.237 (talk) 16:02, 23 August 2017 (UTC)

The confusion arises because, strictly speaking, AB is a 1×1 matrix, not a scalar, so (AB)C is strictly not defined. However, it is common to identify vectors of n components with 1×n matrices and then ignore the difference between the 1×1 matrix whose single entry is the inner product of the vectors, and the inner product itself, which is a scalar and not a matrix. McKay (talk) 05:40, 25 August 2017 (UTC)
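A small sketch of the shape bookkeeping behind this (NumPy is used here only for illustration): treated strictly as matrix products, both groupings fail, and the apparent failure of associativity only appears once the 1×1 result is silently read as a scalar:

```python
import numpy as np

A = np.ones((1, 3))   # 1x3
B = np.ones((3, 1))   # 3x1
C = np.ones((5, 5))   # 5x5

AB = A @ B            # a 1x1 matrix, not a scalar
print(AB.shape)       # (1, 1)

# Strictly as matrix products, both (AB)C and A(BC) are undefined,
# so associativity is not actually violated.
for left, right, name in [(AB, C, "(AB)C"), (B, C, "BC")]:
    try:
        left @ right
    except ValueError:
        print(name, "is not a defined matrix product")

# The asymmetry appears only if the 1x1 matrix AB is read as a scalar:
print(AB[0, 0] * C)   # scalar times matrix, which is always defined
```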
Thank you for your answer, with which I agree completely.
Do you think we should add some remark/note, or am I just being fussy here? 79.179.82.237 (talk) 14:08, 26 August 2017 (UTC) — Preceding unsigned comment added by 84.229.130.80 (talk)

Incomprehensible Illustrations

Not only are there two illustrations of how to do matrix multiplication, but the first one is a mess of arrows (and the image caption is no better); the second illustration is slightly better, but I had no idea that the two circles represent the dot products. If you do a Google image search, you'll find far better illustrations, such as this one:

https://i1.wp.com/www.javatechblog.com/wp-content/uploads/2016/06/matrix-multiplication.jpg?resize=583%2C565

or that one:

https://www.mathsisfun.com/algebra/images/matrix-multiply-a.svg

The excessive use of symbols (summations over indices) is not helping either. Probably, the whole definition can be condensed to one picture and maybe a short reminder of the dot product. — Preceding unsigned comment added by 185.81.138.29 (talk) 17:23, 6 December 2017 (UTC)

I agree that the first figure is confusing. If one follows the arrows, one would believe that the sequence of operations is different from the intended one. I'll remove this figure. D.Lazard (talk) 14:58, 20 February 2018 (UTC)

Math rating: Class = B ?

I have rewritten the whole article. For this, I have made some choices, generally explained in edit summaries. This rewriting is almost finished. However, a subsection Matrix multiplication § Related complexities of Matrix multiplication § Complexity is still needed, and I'll probably write it soon. In fact, it is now well known that the complexities of most problems of linear algebra are better expressed in terms of the complexity of matrix multiplication, or in terms of the exponent of this complexity. It is thus useful to include that in WP, and presently this article seems the best place.

IMHO, the rating of this article should be upgraded to "class = B". Being the author of the last major revision, I cannot upgrade it myself. So your opinion is welcome, and if there is a consensus for that, please upgrade the rating. D.Lazard (talk) 18:14, 27 February 2018 (UTC)

Done. Thanks for your work on the article! Jakob.scholbach (talk) 20:39, 27 February 2018 (UTC)

Non-ring entries

"In mathematics, matrix multiplication or matrix product is a binary operation that produces a matrix from two matrices with entries in a field, or, more generally, in a ring."

What about other kinds of entries?

https://en.wikipedia.org/wiki/Logical_matrix

"Every logical matrix in U corresponds to a binary relation. These listed operations on U, and ordering, correspond to a calculus of relations, where the matrix multiplication represents composition of relations."

79.177.28.86 (talk) 06:16, 25 June 2018 (UTC)

This should definitely be expanded at least to semirings. Those are useful and standard, e.g. for an algebraic view of graph shortest-path problems (for which one wants the tropical semiring). But it's difficult to see how to go much more general than that and still have a useful notion of matrix multiplication. If it's just an array of things without a multiplication operation, it's not really a matrix. —David Eppstein (talk) 06:53, 25 June 2018 (UTC)
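To make the semiring remark concrete, here is a minimal sketch in plain Python of the matrix product over the tropical (min, +) semiring, where entry (i, j) of the product is the minimum over k of A[i][k] + B[k][j]; iterating it on a graph's weight matrix yields shortest-path lengths:

```python
import math

INF = math.inf

def tropical_matmul(A, B):
    """(min, +) matrix product: entry (i, j) is min_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Weighted adjacency matrix of a small directed graph
# (INF means "no edge", 0 on the diagonal).
W = [[0,   3,   INF],
     [INF, 0,   1],
     [7,   INF, 0]]

# Repeated tropical squaring; for this 3-node graph it converges quickly
# to the all-pairs shortest-path distances.
D = W
for _ in range(2):
    D = tropical_matmul(D, D)
print(D)   # for instance D[0][2] == 4, via the path 0 -> 1 -> 2
```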
From the article Matrix:
"The entries need not be quadratic matrices, and thus need not be members of any ordinary ring; but their sizes must fulfil certain compatibility conditions."
(This is itself wrong: a fact is mentioned in a more general article but not in a more specific one.)
This kind of structure for the entries of matrices to be multiplied is not limited to submatrices.
What is this kind of structure? It is an analog of a ring, with the ring's multiplicative monoid replaced by a category.
79.177.28.86 (talk) 01:27, 26 June 2018 (UTC)

German article and Manual of Style

Can I suggest people look at the German version (<translate?hl=en&sl=auto&tl=en&u=https://de.wikipedia.org/wiki/Matrizenmultiplikation via Google translate) of this article and compare it to the English one?

I think the German version is a lot easier for someone unfamiliar with the topic to follow. I can see several reasons.

Firstly, the German version is more inline with our MOS:MATH, which states:

"A general approach to writing an article is to start simple and then move towards more abstract and technical subjects later on in the article."
"The lead should, as much as possible, be accessible to a general reader, so specialized terminology and symbols should be avoided."
"Most mathematical ideas are capable of some form of generalization. If appropriate, such material can be put under a "Generalizations" section. As an example, multiplication of the rational numbers can be generalized to other fields."

This is not what has happened in this article, and I can see the history of why that is in some of the threads above: people coming to this talk page to insist that some generalisation or other be mentioned in the first sentence. We end up with a first sentence that says

"In mathematics, matrix multiplication or matrix product is a binary operation that produces a matrix from two matrices with entries in a field, or, more generally, in a ring or even a semiring."

This is a really bad approach because it makes people think that they need to understand terms like field and semiring to understand matrix multiplication.

Relatedly, the German article talks about the sorts of things that are useful for a beginner to know, like "In order to be able to multiply two matrices, the number of columns of the first matrix must match the number of rows of the second matrix." It says this earlier and without the need to say "if A is an n × m matrix" etc.

Finally, the German article

  • makes much better use of images, including having one at the top of the article
  • devotes less space in the lead to computational complexity
  • briefly mentions the history of matrix multiplication (something else recommended in MOS:MATH).

I would be happy to make some changes in line with the above, but I thought I should discuss it here first, given the previous conversations where people argue to include further generalisations in the lead.

Yaris678 (talk) 15:45, 13 December 2019 (UTC)

(following the note left at WT:WPM) Everything you say sounds very reasonable. The inclusion of jargon like "semiring" in the first sentence is particularly indefensible. --JBL (talk) 22:24, 15 December 2019 (UTC)
Thanks JBL. I have now made a change addressing the above. There are probably a few other pointers we could take from the German article, like the use of further illustrations of the same format elsewhere in the article, but that will require further thought. Yaris678 (talk) 14:43, 18 December 2019 (UTC)
I agree with removing semirings and such from the lead, especially from the very first sentence. Another thing that should be improved is the fact that the lead in no way summarizes the article (as it should). Jakob.scholbach (talk) 08:36, 19 December 2019 (UTC)

Should the text in the "Illustration" subsection of "Definition" actually be a caption of the illustration?

This section has been separated off from the section above

I didn't look at all the changes in detail, but the change to the example diagram was probably a bit much. What was there before may not be ideal, but I think trying to cram all of that into a caption will make things worse. This appears to be an important enough example to have in the body of the article to help explain the process, and the diagram is just there as an extra visual aid to refer to. –Deacon Vorbis (carbon • videos) 15:18, 18 December 2019 (UTC)
I hope you don't mind me making this a separate section. The issue here isn't really about the German article or our MOS.
I can see arguments both ways on this. It is basically an aesthetic question, rather than one of right and wrong. But I think that making the text a caption makes it look neater and makes it clearer to the reader what the text is referring to.
I am happy to leave this aspect of the article as it is for now, but if any other editors have a view, I would appreciate reading it.
Yaris678 (talk) 15:38, 18 December 2019 (UTC)

Illustration of complexity

I'd like to thank Jochen Burghardt for this change to the illustration on known computational complexity vs time. Beyond the reason he cites for the change (difficulty of updating the older image), the new one fixes a big problem with the older image: it showed the complexity as changing continuously and piecewise-linearly over time, but actually the change was discontinuous and piecewise-constant. —David Eppstein (talk) 21:36, 23 February 2020 (UTC)

Product of matrix with vector

The article mentions the product of a matrix with a column vector, which is just a special case of the matrix product. It should also mention the product of a matrix with a vector, which is quite similar, yet formally different. Otherwise there is a formal problem with the discussion of eigenvectors. Madyno (talk) 21:43, 3 September 2020 (UTC)

The product of a matrix with a vector cannot be defined if the coordinates of the vector are not defined. You are thinking of the application of a linear map to a vector, which is represented, when coordinates are defined, by the product of the matrix of the linear map with the column vector of the coordinates of the vector. D.Lazard (talk) 08:21, 4 September 2020 (UTC)

Right, with "vector" I meant an element of ℝ^n. Madyno (talk) 20:56, 5 September 2020 (UTC)
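A short sketch of the formal distinction being discussed (NumPy is used here only for illustration): a length-n array plays the role of an element of ℝ^n, while an n×1 array is a column matrix, and the two products come out with different shapes:

```python
import numpy as np

A = np.array([[2, 0],
              [0, 3]])

v = np.array([1, 1])       # a "vector": element of R^2, shape (2,)
col = v.reshape(2, 1)      # the corresponding 2x1 column matrix

print(A @ v)               # [2 3]       -> again shape (2,), a vector
print(A @ col)             # [[2], [3]]  -> shape (2, 1), a 2x1 matrix
```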

Practical example

@DeepKling, MrOllie, and D.Lazard: I like the example added by DeepKling. In fact, the article currently fails to give an idea of the purpose of matrix multiplication, and the example could fix this. I suggest keeping it and adding the {{citation needed}} tag for now. In case it is in fact WP:OR, possibly a similar example can be found in some introductory textbook, and then added with that source. Also, the presentation may be improved, possibly using an SVG picture (to be created). - Jochen Burghardt (talk) 21:28, 26 January 2022 (UTC)

We're not on a deadline here. We can afford to wait for an example that actually has a source. MrOllie (talk) 21:30, 26 January 2022 (UTC)
@Jochen Burghardt, MrOllie, and D.Lazard: Mmh, I think the problem is that I do not know of any textbook that provides an actual practical example; they just state how the calculation is done, and I could not find an intuitive explanation on the internet either. I always hate explanations where you think you would never come up with that idea yourself, creating an artificial complexity. However, I'm a little surprised that a simple example, using the exact calculations that were introduced in the paragraph before and obviously producing the right result, needs a source according to you. This is a little frustrating. I guess I either need to publish the example myself in a textbook or wait for someone else to bother writing such an example down ;-) . If I should create an SVG, I can do so (I have published several graphics on Wikipedia), but if any trivial example needs a source, I guess this would be pointless.--DeepKling (talk) 05:44, 28 January 2022 (UTC)

@DeepKling, MrOllie, and D.Lazard: I found an example in a 1996 German textbook;[1] here is an ad-hoc translation:

A factory uses basic commodities (de:Rohstoffe) to produce intermediate goods, which in turn are used to produce final products. The matrices

  and  

provide the amount of basic commodities needed for a given amount of intermediate goods, and the amount of intermediate goods needed for a given amount of final products, respectively. For example, to produce one unit of intermediate good , two units of basic commodity , one unit of , and one unit of are needed, corresponding to the second row of .

Using matrix multiplication, compute

;

this matrix directly provides the amounts of basic commodities needed for given amounts of final goods. For example, to produce 100 units of the final product , 80 units of , and 60 units of , the necessary amounts of basic goods can be computed as

,

that is, units of , units of , units of , units of are needed. Similarly, the product matrix can be used to compute the needed amounts of basic goods for other final-good amount data.

Maybe this example could be used in the article. - Jochen Burghardt (talk) 08:46, 29 January 2022 (UTC)
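Since the textbook's actual numbers are not reproduced above, here is a sketch of the same two-stage structure with invented placeholder figures (the row/column conventions below are mine, not necessarily Stingl's), just to show how the chained product is used:

```python
# Placeholder data only -- not the textbook's values.
# A[i][j]: units of basic commodity i needed per unit of intermediate good j.
# B[j][k]: units of intermediate good j needed per unit of final product k.
A = [[1, 2, 0],
     [0, 1, 3],
     [2, 0, 1]]
B = [[1, 0],
     [2, 1],
     [0, 2]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AB = matmul(A, B)   # basic commodities needed per unit of each final product

# For a demand of, say, 100 and 80 units of the two final products,
# the required basic commodities are AB times the demand column:
demand = [[100], [80]]
print(matmul(AB, demand))
```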

This sort of example is commonly encountered in operations research, especially in the optimization of resource allocation through linear programming. One may certainly find similar examples in English textbooks on these subjects. D.Lazard (talk) 10:04, 29 January 2022 (UTC)
I gave it a try, and inserted the example (slightly adapted) in section Matrix_multiplication#Linear_maps. I also added a brief example on rotation in Cartesian plane coordinates. Hope this is ok. - Jochen Burghardt (talk) 20:45, 2 February 2022 (UTC)

References

  1. ^ Peter Stingl (1996). Mathematik für Fachhochschulen – Technik und Informatik (in German) (5th ed.). Munich: Carl Hanser Verlag. ISBN 3-446-18668-9. Here: Example 5.4.10, pp. 205–206.

Ok, so not everything here is true…

See, the matrix multiplication ISN'T binary. It is just a simple row of calculations and has been and is also used in analog computers, which are also NOT binary. That's it. This is the entire reason I wrote this. 91.181.173.247 (talk) 07:46, 21 May 2022 (UTC)

o(1)

In "It is not known whether matrix multiplication can be performed in O(n2 + o(1)) time.", the formula seems wrong. (2 + o(1)) can be much larger than 2, which could result in O(n3) or worse. The o(1) should be removed like in the German article, or removed from the superscript: O(n2 + o(1)) Cymno (talk) 11:44, 26 July 2022 (UTC)[reply]

Incorrect. O-notation is only meaningful in the limit for large values of n. Every expression that can be bounded as (2 + o(1)) must be arbitrarily close to 2 for sufficiently large values of n. It is indeed unknown whether a bound of the form O(n^(2 + o(1))) can be achieved. If you have edited the German article based on the same misunderstanding, perhaps you should undo your edits until you understand this material better. —David Eppstein (talk) 23:30, 26 July 2022 (UTC)
Isn't it possible to simplify the expression to avoid nested O(.)? I have difficulties understanding what n^(2 + o(1)) is supposed to mean, let alone O(n^(2 + o(1))). - Jochen Burghardt (talk) 09:29, 27 July 2022 (UTC)
That's a good point, changed to n^(2 + o(1)) accordingly. O(n^(2 + o(1))) is the same as n^(2 + o(1)), but that's not entirely obvious. Fawly (talk) 21:48, 27 July 2022 (UTC)
I agree that the outer O is redundant and better removed. —David Eppstein (talk) 23:04, 27 July 2022 (UTC)
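For readers puzzled by the notation, the bound can also be unpacked without any asymptotic symbol in the exponent; read as an upper bound on the running time T(n), it just says the exponent can be brought arbitrarily close to 2:

```latex
T(n) \le n^{2+o(1)}
  \quad\Longleftrightarrow\quad
  \text{for every } \varepsilon > 0 :\; T(n) = O\!\left(n^{2+\varepsilon}\right).
```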

AlphaTensor improved upon the number of multiplications needed for matrix multiplication

Not sure if this is relevant, but a new AI by DeepMind managed to further improve upon the previous lowest number of multiplications needed for matrix multiplication. Not sure if this actually leads to a decrease in time complexity, though. 114.76.186.77 (talk) 05:23, 9 October 2022 (UTC)

Do you have a source? - Jochen Burghardt (talk) 11:30, 9 October 2022 (UTC)
This requires a reliable reference and a precise statement; at least, what is the number of multiplications, and for which dimensions of the matrices? D.Lazard (talk) 14:17, 9 October 2022 (UTC)
Yes, the papers have already been published:
- The algorithm by DeepMind https://www.nature.com/articles/s41586-022-05172-4 (https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor)
- The algorithm by humans beating DeepMind https://arxiv.org/abs/2210.04045 (https://www.newscientist.com/article/2341965-humans-beat-deepmind-ai-in-creating-algorithm-to-multiply-numbers/) Vip17 (talk) 13:42, 5 November 2022 (UTC)

Non-commutativity exceptions

I have reservations regarding the accuracy of the logical biconditional statement within the Non-commutativity sub-section, specifically the assertion that:

"if A is an n × n matrix with entries in a field F, then AB = BA for every n × n matrix B with entries in F, if and only if A = cI, where c ∈ F and I is the n × n identity matrix."

I believe it should be expressed as a unidirectional implication, where the statement holds when A = cI. Indeed, the biimplication is in contrast with an assertion made a few lines below, which states that:

"One special case where commutativity does occur is when D and E are two (square) diagonal matrices (of the same size); then DE = ED". 

Furthermore, matrix multiplication is commutative in other scenarios, for instance when one matrix is a scalar multiple of the other, or when the two matrices are simultaneously diagonalizable. Simonedl (talk) 18:24, 13 October 2023 (UTC)

The article is correct. The initial statement says that the multiples of the identity are the only matrices which commute with everything, not the only ones that commute with some things. I.e., these are exactly the elements of the center. 35.139.154.158 (talk) 19:47, 13 October 2023 (UTC)
Ah, now it is clear. Since English is not my native language, I misunderstood the sentence. Thank you for the clarification. Simonedl (talk) 23:51, 13 October 2023 (UTC)
Note that your first citation requires commutation of the given A with every B. So you are not free to choose B = cI, nor to restrict B to a diagonal matrix, nor to require B to be a scalar multiple of A or to be simultaneously diagonalizable with A. - Jochen Burghardt (talk) 21:25, 13 October 2023 (UTC)
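A tiny numerical check of the distinction (NumPy is used here only for illustration): a non-scalar diagonal matrix commutes with other diagonal matrices but not with every matrix, whereas a scalar multiple of the identity commutes with everything:

```python
import numpy as np

D = np.diag([1, 2])            # diagonal, but not a multiple of I
E = np.diag([3, 4])
P = np.array([[0, 1],
              [1, 0]])         # a permutation matrix
cI = 5 * np.eye(2)             # scalar multiple of the identity

print(np.array_equal(D @ E, E @ D))    # True: diagonal matrices commute
print(np.array_equal(D @ P, P @ D))    # False: D is not in the center
print(np.array_equal(cI @ P, P @ cI))  # True: cI commutes with every matrix
```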