$\begingroup$

Let $\det_d = \det((x_{i,j})_{1 \leq i,j\leq d})$ be the determinant of a generic $d \times d$ matrix. Suppose $k \mid d$, $1 < k < d$. Can $\det_d$ be written as the determinant of a $k \times k$ matrix of forms of degree $d/k$?

Even writing $\det_4$ as the determinant of a $2 \times 2$ matrix of quadratic forms seems impossible, just intuitively. The space of $2 \times 2$ matrices of quadratic forms in $16$ variables has dimension $4 \cdot \binom{17}{2} = 544$, while the space of quartics in $16$ variables has dimension $\binom{19}{4} = 3876$.
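For readers who want to double-check the arithmetic, here is a quick script (not part of the original post) verifying both dimension counts via the stars-and-bars formula $\binom{n+d-1}{d}$ for degree-$d$ forms in $n$ variables:

```python
from math import comb

# dim of the space of quadratic forms in 16 variables: C(16+2-1, 2) = C(17, 2)
quadratic_forms = comb(17, 2)
# a 2x2 matrix has 4 entries, each an independent quadratic form
matrices_of_quadratics = 4 * quadratic_forms

# dim of the space of quartic forms in 16 variables: C(16+4-1, 4) = C(19, 4)
quartics = comb(19, 4)

print(matrices_of_quadratics, quartics)  # 544 3876
```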

$\endgroup$
  • $\begingroup$ One can see that $\det_4$ is equal to a sum of three $2 \times 2$ determinants of quadratic forms: the Laplace expansion in, say, the first two rows involves $6$ terms (each of which is the product of the determinant of a $2 \times 2$ minor in the first $2$ rows with the determinant of the complementary minor); these $6$ terms can be packaged into $3$ determinants of $2 \times 2$ matrices of quadratic forms. I wanted to rule out the possibility of writing $\det_4$ as a sum of $2$ determinants, but I can't even rule out $\det_4$ being equal to a single $2 \times 2$ determinant, sigh. $\endgroup$ May 22, 2017 at 6:49
  • $\begingroup$ IMHO the "intuitive" argument about dimensions can be made precise; it would, however, require some basic algebraic geometry. $\endgroup$ May 22, 2017 at 7:09
  • $\begingroup$ "space of quadrics in $16$ variables" must be "space of quartics in $16$ variables" $\endgroup$ May 22, 2017 at 7:11
  • $\begingroup$ But it is not right to count all the quartics: you should only count the multilinear quartics (those in which each monomial is a product of $4$ distinct variables). $\endgroup$ May 22, 2017 at 7:13
  • $\begingroup$ Although even with the latter restriction you have to select only those multilinear quartics which can be represented by determinants: it's not the whole space... $\endgroup$ May 22, 2017 at 7:42
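The packaging described in the first comment can be verified by direct computation. Below is a sympy sketch; the particular grouping of the six Laplace terms into three $2\times 2$ determinants, and the sign placements, are one possible choice (my own, for illustration):

```python
import sympy as sp

x = sp.symbols('x0:16')
X = sp.Matrix(4, 4, x)  # the generic 4x4 matrix

def top(i, j):  # 2x2 minor in rows 0,1 and columns i,j (a quadratic form)
    return X[0, i]*X[1, j] - X[0, j]*X[1, i]

def bot(i, j):  # 2x2 minor in rows 2,3 and columns i,j (a quadratic form)
    return X[2, i]*X[3, j] - X[2, j]*X[3, i]

# Package the six Laplace-expansion terms (rows 1-2) into three
# 2x2 determinants of quadratic forms:
D1 = sp.Matrix([[top(0, 1), -top(2, 3)], [bot(0, 1), bot(2, 3)]]).det()
D2 = sp.Matrix([[top(0, 2),  top(1, 3)], [bot(0, 2), -bot(1, 3)]]).det()
D3 = sp.Matrix([[top(0, 3), -top(1, 2)], [bot(0, 3), bot(1, 2)]]).det()

assert sp.expand(D1 + D2 + D3 - X.det()) == 0  # det_4 is the sum of the three
```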

5 Answers

$\begingroup$

I think a result of Hochster gives a quick proof that it is not possible to express the determinant of the generic $d \times d$ matrix as the determinant of a $k \times k$ matrix whose entries are homogeneous forms of degree $\dfrac{d}{k}$, provided that $1 < k < d$.

I will work over an algebraically closed field. Relying on some cohomological methods (similar to the ones Jason Starr uses in his answer), Hochster proved that the variety of $n \times n$ matrices of $\textrm{rank} < n-1$ can be defined set-theoretically by $2n$ equations and no fewer (see this paper by Bruns and Schwänzl for some improvements of Hochster's result: http://www.home.uni-osnabrueck.de/wbruns/brunsw/pdf-article/NumbEqDet.published.pdf).

Now, we argue by contradiction. Assume that the determinant of the generic $d \times d$ matrix can be written as the determinant of a $k \times k$ matrix (say $A$) whose entries are homogeneous forms of degree $\dfrac{d}{k}$, with $1 < k < d$. We denote by $P_1, \ldots, P_{k^2}$ the entries of $A$; they are homogeneous polynomials of degree $\dfrac{d}{k}$ in $x_1, \ldots, x_{d^2}$.

Let $B$ be the generic $k \times k$ matrix with entries $Y_1, \ldots, Y_{k^2}$. Denote by $Q_1, \ldots, Q_{k^2}$ the $(k-1) \times (k-1)$ minors of $B$. The variety defined by the vanishing of the $\{Q_i\}_{i=1\ldots k^2}$ is non-empty of codimension $4$ in $\mathbb{A}^{k^2}$ (here I use that $k>1$). Hence, if we substitute $P_i(x_1, \ldots, x_{d^2})$ for $Y_i$, we see that the scheme defined by the vanishing of the $\{Q_i(P_1,\ldots,P_{k^2})\}_{i=1 \ldots k^2}$ has codimension at most $4$ in $\mathbb{A}^{d^2}$.
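The codimension claim can be sanity-checked for $k=3$: the $(k-1)\times(k-1)$ minors cut out the locus of rank $\leq 1$ matrices, which is the image of the parametrization $(u,v)\mapsto uv^{T}$, and its dimension is the generic rank of the Jacobian of that parametrization. A small sympy computation (my own illustration, with an arbitrarily chosen generic point):

```python
import sympy as sp

u = sp.symbols('u0:3')
v = sp.symbols('v0:3')

# the rank-<=1 parametrization (u, v) -> u v^T, listed entrywise
entries = [ui * vj for ui in u for vj in v]
J = sp.Matrix(entries).jacobian(list(u + v))

# generic Jacobian rank = dimension of the determinantal variety
point = {u[0]: 1, u[1]: 2, u[2]: 3, v[0]: 1, v[1]: 5, v[2]: 7}
dim = J.subs(point).rank()
assert dim == 5  # so the codimension in A^9 is 9 - 5 = 4
```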

Furthermore, a simple computation of partial derivatives shows that the scheme defined by the vanishing of the $\{Q_i(P_1,\ldots,P_{k^2})\}_{i=1 \ldots k^2}$ is contained in the singular locus of the variety defined by $\det A = 0$. But $\det A$ is the determinant of the generic $d \times d$ matrix, so its singular locus is the variety of matrices of $\textrm{rank} < d-1$: it is irreducible of codimension $4$ in $\mathbb{A}^{d^2}$.
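For completeness, the computation of partial derivatives is just the chain rule, together with the fact that each $\partial (\det B)/\partial Y_i$ is a cofactor of $B$, hence equal, up to sign, to one of the $Q_j$:

```latex
\frac{\partial (\det A)}{\partial x_\ell}
  \;=\; \sum_{i=1}^{k^2}
        \frac{\partial (\det B)}{\partial Y_i}\bigl(P_1,\ldots,P_{k^2}\bigr)\,
        \frac{\partial P_i}{\partial x_\ell},
\qquad \ell = 1,\ldots,d^2,
```

so every partial derivative of $\det A$ vanishes at any point where all the $Q_i(P_1,\ldots,P_{k^2})$ vanish.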

From the above, we deduce that the variety of $d \times d$ matrices of $\textrm{rank} < d-1$ is set-theoretically equal to the scheme defined by the $\{Q_i(P_1,\ldots,P_{k^2})\}_{i=1 \ldots k^2}$.

By Hochster's result (the existence part), one can find $2k$ polynomials (say $T_1, \ldots, T_{2k}$) in the ideal generated by $Q_1, \ldots, Q_{k^2}$ such that $$\textrm{rad}(T_1, \ldots, T_{2k}) = \textrm{rad}(Q_1, \ldots, Q_{k^2}).$$

Replacing $Y_i$ by $P_i(x_1,\ldots, x_{d^2})$ in the $\{T_j\}_{j=1 \ldots 2k}$, we find that the vanishing of the $\{T_j(P_1, \ldots, P_{k^2}) \}_{j=1 \ldots 2k}$ defines set-theoretically the variety of $d \times d$ matrices of $\textrm{rank} < d-1$. Since $2k < 2d$, we get a contradiction with Hochster's result.

$\endgroup$
  • $\begingroup$ I am just making this comment for readers who may not have noticed themselves: Libli's answer also works in arbitrary positive characteristic (the answer I posted does not). $\endgroup$ May 22, 2017 at 22:53
  • $\begingroup$ You mean "of rank $<n-1$", not "of rank $\le n-1$", I suppose. For rank $\le n-1$, you just need one equation $\det=0$ ;-) $\endgroup$ May 22, 2017 at 23:06
  • $\begingroup$ @VladimirDotsenko : Of course, you're perfectly right. I will correct that! $\endgroup$
    – Libli
    May 23, 2017 at 8:39
  • $\begingroup$ How does one find the codimension of the variety of the $\{Q_i\}$? $\endgroup$
    – Jose Brox
    May 28, 2017 at 14:08
  • $\begingroup$ @JoseBrox: For instance here: en.wikipedia.org/wiki/Determinantal_variety $\endgroup$
    – Libli
    May 28, 2017 at 15:51
$\begingroup$

I will work over a field of characteristic $0$ so that reductive algebraic groups are linearly reductive; presumably there is a way to eliminate this hypothesis. In this case, for an integer $n>1$ and for a divisor $m$ such that $n>m>1$, there does not exist $f$ as above. The point is to consider the critical locus of the determinant. The computation below proves that "cohomology with supports" has cohomological dimension equal to $2(n-1)$ for the pair of the quasi-affine scheme $\textbf{Mat}_{n\times n} \setminus \text{Crit}(\Delta_{n\times n})$ and its closed subset $\text{Zero}(\Delta_{n\times n})\setminus \text{Crit}(\Delta_{n\times n})$. Since pushforward under an affine morphism preserves cohomology of quasi-coherent sheaves, this leads to a contradiction.

Denote the $m\times m$ determinant polynomial by $$\Delta_{m\times m}:\textbf{Mat}_{m\times m} \to \mathbb{A}^1.$$ For integers $m$ and $n$ such that $m$ divides $n$, you ask whether there exists a homogeneous polynomial morphism $$f:\textbf{Mat}_{n\times n}\to \textbf{Mat}_{m\times m}$$ of degree $n/m$ such that $\Delta_{n\times n}$ equals $\Delta_{m\times m}\circ f$. Of course that is true if $m$ equals $1$: just define $f$ to be $\Delta_{n\times n}$. Similarly, it is true if $m$ equals $n$: just define $f$ to be the identity. Thus, assume that $2\leq m < n$; this manifests below through the fact that the critical locus of $\Delta_{m\times m}$ is nonempty. By way of contradiction, assume that there exists $f$ with $\Delta_{n\times n}$ equal to $\Delta_{m\times m}\circ f$.

Lemma 1. The inverse image under $f$ of $\text{Zero}(\Delta_{m\times m})$ equals $\text{Zero}(\Delta_{n\times n})$. In other words, the inverse image under $f$ of the locus of matrices with nullity $\geq 1$ equals the locus of matrices with nullity $\geq 1$.

Proof. This is immediate, since $\Delta_{n\times n}$ equals $\Delta_{m\times m}\circ f$. QED

Lemma 2. The inverse image under $f$ of the critical locus of $\Delta_{m\times m}$ equals the critical locus of $\Delta_{n\times n}$. In other words, the inverse image under $f$ of the locus of matrices with nullity $\geq 2$ equals the locus of matrices with nullity $\geq 2$.

Proof. By the Chain Rule, $$d_A\Delta_{n\times n} = d_{f(A)}\Delta_{m\times m}\circ d_Af.$$ Thus, the critical locus of $\Delta_{n\times n}$ contains the inverse image under $f$ of the critical locus of $\Delta_{m\times m}$. Since $m\geq 2$, in each case the critical locus is the nonempty set of matrices whose kernel has dimension $\geq 2$; it contains the origin and is irreducible of codimension $4$. Thus, the inverse image under $f$ of the critical locus of $\Delta_{m\times m}$ is nonempty (it contains the origin) and has codimension $\leq 4$ (since $\textbf{Mat}_{m\times m}$ is smooth). Since this inverse image is contained in the critical locus of $\Delta_{n\times n}$, and since the critical locus of $\Delta_{n\times n}$ is irreducible of codimension $4$, the inverse image of the critical locus of $\Delta_{m\times m}$ equals the critical locus of $\Delta_{n\times n}$. QED
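The codimension claim follows from the standard formula for determinantal loci: in $\textbf{Mat}_{n\times n}$, the locus of matrices of rank $\leq r$ is irreducible of codimension $(n-r)^2$. Applied here with $r = n-2$:

```latex
\operatorname{codim}\bigl\{A \in \textbf{Mat}_{n\times n} : \operatorname{rank} A \le n-2\bigr\}
  \;=\; \bigl(n-(n-2)\bigr)^2 \;=\; 4,
```

independently of $n$, which is why the same number $4$ appears for both $\Delta_{m\times m}$ and $\Delta_{n\times n}$.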

Denote by $U_n\subset \text{Zero}(\Delta_{n\times n})$, resp. $U_m\subset \text{Zero}(\Delta_{m\times m})$, the open complement of the critical locus, i.e., the locus of matrices whose kernel has dimension precisely equal to $1$. By Lemma 1 and Lemma 2, $f$ restricts to an affine morphism $$f_U:U_n\to U_m.$$

Proposition. The cohomological dimension of sheaf cohomology for quasi-coherent sheaves on the quasi-affine scheme $U_n$ equals $2(n-1)$.

Proof. The quasi-affine scheme $U_n$ admits a morphism, $$\pi_n:U_n \to \mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*,$$ sending every singular $n\times n$ matrix $A$ parameterized by $U_n$ to the ordered pair of the kernel of $A$ and the image of $A$. The morphism $\pi_n$ is Zariski locally projection from a product, where the fiber is the affine group scheme $\textbf{GL}_{n-1}$. In particular, since $\pi_n$ is affine, the cohomological dimension of $U_n$ is no greater than the cohomological dimension of $\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*$. This equals the dimension $2(n-1)$.
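As a consistency check on this fibration (an observation added here, not in the original), the dimensions add up:

```latex
\dim U_n \;=\; \dim\bigl(\mathbb{P}^{n-1}\times(\mathbb{P}^{n-1})^*\bigr)
             \;+\; \dim \textbf{GL}_{n-1}
       \;=\; 2(n-1) + (n-1)^2 \;=\; n^2 - 1,
```

as expected for the open dense stratum of the hypersurface $\text{Zero}(\Delta_{n\times n})\subset \textbf{Mat}_{n\times n}\cong\mathbb{A}^{n^2}$.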

More precisely, $U_n$ is simultaneously a principal bundle for both group schemes over $\mathbb{P}^{n-1}\times(\mathbb{P}^{n-1})^*$ that are the pullbacks, via the two projections, of $\textbf{GL}$ of the tangent bundle. Concretely, for a fixed $1$-dimensional subspace $K$ of the $n$-dimensional vector space $V$ -- the kernel -- and for a fixed codimension-$1$ subspace $I$ -- the image -- the set of invertible linear maps from $V/K$ to $I$ is simultaneously a principal bundle under precomposition by $\textbf{GL}(V/K)$ and a principal bundle under postcomposition by $\textbf{GL}(I)$. In particular, the pushforward of the structure sheaf, $$\mathcal{E}_n:=(\pi_n)_*\mathcal{O}_{U_n},$$ is a quasi-coherent sheaf that has an induced action of each of these group schemes.

The sheaf of invariants for each of these actions is just $$\pi_n^\#:\mathcal{O}_{\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*}\to \mathcal{E}_n.$$ Concretely, the only functions on an algebraic group that are invariant under pullback by every element of the group are the constant functions. The group schemes and the principal bundle are each Zariski locally trivial. Consider the restriction of $\mathcal{E}_n$ to each open affine subset $U$ where the first group scheme is trivialized and the principal bundle is trivialized. The sections on this open affine give an $\mathcal{O}(U)$-linear representation of $\textbf{GL}_{n-1}$. Because $\textbf{GL}_{n-1}$ is linearly reductive, there is a unique splitting of this representation into its invariants, i.e., $\pi_n^\#\mathcal{O}(U)$, and a complementary representation (having trivial invariants and coinvariants). The uniqueness guarantees that these splittings glue together as we vary the trivializing opens. Thus, there is a splitting of $\pi_n^\#$ as a homomorphism of quasi-coherent sheaves, $$t_n:(\pi_n)_*\mathcal{O}_{U_n} \to \mathcal{O}_{\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*}.$$

For every invertible sheaf $\mathcal{L}$ on $\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*$, this splitting of $\mathcal{O}$-modules gives rise to a splitting, $$t_{n,\mathcal{L}}:(\pi_n)_*(\pi_n^*\mathcal{L})\to \mathcal{L}.$$ In particular, for every integer $q$, this gives rise to a surjective group homomorphism, $$H^q(t_{n,\mathcal{L}}):H^q(U_n,\pi_n^*\mathcal{L}) \to H^q(\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*,\mathcal{L}) .$$

Now let $\mathcal{L}$ be a dualizing invertible sheaf on $\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*.$ This has nonzero cohomology in degree $2(n-1)$. Thus, the cohomological dimension of sheaf cohomology for quasi-coherent $\mathcal{O}_{U_n}$-modules also equals $2(n-1)$. QED

Since $f_U$ is affine, the cohomological dimension for $U_n$ is no greater than the cohomological dimension for $U_m$. However, by the proposition, the cohomological dimension for $U_m$ equals $2(m-1)$. Since $1<m<n$, this is a contradiction.

$\endgroup$
  • $\begingroup$ Nice answer, thanks! I decided to accept the one that works in all characteristics. But I appreciate the detailed explanation you gave. $\endgroup$ May 23, 2017 at 4:07
$\begingroup$

This is not an answer, but it is too big to be a comment.

Let me write your matrix $X$ blockwise as $(M_{\alpha\beta})_{1\le\alpha,\beta\le k}$, where the blocks are $d/k\times d/k$. If the blocks $M_{\alpha\beta}$ commute with each other, then $$\det X=\det\left(\sum_{\sigma\in\mathfrak{S}_k}\varepsilon(\sigma)\,M_{1\sigma(1)}\cdots M_{k\sigma(k)}\right),$$ where the inner sum is the formal $k\times k$ determinant of the block matrix, computed in the commutative ring generated by the blocks, and the outer determinant is that of the resulting $d/k\times d/k$ matrix.

A particular case of this property is the formula $\det(A\otimes B)=(\det A)^m(\det B)^n$ where $m$ is the size of $B$, $n$ that of $A$ (Thanks to Suvrit !).
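Both statements can be tested on small examples. The sympy sketch below (my own check, with arbitrarily chosen integer matrices) verifies the $k=2$ case of the commuting-blocks identity $\det X = \det(M_{11}M_{22}-M_{12}M_{21})$, using blocks that are polynomials in a single matrix so that they commute, together with the Kronecker-product formula:

```python
import sympy as sp

# --- commuting-blocks identity for k = 2 ---
N = sp.Matrix([[0, 1], [2, 3]])  # all blocks are polynomials in N, hence commute
M11, M12 = N, N**2 + sp.eye(2)
M21, M22 = 2*N - sp.eye(2), N**2
X = sp.BlockMatrix([[M11, M12], [M21, M22]]).as_explicit()
assert X.det() == (M11*M22 - M12*M21).det()

# --- Kronecker product: det(A (x) B) = det(A)^m * det(B)^n ---
def kron(A, B):
    """Kronecker product, built entrywise to stay library-agnostic."""
    return sp.Matrix(A.rows*B.rows, A.cols*B.cols,
                     lambda i, j: A[i // B.rows, j // B.cols] * B[i % B.rows, j % B.cols])

A = sp.Matrix([[1, 2], [3, 5]])                   # n = 2, det(A) = -1
B = sp.Matrix([[2, 0, 1], [1, 1, 0], [0, 3, 1]])  # m = 3, det(B) = 5
assert kron(A, B).det() == A.det()**3 * B.det()**2  # = -25
```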

As for the general case, I guess that the answer to your question is negative.

$\endgroup$
  • $\begingroup$ Actually, $\det(A\otimes B) \not=\det(A)\cdot \det(B)$; it equals $\det(A)^m\cdot \det(B)^n$, where $m$ and $n$ are the sizes of the matrices $B$ and $A$, respectively. $\endgroup$
    – Suvrit
    May 22, 2017 at 16:09
  • $\begingroup$ @Suvrit. Oops! Right. I will fix it. $\endgroup$ May 22, 2017 at 16:43
$\begingroup$

The answer is that if $1 < k < d$, then such a matrix is equivalent to a diagonal matrix with $D := \det(x_{ij})$ as one diagonal entry and all other diagonal entries equal to $1$. This does not require that $k$ divides $d$.

The reason is the following: If $M$ is a $k\times k$ matrix with entries from the polynomial ring $S$ in the $x_{ij}$ over some field and $\det M = D$, then the cokernel of $M$, viewed as an $S$-linear map from $S^k$ to $S^k$ defines a maximal Cohen--Macaulay module of rank one over the hypersurface ring $R=S/(D)$, generated by at most $k$ elements. For this statement see Eisenbud's paper MR0570778 (82d:13013).

In the book by Bruns and Vetter on determinantal rings MR0953963 (89i:13001) all reflexive modules of rank one over a determinantal ring are determined and in the situation here they show that only the cokernel of the generic matrix itself or its transpose gives a nontrivial maximal Cohen-Macaulay module of rank one. However these need at least $d$ generators.

The trivial module is just $R$ itself, and in terms of matrices this means that $k<d$ implies $M$ is equivalent to a diagonal matrix with $D$ as one diagonal entry and the rest of the diagonal filled up by $1$'s. (The case I neglected in my original posting...)

Another interesting source for similar questions (and some answers) is the article by G. Bergman MR2205070 (2006k:15037).

$\endgroup$
  • $\begingroup$ What do you mean by "regardless of the form of the entries"? The answer is yes if the $d\times d$ determinant itself is admissible as an entry. $\endgroup$ May 24, 2017 at 9:45
  • $\begingroup$ I will take a look at the Bergman article, thanks! $\endgroup$ May 24, 2017 at 12:48
  • $\begingroup$ As @EmilJeřábek points out, the entries have to be homogeneous. Otherwise, take a diagonal matrix with $1$'s on the diagonal and $\textrm{det}_d$ as the last diagonal entry. The fact that the entries are homogeneous then forces $k$ to divide $d$. $\endgroup$
    – Libli
    May 24, 2017 at 15:17
  • $\begingroup$ I assume the idea is that the entries may be homogeneous of different degrees --- compatibly, of course, so that the degrees of entries satisfy $\deg(P_{i,j})+\deg(P_{k,l}) = \deg(P_{i,l})+\deg(P_{k,j})$ when $i\neq k$, $j \neq l$; and $\sum \deg(P_{i,i}) = d^2$. So for example, one might try to write the $3\times 3$ determinant as $\det \begin{pmatrix} L_1 & L_2 \\ Q_1 & Q_2 \end{pmatrix}$ for linear forms $L_i$ and quadratic forms $Q_i$, and I take it that @Ragnar's answer will rule this out. Do I understand correctly? (We had better assume strictly positive degrees to avoid silliness.) $\endgroup$ May 24, 2017 at 20:31
  • $\begingroup$ Even if you drop homogeneity, if you simply demand that the morphism $f:\textbf{Mat}_{n\times n} \to \textbf{Mat}_{m\times m}$ maps the origin to the origin, then the answers by Libli and myself still work, because the image of $f$ intersects the critical locus of the determinant. $\endgroup$ May 25, 2017 at 9:13
$\begingroup$

As others have convincingly settled the general question in the negative, let me comment on a particular situation where the answer is positive.

So if one has a $d\times d$ matrix and $d = kn$, we may split the matrix into an $n\times n$ array whose entries are $k\times k$ matrices. Now the statement is: if these entries satisfy certain commutation relations (i.e., the entries in each column commute and the cross-diagonal commutators are equal; see Manin matrix), then $\det_d = \det_k \det_n$, i.e., first calculate the determinant of the $n\times n$ matrix whose entries are matrices, getting a $k\times k$ matrix as an answer, and then calculate the usual $k\times k$ determinant of the latter. (See Section 5.5, page 36, here; sorry for the self-advertising.)

$\endgroup$
  • $\begingroup$ Isn't this answer just a paraphrase of Denis's answer below? $\endgroup$
    – Libli
    May 28, 2017 at 16:01
  • $\begingroup$ @Libli No, it is not. It can be seen as a generalization: Manin matrices are far from being commutative in general. $\endgroup$ May 28, 2017 at 16:57
  • $\begingroup$ In that case it might be interesting to mention in your answer that it is a generalization of Denis's, and perhaps to give a concrete example where Manin matrices are not just matrices whose blocks commute with each other. $\endgroup$
    – Libli
    May 28, 2017 at 18:18
  • $\begingroup$ @Libli If Manin matrices were the same as matrices with commuting entries, what would be the purpose of devising a special name? It is obvious from the definition that they are not the same. The simplest examples come from so-called Cartier-Foata matrices: elements from different rows commute, but there are absolutely no relations within the same row. (Manin matrices are more general.) $\endgroup$ May 28, 2017 at 18:35
  • $\begingroup$ I don't really care what Manin matrices are, or whether they are more general than matrices with commuting entries. I am just trying to give some tips to make your answer more interesting and more comprehensible to a beginner/non-expert in non-commutative algebra. $\endgroup$
    – Libli
    May 28, 2017 at 19:02
