Consider the following result, which I recently came across in a research paper in my area (Signal Processing).
Let $X$ be an $N\times N$ positive semidefinite (psd) matrix whose rank is $r$. Let $A$ be any symmetric $N\times N$ matrix. Then there exists a set of vectors $x_1,\dots,x_r$ such that \begin{align} X &= \sum_{i=1}^{r}x_ix_i^T \\ x_i^TAx_i &= \frac{\operatorname{trace}(AX)}{r},\quad\forall i \end{align}
The following is the proof given for it, which I can't verify.
Proof: Consider the following step-wise procedure whose inputs are $X$ and $A$.
0. Inputs are $X$ (given $X\geq 0$, $\operatorname{rank}(X)=r$) and $A$ (symmetric).
1. Decompose $X=RR^T$, where $R$ is $N\times r$ (such an $R$ exists because $X\geq 0$ and $\operatorname{rank}(X)=r$).
2. Generate the eigendecomposition $R^TAR=U\Lambda U^T$ (here $R^TAR$ is $r\times r$ and symmetric, so $U$ is an $r\times r$ orthogonal matrix).
3. Let $h$ be any $r\times 1$ vector such that $\lvert h_i\rvert=1$ for each entry (e.g., $h_i=\pm 1$). Generate the vector $x_1$ and the matrix $X_1$ as \begin{align} x_1&=\frac{1}{\sqrt{r}}RUh \\ X_1&=X-x_1x_1^T \end{align}
4. Outputs are $X_1$ and $x_1$.
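The procedure is easy to test numerically. Below is a minimal NumPy sketch of one pass that I wrote myself (it is not code from the paper): I take $R$ from the eigendecomposition of $X$, and I make the particular choice $h = (1,\dots,1)^T$, though any $h$ with $\lvert h_i\rvert=1$ should do.

```python
import numpy as np

def decomposition_step(X, A, tol=1e-10):
    """One pass of the procedure: returns (x1, X1) with
    X1 = X - x1 x1^T and x1^T A x1 = trace(A X) / rank(X)."""
    # Step 1: factor X = R R^T with R of size N x r, using the
    # eigendecomposition of X restricted to its nonzero eigenvalues.
    w, V = np.linalg.eigh(X)
    keep = w > tol
    r = int(keep.sum())                      # rank of X
    R = V[:, keep] * np.sqrt(w[keep])        # N x r, so X == R @ R.T
    # Step 2: eigendecomposition of the r x r symmetric matrix R^T A R.
    _, U = np.linalg.eigh(R.T @ A @ R)       # U is r x r orthogonal
    # Step 3: any h with |h_i| = 1 works; take the all-ones vector here.
    h = np.ones(r)
    x1 = (R @ U @ h) / np.sqrt(r)
    X1 = X - np.outer(x1, x1)
    return x1, X1
```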
The paper then claims that
- $X_1$ is psd and has rank $r-1$
- $x_1^TAx_1=\frac{1}{r}\operatorname{trace}(AX)$
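For what it's worth, here is how I verified the second claim, using the facts that $\Lambda$ is diagonal, $h_i^2=1$, and $RR^T=X$:
$$x_1^TAx_1=\frac{1}{r}(RUh)^TA(RUh)=\frac{1}{r}h^T(U^TR^TARU)h=\frac{1}{r}h^T\Lambda h=\frac{1}{r}\operatorname{trace}(\Lambda)=\frac{1}{r}\operatorname{trace}(ARR^T)=\frac{1}{r}\operatorname{trace}(AX).$$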
The first claim, however, I am not able to verify. Why is it true? If this can be established, the rest of the proof is straightforward. I am looking for a rigorous proof.
Read on if you are interested in where this proof heads. Now apply the step-wise procedure above with inputs $X_1$ and $A$ to get $x_2$ and $X_2$ such that \begin{align}X_2&=X_1-x_2x_2^T\\&=X-x_1x_1^T-x_2x_2^T\end{align} and $$x_2^TAx_2=\frac{1}{r-1}\operatorname{trace}(AX_1)=\frac{1}{r}\operatorname{trace}(AX),$$ where the last equality holds because $\operatorname{trace}(AX_1)=\operatorname{trace}(AX)-x_1^TAx_1=\frac{r-1}{r}\operatorname{trace}(AX)$. The result of the paper is then that you can repeat this procedure $r$ times and obtain a rank-one decomposition $$X=\sum_{i=1}^{r}x_ix_i^T$$ with the property $$x_i^TAx_i=\frac{1}{r}\operatorname{trace}(AX),\quad\forall i,$$ for any given psd $X$ with rank $r$ and any symmetric $A$.
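Reusing the `decomposition_step` sketch from above, the full recursion and a quick numerical check look like this (again my own sanity check under the stated assumptions, not code from the paper):

```python
def rank_one_decomposition(X, A, tol=1e-10):
    """Repeat decomposition_step until X is driven to (numerically) zero."""
    xs = []
    while np.linalg.eigvalsh(X).max() > tol:
        x, X = decomposition_step(X, A, tol)
        xs.append(x)
    return xs

# Sanity check on a random rank-3 psd X and a random symmetric A.
rng = np.random.default_rng(0)
N, r = 6, 3
B = rng.standard_normal((N, r))
X = B @ B.T                                   # psd with rank r
A = rng.standard_normal((N, N))
A = A + A.T                                   # symmetric
xs = rank_one_decomposition(X, A)
print(len(xs))                                            # r = 3
print(np.allclose(X, sum(np.outer(x, x) for x in xs)))    # True
print([float(x @ A @ x) for x in xs])         # each ~ trace(A @ X) / r
print(np.trace(A @ X) / r)
```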