I'm trying to solve a saddle point problem of the following form: Let
- $(E,\mathcal E,\lambda)$ be a measure space;
- $p$ be a probability density on $(E,\mathcal E,\lambda)$ and $\mu:=p\lambda$;
- $W$ be a closed, convex subset of a $\mathbb R$-Hilbert space $H$ with empty interior and $\left\|w\right\|_H\le1$ for all $w\in W$;
- $\kappa_w$ be a sub-Markov kernel on $(E,\mathcal E)$ symmetric$^1$ with respect to $\mu$ for $w\in W$.
Note that $$\psi g:=\left(E^2\ni(x,y)\mapsto g(x)-g(y)\right)\;\;\;\text{for }g:E\to\mathbb R$$ is a bounded linear operator from $L^2(\mu)$ to $L^2(\mu\otimes\kappa_w)$ and hence $$L_w(g):=\int\mu({\rm d}x)\int\kappa_w(x,{\rm d}y)|(\psi g)(x,y)|^2=\left\|\psi g\right\|_{L^2(\mu\otimes\kappa_w)}^2\;\;\;\text{for }g\in L^2(\mu)\tag1$$ is continuous for all $w\in W$.
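On a finite state space, the quadratic form $(1)$ is just a weighted sum over pairs of states. A minimal numerical sketch (finite $E$, a hypothetical $\mu$-symmetric sub-Markov matrix standing in for $\kappa_w$; all concrete numbers are made up for illustration):

```python
import numpy as np

# Finite state space E = {0, 1, 2}: mu is a probability vector, kappa a
# sub-Markov matrix (row sums <= 1) satisfying the detailed-balance
# condition mu[x] * kappa[x, y] == mu[y] * kappa[y, x].
mu = np.array([0.5, 0.3, 0.2])
kappa = np.array([[0.2, 0.3, 0.2],
                  [0.5, 0.1, 0.2],
                  [0.5, 0.3, 0.1]])
M = mu[:, None] * kappa                 # mu(dx) kappa(x, dy)
assert np.allclose(M, M.T)              # mu-symmetry of kappa

def L(g, mu, kappa):
    """Dirichlet-type form (1): sum_{x,y} mu(x) kappa(x,y) (g(x)-g(y))^2."""
    diff = g[:, None] - g[None, :]      # (psi g)(x, y) = g(x) - g(y)
    return float(np.sum(mu[:, None] * kappa * diff**2))

g = np.array([1.0, -1.0, 0.0])
print(L(g, mu, kappa))
```

Note that $L$ vanishes on constants, reflecting that $\psi$ kills constant functions.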
Note that $$K:=\left\{g\in\mathcal L^2(\mu):g\ge0,g\text{ is bounded and }\int g\:{\rm d}\mu=0\right\}$$ is closed and convex.
I want to choose a $w\in W$ maximizing (or at least making as large as possible) the quantity $$\inf_{g\in K\setminus\{0\}}\frac{L_w(g)}{\left\|g\right\|_{L^2(\mu)}^2}.\tag2$$
Question: How can we approach this problem? Which assumptions on the dependence of $\kappa_w$ on $w\in W$ (e.g. Fréchet differentiability) do we need to impose? And if finding a true maximizer is too hard, can we find a "nearly optimal" solution (say, a maximizer of a lower bound on $(2)$)?
EDIT: We may note that $L_w$ is convex (since every norm is convex and $\psi$ is linear) for all $w\in W$. However, $L^2(\mu)\setminus\{0\}\ni g\mapsto\frac{L_w(g)}{\left\|g\right\|_{L^2(\mu)}^2}$ is in general no longer convex. On the other hand, we may restrict the infimum to $\tilde K:=\left\{g\in K:\left\|g\right\|_{L^2(\mu)}=1\right\}$, but then $\tilde K$ is no longer convex.
Maybe we're still able to prove the existence of a saddle point $(w_0,g_0)\in W\times\tilde K$ with $$L_{w_0}(g_0)=\max_{w\in W}\min_{g\in\tilde K}L_w(g)=\min_{g\in\tilde K}\max_{w\in W}L_w(g).\tag3$$ Would it then be a feasible approach to fix $g\in\tilde K$ and find a maximizer $w_g\in W$ of $L_w(g)$? Could a maximizer $w$ of $(2)$ be obtained from that? I'm willing to impose any suitable assumption on the dependence of $\kappa_w$ on $w\in W$, but if the problem remains too hard, I'd also be happy with a nearly optimal solution (or a numerical one).
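If one is content with a numerical answer on a finite discretization, a crude heuristic for the outer maximization is to evaluate a relaxed inner minimum (here: the minimum of $L_w(g)/\|g\|_{L^2(\mu)}^2$ over mean-zero $g$, which lower-bounds the minimum over $\tilde K$) on a grid over $W$. A toy sketch with $W$ modeled as a segment of convex combinations of two $\mu$-symmetric sub-Markov matrices — all concrete data hypothetical:

```python
import numpy as np
from scipy.linalg import eigh

mu = np.array([0.5, 0.3, 0.2])
# Two mu-symmetric sub-Markov matrices; convex combinations
# kappa_w = w * k1 + (1 - w) * k0 stay mu-symmetric and sub-Markov.
k0 = np.array([[0.2, 0.3, 0.2],
               [0.5, 0.1, 0.2],
               [0.5, 0.3, 0.1]])
k1 = np.array([[0.6, 0.12, 0.08],
               [0.2, 0.5, 0.2],
               [0.2, 0.3, 0.4]])

def inner_min(kappa):
    """Relaxed inner minimum: second generalized eigenvalue of the
    Dirichlet form, i.e. min of L_w(g)/||g||^2 over mean-zero g."""
    M = mu[:, None] * kappa
    A = 2.0 * (np.diag(M.sum(axis=1)) - M)   # L_w(g) = g @ A @ g
    return eigh(A, np.diag(mu), eigvals_only=True)[1]

ws = np.linspace(0.0, 1.0, 101)
gaps = [inner_min(w * k1 + (1 - w) * k0) for w in ws]
w_best = ws[int(np.argmax(gaps))]
print(w_best, max(gaps))
```

This only maximizes a lower bound on $(2)$, but that matches the "nearly optimal solution" fallback above; since $w\mapsto L_w(g)$ is affine here and the inner minimum is a concave function of $w$ (an infimum of affine functions), the grid search is at least well behaved.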
EDIT 2: You may want to notice that, by the $\mu$-symmetry of $\kappa_w$, $$\int\mu({\rm d}x)\int\kappa_w(x,{\rm d}y)|g(y)|^2=\int\mu({\rm d}y)|g(y)|^2\int\kappa_w(y,{\rm d}x)=\int\mu({\rm d}y)|g(y)|^2\,\kappa_w(y,E)\tag4$$ for all $w\in W$ and $g\in L^2(\mu)$.
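For a finite reversible pair $(\mu,\kappa_w)$, identity $(4)$ can be checked directly. A small numerical sanity check (hypothetical matrices, chosen to satisfy detailed balance):

```python
import numpy as np

mu = np.array([0.5, 0.3, 0.2])
kappa = np.array([[0.2, 0.3, 0.2],      # mu-symmetric sub-Markov matrix
                  [0.5, 0.1, 0.2],
                  [0.5, 0.3, 0.1]])
g = np.array([1.0, -1.0, 0.5])

# Left-hand side of (4): int mu(dx) int kappa(x, dy) |g(y)|^2
lhs = np.sum(mu[:, None] * kappa * (g**2)[None, :])
# Right-hand side: int mu(dy) |g(y)|^2 kappa(y, E)
rhs = np.sum(mu * g**2 * kappa.sum(axis=1))
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```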
EDIT 3: Alternatively, might it be a good idea to fix $w$ first and try to minimize $L_w$?
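On a finite state space, fixing $w$ and dropping the nonnegativity/boundedness constraints — i.e. minimizing $L_w(g)/\|g\|_{L^2(\mu)}^2$ over mean-zero $g$ — turns the inner problem into a generalized symmetric eigenvalue problem; its second-smallest eigenvalue is a lower bound for $(2)$, since $\tilde K$ sits inside the mean-zero unit sphere of $L^2(\mu)$. A minimal sketch of this relaxation (finite $E$, hypothetical data, `scipy.linalg.eigh` for the eigenproblem):

```python
import numpy as np
from scipy.linalg import eigh

mu = np.array([0.5, 0.3, 0.2])
kappa = np.array([[0.2, 0.3, 0.2],      # mu-symmetric sub-Markov matrix
                  [0.5, 0.1, 0.2],
                  [0.5, 0.3, 0.1]])

M = mu[:, None] * kappa                  # mu(dx) kappa(x, dy); symmetric
A = 2.0 * (np.diag(M.sum(axis=1)) - M)   # expand the square: L_w(g) = g @ A @ g
B = np.diag(mu)                          # ||g||_{L^2(mu)}^2 = g @ B @ g

# Generalized eigenproblem A v = lambda B v; the eigenvalues are the
# critical values of the Rayleigh quotient L_w(g) / ||g||^2.
vals = eigh(A, B, eigvals_only=True)
# vals[0] ~ 0 (constants lie in the kernel of A); vals[1] is the spectral
# gap, a lower bound for (2) under the relaxation described above.
print(vals)
```

The expansion behind `A` uses the $\mu$-symmetry of $\kappa_w$: $L_w(g)=2\int g^2(x)\,\kappa_w(x,E)\,\mu({\rm d}x)-2\int\mu({\rm d}x)\int\kappa_w(x,{\rm d}y)\,g(x)g(y)$.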
Remark: I've simplified the question in my latest edit, replacing the domain of $g$, since I think the new formulation is actually equivalent to the old one, but less complicated.
$^1$ i.e. $$\int\mu({\rm d}x)\int\kappa_w(x,{\rm d}y)f(x,y)=\int\mu({\rm d}y)\int\kappa_w(y,{\rm d}x)f(x,y)$$ for all bounded $\mathcal E^{\otimes2}$-measurable $f:E^2\to\mathbb R$.