I want to maximize $$F(w):=\sum_{1\le i,\:j\le2}\int\lambda^{\otimes2}({\rm d}(x,y))\left(w_i(x)f_j(x,y)\wedge w_j(y)f_i(y,x)\right)g_{ij}(x,y)$$ over the closed convex set $$S:=\left\{w\in{\mathcal L^2(\mu)}^2:w_1+w_2=1\;\mu\text{-almost surely}\right\},$$ where $(E,\mathcal E,\lambda)$ is a measure space, $\mu\ll\lambda$ is a probability measure on $(E,\mathcal E)$ and $f_i,g_{ij}:E^2\to[0,\infty)$ are $\mathcal E$-measurable.
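To get a feel for the problem, here is a minimal numerical sketch of the objective and the constraint set, assuming a finite discretization of $E$ into $n$ points with uniform weights standing in for $\lambda$, and random placeholder arrays for the $f_i$ and $g_{ij}$ (all of this is hypothetical scaffolding, not part of the problem data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize E to n points; take lambda (and mu) as the uniform measure.
# f[i] and g[i, j] are random placeholders for f_{i+1} and g_{i+1, j+1}.
n = 50
f = rng.random((2, n, n))
g = rng.random((2, 2, n, n))

def F(w):
    """Discrete analogue of F: sum over i, j, x, y of
    (w_i(x) f_j(x,y) /\ w_j(y) f_i(y,x)) g_ij(x,y), uniformly weighted."""
    total = 0.0
    for i in range(2):
        for j in range(2):
            a = w[i][:, None] * f[j]    # w_i(x) f_j(x, y), indexed [x, y]
            b = w[j][None, :] * f[i].T  # w_j(y) f_i(y, x), indexed [x, y]
            total += np.sum(np.minimum(a, b) * g[i, j])
    return total / n**2                 # uniform product weights for lambda^{(x)2}

# A feasible point of (the discrete analogue of) S: w_1 + w_2 = 1 pointwise.
w = np.stack([np.full(n, 0.3), np.full(n, 0.7)])
value = F(w)
```

Note that the minimum makes $F$ nonsmooth but concave in $w$ (each term is a minimum of two linear functionals of $w$, scaled by a nonnegative $g_{ij}$), which is what makes the Clarke-gradient machinery below natural.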
By Theorem 10.47 in Clarke's book, if $w^\ast$ is a maximizer of $F$ over $S$ (equivalently, a minimizer of $-F$; since $\partial(-F)=-\partial F$ and $N_S(w^\ast)$ is a linear subspace here, the condition takes the same form), then $$0\in\partial F(w^\ast)+N_S(w^\ast)\tag1,$$ where $\partial F(w^\ast)$ denotes Clarke's generalized gradient of $F$ at $w^\ast$ (see Definition 2 on page 53 in Clarke's paper) and $$N_S(w^\ast):=\left\{\varphi\in\left({\mathcal L^2(\mu)}^2\right)':\langle\varphi,v-w^\ast\rangle\le0\text{ for all }v\in S\right\}$$ denotes the normal cone to $S$ at $w^\ast$.
Now, denoting the integrand in the definition of $F(w)$ at $(x,y)\in E^2$ by $F_{(x,\:y)}(w)$, by Theorem 1 on page 59 of Clarke's paper, $$\partial F(w)\subseteq\int\lambda^{\otimes2}({\rm d}(x,y))\partial F_{(x,\:y)}(w)\tag2$$ for all $w\in{\mathcal L^2(\mu)}^2$.
For fixed $(x,y)\in E^2$, $\partial F_{(x,\:y)}(w)$ can be computed as in this answer. (The generalized gradient is only subadditive in general, but in this particular case it should be additive.)
At this point I'm stuck. What can we infer about the form of $w^\ast$ from what we know?
EDIT: Let $$G:{\mathcal L^2(\mu)}^2\to\mathcal L^2(\mu)\;,\;\;\;w\mapsto w_1+w_2-1$$ so that $S=\{G=0\}$. Since the Fréchet derivative of $G$ is given by ${\rm D}G(w)v=v_1+v_2$ for all $v,w\in{\mathcal L^2(\mu)}^2$ and is surjective, the normal cone should be $$N_S(w^\ast)=\left\{\begin{pmatrix}\Lambda\\\Lambda\end{pmatrix}:\Lambda\in\mathcal L^2(\mu)\right\}\tag3.$$ So, by $(1)$, there are $\Phi\in\partial F(w^\ast)$ and $\Lambda\in\mathcal L^2(\mu)$ with $$0=\Phi+\begin{pmatrix}\Lambda\\\Lambda\end{pmatrix}\tag4.$$ Moreover, by $(2)$, there is a $\varphi:E^2\to{\mathcal L^2(\mu)}^2$ with $$\varphi(x,y)\in\partial F_{(x,\:y)}(w^\ast)\;\;\;\text{for }\lambda^{\otimes2}\text{-almost all }(x,y)\in E^2\tag5$$ and $\langle\varphi,v\rangle\in\mathcal L^1\left(\lambda^{\otimes2}\right)$ with $$\langle\Phi,v\rangle=\int\lambda^{\otimes2}({\rm d}(x,y))\langle\varphi(x,y),v\rangle\tag6$$ for all $v\in{\mathcal L^2(\mu)}^2$. Now it's possible to show that \begin{equation}\begin{split}&\int\lambda^{\otimes2}({\rm d}(x,y))\langle\varphi(x,y),v\rangle\\&\;\;\;\;=\int\lambda^{\otimes2}({\rm d}(x,y))\Big(\left(\theta_1(x,y)f_1(x,y)v_1(x)+\left(1-\theta_1(x,y)\right)f_1(y,x)v_1(y)\right)g_{11}(x,y)\\&\;\;\;\;\;\;\;\;+\left(\theta_2(x,y)f_2(x,y)v_1(x)+\left(1-\theta_2(x,y)\right)f_1(y,x)v_2(y)\right)g_{12}(x,y)\\&\;\;\;\;\;\;\;\;+\left(\theta_3(x,y)f_1(x,y)v_2(x)+\left(1-\theta_3(x,y)\right)f_2(y,x)v_1(y)\right)g_{21}(x,y)\\&\;\;\;\;\;\;\;\;+\left(\theta_4(x,y)f_2(x,y)v_2(x)+\left(1-\theta_4(x,y)\right)f_2(y,x)v_2(y)\right)g_{22}(x,y)\Big)\end{split}\tag7\end{equation} where each $\theta_k$ selects the active branch of the corresponding minimum $w_i(x)f_j(x,y)\wedge w_j(y)f_i(y,x)$ at $w^\ast$: \begin{equation}\begin{split}\theta_1(x,y)&\in\begin{cases}\{1\}&\text{, if }f_1(x,y)w^\ast_1(x)<f_1(y,x)w^\ast_1(y)\\\{0\}&\text{, if }f_1(x,y)w^\ast_1(x)>f_1(y,x)w^\ast_1(y)\\ [0,1]&\text{, if }f_1(x,y)w^\ast_1(x)=f_1(y,x)w^\ast_1(y)\end{cases}\\\theta_2(x,y)&\in\begin{cases}\{1\}&\text{, if }f_2(x,y)w^\ast_1(x)<f_1(y,x)w^\ast_2(y)\\\{0\}&\text{, if }f_2(x,y)w^\ast_1(x)>f_1(y,x)w^\ast_2(y)\\ [0,1]&\text{, if }f_2(x,y)w^\ast_1(x)=f_1(y,x)w^\ast_2(y)\end{cases}\\\theta_3(x,y)&\in\begin{cases}\{1\}&\text{, if }f_1(x,y)w^\ast_2(x)<f_2(y,x)w^\ast_1(y)\\\{0\}&\text{, if }f_1(x,y)w^\ast_2(x)>f_2(y,x)w^\ast_1(y)\\ [0,1]&\text{, if }f_1(x,y)w^\ast_2(x)=f_2(y,x)w^\ast_1(y)\end{cases}\\\theta_4(x,y)&\in\begin{cases}\{1\}&\text{, if }f_2(x,y)w^\ast_2(x)<f_2(y,x)w^\ast_2(y)\\\{0\}&\text{, if }f_2(x,y)w^\ast_2(x)>f_2(y,x)w^\ast_2(y)\\ [0,1]&\text{, if }f_2(x,y)w^\ast_2(x)=f_2(y,x)w^\ast_2(y)\end{cases}\end{split}\tag8\end{equation}
How can we proceed from here? Note that $w^\ast$ enters $(7)$ only implicitly, through the definitions of the $\theta_k$ in $(8)$.
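Even without a closed form, one can approximate $w^\ast$ numerically: since the discretized objective is concave (see above), projected supergradient ascent converges with diminishing step sizes, and the supergradient selection is exactly the $\theta_k\in\{0,1\}$ branch choice from $(7)$/$(8)$. A sketch, again under a hypothetical uniform discretization with random placeholder data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
f = rng.random((2, n, n))      # placeholders for f_1, f_2 on the grid
g = rng.random((2, 2, n, n))   # placeholders for g_11, ..., g_22

def F(w):
    total = 0.0
    for i in range(2):
        for j in range(2):
            a = w[i][:, None] * f[j]    # w_i(x) f_j(x, y)
            b = w[j][None, :] * f[i].T  # w_j(y) f_i(y, x)
            total += np.sum(np.minimum(a, b) * g[i, j])
    return total / n**2

def supergradient(w):
    """One element of the superdifferential of the discretized F: the active
    branch of each minimum fixes theta in {0, 1} (ties resolved to theta = 1),
    exactly as in (7)/(8)."""
    grad = np.zeros_like(w)
    for i in range(2):
        for j in range(2):
            a = w[i][:, None] * f[j]
            b = w[j][None, :] * f[i].T
            active = a <= b             # theta = 1 branch
            grad[i] += np.sum(np.where(active, f[j] * g[i, j], 0.0), axis=1)
            grad[j] += np.sum(np.where(~active, f[i].T * g[i, j], 0.0), axis=0)
    return grad / n**2

def project(w):
    """Orthogonal projection onto {w : w_1 + w_2 = 1 pointwise}."""
    return w + (1.0 - w[0] - w[1])[None, :] / 2.0

w = project(rng.random((2, n)))
init_val = F(w)
best_w, best_val = w, init_val
for t in range(300):                    # diminishing steps 1/(t+1)
    w = project(w + supergradient(w) / (t + 1))
    if F(w) > best_val:
        best_w, best_val = w, F(w)
```

The projection is cheap here because $S$ is affine; note that $S$ does not require $0\le w_i\le1$, so no clipping is performed.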
EDIT 2: The result on the interchange of minimization and integration given in Theorem 14.60 of the book Variational Analysis by Rockafellar and Wets (see also Theorem 2.1 in this paper) might be useful.
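For reference, that interchange result (paraphrased; the precise hypotheses involve normality of the integrand and decomposability of the function space, so consult the theorem) states: for a normal integrand $h:E\times\mathbb R^n\to\overline{\mathbb R}$ and a decomposable space $\mathcal X$ of measurable functions $u:E\to\mathbb R^n$, $$\inf_{u\in\mathcal X}\int\lambda({\rm d}x)\,h(x,u(x))=\int\lambda({\rm d}x)\inf_{a\in\mathbb R^n}h(x,a),$$ provided the left-hand side is finite. Note, however, that the integrand of $F$ couples $w$ at the two points $x$ and $y$ simultaneously, so the theorem does not apply to $F$ verbatim; some decoupling or relaxation step would seem to be needed first.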