Here is the easiest way to see it: two intersecting straight lines satisfy the property, and among all pairs of convex functions with a given intersection point, two intersecting straight lines are the worst case.
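As a quick numerical sanity check of this picture (the slopes, the intersection point $(0.4,1)$, and all names below are my own illustrative choices, not part of the problem):

```python
# Two lines through the common point (x0, c) with slopes a1 > 0 > a2.
# Choosing lambda so that lambda*a1 + (1 - lambda)*a2 = 0 makes the
# combination identically equal to the positive intersection height c.

x0, c = 0.4, 1.0          # intersection point and common value c > 0
a1, a2 = 2.0, -3.0        # slopes of the two lines

f1 = lambda x: a1 * (x - x0) + c
f2 = lambda x: a2 * (x - x0) + c

lam = -a2 / (a1 - a2)     # solves lam*a1 + (1 - lam)*a2 = 0  ->  lam = 0.6

xs = [i / 100 for i in range(101)]
combo = [lam * f1(x) + (1 - lam) * f2(x) for x in xs]
print(min(combo), max(combo))   # both equal c = 1.0 up to rounding
```

With the slopes of opposite signs cancelled, the convex combination is the constant $c>0$, which is exactly what the worst-case intuition predicts.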
The proof below is an attempt to make this intuition rigorous and readable. The argument applies directly only to convex functions with reasonable regularity; the fully general case is addressed at the end.
First, as pointed out by Loic Teyssier, the case where $f_1, f_2$ do not intersect must be treated separately. If $f_{1}$ and $f_{2}$ do not intersect, then one of them dominates the other on all of $[0,1]$, and we can take $\lambda=0$ or $\lambda=1$ to make $\lambda f_{1}(x)+(1-\lambda) f_{2}(x)=\max \left\{f_{1}(x), f_{2}(x)\right\}$.
Second, as pointed out by Dieter Kadelka, the endpoints $\{0,1\}$ need separate treatment, because the proof of the Newton–Leibniz formula used below requires the points $a, b$ to lie in the interior of the interval. This can be done by using convexity to obtain a lower bound on $f_1(0), f_1(1), f_2(0), f_2(1)$.
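One way to make the endpoint step precise (a sketch using only convexity): for a convex $f$ on $[0,1]$, the endpoint values can only jump upward, i.e.
$$
f(0) \geqslant \lim _{x \rightarrow 0^{+}} f(x), \qquad f(1) \geqslant \lim _{x \rightarrow 1^{-}} f(x),
$$
so once a lower bound $\lambda f_{1}+(1-\lambda) f_{2} \geqslant c>0$ is established on the open interval $(0,1)$, the same bound at $x=0$ and $x=1$ follows by taking limits.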
With these cases settled, the claim is true, and in fact we can determine $\lambda$ explicitly. We need the following two lemmas.
So assume there exists $x_{0} \in[0,1]$ with $f_{1}\left(x_{0}\right)=f_{2}\left(x_{0}\right)>0$.
Lemma 1. If $f$ is a continuous convex function on $[0,1]$, then $f$ is Lipschitz on every closed subinterval of $(0,1)$.
Lemma 2. If $f$ is a Lipschitz function, then $f^{\prime}(x)$ exists a.e. in $[0,1]$, and for all $0 \leq a<b \leq 1$,
$$f(b)-f(a)=\int_{a}^{b} f^{\prime}(x) d x$$
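A quick numerical illustration of Lemma 2 (the function $f(x)=|x-1/2|$ and the endpoints $a=0.3$, $b=0.9$ are my own choices): $f$ is convex and Lipschitz, its derivative exists everywhere except at $1/2$, and the formula still holds.

```python
# f(x) = |x - 1/2| is convex and Lipschitz on [0, 1]; its derivative
# f'(x) = sign(x - 1/2) exists everywhere except at x = 1/2, yet the
# Newton-Leibniz formula f(b) - f(a) = int_a^b f'(t) dt still holds.

f = lambda x: abs(x - 0.5)
fprime = lambda x: -1.0 if x < 0.5 else 1.0   # the a.e. derivative

a, b, n = 0.3, 0.9, 100_000
h = (b - a) / n
# midpoint rule for the integral of f' over [a, b]
integral = sum(fprime(a + (i + 0.5) * h) for i in range(n)) * h

print(f(b) - f(a), integral)   # both close to 0.2
```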
We may also understand the Newton–Leibniz formula above in the sense of distributions, to avoid unnecessary trouble at points where the derivative does not exist or where the left and right derivatives differ (recall that without absolute continuity a function can have derivative zero a.e. and still be strictly monotonic, as Cantor-type functions show).
Then for all $x \in[0,1]$,
$\begin{aligned} \lambda f_{1}(x)+(1-\lambda) f_{2}(x) &=\lambda\left(f_{1}\left(x_{0}\right)+\int_{x_{0}}^{x} f_{1}^{\prime}(t) d t\right)+(1-\lambda)\left(f_{2}\left(x_{0}\right)+\int_{x_{0}}^{x} f_{2}^{\prime}(t) d t\right) \\ &=\lambda f_{1}\left(x_{0}\right)+(1-\lambda) f_{2}\left(x_{0}\right)+\lambda \int_{x_{0}}^{x} f_{1}^{\prime}(t) d t+(1-\lambda) \int_{x_{0}}^{x} f_{2}^{\prime}(t) d t \end{aligned}$
Now, WLOG, we assume $f_{1}(x)>0$ when $0 \leqslant x \leqslant x_{0}$ and $f_{2}(x)>0$ when $x_{0} \leqslant x \leqslant 1$, and assume $f_{1}^{\prime}\left(x_{0}\right)>0$, $f_{2}^{\prime}\left(x_{0}\right)<0$.
When $x_{0}<x \leqslant 1$: since the derivative of a convex function is nondecreasing, $f_{i}^{\prime}(t) \geqslant f_{i}^{\prime}\left(x_{0}\right)$ for $t \geqslant x_{0}$, so
$$
\begin{array}{l}
\int_{x_{0}}^{x} f_{1}^{\prime}(t) d t \geqslant\left|x-x_{0}\right| \cdot f_{1}^{\prime}\left(x_{0}\right) \\
\int_{x_{0}}^{x} f_{2}^{\prime}(t) d t \geqslant\left|x-x_{0}\right| \cdot f_{2}^{\prime}\left(x_{0}\right)
\end{array}
$$
When $0 \leqslant x<x_{0}$: now $f_{i}^{\prime}(t) \leqslant f_{i}^{\prime}\left(x_{0}\right)$ for $t \leqslant x_{0}$, so (note the sign, since the integrals run backwards)
$$
\begin{array}{l}
\int_{x_{0}}^{x} f_{1}^{\prime}(t) d t \geqslant-\left|x-x_{0}\right| \cdot f_{1}^{\prime}\left(x_{0}\right) \\
\int_{x_{0}}^{x} f_{2}^{\prime}(t) d t \geqslant-\left|x-x_{0}\right| \cdot f_{2}^{\prime}\left(x_{0}\right)
\end{array}
$$
So take $\lambda$ to solve $\lambda f_{1}^{\prime}\left(x_{0}\right)+(1-\lambda) f_{2}^{\prime}\left(x_{0}\right)=0$, i.e. $\lambda=\frac{-f_{2}^{\prime}\left(x_{0}\right)}{f_{1}^{\prime}\left(x_{0}\right)-f_{2}^{\prime}\left(x_{0}\right)} \in(0,1)$. In either case the two integral terms combine to something $\geqslant 0$, hence $\lambda f_{1}(x)+(1-\lambda) f_{2}(x) \geqslant \lambda f_{1}\left(x_{0}\right)+(1-\lambda) f_{2}\left(x_{0}\right)=f_{1}\left(x_{0}\right)>0$ for all $x \in[0,1]$.
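A small check of this choice of $\lambda$ on a concrete pair (the two parabolas below are my own test case, chosen to cross at $x_0=0.5$ with positive common value):

```python
# Two convex functions crossing at x0 = 0.5 with common value 0.19 > 0:
#   f1(x) = (x - 0.2)^2 + 0.1,  f1'(0.5) =  0.6 > 0
#   f2(x) = (x - 0.8)^2 + 0.1,  f2'(0.5) = -0.6 < 0

f1 = lambda x: (x - 0.2) ** 2 + 0.1
f2 = lambda x: (x - 0.8) ** 2 + 0.1

x0 = 0.5
d1, d2 = 2 * (x0 - 0.2), 2 * (x0 - 0.8)   # f1'(x0), f2'(x0)
lam = -d2 / (d1 - d2)                      # solves lam*d1 + (1 - lam)*d2 = 0

xs = [i / 1000 for i in range(1001)]
combo_min = min(lam * f1(x) + (1 - lam) * f2(x) for x in xs)
print(lam, combo_min)   # lam = 0.5, minimum = f1(x0) = 0.19 > 0
```

The minimum of the combination over a grid of $[0,1]$ is attained at $x_0$ and equals the common value $f_1(x_0)$, as the proof predicts.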
To prove the property for a general convex function, we can follow the argument above and modify the details (especially the Newton–Leibniz formula part) by using supporting lines of $f_{1}, f_{2}$ and the fact that a convex function lies above its supporting lines.
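The supporting-line version of the key step can be sketched as follows: every convex $f_{i}$ has one-sided derivatives everywhere, and for any slope $k_{i}$ between $f_{i}^{\prime}\left(x_{0}^{-}\right)$ and $f_{i}^{\prime}\left(x_{0}^{+}\right)$,
$$
f_{i}(x) \geqslant f_{i}\left(x_{0}\right)+k_{i}\left(x-x_{0}\right), \quad i=1,2 .
$$
Picking $k_{1}>0>k_{2}$ (possible under the WLOG assumption above) and $\lambda=\frac{-k_{2}}{k_{1}-k_{2}} \in(0,1)$, so that $\lambda k_{1}+(1-\lambda) k_{2}=0$, we get for every $x \in[0,1]$
$$
\lambda f_{1}(x)+(1-\lambda) f_{2}(x) \geqslant \lambda f_{1}\left(x_{0}\right)+(1-\lambda) f_{2}\left(x_{0}\right)=f_{1}\left(x_{0}\right)>0,
$$
with no regularity beyond convexity needed.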