I have been reading the paper "Do GANs learn the distribution? Some theory and empirics."
In Corollary D.1, they reference the paper "Generalization and Equilibrium in Generative Adversarial Nets", which in Theorem B.2 constructs an $\epsilon$-net to get an upper bound on an expectation.
I have little background in topology (I'm a CS grad student), so I'm not sure what the authors mean when they say they use "standard constructions" to obtain $\log |X| \le O(p \log(L L_\phi p/\epsilon))$ in Theorem B.2.
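For what it's worth, my best guess at the "standard construction" is the usual volumetric covering bound: if the discriminator parameters live in a Euclidean ball of radius $R$ in $\mathbb{R}^p$, then for $\epsilon' \le R$ there is an $\epsilon'$-net of size at most $(3R/\epsilon')^p$, so
$$\log |X| \le p \log(3R/\epsilon'),$$
and choosing $\epsilon'$ on the order of $\epsilon/(L L_\phi \, \mathrm{poly}(p))$ to absorb the Lipschitz constants would seem to give the stated $O(p \log(L L_\phi p/\epsilon))$. I'm not sure this is what the authors intend, though.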
I'm also unsure where the probability $1 - \exp(-p)$ needed in the proof comes from. Could someone more knowledgeable clear this up for me? A reference for these sorts of arguments would also be appreciated.
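For the $1 - \exp(-p)$ part, my rough guess is a concentration-plus-union-bound argument: a concentration inequality (e.g. Hoeffding) at each point of the net gives a per-point failure probability like $\exp(-\Omega(m\epsilon^2))$, and a union bound over the at most $\exp(O(p \log(L L_\phi p/\epsilon)))$ points of the net would then give a total failure probability of
$$\exp\big(O(p \log(L L_\phi p/\epsilon)) - \Omega(m\epsilon^2)\big) \le \exp(-p)$$
once the number of samples $m$ is large enough. But I can't see how to make the constants work out, so I may be missing something.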
Thanks in advance.