In machine learning, overfitting, the gap between a model's training-time and test-time performance, has long been one of the central challenges to overcome. Accordingly, a substantial theory of the generalization properties of ML models has been developed over the past decades. However, research has mostly focused on single-objective optimization problems, in contrast with recent trends in which not only a model's accuracy but also its fairness, robustness, interpretability, sparsity, etc. are optimized. As it turns out, the single-objective theory extends straightforwardly to its multi-objective counterpart. Moreover, these extensions can be used to form meaningful statements specific to the multi-objective setting. Using simple tools, the theory provides insights into the behaviour of families of parametrized scalarizations of the loss vector. The generalization statements hold with high probability uniformly over all parametrizations at once. As a consequence, strong statements about multi-dimensional empirical risk minimization can be deduced. As a bonus, the provided generalization bounds are proved to be almost tight in some settings. The theory is supported by experiments on the Adult dataset, demonstrating both the behaviour of the whole Pareto curve and the empirical tightness of the theoretical bounds.
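To make the notion of a parametrized family of scalarizations concrete, the following minimal sketch (with purely illustrative data, not the paper's actual setup) traces an empirical Pareto front for two objectives by minimizing linear scalarizations of the loss vector over a finite model class:

```python
import numpy as np

# Hypothetical illustration: two empirical losses (e.g. error and unfairness)
# for each model in a small finite model class; values are random stand-ins.
rng = np.random.default_rng(0)
losses = rng.random((50, 2))  # rows: models, columns: (loss_1, loss_2)

# Family of linear scalarizations s_w(L) = w*L_1 + (1-w)*L_2, parametrized
# by the weight w; sweeping w traces out candidate trade-offs.
weights = np.linspace(0.0, 1.0, 11)

# For each w, select the empirical scalarized-risk minimizer; the resulting
# loss vectors lie on (part of) the empirical Pareto front.
front = []
for w in weights:
    scalarized = w * losses[:, 0] + (1 - w) * losses[:, 1]
    front.append(losses[np.argmin(scalarized)])
front = np.array(front)

# Sanity check: minimizers of linear scalarizations are Pareto-optimal,
# i.e. no other model is at least as good in both objectives and strictly
# better in one.
for f in front:
    dominated = np.all(losses <= f, axis=1) & np.any(losses < f, axis=1)
    assert not dominated.any()
```

A uniform generalization bound in this setting would control, with high probability, the gap between the empirical and true scalarized risks simultaneously for every weight w, so statements about the empirical front transfer to the population front.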