This talk is dedicated to the `caret` package (short for _C_lassification _A_nd _RE_gression _T_raining), which provides a consistent interface to a wide range of machine learning and statistical models. The package also automates major steps of model training: data splitting and resampling (e.g., cross-validation), data pre-processing (e.g., scaling), and parameter tuning. To demonstrate how `caret` simplifies these routine tasks, three classical models (k-NN, GLM, and neural networks) and their R implementations (`class::knn`, `stats::glm`, and `nnet::nnet`, respectively) are reviewed first. Using a simple example dataset, these models are calibrated and tuned via the standard manual workflow. The whole process is then rewritten from scratch using only the building blocks of the `caret` package. The emphasis is on the syntactic sugar that the package provides and on how it can improve the efficiency and quality of the code.