Gradient Boosting is an ensemble learning approach in which models are trained in succession, each one fit to correct the errors of the models before it; in the classical boosting setting this is done by adaptively reweighting the training set. At its core, gradient boosting applies the principle of gradient descent to the function space of predictive models.
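To make the functional gradient descent view concrete, the sketch below (a minimal illustration, not a reference implementation) fits a sequence of scikit-learn regression trees to the negative gradient of a squared-error loss, which in that case is simply the residual $y - F(x)$; the function names and hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, n_rounds=100, learning_rate=0.1):
    """Gradient descent in function space for squared-error loss: each new
    tree is fit to the negative gradient of the loss, which here is just
    the residual y - F(x)."""
    base = y.mean()                              # initial constant model F_0
    F = np.full(y.shape, base)
    trees = []
    for _ in range(n_rounds):
        residuals = y - F                        # negative gradient of (1/2)(y - F)^2
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, residuals)
        F = F + learning_rate * tree.predict(X)  # take a small step in function space
        trees.append(tree)
    return base, trees

def gradient_boost_predict(base, trees, X, learning_rate=0.1):
    """Sum the constant base prediction and each tree's scaled contribution."""
    return base + learning_rate * sum(tree.predict(X) for tree in trees)
```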
Boosting algorithms typically combine large populations of weak learners, such as shallow decision trees. Many variations of boosting have been introduced with different approaches to reweighting the training data, with AdaBoost perhaps being the most popular. Consider a dataset $\{(x_1, y_1), ..., (x_n, y_n)\}$ where a set of weak classifiers $\{f_1, ..., f_m\}$ is combined to output a classification label $y_i \in \{1, ..., C\}$ for each item. The boosted ensemble makes predictions using a linear combination of the individual classifiers, each weighted by a coefficient $\alpha_m$.
$$ F(x_i) = \alpha_1 f_1(x_i) + ... + \alpha_m f_m(x_i) $$
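For a multi-class problem this linear combination is usually realized as a weighted vote: each classifier contributes its weight $\alpha_m$ to the class it predicts, and the ensemble outputs the class with the largest total. A minimal sketch, assuming the weak classifiers expose a scikit-learn-style `predict` method and labels are encoded as integers $0, ..., C-1$:

```python
import numpy as np

def ensemble_predict(classifiers, alphas, X, n_classes):
    """Weighted vote: each weak classifier f_m casts a vote of weight
    alpha_m for its predicted class; the ensemble returns the class with
    the largest total vote."""
    votes = np.zeros((X.shape[0], n_classes))
    for f_m, alpha_m in zip(classifiers, alphas):
        preds = f_m.predict(X)                  # integer class labels 0..C-1 (assumed)
        votes[np.arange(X.shape[0]), preds] += alpha_m
    return np.argmax(votes, axis=1)
```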
Observation weights are initialized to $w_i = \frac{1}{n}$ for each sample in the training set. Then, for each of the $m$ classifiers in the ensemble, the classifier $f_m$ is fit to the training data using the current observation weights, and its weighted error is computed:
$$ err(f_m) = \frac{\sum_{i=1}^n w_i \mathbb{I} (y_i \neq f_m(x_i))}{\sum_{i=1}^n w_i} $$
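In code, the weighted error is the total weight of the misclassified samples divided by the total weight (the denominator is 1 when the weights are kept normalized); the helper below is an illustrative sketch assuming NumPy arrays:

```python
import numpy as np

def weighted_error(y_true, y_pred, w):
    """err(f_m): total weight of misclassified samples over total weight."""
    miss = (y_true != y_pred).astype(float)
    return np.dot(w, miss) / np.sum(w)
```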
The classifier weight $\alpha_m$ is then computed:
$$ \alpha_m = \log \frac{1 - err(f_m)}{err(f_m)} + \log(C - 1) $$
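The $\log(C - 1)$ term is the multi-class (SAMME-style) correction: it keeps $\alpha_m$ positive whenever the classifier does better than random guessing, i.e. $err(f_m) < 1 - 1/C$. A small illustrative helper, where the clipping constant is an assumption to avoid division by zero:

```python
import numpy as np

def classifier_weight(err, n_classes):
    """alpha_m = log((1 - err) / err) + log(C - 1); positive as long as the
    weak classifier beats random guessing (err < 1 - 1/C)."""
    eps = 1e-10                                 # assumed guard against log(0) / division by zero
    err = np.clip(err, eps, 1.0 - eps)
    return np.log((1.0 - err) / err) + np.log(n_classes - 1)
```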
The observation weights are then updated to assign more importance to the samples the previous classifier misclassified, and re-normalized so that they sum to one, $\sum_{i=1}^n w_i = 1$.
$$ w_i \leftarrow w_i \cdot \exp (\alpha_m \cdot \mathbb{I} (y_i \neq f_m(x_i))) $$
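A minimal sketch of the update, again assuming NumPy arrays: misclassified samples have their weights multiplied by $\exp(\alpha_m)$, correctly classified samples are left unchanged, and the weights are renormalized before the next round:

```python
import numpy as np

def update_weights(w, alpha_m, y_true, y_pred):
    """Multiply the weight of every misclassified sample by exp(alpha_m),
    leave the rest unchanged, then renormalize so the weights sum to one."""
    miss = (y_true != y_pred).astype(float)
    w = w * np.exp(alpha_m * miss)
    return w / np.sum(w)
```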