I misspoke today in response to Garrett’s question about a vector-valued loss function (instead of a scalar loss function). If your loss (or any other) function outputs a vector of values, then the matrix of partial derivatives of each output with respect to each input is called the Jacobian matrix. It’s normally denoted \( J_{\!f}(x) \), and its entries are \( J_{{\!f}_{ij}}(x) = \frac{\partial f_i}{\partial x_j} \).
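As a quick illustration (not something we did in class), here’s a minimal sketch using JAX’s automatic differentiation; the function `f` below is a made-up vector-valued function, just to show the shape of the Jacobian:

```python
import jax
import jax.numpy as jnp

# A made-up vector-valued function f: R^3 -> R^2, for illustration only.
def f(x):
    return jnp.array([x[0] * x[1], jnp.sin(x[2]) + x[0] ** 2])

x = jnp.array([1.0, 2.0, 3.0])

# jax.jacobian builds the full Jacobian: entry (i, j) is df_i / dx_j.
J = jax.jacobian(f)(x)
print(J.shape)  # (2, 3): one row per output f_i, one column per input x_j
print(J)
```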
The Hessian matrix, \( H_{\!f}(x) \), looks similar to the Jacobian, but it applies to a scalar-valued function and contains its second-order partial derivatives. (In other words, its entries are \( H_{{\!f}_{ij}}(x)=\frac{\partial^2 f}{\partial x_i\,\partial x_j} \).)
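And a matching sketch for the Hessian, again with a toy scalar function `g` chosen only for illustration:

```python
import jax
import jax.numpy as jnp

# A made-up scalar-valued function g: R^2 -> R, for illustration only.
def g(x):
    return x[0] ** 2 * x[1] + jnp.exp(x[1])

x = jnp.array([1.0, 0.5])

# jax.hessian gives the matrix of second-order partials:
# entry (i, j) is d^2 g / (dx_i dx_j).
H = jax.hessian(g)(x)
print(H.shape)  # (2, 2): square and symmetric for this smooth g
print(H)
```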