Contractive Autoencoders

Basic Idea of Contractive Autoencoder

The idea is to add an explicit term to the loss that penalizes the solution so that the learned representation becomes robust to small changes around the training examples. It achieves this by imposing a different penalty term on the representation.

Contractive Loss Function

The reconstruction term of the loss function is the $\ell_2$ loss, as in the previous autoencoders. The penalty term is the squared Frobenius norm of the representation's Jacobian matrix with respect to the training data. Computing the Jacobian of the hidden layer $h$ with respect to the input $X$ is similar to computing a gradient. Recall that the Jacobian $J$ is the generalization of the gradient: when a function is vector-valued, its partial derivatives form a matrix called the Jacobian. Hence, the loss function is as follows:

$$L = \lVert X - \hat{X} \rVert_2^2 + \lambda \lVert J_h(X) \rVert_F^2$$

where

$$\lVert J_h(X) \rVert_F^2 = \sum_{ij} \left( \frac{\partial h_j(X)}{\partial X_i} \right)^2$$
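As a concrete illustration, here is a minimal NumPy sketch of this loss for a one-hidden-layer autoencoder with a sigmoid encoder and a linear decoder (all function and variable names here are hypothetical, not from any library). For a sigmoid unit $h_j = \sigma(w_j \cdot x + b_j)$, each Jacobian entry is $\partial h_j / \partial x_i = h_j (1 - h_j) W_{ji}$, which gives a closed form for the Frobenius-norm penalty:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_loss(x, W, b, W_out, b_out, lam=1e-3):
    """Reconstruction loss plus contractive penalty for a single example x.

    Assumes a sigmoid encoder h = sigmoid(W x + b) and a linear
    decoder x_hat = W_out h + b_out (illustrative architecture).
    """
    h = sigmoid(W @ x + b)              # hidden representation h(X)
    x_hat = W_out @ h + b_out           # reconstruction X_hat
    recon = np.sum((x - x_hat) ** 2)    # ||X - X_hat||_2^2

    # For sigmoid units, dh_j/dx_i = h_j(1 - h_j) * W_ji, so
    # ||J_h(X)||_F^2 = sum_j (h_j(1 - h_j))^2 * sum_i W_ji^2
    jac_frob_sq = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))

    return recon + lam * jac_frob_sq
```

In practice the penalty is averaged over a minibatch and minimized jointly with the reconstruction term; the closed form above avoids materializing the full Jacobian for sigmoid encoders.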