What is the role of the non-linearity in deep networks? In image classification, it has been shown to induce a neural collapse, where images from the same class are all mapped to the same representation. Two main theories have been proposed: one based on sparsity assumptions (using soft-thresholding) and one based on symmetry groups (using complex moduli). We show that the mechanism at play in deep networks is the latter, and introduce a phase collapse operation which is both necessary and sufficient to reach high classification accuracies. It leads to a simplified architecture, with no learned biases or spatial filters, that is mathematically easier to reason about while preserving performance.
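As a minimal sketch (not the paper's implementation), a phase collapse can be illustrated with the complex modulus: coefficients that differ only by a phase factor are mapped to the same representation, so the phase variability is discarded.

```python
import cmath

# Hypothetical complex coefficients (e.g. outputs of a complex filter bank).
coeffs = [1 + 2j, -0.5 + 0.3j, 2 - 1j]

# An arbitrary global phase shift of unit modulus.
phase = cmath.exp(1j * 0.7)

# Phase collapse: keep only the modulus of each coefficient.
collapsed_a = [abs(z) for z in coeffs]
collapsed_b = [abs(phase * z) for z in coeffs]  # same coefficients, shifted phase

# The two representations coincide: the phase has been collapsed.
print(all(abs(a - b) < 1e-12 for a, b in zip(collapsed_a, collapsed_b)))  # True
```

Since |e^{i\theta} z| = |z|, the modulus is invariant to the phase, which is the collapse mechanism referred to above.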