Grouped Convolutions in Convolutional Neural Networks

The nn.Conv2d module in PyTorch has a parameter called groups, which controls how input channels are connected to output channels. With the default value of 1, every filter convolves over all of the input channels, so the weight tensor has shape (number of output channels, number of input channels, H, W). If groups is set to any value greater than 1, it must evenly divide both the number of input channels and the number of output channels. For example, with 8 input channels, 4 output channels, and groups=4, we would create filters of size (4, 2, 5, 5) for a 5x5 kernel, where the 2 means that each filter convolves with only 2 of the input maps. On the other hand, if the number of output channels is higher than the number of input channels, setting groups equal to the number of input channels creates filters of size (number of output channels, 1, 5, 5), where the 1 means that only one input channel is convolved with each filter; with 4 input channels and 8 output channels, 2 filters convolve each input channel to produce the desired 8 output maps.
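The weight shapes described above can be checked directly in PyTorch by inspecting the weight attribute of nn.Conv2d; a minimal sketch (the channel counts match the examples in the text):

```python
import torch.nn as nn

# Standard convolution (groups=1): each filter sees all 8 input channels.
standard = nn.Conv2d(in_channels=8, out_channels=4, kernel_size=5, groups=1)
print(standard.weight.shape)  # torch.Size([4, 8, 5, 5])

# Grouped convolution: each filter sees in_channels // groups = 2 input maps.
grouped = nn.Conv2d(in_channels=8, out_channels=4, kernel_size=5, groups=4)
print(grouped.weight.shape)  # torch.Size([4, 2, 5, 5])

# More outputs than inputs with groups == in_channels:
# each filter convolves exactly one input channel.
depthwise = nn.Conv2d(in_channels=4, out_channels=8, kernel_size=5, groups=4)
print(depthwise.weight.shape)  # torch.Size([8, 1, 5, 5])
```

In all cases the weight shape follows one rule: (out_channels, in_channels // groups, kH, kW).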

Group Convs

Grouped Convolutions for Parallel Computing

The above explanation shows that we can create one-to-one connections between input channels and output channels, as well as one-to-many connections when there are more output channels than input channels (which is usually the case as we go deeper into a CNN). Next, we will look at how grouped convolutions enable parallel computation, which is highly useful when training large deep learning models.

Grouped convolutions are useful when you want to learn multiple sets of features independently, since the groups behave like two or more smaller models that can be trained in parallel. Grouped convolutions were first introduced in AlexNet, where they allowed the model to be split across two smaller, less powerful GPUs.
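The parallelism comes from the fact that a grouped convolution is mathematically equivalent to several independent convolutions, each operating on its own slice of the input channels, whose outputs are concatenated. A minimal sketch of this equivalence (the layer and tensor names are illustrative, not from the original text):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 8, 16, 16)  # batch of 1, 8 channels, 16x16 maps

# One grouped convolution over 8 input channels, split into 2 groups.
grouped = nn.Conv2d(8, 8, kernel_size=3, groups=2, bias=False)

# Two independent branches, one per group, that could run on separate GPUs.
branch_a = nn.Conv2d(4, 4, kernel_size=3, bias=False)
branch_b = nn.Conv2d(4, 4, kernel_size=3, bias=False)

# Copy the grouped weights into the branches so the outputs match exactly.
with torch.no_grad():
    branch_a.weight.copy_(grouped.weight[:4])  # filters for group 1
    branch_b.weight.copy_(grouped.weight[4:])  # filters for group 2

out_grouped = grouped(x)
out_split = torch.cat([branch_a(x[:, :4]), branch_b(x[:, 4:])], dim=1)
print(torch.allclose(out_grouped, out_split, atol=1e-6))  # True
```

Because the two branches share no weights and touch disjoint input channels, they have no data dependency on each other, which is exactly what AlexNet exploited to place each group on its own GPU.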