All convolutions in a dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only possible if the height and width dimensions of the data remain unchanged, so all convolutions in a dense block use stride 1. Pooling layers are inserted between dense blocks.
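As a minimal sketch of these constraints, assuming PyTorch, a dense block might look like the following. The growth rate, layer count, and channel sizes are illustrative choices, not values from the text:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Illustrative dense block: each layer's output is concatenated
    channel-wise with all earlier feature maps. All convolutions use
    stride 1, so height and width never change inside the block."""

    def __init__(self, in_channels: int, growth_rate: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            # BN -> ReLU -> 3x3 conv; stride 1 and padding 1 keep H and W fixed
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          stride=1, padding=1, bias=False),
            ))
            channels += growth_rate  # concatenation grows the channel count

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            out = layer(x)
            # channel-wise concat requires matching spatial dimensions
            x = torch.cat([x, out], dim=1)
        return x

# Downsampling happens between blocks, via a pooling layer
block = DenseBlock(in_channels=64)
pool = nn.AvgPool2d(kernel_size=2, stride=2)
x = torch.randn(1, 64, 32, 32)
y = pool(block(x))  # block: (1, 192, 32, 32) -> pool: (1, 192, 16, 16)
```

Because spatial size is preserved within the block, only the channel dimension grows (here from 64 to 64 + 4 × 32 = 192), and the pooling layer between blocks handles all spatial downsampling.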