All convolutions in the dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only possible if the height and width dimensions of the feature maps remain unchanged, so convolutions within a dense block all have stride one. Pooling layers are instead inserted between dense blocks.
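To make this concrete, here is a minimal sketch of a dense block in PyTorch (the source names no framework, so this is an assumption). The `DenseBlock` class name, `growth_rate` parameter, and the BN → ReLU → 3×3 conv ordering are illustrative choices, not the article's own code:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Sketch of a dense block: each layer receives the channel-wise
    concatenation of the block input and all earlier layers' outputs."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            channels = in_channels + i * growth_rate
            # BN -> ReLU -> 3x3 conv; stride 1 with padding 1 keeps
            # height/width fixed so concatenation stays valid.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          stride=1, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Downsampling happens between blocks: the pooling layer halves the
# spatial resolution, which is why it cannot live inside the block.
block = DenseBlock(in_channels=64, growth_rate=32, num_layers=4)
pool = nn.AvgPool2d(kernel_size=2, stride=2)
y = pool(block(torch.randn(1, 64, 32, 32)))  # -> (1, 192, 16, 16)
```

In this sketch the block widens the channel dimension (64 + 4 × 32 = 192 channels) while the spatial size is untouched; only the pooling step between blocks reduces it from 32×32 to 16×16.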