Visual Pattern Recognition Networks (VPRNs) deliver remarkable performance using Deep Neural Networks (DNNs). Various DNN architectures are available, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and their variants. The continuous emergence of novel architectures, the availability of big data, and enormous computing power keep this area flourishing. However, such DNN-based VPRN models suffer from enormous computational complexity, intense memory requirements, and high power consumption, which restricts their use in edge-AI environments. For instance, a typical pre-trained VGG-16 model needs 500 MB of memory, has over 138 million parameters, and involves 15.5 billion Floating Point Operations (FLOPs). Such overheads demand compression of VPRN models while retaining their performance. In this regard, various compression methods have been reported in the literature. This paper presents experimentation on a hybrid compression method that combines pruning and quantization to achieve a higher compression ratio for non-sequential as well as sequential DNN-based VPRNs. © 2022 IEEE.
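To illustrate the general idea behind such a hybrid scheme, the following is a minimal NumPy sketch that applies unstructured magnitude pruning followed by symmetric 8-bit quantization to a weight matrix. The specific choices (50% sparsity, per-tensor symmetric int8 quantization) are illustrative assumptions, not the exact pipeline used in the paper.

```python
import numpy as np

def prune_magnitude(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value serves as the pruning threshold
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize_uniform_int8(w):
    """Symmetric per-tensor uniform quantization to int8."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

pruned = prune_magnitude(w, sparsity=0.5)
q, scale = quantize_uniform_int8(pruned)

# Pruned weights remain exactly zero after quantization, so the
# sparsity and the low-bit representation compound: the int8 tensor
# is 4x smaller than float32, and half its entries are zero.
print("sparsity:", (q == 0).mean())
print("dtype:", q.dtype, "scale:", scale)
```

In a real pipeline the pruned model would typically be fine-tuned before quantization to recover accuracy; the sketch only shows how the two compression steps compose.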