Source Journal of CSCD
Source Journal of Chinese Scientific and Technical Papers
Included as T2 Level in the High-Quality Science and Technology Journals in the Field of Environmental Science
Core Journal of RCCSE
Included in the CAS Content Collection
Included in the JST China
Indexed in World Journal Clout Index (WJCI) Report
Volume 41 Issue 2
Feb. 2023
XIA Jingming, XU Zifeng, TAN Lin. APPLICATION RESEARCH OF LIGHTWEIGHT NETWORK LW-GCNet IN GARBAGE CLASSIFICATION[J]. ENVIRONMENTAL ENGINEERING, 2023, 41(2): 173-180. doi: 10.13205/j.hjgc.202302023

APPLICATION RESEARCH OF LIGHTWEIGHT NETWORK LW-GCNet IN GARBAGE CLASSIFICATION

doi: 10.13205/j.hjgc.202302023
  • Received Date: 2022-05-02
  • Available Online: 2023-05-25
  • Publish Date: 2023-02-01
  • Garbage classification is an important way to build a green city. Traditional garbage classification is usually carried out manually; the sorting is not thorough and the labor intensity is high, which is not conducive to environmental protection and resource reuse. To improve the accuracy of garbage classification, this paper proposed a lightweight network model, LW-GCNet (light weight garbage classify network), based on VGG16. The model performed feature extraction by introducing depthwise separable convolution and SE (squeeze-and-excitation) modules, and organically fused the shallow and deep features of garbage images. These modules strengthened the dependencies between channels of the garbage images to be classified while reducing the computational complexity of the model, providing multi-level semantic information for accurate classification. In addition, LW-GCNet adopted adaptive max pooling and global average pooling in place of the fully connected layers of VGG16, which effectively reduced the number of parameters. The performance of LW-GCNet was validated on the GRAB125 dataset, which consists of four types of garbage images. The experimental results showed that, while maintaining recognition speed, the average recognition accuracy of the method reached 77.17% with only 3.15 M parameters, making it easy to deploy in outdoor embedded systems.
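The abstract names three architectural ideas: depthwise separable convolution, SE channel attention, and a pooling head that replaces VGG16's fully connected layers. The PyTorch code below is a minimal sketch of those generic building blocks only, assuming their standard formulations; the module names (SEModule, DSConvSE, PoolHead), channel widths, and reduction ratio are illustrative assumptions, not the authors' implementation of LW-GCNet.

# Minimal sketch of the building blocks named in the abstract (illustrative, not the authors' code).
import torch
import torch.nn as nn


class SEModule(nn.Module):
    """Squeeze-and-excitation: reweight channels with globally pooled statistics."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average per channel
        return x * weights.view(b, c, 1, 1)        # excite: channel-wise rescaling


class DSConvSE(nn.Module):
    """Depthwise separable convolution (depthwise 3x3 + pointwise 1x1) followed by SE."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.se = SEModule(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return self.se(x)


class PoolHead(nn.Module):
    """Adaptive max pooling + global average pooling instead of fully connected layers."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = self.max_pool(x) + self.avg_pool(x)   # fuse the two pooled descriptors (assumed fusion)
        return self.classifier(torch.flatten(pooled, 1))


if __name__ == "__main__":
    # Toy forward pass: three downsampling blocks, then a 4-class head
    # (matching the four garbage categories mentioned for GRAB125).
    backbone = nn.Sequential(DSConvSE(3, 32, 2), DSConvSE(32, 64, 2), DSConvSE(64, 128, 2))
    logits = PoolHead(128, num_classes=4)(backbone(torch.randn(1, 3, 224, 224)))
    print(logits.shape)  # torch.Size([1, 4])

Replacing the dense classifier with pooling accounts for most of the parameter savings relative to VGG16, whose fully connected layers alone contain over 100 M parameters.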
  • [1]
    孙晓杰, 王春莲, 李倩, 等. 中国生活垃圾分类政策制度的发展演变历程[J]. 环境工程, 2020, 38(8):65-70.
    [2]
    任中山, 陈瑛, 王永明. 生活垃圾分类对垃圾焚烧发电产业发展影响的分析[J]. 环境工程, 2021, 39(6):150-153.
    [3]
    王肇嘉, 秦玉, 顾军, 等. 生活垃圾焚烧飞灰二噁英控制技术研究进展[J]. 环境工程, 2021,39(10):116-123.
    [4]
    KRIZHEVSKY A, SUTSKEVER I, HINTON G E. Imagenet classification with deep convolutional neural networks[J]. Advances in neural information processing systems, 2012, 25:1097-1105.
    [5]
    SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv:1409.1556, 2014.
    [6]
    SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, inception-resnet and the impact of residual connections on learning[C]//The thirty-first AAAI conference on artificial intelligence. 2017.
    [7]
    TAN M X, LE Q V. Efficientnet:rethinking model scaling for convolutional neural networks[C]//the International Conference on Machine Learning. PMLR, 2019:6105-6114.
    [8]
    YANG M, THUNG G. Classification of trash for recyclability status[R]. CS229 Project Report, 2016, 2016:3.
    [9]
    ARAL R A, KESKIN R, KAYA M, et al. Classification of trashnet dataset based on deep learning models[C]//2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018:2058-2062.
    [10]
    RABANO S L, CABATUAN M K, SYBINGCO E, et al. Common garbage classification using mobilenet[C]//2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM). IEEE, 2018:1-4.
    [11]
    KENNEDY T. OscarNet:Using transfer learning to classify disposable waste[R]. CS230 Report:Deep Learning. Stanford University, CA, Winter, 2018.
    [12]
    OZKAYA U, SEYFI L. Fine-tuning models comparisons on garbage classification for recyclability[J]. arXiv:1908.04393, 2019.
    [13]
    KANG Z, YANG J, LI G, et al. An automatic garbage classification system based on deep learning[J]. IEEE Access, 2020, 8:140019-140029.
    [14]
    SHI C P, XIA R Y, WANG L G. A novel multi-branch channel expansion network for garbage image classification[J]. IEEE Access, 2020, 8:154436-154452.
    [15]
    ZENG M, LU X Z, XU W K, et al. PublicGarbageNet:a deep learning framework for public garbage classification[C]//2020 39th Chinese Control Conference (CCC). IEEE, 2020:7200-7205.
    [16]
    LI Y F, LIU W. Deep learning-based garbage image recognition algorithm[J]. Applied Nanoscience, 2021:1-10.
    [17]
    MITTAL G, YAGNIK K B, GARG M, et al. Spotgarbage:smartphone app to detect garbage using deep learning[C]//Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2016:940-945.
    [18]
    PROENÇA P F, SIMÕES P. TACO:Trash annotations in context for litter detection[J]. arXiv:2003.06975, 2020.
    [19]
    PANWAR H, GUPTA P K, SIDDIQUI M K, et al. AquaVision:automating the detection of waste in water bodies using deep transfer learning[J]. Case Studies in Chemical and Environmental Engineering, 2020, 2:100026.
    [20]
    GUO J B, LI Y X, LIN W Y, et al. Network decoupling:from regular to depthwise separable convolutions[J]. arXiv:1808.05517, 2018.
    [21]
    CHOLLET F. Xception:Deep learning with depthwise separable convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:1251-1258.
    [22]
    ZHAO Q T, SHENG T, WANG Y T, et al. M2det:A single-shot object detector based on multi-level feature pyramid network[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1):9259-9266.
    [23]
    HAASE D, AMTHOR M. Rethinking depthwise separable convolutions:How intra-kernel correlations lead to improved mobilenets[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020:14600-14609.
    [24]
    GUO Y H, LI Y D, WANG L Q, et al. Depthwise convolution is all you need for learning multiple visual domains[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1):8368-8375.
    [25]
    HUA B S, TRAN M K, YEUNG S K. Pointwise convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:984-993.
    [26]
    HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:7132-7141.
    [27]
    KAR A, RAI N, SIKKA K, et al. Adascan:adaptive scan pooling in deep convolutional neural networks for human action recognition in videos[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:3376-3385.
    [28]
    LIN M, CHEN Q, YAN S. Network in network[J]. arXiv:1312.4400, 2013.