

                          Research


                          InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions

Venue (conference/journal): arXiv

Wenhai Wang1*, Jifeng Dai2,1*, Zhe Chen3,1*, Zhenhang Huang1*, Zhiqi Li3,1*, Xizhou Zhu4*

                          Xiaowei Hu1, Tong Lu3, Lewei Lu4, Hongsheng Li5, Xiaogang Wang4,5, Yu Qiao1

1 Shanghai AI Laboratory  2 Tsinghua University  3 Nanjing University  4 SenseTime Research  5 The Chinese University of Hong Kong

                           https://github.com/OpenGVLab/InternImage


                          Abstract

Compared to the great progress of large-scale vision transformers (ViTs) in recent years, large-scale models based on convolutional neural networks (CNNs) are still in an early state. This work presents a new large-scale CNN-based foundation model, termed InternImage, which can obtain the gain from increasing parameters and training data like ViTs. Different from the recent CNNs that focus on large dense kernels, InternImage takes deformable convolution as the core operator, so that our model not only has the large effective receptive field required for downstream tasks such as detection and segmentation, but also has the adaptive spatial aggregation conditioned by input and task information. As a result, the proposed InternImage reduces the strict inductive bias of traditional CNNs and makes it possible to learn stronger and more robust patterns with large-scale parameters from massive data like ViTs. The effectiveness of our model is proven on challenging benchmarks including ImageNet, COCO, and ADE20K. It is worth mentioning that InternImage-H achieved a new record 65.4 mAP on COCO test-dev and 62.9 mIoU on ADE20K, outperforming current leading CNNs and ViTs.
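Since the abstract's key claim is that deformable convolution gives a CNN input-conditioned ("adaptive") spatial aggregation, a small sketch may help. Below is a minimal DCNv2-style modulated deformable convolution built on torchvision.ops.deform_conv2d; the class name DeformConvSketch and all shapes are illustrative assumptions, and InternImage's actual DCNv3 operator (in the repository above) differs in detail, e.g. grouped spatial aggregation and softmax-normalized modulation.

# Minimal sketch of modulated deformable convolution (DCNv2-style),
# for illustration only. InternImage's real core operator is DCNv3,
# implemented in the linked repository; among other things it uses
# grouped aggregation and softmax-normalized modulation weights.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformConvSketch(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        k, pad = kernel_size, kernel_size // 2
        # Per-pixel sampling offsets (2 per kernel point) and a modulation
        # mask (1 per kernel point) are predicted from the input itself;
        # this is what makes the spatial aggregation input-conditioned.
        self.offset = nn.Conv2d(channels, 2 * k * k, k, padding=pad)
        self.mask = nn.Conv2d(channels, k * k, k, padding=pad)
        self.weight = nn.Parameter(torch.empty(channels, channels, k, k))
        nn.init.kaiming_uniform_(self.weight)
        # Zero-init offsets so the layer starts out as a regular convolution.
        nn.init.zeros_(self.offset.weight)
        nn.init.zeros_(self.offset.bias)
        self.pad = pad

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offset = self.offset(x)             # where to sample
        mask = torch.sigmoid(self.mask(x))  # how much each sample counts
        return deform_conv2d(x, offset, self.weight,
                             padding=self.pad, mask=mask)

x = torch.randn(1, 64, 56, 56)
print(DeformConvSketch(64)(x).shape)  # torch.Size([1, 64, 56, 56])

Because the sampling locations are learned per pixel rather than fixed to a dense grid, the effective receptive field can grow with the data, which is the property the abstract contrasts with large-dense-kernel CNNs.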

                          comm@pjlab.org.cn

37-38F, West Bund International AI Center, 701 Yunjin Road, Xuhui District, Shanghai

