
A Chinese associate professor at Brown University hand-compiled 25 years of best papers from 30 conferences: Microsoft first, Tsinghua and Peking University outside the top 30

Reporting by XinZhiyuan

Editor: LRS

Recently, a Chinese associate professor at Brown University hand-compiled 25 years of best paper awards from 30 conferences, maximizing your study efficiency! He also attached a ranking, for entertainment and reference only: Microsoft ranked first, while Peking University and Tsinghua University landed outside the top 30.

Computer science conferences generally give a best paper award. Every time you see a top researcher win one, don't you feel the urge to study their work?

Recently, a researcher compiled all the best papers from major computer science conferences of the past 25 years (1996-2021), covering 30 venues including AAAI, ACL, CVPR, KDD, SIGIR, and WWW, so you can study them all in one place!

Website: https://jeffhuang.com/best_paper_awards/

For conferences that do not have a best paper award, such as SIGGRAPH and CAV, the compiler included papers from the Excellent Paper and Outstanding Paper awards instead, but best student papers and best papers of the decade were excluded. Some recently held conferences, such as CVPR 2022 and AAAI 2022, have not yet been added.

Let's take a look at which papers by Chinese authors have won best paper awards!

CVPR

The most recent CVPR best paper with a Chinese author is from CVPR 2020, won by Shangzhe Wu of the University of Oxford.

Address of the paper: https://www.semanticscholar.org/paper/2245620c912d669dd6ceb325c127ecbba01b1516

He is currently a fourth-year PhD student at Oxford's VGG group. His research focuses on unsupervised 3D learning and inverse rendering: designing unsupervised algorithms that recover physics-based representations from images and videos by training on unlabeled data, without explicit annotations.

The paper proposes a method for learning 3D deformable object categories from raw single-view images, without external supervision. The method is based on an autoencoder that factors each input image into depth, albedo, viewpoint, and illumination. To decompose these components without supervision, it exploits the fact that many object categories have, at least in principle, a symmetric structure.

The results show that even when an object's appearance is not symmetric due to shading, reasoning about illumination allows the model to exploit the underlying symmetry. In addition, the researchers model objects that are possibly, but not necessarily, symmetric by predicting a symmetry probability map, learned end-to-end with the other components of the model.

Experimental results show that this method can recover the 3D shapes of human faces, cat faces, and cars from single-view images very accurately, without any supervision or prior shape model. On benchmarks, it also demonstrates higher accuracy than another method that uses supervision in the form of 2D image correspondences.
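The symmetry assumption described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual code; the function name and the optional weighting are assumptions, but the core idea is faithful: a canonical depth or albedo map should match its horizontal mirror image, with an optional symmetry-probability map downweighting genuinely asymmetric regions.

```python
import numpy as np

def symmetry_loss(canonical_map, sym_prob=None):
    # Minimal sketch of the paper's key assumption: the canonical
    # depth/albedo map of many object classes is roughly bilaterally
    # symmetric, so the map should match its horizontal mirror.
    flipped = canonical_map[:, ::-1]
    err = (canonical_map - flipped) ** 2
    if sym_prob is not None:
        # per-pixel symmetry probabilities downweight regions that
        # are genuinely asymmetric (e.g. a hair parting)
        err = err * sym_prob
    return float(err.mean())
```

A perfectly symmetric map yields zero loss, so minimizing this term (alongside a reconstruction loss) pushes the learned canonical representation toward symmetry.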

ACL

The best paper for ACL2021 comes from ByteDance's Jingjing Xu.

Address: https://aclanthology.org/2021.acl-long.571/

Code: https://github.com/Jingjing-NLP/VOLT

The choice of token vocabulary affects the performance of machine translation. The paper sets out to answer what makes a vocabulary good, and whether the best vocabulary can be found without trial training. To answer these questions, the researchers first develop an alternative, information-theoretic view of vocabularies, formulating vocabularization (the search for the best token vocabulary with a proper size) as an optimal transport (OT) problem.

The paper proposes VOLT, a simple and efficient solution. Empirical results show that VOLT outperforms widely used vocabularies in diverse scenarios, including WMT-14 English-German translation and TED multilingual translation. For example, on English-German translation, VOLT achieves a 70% vocabulary size reduction and a 0.5 BLEU gain. Moreover, compared with BPE-search, VOLT reduces the search time on English-German translation from 384 GPU hours to 30 GPU hours.
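The information-theoretic view can be made concrete with a toy scoring function. This is a hedged sketch, not VOLT's actual algorithm (the helper name and the exact normalization are assumptions): it scores a tokenization by the corpus-level token entropy it induces, normalized per character, which is the kind of quantity the paper's information-theoretic framing trades off against vocabulary size.

```python
import math
from collections import Counter

def bits_per_char(tokens, num_chars):
    # Score a tokenized corpus by its empirical token entropy,
    # normalized by the number of characters it covers. A larger
    # vocabulary tends to lower this, at the cost of rare tokens.
    counts = Counter(tokens)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values())
    return entropy * total / num_chars
```

With such a score in hand, the search over candidate vocabularies of different sizes is what VOLT casts as an optimal transport problem instead of brute-force trial training.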

KDD

The best paper from KDD2019 comes from Yipeng Zhang of RMIT University.

Address of the paper: https://dl.acm.org/doi/10.1145/3292500.3330829

The researchers propose and study the problem of optimizing the influence of outdoor advertising while accounting for impression counts. Given a billboard database (each billboard with a location and a non-uniform cost), a trajectory database, and a budget, the goal is to find a set of billboards with the largest influence within the budget. In line with consumer behavior studies of advertising, the influence metric uses a logistic function over the number of impressions a user trajectory receives from ads placed on different billboards. This formulation presents two challenges:

(1) the problem is NP-hard to approximate within a factor of O(|T|^(1-ε)) in polynomial time, for any ε > 0;

(2) the influence measure is non-submodular, which means a straightforward greedy method does not apply.

Therefore, the researchers propose a tangent-line-based algorithm that computes a submodular function to estimate an upper bound on influence, and introduce a branch-and-bound framework with a θ-termination condition, achieving an approximation ratio of θ²/(1 − 1/e).

However, when |U| is large, this framework is time-consuming, so the researchers further optimize it with a progressive pruning upper-bound estimation method, achieving an approximation ratio of θ²/(1 − 1/e − ε) while greatly reducing the runtime. Experiments on real-world billboard and trajectory datasets show that the proposed method is up to 95% more effective than the baselines, and the optimized method is two orders of magnitude faster than the original framework.
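The impression-count influence model can be sketched as follows. The exact curve is an assumption (the paper only specifies a logistic-shaped function); the point of the sketch is that each trajectory's chance of being influenced starts at zero and saturates toward one as it passes more selected billboards, so extra impressions bring diminishing returns, which is what breaks submodularity for the set function over billboards with non-uniform costs.

```python
import math

def influence(selected, trajectories):
    # selected: set of billboard ids; trajectories: list of sets of
    # billboard ids a user passes. Each trajectory contributes a
    # logistic-style probability of being influenced, 0 at zero
    # impressions and saturating toward 1.
    total = 0.0
    for traj in trajectories:
        impressions = len(traj & selected)
        total += 2.0 / (1.0 + math.exp(-impressions)) - 1.0
    return total
```

Because this objective is non-submodular, the paper bounds it from above with a submodular surrogate (the tangent-line construction) and searches via branch-and-bound rather than plain greedy.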

AAAI

AAAI 2021 had two best papers; one of them is the Informer model from Beihang University (Beijing University of Aeronautics and Astronautics).

Address of the paper: https://www.aaai.org/AAAI21Papers/AAAI-7346.ZhouHaoyi.pdf

Real-world applications often require forecasting long time series, for example in electric power planning. Long sequence time-series forecasting (LSTF) demands models with high predictive capacity, i.e., the ability to capture precise long-range dependencies between outputs and inputs. Studies have shown that Transformers have the potential to improve predictive capacity, but several serious issues prevent them from being applied directly to LSTF: quadratic time complexity, high memory consumption, and inherent limitations of the encoder-decoder architecture.

To address these issues, the researchers designed an efficient Transformer-based model named Informer, with three distinctive features: (i) a ProbSparse self-attention mechanism, which achieves O(L log L) time complexity and memory usage while performing comparably on sequence dependency alignment; (ii) self-attention distilling, which highlights dominant attention by halving the input of each cascading layer and can efficiently handle extremely long input sequences; (iii) a generative-style decoder that, while conceptually simple, predicts long sequences in a single forward pass rather than step by step, greatly improving inference speed for long-sequence predictions.

Experimental results on four large-scale datasets show that Informer's performance is significantly better than existing methods and provides a new solution to the LSTF problem.

Microsoft wins

In addition to viewing the best papers by conference and year, the producers also ranked the authors of each best paper.

The score gives 1 point to the first author, 0.5 to the second, 0.33 to the third, and so on (roughly 1/k for the k-th author); the scores are then normalized.
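The stated scheme is simple enough to sketch directly. This assumes the "1, 0.5, 0.33, ..." sequence is 1/k for the k-th author; the site's subsequent per-institution aggregation and normalization are omitted here.

```python
from collections import defaultdict

def author_scores(papers):
    # papers: list of author-name lists, one per best paper award.
    # The k-th author (1-indexed) of each winning paper earns 1/k
    # points; points are summed across an author's awarded papers.
    scores = defaultdict(float)
    for authors in papers:
        for k, name in enumerate(authors, start=1):
            scores[name] += 1.0 / k
    return dict(scores)
```

Summing these per-author points by affiliation is what produces the institution ranking discussed below.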

The compiler notes that the ranking may be inaccurate or incomplete and may not reflect the current state of best paper awards; it is not an official list, so take it lightly if you disagree with how the institutions here are ranked.

You can see that Microsoft scored 62.4, followed by the University of Washington with 56.9 and Carnegie Mellon University with 52.2. Google, by comparison, scored just 21.3, roughly a third of Microsoft's total.

The top two research institutions in China are Peking University and Tsinghua University, both ranked outside the top 30.

About the author of the list

The best paper data was manually searched and collated from the internet by the author, which is no small amount of work!

The list's author is Jeff Huang, currently an associate professor in the Department of Computer Science at Brown University. His main research areas are human-computer interaction and building personalized systems from user behavior data, applied to attention, mobility, and health. He has won an NSF CAREER Award, a Facebook Fellowship, and an ARO Young Investigator Award.

Jeff Huang received his PhD in Information Science from the University of Washington in Seattle, and his bachelor's and master's degrees in Computer Science from the University of Illinois at Urbana-Champaign (UIUC). Before joining Brown, he analyzed search behavior at Microsoft Research, Google, Yahoo, and Bing, and founded World Blender, a Techstars-backed company making geolocation-based mobile games.

Resources:

https://jeffhuang.com/best_paper_awards/
