Latest Advances in Large Models at Top AI Conferences | Including Papers from IJCAI, CVPR, AAAI 2023, and More

Author: AMiner Science and Technology Intelligence Mining

A large language model (LLM) is a deep-learning-based natural language processing model. It learns the syntax and semantics of natural language and can therefore generate human-readable text. Large language models are applied in many fields, including natural language processing (NLP), computer vision, audio and speech processing, and biology, as well as in cross-modal modeling and representation learning, generation, and language translation tasks.
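To make the "generates human-readable text" point concrete, here is a minimal sketch of prompting a pre-trained language model via the Hugging Face transformers library. The library call is standard, but the model choice (gpt2) and the prompt are illustrative assumptions only, not tied to any paper in the list below.

```python
# Minimal sketch: text generation with a pre-trained language model.
# The model "gpt2" is an illustrative stand-in, not one of the models
# discussed in the papers listed in this post.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models are"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```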

The explosive popularity of ChatGPT has drawn much wider attention to large models. We have compiled large-model papers from this year's top AI conferences; a selection is listed below. (Due to space constraints, this post shows only part of the list; click "Read the original" to go directly to the conference lists and view all the papers.)

1.WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences

2.Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models

3.Generative Relevance Feedback with Large Language Models

4.Benchmarking Middle-Trained Language Models for Neural Search

5.What Makes Pre-trained Language Models Better Zero/Few-shot Learners?

6.GLM-130B: An Open Bilingual Pre-trained Model

7.Self-Consistency Improves Chain of Thought Reasoning in Language Models

8.Large Language Models Are Human-Level Prompt Engineers

9.EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention

10.Discovering Latent Knowledge in Language Models Without Supervision

11.Language Modelling with Pixels

12.Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization

13.Learning on Large-scale Text-attributed Graphs via Variational Inference

14.Generate rather than Retrieve: Large Language Models are Strong Context Generators

15.Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning

16.Selective Annotation Makes Language Models Better Few-Shot Learners

17.PEER: A Collaborative Language Model

18.ReAct: Synergizing Reasoning and Acting in Language Models

19.Reward Design with Language Models

20.Automatic Chain of Thought Prompting in Large Language Models

21.UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining

22.Quantifying Memorization Across Neural Language Models

23.Compositional Semantic Parsing with Large Language Models

24.Ask Me Anything: A simple strategy for prompting language models

25.Language Models are Multilingual Chain-of-Thought Reasoners

26.Learning to Jointly Share and Prune Weights for Grounding Based Vision and Language Models

27.Generating Sequences by Learning to Self-Correct

28.Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning

29.Visual Classification via Description from Large Language Models

30.Recitation-Augmented Language Models

31.Language Models are Realistic Tabular Data Generators

32.Training language models for deeper understanding improves brain alignment

33.Progressive Prompts: Continual Learning for Language Models without Forgetting

34.Can discrete information extraction prompts generalize across language models?

35.On Pre-training Language Model for Antibody

36.Open-Vocabulary Object Detection upon Frozen Vision and Language Models

37.Mass-Editing Memory in a Transformer

38.Language Models Can Teach Themselves to Program Better

39.Out-of-Distribution Detection and Selective Generation for Conditional Language Models

40.Compositional Task Representations for Large Language Models

41.Planning with Large Language Models for Code Generation

42.Prototypical Calibration for Few-shot Learning of Language Models

43.Multi-lingual Evaluation of Code Generation Models

44.Planning with Language Models through Iterative Energy Minimization

45.Dataless Knowledge Fusion by Merging Weights of Language Models

46.Task Ambiguity in Humans and Language Models

47.Language Models Can (kind of) Reason: A Systematic Formal Analysis of Chain-of-Thought

48.Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks

49.Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small

50.Leveraging Large Language Models for Multiple Choice Question Answering

How to Use ChatPaper?

To help more researchers extract knowledge from the literature efficiently, AMiner has built ChatPaper on top of the GLM-130B large model. ChatPaper helps researchers search and read papers faster and keep up with the latest developments in their field, making research work more manageable.

ChatPaper is a conversational private knowledge base that combines retrieval, reading, and knowledge Q&A. Through it, AMiner hopes to use technology to help everyone acquire knowledge more efficiently.
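As an illustration of the retrieve-then-answer pattern that ChatPaper combines, here is a minimal sketch of the retrieval half. This is not ChatPaper's actual implementation (which this post does not describe); the toy corpus, the TF-IDF retriever, and the final answering step are all stand-in assumptions.

```python
# Illustrative sketch of retrieval for a retrieval + QA system.
# NOT ChatPaper's real implementation; corpus and retriever are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: short descriptions standing in for paper passages.
papers = [
    "WebGLM: an efficient web-enhanced question answering system",
    "GLM-130B: an open bilingual pre-trained model",
    "ReAct: synergizing reasoning and acting in language models",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(papers)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [papers[i] for i in top]

# In a full system, the retrieved passages would be passed as context
# to a large model (ChatPaper is built on GLM-130B) to produce an answer.
print(retrieve("Which paper introduces a bilingual pre-trained model?"))
```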

ChatPaper: https://www.aminer.cn/chat/g