
Latest Advances in Large Models at Top AI Conferences | Featuring Papers from IJCAI, CVPR, AAAI 2023, and More

Author: AMiner Tech Intelligence Mining

Large language models (LLMs) are deep-learning-based natural language processing models that learn the syntax and semantics of natural language and can therefore generate human-readable text. LLMs are applied in many fields, including natural language processing (NLP), computer vision, audio and speech processing, and biology, as well as in cross-modal modeling and representation learning, generation, and translation tasks.

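To make the "generate human-readable text" part concrete, below is a minimal sketch of autoregressive generation using the Hugging Face transformers library. The gpt2 checkpoint and the prompt are assumptions chosen purely for illustration; they are not models or methods from the paper list that follows.

```python
# A minimal sketch of autoregressive text generation with a small
# pretrained language model (gpt2 is used here purely for illustration).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models can"  # hypothetical prompt for the demo
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation token by token; at each step the model predicts
# the next token from the patterns it learned during pretraining.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
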
The explosive popularity of ChatGPT has drawn more attention to large models. Based on this year's top AI conferences, we have compiled the papers related to large models; a partial list follows (due to space constraints, this post shows only some of the conference papers — click the "Read the original" link to go to the conference lists and view all papers).

1.WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences

2.Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models

3.Generative Relevance Feedback with Large Language Models

4.Benchmarking Middle-Trained Language Models for Neural Search

5.What Makes Pre-trained Language Models Better Zero/Few-shot Learners?

6.GLM-130B: An Open Bilingual Pre-trained Model

7.Self-Consistency Improves Chain of Thought Reasoning in Language Models

8.Large Language Models Are Human-Level Prompt Engineers

9.EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention

10.Discovering Latent Knowledge in Language Models Without Supervision

11.Language Modelling with Pixels

12.Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization

13.Learning on Large-scale Text-attributed Graphs via Variational Inference

14.Generate rather than Retrieve: Large Language Models are Strong Context Generators

15.Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning

16.Selective Annotation Makes Language Models Better Few-Shot Learners

17.PEER: A Collaborative Language Model

18.ReAct: Synergizing Reasoning and Acting in Language Models

19.Reward Design with Language Models

20.Automatic Chain of Thought Prompting in Large Language Models

21.UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining

22.Quantifying Memorization Across Neural Language Models

23.Compositional Semantic Parsing with Large Language Models

24.Ask Me Anything: A simple strategy for prompting language models

25.Language Models are Multilingual Chain-of-Thought Reasoners

26.Learning to Jointly Share and Prune Weights for Grounding Based Vision and Language Models

27.Generating Sequences by Learning to Self-Correct

28.Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning

29.Visual Classification via Description from Large Language Models

30.Recitation-Augmented Language Models

31.Language Models are Realistic Tabular Data Generators

32.Training language models for deeper understanding improves brain alignment

33.Progressive Prompts: Continual Learning for Language Models without Forgetting

34.Can discrete information extraction prompts generalize across language models?

35.On Pre-training Language Model for Antibody

36.Open-Vocabulary Object Detection upon Frozen Vision and Language Models

37.Mass-Editing Memory in a Transformer

38.Language Models Can Teach Themselves to Program Better

39.Out-of-Distribution Detection and Selective Generation for Conditional Language Models

40.Compositional Task Representations for Large Language Models

41.Planning with Large Language Models for Code Generation

42.Prototypical Calibration for Few-shot Learning of Language Models

43.Multi-lingual Evaluation of Code Generation Models

44.Planning with Language Models through Iterative Energy Minimization

45.Dataless Knowledge Fusion by Merging Weights of Language Models

46.Task Ambiguity in Humans and Language Models

47.Language Models Can (kind of) Reason: A Systematic Formal Analysis of Chain-of-Thought

48.Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks

49.Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small

50.Leveraging Large Language Models for Multiple Choice Question Answering

How to Use ChatPaper

To help more researchers acquire knowledge from the literature efficiently, AMiner has built ChatPaper on top of the GLM-130B large model. It helps researchers search and read papers faster, keep up with the latest research developments in their fields, and handle research work with greater ease.

ChatPaper is a conversational private knowledge base that integrates retrieval, reading, and knowledge Q&A. Through it, AMiner hopes to use the power of technology to help everyone acquire knowledge more efficiently.

ChatPaper: https://www.aminer.cn/chat/g