Large Model Top Conference Update | Including Papers from IJCAI, CVPR, AAAI 2023, and Other Conferences

Author: AMiner Scientific and Technological Intelligence Mining

A large language model (LLM) is a deep-learning-based natural language processing model that learns the syntax and semantics of natural language in order to generate human-readable text. Large language models are applied in many fields, including natural language processing (NLP), computer vision, audio and speech processing, and biology, and support tasks such as modeling and representation learning, generation, and translation between multiple media.

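To make "learns syntax and semantics to generate human-readable text" concrete, here is a minimal, illustrative sketch. It assumes the open-source Hugging Face transformers library and uses the small GPT-2 model as a stand-in for a large model; neither is prescribed by this article.

```python
# Illustrative sketch only (not from the article): text generation with a
# pretrained causal language model via the Hugging Face transformers library.
# GPT-2 stands in here for a large language model.
from transformers import pipeline

# Load a pretrained language model behind a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt token by token, producing a continuation
# learned from the syntax and semantics of its training corpus.
result = generator("Large language models are used in", max_new_tokens=30)
print(result[0]["generated_text"])
```
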
The explosive popularity of ChatGPT has drawn more attention to large models. We have compiled large-model-related papers from this year's top AI conferences; the list of top papers on large models is as follows. (Due to space constraints, this article shows only a selection of top conference papers; click "Read the original article" to go directly to the conference list and view all papers.)

1. WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences

2. Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models

3. Generative Relevance Feedback with Large Language Models

4. Benchmarking Middle-Trained Language Models for Neural Search

5. What Makes Pre-trained Language Models Better Zero/Few-shot Learners?

6. GLM-130B: An Open Bilingual Pre-trained Model

7. Self-Consistency Improves Chain of Thought Reasoning in Language Models

8. Large Language Models Are Human-Level Prompt Engineers

9. EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention

10. Discovering Latent Knowledge in Language Models Without Supervision

11. Language Modelling with Pixels

12. Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization

13. Learning on Large-scale Text-attributed Graphs via Variational Inference

14. Generate rather than Retrieve: Large Language Models are Strong Context Generators

15. Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning

16. Selective Annotation Makes Language Models Better Few-Shot Learners

17. PEER: A Collaborative Language Model

18. ReAct: Synergizing Reasoning and Acting in Language Models

19. Reward Design with Language Models

20. Automatic Chain of Thought Prompting in Large Language Models

21. UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining

22. Quantifying Memorization Across Neural Language Models

23. Compositional Semantic Parsing with Large Language Models

24. Ask Me Anything: A simple strategy for prompting language models

25. Language Models are Multilingual Chain-of-Thought Reasoners

26. Learning to Jointly Share and Prune Weights for Grounding Based Vision and Language Models

27. Generating Sequences by Learning to Self-Correct

28. Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning

29. Visual Classification via Description from Large Language Models

30. Recitation-Augmented Language Models

31. Language Models are Realistic Tabular Data Generators

32. Training language models for deeper understanding improves brain alignment

33. Progressive Prompts: Continual Learning for Language Models without Forgetting

34. Can discrete information extraction prompts generalize across language models?

35. On Pre-training Language Model for Antibody

36. Open-Vocabulary Object Detection upon Frozen Vision and Language Models

37. Mass-Editing Memory in a Transformer

38. Language Models Can Teach Themselves to Program Better

39. Out-of-Distribution Detection and Selective Generation for Conditional Language Models

40. Compositional Task Representations for Large Language Models

41. Planning with Large Language Models for Code Generation

42. Prototypical Calibration for Few-shot Learning of Language Models

43. Multi-lingual Evaluation of Code Generation Models

44. Planning with Language Models through Iterative Energy Minimization

45. Dataless Knowledge Fusion by Merging Weights of Language Models

46. Task Ambiguity in Humans and Language Models

47. Language Models Can (kind of) Reason: A Systematic Formal Analysis of Chain-of-Thought

48. Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks

49. Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small

50. Leveraging Large Language Models for Multiple Choice Question Answering

How to use ChatPaper?

To help more researchers acquire knowledge from the literature efficiently, AMiner has developed ChatPaper based on the capabilities of the GLM-130B large model. It helps researchers quickly retrieve and read papers and follow the latest research trends in their field, making research work more comfortable.

ChatPaper is a conversational private knowledge base that integrates retrieval, reading, and knowledge Q&A. AMiner hopes to use the power of technology to help people acquire knowledge more efficiently.

ChatPaper: https://www.aminer.cn/chat/g