Advances in edge-cloud collaboration and evolution for large-small models

Wang Yongwei1,2, Shen Tao1, Zhang Shengyu1, Wu Fan3, Zhao Zhou1, Cai Haibin4, Lyu Chengfei1,5, Ma Lizhuang3, Yang Chenglei6, Wu Fei1,2 (1. Institute of Artificial Intelligence, Zhejiang University, Hangzhou 310058, China; 2. Shanghai Institute for Advanced Study, Zhejiang University, Shanghai 201203, China; 3. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200241, China; 4. School of Software Engineering, East China Normal University, Shanghai 200062, China; 5. Taobao (China) Software Co., Ltd., Hangzhou 310023, China; 6. School of Software, Shandong University, Jinan 250011, China)

Abstract
Generative foundation models are driving profound changes in artificial intelligence, demonstrating general-purpose capabilities in tasks such as natural language processing, multimodal understanding, and content synthesis. Large models deployed on the cloud side provide general intelligent services but face key challenges such as high latency and insufficient personalization, whereas small models deployed on the edge side capture personalized scenario data but suffer from limited generalization. Edge-cloud collaboration between large and small models aims to combine the general capabilities of large models with the specialized capabilities of small models, so that the two learn and evolve through collaborative interaction and thereby empower downstream vertical industry scenarios. Taking large language models and large multimodal models as representatives, this paper reviews the mainstream architectures, typical pre-training techniques, and adaptation fine-tuning methods of generative foundation models; introduces the development history and recent research status of key model miniaturization techniques in the large-model era, including model pruning, model quantization, and knowledge distillation; proposes, according to the purposes of inter-model collaboration and the differences in collaboration principles, a collaborative-evolution taxonomy of large-small model collaborative training, collaborative inference, and collaborative planning; and outlines a series of representative new techniques and ideas such as bidirectional distillation between edge and cloud models, modular design, and generative agents. Overall, this paper examines the international and domestic development status of large-small model collaborative evolution from three aspects, namely generative foundation models, model miniaturization techniques, and edge-cloud collaboration modes, compares their respective advantages and gaps, and analyzes the development trends of foundation-model empowerment from the perspectives of application prospects, model architecture design, vertical-domain model fusion, personalization, and security and trustworthiness challenges.
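As a rough sketch of the knowledge distillation idea mentioned above, and of how it could be applied in either direction between a cloud-side large model and an edge-side small model, the following PyTorch snippet implements a standard temperature-scaled distillation loss. The function name, the temperature T, and the weighting alpha are illustrative assumptions, not code from any system surveyed in this paper.

```python
# Minimal sketch of temperature-scaled knowledge distillation (assumed setup):
# the "teacher" logits come from the cloud-side large model and the "student"
# logits from the edge-side small model; swapping the roles on alternate steps
# would give a bidirectional (edge-cloud) variant.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-label KL distillation with ordinary cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),   # student log-probabilities
        F.softmax(teacher_logits / T, dim=-1),       # teacher probabilities
        reduction="batchmean",
    ) * (T * T)                                      # rescale gradients for temperature
    hard = F.cross_entropy(student_logits, labels)   # supervised term on true labels
    return alpha * soft + (1.0 - alpha) * hard


# Toy usage with random tensors standing in for model outputs.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```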
Generative foundation models are driving significant transformations in the field of artificial intelligence. They demonstrate general intelligence across diverse research fields, including natural language processing, multimodal content understanding, and image and multimodal content synthesis. Generative foundation models often consist of billions or even hundreds of billions of parameters, so they are typically deployed on the cloud side to provide powerful, general intelligent services. In practice, however, such services face crucial challenges, such as high latency induced by communication between the cloud and local devices, and insufficient personalization because servers usually cannot access local data owing to privacy concerns. By contrast, low-complexity lightweight models are located on the edge side to capture personalized and dynamic scenario data, but they may suffer from poor generalization. Large and lightweight (or large-small) model collaboration aims to integrate the general intelligence of large foundation models with the personalized intelligence of small lightweight models, empowering downstream vertical, domain-specific applications through the interaction and collaboration of both types of models. Large-small model collaboration has recently attracted increasing attention, has become a focus of research and development in academia and industry, and has been predicted to be an important technology trend. We therefore investigate this area thoroughly, highlighting recent progress and offering potential inspiration for related research. In this study, we first overview representative large language models (LLMs) and large multimodal models, focusing on their mainstream Transformer-based architectures, including encoder-only, decoder-only, and encoder-decoder models. We also explore the corresponding pre-training technologies, such as next sentence prediction, sequence-to-sequence modeling, and contrastive learning, as well as parameter-efficient fine-tuning methods, with representatives including low-rank adaptation and prompt tuning. We then review the development history and the latest advances of model compression techniques, including model pruning, model quantization, and knowledge distillation, in the era of foundation models. Based on differences in collaboration purposes and mechanisms, we propose a new classification method and taxonomy for large-small model collaboration, namely, collaborative training, collaborative inference, and collaborative planning. Specifically, we summarize recent representative methods, including bidirectional knowledge distillation between large models on the cloud side and small models deployed on the edge side, modular design of intelligent models that splits functional modules between the cloud and the edge, and generative agents that collaborate to complete complex tasks in an autonomous and intelligent manner. In collaborative training, a main challenge is dealing with heterogeneity in data distributions and model architectures between the cloud and client sides; data privacy may also be a concern, particularly in privacy-sensitive cases. Despite much progress in collaborative inference, automatically slicing a complicated task and completing it collectively remains challenging.
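As a minimal sketch of one of the parameter-efficient fine-tuning methods named above, low-rank adaptation, the following PyTorch snippet adds trainable low-rank factors to a frozen linear layer. The class name LoRALinear and the default rank and scaling values are illustrative assumptions rather than the implementation of any particular model discussed here.

```python
# Minimal LoRA sketch: y = W0 x + (alpha / r) * B A x, with W0 frozen.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r                 # standard LoRA scaling factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only lora_A and lora_B receive gradients during fine-tuning.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))                 # output shape: (2, 768)
```

Because only the low-rank factors are updated, the set of adapted parameters stays small relative to the frozen base model, which is what makes this family of methods attractive for per-task or per-device adaptation.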
Communication costs between computing facilities are a further concern for collaborative inference. Collaborative planning is a newer paradigm that has gained attention with the growing study of, and promising progress in, LLM-centric agents (LLM agents). This paradigm typically involves multiple LLM agents that compete or cooperate to complete a challenging task. It often leverages emergent capabilities of LLMs, such as in-context learning and chain-of-thought reasoning, to automatically divide a complicated task into several subtasks; by completing and assembling the subtasks, the global task can be accomplished collaboratively. This scheme has diverse applications, such as game development and social simulation. However, it may inherit drawbacks of LLMs, including hallucination and adversarial vulnerability, so more robust and reliable collaborative planning schemes remain to be investigated. In summary, this work surveys large-small model collaboration techniques from the perspectives of generative foundation models, model compression, and heterogeneous model collaboration via LLM agents. It also compares the advantages and disadvantages of international and domestic technology developments in this research realm. We conclude that, although the gaps between domestic and advanced international studies are narrowing, particularly for newly emerging LLM agents, original and major breakthroughs may still be lacking. Certain notable advantages of domestic progress are closely tied to industrial applications, owing to rich industrial data resources, so the development of domain-specific LLMs is relatively advanced. In addition, this study envisions the applications of large-small model collaboration and discusses key challenges and promising directions. 1) Efficient model architecture design includes developing new architectures that achieve low-complexity inference while maintaining long-sequence modeling abilities comparable to Transformers, and further improving the scalability of mixture-of-experts architectures. 2) Current model compression methods are mainly designed for vision models, so techniques tailored to LLMs and large multimodal models must be developed to preserve their emergent abilities during compression. 3) Existing personalization methods mainly focus on discriminative models, and due attention needs to be paid to efficient personalization of generative foundation models. 4) Generative intelligence often suffers from fraudulent content (e.g., generated fake imagery, deepfake videos, and fake news) and various types of attacks (e.g., adversarial attacks, jailbreaking attacks, and Byzantine attacks), so security and trustworthiness issues arise in practical applications. This study therefore also advocates deeper investigation of these emerging security threats and the development of effective defenses against them, so that large-small model collaboration can empower vertical domains more safely.
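To make the collaborative planning workflow described above more concrete, the sketch below shows one minimal, purely illustrative loop in which a cloud-side LLM decomposes a task into subtasks and each subtask is routed to either the cloud model or an edge-side small model. The helpers call_cloud_llm and call_edge_model and the privacy-based routing rule are hypothetical stand-ins, not an API from any system surveyed here.

```python
# Illustrative cloud-edge planning loop (assumed, not a real framework API).
from typing import Callable, List


def call_cloud_llm(prompt: str) -> str:
    # Placeholder: a real system would query a hosted foundation model here.
    if prompt.startswith("Decompose"):
        return "retrieve relevant user context\ndraft a response from the retrieved context"
    return f"[cloud model output for] {prompt}"


def call_edge_model(prompt: str) -> str:
    # Placeholder: a real system would run a distilled on-device model here.
    return f"[edge model output for] {prompt}"


def plan_subtasks(task: str, planner: Callable[[str], str]) -> List[str]:
    """Ask the planner model to break the task into subtasks, one per line."""
    reply = planner(f"Decompose the following task into ordered subtasks:\n{task}")
    return [line.strip() for line in reply.splitlines() if line.strip()]


def solve(task: str, privacy_sensitive: Callable[[str], bool]) -> List[str]:
    """Plan on the cloud, then route each subtask to the cloud or the edge."""
    results = []
    for subtask in plan_subtasks(task, call_cloud_llm):
        # Personalized or privacy-sensitive steps stay on the device;
        # generic reasoning steps go to the cloud-side large model.
        runner = call_edge_model if privacy_sensitive(subtask) else call_cloud_llm
        results.append(runner(subtask))
    return results


print(solve("answer a user question using on-device history",
            privacy_sensitive=lambda s: "user" in s))
```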
Keywords
