Awesome Motion Generation: a curated list of papers and resources on motion generation.

📄 Paper | 🌐 Project Page | 💻 Code

- Awesome Motion Generation (plyfager/Awesome-Motion-Generation): a curated collection of motion generation papers.
- Talking-head Generation with Rhythmic Head Motion: a method for generating realistic talking-head videos with natural head movements, addressing the challenge of producing lip-synced videos while incorporating natural head motion.
- Self-Supervised Class-Agnostic Motion Prediction with Spatial and Temporal Consistency Regularizations.
- Since few-shot image generation is a very broad concept, there are various experimental settings and research lines in the realm of few-shot image generation.
- A curated list of sign language processing (e.g., sign language recognition, sign language translation) and related areas (e.g., speech translation, motion generation) resources.
- MCM-LDM: its cornerstone lies in its ability first to disentangle and then intricately weave together motion's tripartite components: motion trajectory, motion content, and motion style.
- MakeHuman: an open-source (AGPL3) tool designed to simplify the creation of virtual humans using a graphical user interface.
- Sitcom-Crafter: a comprehensive and extendable system for human motion generation in 3D space.
- The human video generation task has gained significant attention with the advancement of deep generative models (Sep 2, 2024).
- Spatio-temporal (video) scene graph generation, a.k.a. dynamic scene graph generation, aims to provide a detailed and structured interpretation of the whole scene by parsing an event into a sequence of interactions between different visual entities.
- Human Motion Generation: A Survey, Zhu et al., Arxiv 2024. 📄 Paper | 🌐 Project Page | 💻 Code
- MotionDreamer: Zero-Shot 3D Mesh Animation from Video Diffusion Models, Uzolas et al.
Note: here is also a collection of materials for reinforcement learning, decision making, and motion planning.

- From Motor Control to Team Play in Simulated Humanoid Football, Liu, Siqi, Guy Lever, Zhe Wang, Josh Merel, S. M. Ali Eslami, Daniel Hennes, Wojciech M. Czarnecki, et al.
- Awesome-Text2Motion-Generation (layumi/Awesome-Text2Motion-Generation); the old version can be found here.
- Motion Intensity Resampling: we resampled videos to balance the dataset, ensuring an even distribution of motion intensity values within the range of 0 to 20.
- Most research within this field focuses on generating human motions based on conditioning signals.
- Papers for Talking Head Generation, with released-code collections. For any addition or bug regarding talking head generation, please open an issue or pull request, or e-mail fhongac@cse.ust.hk. Please feel free to open pull requests to add new resources, or send emails to us for questions, discussion, and collaboration.
- weihaox/awesome-digital-human
- Pose Guided Human Video Generation, in arXiv (2022).
- A curated (continually updated) list of Text-to-Video studies.
- Human Motion Generation: A Survey, Zhu et al. (22 Oct 2022).
- Motion generation conditioned on natural language is challenging, as motion and text come from different modalities.
- MT-VAE: Learning Motion Transformations to Generate Multimodal Human Dynamics - Xinchen Yan, Akash Rastogi, Ruben Villegas, Kalyan Sunkavalli, Eli Shechtman, Sunil Hadap, Ersin Yumer, Honglak Lee (ECCV 2018)
- Few-Shot Human Motion Prediction via Meta-Learning - Liang-Yan Gui, Yu-Xiong Wang, Deva Ramanan, and José M. F. Moura (ECCV 2018)
- The goal of gesture generation is to generate gestures that match the accompanying speech or text.
- Awesome-3D-Human-Motion-Generation (Jul 24, 2024): 3D Human Motion Generation aims to generate natural and plausible motions from conditions such as text descriptions, action labels, music, etc.
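The motion-intensity balancing step described above can be sketched as follows. This is a minimal illustration that assumes clips are already scored with an intensity value in [0, 20); the function name, bin granularity, and downsampling rule are illustrative, not the dataset's actual pipeline.

```python
import random
from collections import defaultdict

def resample_by_intensity(clips, num_bins=20, per_bin=None, seed=0):
    """Resample (clip_id, intensity) pairs so intensities in [0, 20) are evenly
    distributed. Returns a flat list of clip ids with balanced bin counts."""
    bins = defaultdict(list)
    for clip_id, intensity in clips:
        bins[min(int(intensity), num_bins - 1)].append(clip_id)
    # Downsample every non-empty bin to the size of the smallest one
    # (or to `per_bin` if given), removing the skew toward common intensities.
    target = per_bin or min(len(ids) for ids in bins.values())
    rng = random.Random(seed)
    balanced = []
    for b in sorted(bins):
        ids = bins[b]
        balanced.extend(rng.sample(ids, min(target, len(ids))))
    return balanced
```

Running it on a skewed collection (e.g. many low-intensity clips) trims the over-represented bins down to the size of the rarest one.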
- Zero-shot Generation of Training Data with Denoising Diffusion Probabilistic Model for Handwritten Chinese Character Recognition (Mar 23, 2023)
- SinMDM: Single Motion Diffusion, Anonymous Authors
- 👤 Papers for Talking Head Synthesis, with released codes
- Autonomous driving traffic agent/scene simulation/generation - zachytong/Awesome-Traffic-Simulation
- Notably, this is the first survey that discusses the potential of large language models in enhancing human motion video generation (Oct 3, 2024).
- The computational challenge of motion generation lies in capturing the nonlinear relationship between musical pieces and the intricate hand motions required for piano performance. Large-scale piano-motion datasets are the foundation of a nuanced approach to motion generation, offering valuable guidance for physical performance and musical expression.
- EMDM: Efficient Motion Diffusion Model for Fast, High-Quality Human Motion Generation, Zhou et al.
- Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics, arXiv 2024.
- However, video generation still faces considerable challenges in various aspects, such as controllability, video length, and richness of detail, which hinder the application and popularization of this technology.
- LION: Latent Point Diffusion Models for 3D Shape Generation, Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis, NeurIPS 2022.
- Conditional human motion generation is an important topic with many applications in virtual reality, gaming, and robotics (Feb 26, 2024).
- Contact: {hengbo_ma, jinning_li}@berkeley.edu
- Harmon: Whole-Body Motion Generation of Humanoid Robots from Language Descriptions, website, arXiv 2024.10.
- Also with paired dance-music data for training!
- Human motion modeling is important for many modern graphics applications, which typically require professional skills (Aug 31, 2022).
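Several entries in this list build on denoising diffusion probabilistic models. As background, here is a minimal sketch of the closed-form forward-noising step these models share; the linear schedule, step count, and joint shapes are illustrative choices, not any listed paper's configuration.

```python
import numpy as np

def make_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta_t) for a linear noise schedule."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, rng):
    """Forward process: noise a clean motion x0 to timestep t in one shot.
    The denoiser is then trained to predict eps from (xt, t)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```

Conditioning (on action labels, text, or music, as in the entries above) only changes the denoising network, not this forward process.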
In this context, we introduce FlowMDM, the first diffusion-based model that generates seamless Human Motion Compositions (HMC) without any postprocessing or redundant denoising steps.

- [2] We provide an in-depth analysis of human motion video generation from both motion planning and motion generation perspectives, a dimension that has been underexplored in existing reviews (Sep 10, 2024).
- The progress in the field of 3D has been extremely rapid, and many methods have become obsolete.
- M2D2M: Multi-Motion Generation from Text with Discrete Diffusion Models, Chi et al.

| Title | Venue | Paper | Code | Datasets |
| --- | --- | --- | --- | --- |
| A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild | ACMMM | paper | code | LRS2 |
| Talking-head Generation with Rhythmic Head Motion | ECCV | paper | code | CREMA, GRID, VoxCeleb, LRS3 |
| MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation | ECCV | paper | code | VoxCeleb2, AffectNet |
| Neural Voice Puppetry | | | | |

- A Survey on Human Motion Video Generation. It's based on our survey paper From Sora What We Can See: A Survey of Text-to-Video Generation.
- Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models, Mengyi Zhao, Mengyuan Liu, Bin Ren, Shuling Dai, Nicu Sebe, arXiv 2023.
- awesome-3d-diffusion/README.md
- InfiniMotion: Mamba Boosts Memory in Transformer for Arbitrary Long Motion Generation (2024)
- Motion Mamba: Efficient and Long Sequence Motion Generation with Hierarchical and Bidirectional Selective SSM. Paper | Code
- ucasmjc/Awesome-Motion-Generation-based-on-Diffusion-Model (Jul 11, 2024): 1) Different Scope.
- InterControl: Generate Human Motion Interactions by Controlling Every Joint, Wang et al., Arxiv 2023.
- Human motion modelling is a classical problem at the intersection of graphics and computer vision, with applications spanning human-computer interaction, motion synthesis, and motion prediction for virtual and augmented reality.
- OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers
- MAS: Multi-view Ancestral Sampling for 3D motion generation using 2D diffusion [project] [paper] [code]
- AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents [project] [paper] [video] [code]
- Human Motion Prediction under Unexpected Perturbation
- A list of awesome human motion generation papers.
- We then examine the main methods employed for three key sub-tasks within human video generation: text-driven, audio-driven, and pose-driven motion generation.
- A paper list for diffusion-based motion generation.
- This is an open collection of state-of-the-art (SOTA), novel Text-to-X (X can be anything) methods: papers, code, and datasets.
- In recent years, generative artificial intelligence has achieved significant advancements in the field of image generation, spawning a variety of applications (Jun 28, 2024).
- Current 4D generation methods have achieved noteworthy efficacy with the aid of advanced diffusion generative models.
- MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model, Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. paper | code | demo
- Therefore, automatically generating motion from textual descriptions, which allows producing meaningful motion data, could save time and be more economical.
- [3] We clearly delineate established baselines and evaluation metrics, offering detailed insights into the key challenges shaping this field.
- In this survey, we have conducted a comprehensive exploration of existing works in the Text-to-Video field using OpenAI's Sora as a clue, and we have also summarized 24 datasets and 9 evaluation metrics in this field.
- Our MCM-LDM significantly emphasizes preserving trajectories, recognizing their fundamental role in defining the essence and fluidity of motion content.
Here is a preliminary classification. A common way to obtain new motion in the game industry is to perform motion capture, which is expensive.

- 🌟 Full-Body Articulated Human-Object Interaction
- MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
- A collection of works about motion generation; subsequent team members will continue the writing work when they have time. Xin Shen, Heming Du, Hongwei Sheng, Shuyun Wang, Hui Chen, Huiqiang Chen, Zhuojie Wu (Jan 9, 2025).
- Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models
- Skeleton-based action analysis for ADHD diagnosis [paper]
- Fine-grained Action Analysis: A Multi-modality and Multi-task Dataset of Figure Skating [paper]
- 🌟 T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations
- Recent news of this GitHub repo is listed as follows (Jun 26, 2024).
- A paper list for diffusion-based motion generation.
- MoST: Multi-modality Scene Tokenization for Motion Prediction
- This is a curated list of remarkable AIGC 3D papers, inspired by awesome-NeRF.
- In this paper, we propose Morph, a Motion-free ph…
- Motion Mamba: Efficient and Long Sequence Motion Generation with Hierarchical and Bidirectional Selective SSM. Paper | Code
- Parallelizing Autoregressive Generation with Variational State Space Models (2024)
- Gesture generation is the process of generating gestures from speech or text. Input: audio, text, gesture → Output: gesture motion. The goal of the project is audio-driven gesture generation with 3D keypoint gestures as output. Editing is continuing (not finished yet).
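The state-space-model entries in this list (Motion Mamba, variational SSMs) rely on a linear recurrence that runs in O(T) over a sequence, rather than O(T²) attention. A toy single-layer scan, with hand-picked matrices standing in for the learned parameters:

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run a discrete linear state-space model over an input sequence u.

    x_t = A @ x_{t-1} + B @ u_t,   y_t = C @ x_t
    One sequential pass, constant memory in sequence length.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:              # u: (T, input_dim)
        x = A @ x + B @ u_t
        ys.append(C @ x)
    return np.stack(ys)        # (T, output_dim)
```

Selective SSMs such as Mamba make A, B, C input-dependent and use a hardware-aware parallel scan, but the underlying recurrence is this one.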
Recent advancements in state space models (SSMs), notably Mamba, have showcased considerable promise in long-sequence modeling with an efficient hardware-aware design, which appears to be a promising direction for building motion generation models.

- MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model, Dai et al.
- A work list of recent human video generation methods (Jun 5, 2024). We will appreciate it a lot.
- However, these methods lack multi-view spatio-temporal modeling and encounter challenges in integrating diverse prior knowledge from multiple diffusion models, resulting in inconsistent temporal appearance and flickering.
- Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion
- Run542968/Awesome-3D-Human-Motion-Generation
- A curated list of recent diffusion models for video generation, editing, restoration, understanding, etc.
- A collection of papers on diffusion models for 3D generation.
- A list of awesome human motion generation papers.
- This paper introduces an Anatomically-Informed VQ-VAE, designed to leverage the inherent structure of the human body, a key yet previously underutilized…
- This is a curated list of remarkable AIGC 3D papers, inspired by awesome-NeRF.
- CoMo represents motions as pose codes, with each code defining the state of a specific body part at a given moment (e.g., "right arm straight"). arXiv 2024 | Project | Code
- Synergy and Synchrony in Couple Dances, Maluleke et al., arXiv 2024 | Project
- Awesome Few-Shot Image Generation: a curated list of resources including papers, datasets, and relevant links pertaining to few-shot image generation.
- For this, we introduce the Blended Positional Encodings, a technique that leverages both absolute and relative positional encodings in the denoising chain. We have listed this requirement as to be determined.
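The Blended Positional Encodings idea mentioned in this list combines absolute and relative position information during denoising. The sketch below shows the two ingredients and a schematic schedule for switching between them; the 0.5 switch point and the scheduling function are illustrative assumptions, not FlowMDM's actual mechanism.

```python
import numpy as np

def absolute_pe(seq_len, dim):
    """Standard sinusoidal absolute positional encoding (seq_len x dim)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(dim)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def relative_offsets(seq_len):
    """Matrix of pairwise frame offsets (j - i), the input to a relative bias."""
    idx = np.arange(seq_len)
    return idx[None, :] - idx[:, None]

def choose_encoding(step, total_steps, switch=0.5):
    """Schematic schedule: absolute encodings in one part of the denoising
    chain, relative ones in the other (the switch point is made up here)."""
    return "absolute" if step < switch * total_steps else "relative"
```

The intuition: absolute positions pin down global structure, while relative offsets are translation-invariant and so help frames blend smoothly across composition boundaries.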
The proposed approach utilizes a 3D-aware generative network along with a hybrid embedding.

- AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars (SIGGRAPH 2022)
- AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation (arXiv 16/06/2023)
- AvatarFusion: Zero-shot Generation of Clothing-Decoupled 3D Avatars Using 2D Diffusion (ACMMM 2023)
- TADA! Text to Animatable Digital Avatars (arXiv 2023)
- Human Motion Diffusion Model, Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, and Amit H. Bermano, ICLR 2023.
- State Space Models as Foundation Models: A Control Theoretic Overview. Paper
- derikon/awesome-human-motion
- Customizing Motion in Text-to-Video Diffusion Models (Dec 2024)
- Human motion generation aims to generate natural human pose sequences and shows immense potential for real-world applications.
- To the best of our knowledge, this is the first survey to offer such a comprehensive framework for human motion video generation.
- While prior works have focused on generating motion guided by text, music, or scenes, these typically result in isolated motions confined to short durations.
- Compared with the general video generation task that many previous surveys…
- Transformer Inertial Poser: Real-time Human Motion Reconstruction from Sparse IMUs with Simultaneous Terrain Generation [SIGGRAPH ASIA 2022], Yifeng Jiang, Yuting Ye, Deepak Gopinath, Jungdam Won, Alexander W. Winkler, C. Karen Liu.
- Dataset Summary: the Human-Motion dataset consists of 106,292 video clips.
- Learning to Forecast and Refine Residual Motion for Image-to-Video Generation.
- Our detailed survey is online now.
- Human motion generation stands as a significant pursuit in generative computer vision, while achieving long-sequence and efficient motion generation remains challenging (Mar 12, 2024).
- Text2Performer: Text-Driven Human Video Generation.
- 🔥 [ECCV 2024] Motion Mamba: Efficient and Long Sequence Motion Generation.
- Motion Generation Review: Exploring Deep Learning for Lifelike Animation with Manifold, Zhao et al.
- Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba (2024)
- Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling (Jan 2024)
- Human motion generation aims to generate natural human pose sequences and shows immense potential for real-world applications (Jul 20, 2023).
- This paper introduces an Anatomically-Informed VQ-VAE, designed to leverage the inherent structure of the human body, a key yet previously underutilized…
- MotionBooth: Motion-Aware Customized Text-to-Video Generation
- Silverster98/Awesome-Human-Motion-Generation (continuing to be updated)
- The goal of the project is audio-driven gesture generation, with 3D keypoint gestures as output.
- In the era of big models for art creation, 3D human motion generation has become a crucial research direction, with Vector Quantized Variational Auto-Encoders (VQ-VAEs) playing a pivotal role in bridging modalities for cross-modal tasks.
- However, it remains challenging to achieve diverse and fine-grained motion generation with various text inputs.
- Producing and Leveraging Online Map Uncertainty in Trajectory Prediction
- Opt2Skill: Imitating Dynamically-feasible Whole-Body Trajectories for Versatile Humanoid Loco-Manipulation, Website, arXiv 2024.09
- Awesome-Talking-Head-Generation: a curated list of papers focused on talking head animation, intended to keep pace with the anticipated surge of research in the coming months.
- Winn1y/Awesome-Human-Motion-Video-Generation
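The VQ-VAE bottleneck referred to above discretizes continuous motion latents by nearest-neighbor codebook lookup, which is what lets a motion model share discrete tokens with text or other modalities. A generic sketch of that lookup (not the anatomically informed variant's actual body-part partitioning):

```python
import numpy as np

def quantize(z, codebook):
    """Nearest-neighbor codebook lookup, the core of a VQ-VAE bottleneck.

    z:        (T, d) continuous latents for T motion frames/tokens.
    codebook: (K, d) learned code vectors.
    Returns (indices, z_q): discrete token ids and their quantized vectors.
    """
    # Squared Euclidean distance between every latent and every code.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d2.argmin(axis=1)
    return indices, codebook[indices]
```

In training, a straight-through estimator and commitment loss keep the encoder and codebook aligned; at generation time, an autoregressive or diffusion prior predicts the `indices` directly.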
- A GCN for human motion generation from music.
- Curated list of papers and resources focused on human motion generation - zhanglbthu/awesome-human-motion-generation
- Human motion generation plays a vital role in applications such as digital humans and humanoid robot control (Nov 22, 2024).
- wentaoL86/Awesome-Human-Video-Generation
- Awesome-Text2X-Resources
- Most research within this field focuses on generating human motions based on conditioning signals.
- If you find our work useful, you can cite the paper in the following formats (Oct 29, 2024).
- Continuing to be updated. See the full list on GitHub.
- Compared with previous methods, our approach has several highlights. Thanks for your attention.
- Large Motion Model.
- If you have any suggestions about this repository, please feel free to open a new issue or pull request.
- Leveraging pose codes as interpretable motion representations, the three main components of CoMo work jointly to effectively generate and edit motion: (1) the Motion Encoder-Decoder parses motions into sequences of pose codes and reconstructs them back.
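CoMo's pose codes, described above, assign each body part at each moment a discrete, human-readable state (e.g. "right arm straight"). A toy encoder/decoder over a hypothetical two-part vocabulary shows why such codes are easy to edit; the parts and states below are illustrative, not CoMo's actual codebook:

```python
# Toy pose-code representation: each frame is a set of per-body-part states.
# The vocabulary here is made up for illustration.
POSE_VOCAB = {
    "right_arm": ["straight", "bent", "raised"],
    "left_leg": ["straight", "bent", "lifted"],
}

def encode_frame(states):
    """Map {'right_arm': 'straight', ...} to integer code ids per body part."""
    return {part: POSE_VOCAB[part].index(state) for part, state in states.items()}

def decode_frame(codes):
    """Invert encode_frame: ids back to human-readable pose descriptions."""
    return {part: POSE_VOCAB[part][i] for part, i in codes.items()}
```

Editing a motion then reduces to swapping one code (say, `right_arm` from "straight" to "raised") and re-decoding, which is the interpretability benefit the entry above points at.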
This repository is built mainly to track mainstream Text-to-Motion works, and also contains related papers and datasets.

- ucasmjc/Awesome-Motion-Generation-based-on-Diffusion-Model
- STAR: Skeleton-aware Text-based 4D Avatar Generation with In-Network Motion Retargeting, Chai et al.
- However, most existing approaches disregard physics constraints, leading to the frequent production of physically implausible motions with pronounced artifacts such as floating and foot sliding.
- Each clip was rigorously filtered and re-annotated to ensure high-quality identity and motion information.
- Recent advancements in human motion synthesis have focused on specific types of motion, such as human-scene interaction, locomotion, or human-human interaction; however, there is a lack of a unified system capable of generating a diverse combination of motion types (Oct 14, 2024).
- Awesome Autoregressive Visual Generation Models: a curated list of recent autoregressive models for image/video generation, editing, restoration, etc., focusing only on the next-set prediction paradigm (Nov 9, 2023).
- Bidirectionally Deformable Motion Modulation for Video-based Human Pose Transfer.
- ClinicalMamba: A Generative Clinical Language Model on Longitudinal Clinical Notes. Paper | Code
- This repository focuses on half/full-body human video generation methods; NeRF, Gaussian splatting, motion pose, and talking head/portrait work is not included (ArXiv 2024).
- Talking Head Generation with Audio and Speech Related Facial Action Units [BMVC 2021] Paper
- Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion [IJCAI 2021] Paper
- Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation [AAAI 2021] Paper
- A list of works on evaluation of visual generation models, including evaluation metrics, models, and systems - ziqihuangg/Awesome-Evaluation-of-Visual-Generation (Apr 11, 2024)
- Digital Human Resource Collection: 2D/3D/4D human modeling, avatar generation & animation, clothed people digitalization, virtual try-on, and others - weihaox/awesome-digital-human
- Pose Guided Human Video Generation.
- Generating realistic videos with human movements is inherently challenging, due to the intricacies of human body topology and sensitivity to visual artifacts.
- PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering, CVPR 2021. PDF | code
- StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN, CVPR 2022. PDF | code
- Depth-Aware Generative Adversarial Network for Talking Head Video, CVPR 2022.

| Title | Date | Paper | Project | Code | Year |
| --- | --- | --- | --- | --- | --- |
| ControlMM: Controllable Masked Motion Generation | 14 Oct 2024 | Link | Link | Link | 2024 |
| MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms | 24 Oct 2024 | Link | Link | Link | 2024 |
| KMM: Key Frame Mask Mamba for Extended Motion Generation | 10 Nov 2024 | Link | Link | Link | 2024 |
| KinMo: Kinematic-aware Human… | | | | | |

- In order to remove the skill barriers for laypeople, recent motion generation methods can directly generate human motions conditioned on natural language.
- Survey v1: Towards Unifying Understanding and Generation in the Era of Vision Foundation Models: A Survey from the Autoregression Perspective
- Topics: 3d-generation, aigc, gaussian-splatting, generative-model, human-generation, human-object-interaction, image-to-4d, motion-generation, t2v, text-driven.
- This survey focuses on human video generation: a 2D video generation task in which a generative model takes text, audio, posture, or other modal data as input and uses full-body or half-body characters, including hands and faces, as generated subjects.
- PhysDiff: Physics-Guided Human Motion Diffusion Model, Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, Jan Kautz, arXiv 2022.
- The extensively studied 2D media generation methods take advantage of massive human media datasets, but struggle with 3D-aware generation.
- We start with an introduction to the fundamentals of human video generation and the evolution of generative models that have facilitated the field's growth (Jul 11, 2024).
- This GitHub repository summarizes papers and resources related to the video generation task.
- Large Motion Model for Awesome-Text2Motion-Generation.
- This repository organizes papers, codes, and resources related to generative adversarial networks (GANs) and neural radiance fields (NeRF) 🎨, with a main focus on image-driven and audio-driven talking head synthesis papers and released codes.
- Extensive experiments demonstrate the effectiveness of our approach in both evaluating and improving the quality of generated human motions by aligning with human perceptions.
- If you are researching the talking head generation task, you can add my Discord account: Fa-Ting Hong#6563.
- A collection of works about motion generation.
- Awesome Gesture Generation.
- Diffusion Motion: Generate Text-Guided 3D Human Motion by Diffusion Model, Zhiyuan Ren, Zhihong Pan, Xin Zhou, Le Kang, arXiv 2022.
- Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose…
- Our critic model offers a more accurate metric for assessing motion quality and could be readily integrated into the motion generation pipeline to enhance generation quality.
- SMPL: a realistic 3D model of the human body based on skinning and blend shapes, learned from thousands of 3D body scans.
- zhshj0110/Awesome-Motion-Diffusion-Models
- Our survey reviews the latest developments and technological trends in human motion video generation across three primary modalities: vision, text, and audio. [1] We decompose human motion video generation into five key phases, covering all subtasks across various driving sources and body regions.
- Yang, Ceyuan and Wang, Zhe and Zhu, Xinge and Huang, Chen and Shi, Jianping and Lin, Dahua.
- In this paper, we introduce TIMotion (Temporal and Interactive Modeling), an efficient and effective framework for human-human motion generation (Aug 30, 2024). Specifically, we first propose Causal Interactive Injection to model two separate sequences as a single causal sequence, leveraging their temporal and causal properties.
- ALEEEHU/Awesome-Text2X-Resources
- Realistic Human Motion Generation with Cross-Diffusion Models, Ren et al.
- Multi-subject Open-set Personalization in Video Generation (2025-01-10), Tsai-Shien Chen, Aliaksandr Siarohin, Willi Menapace, Yuwei Fang, Kwot Sin Lee, Ivan Skorokhodov, Kfir Aberman, Jun-Yan Zhu, Ming-Hsuan Yang, Sergey Tulyakov.
- ScaMo: Exploring the Scaling Law in Autoregressive Motion Generation Model, Lu et al.
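One simple way to integrate such a critic into a generation pipeline is sample-and-rerank: draw several candidate motions and keep the one the critic scores highest. The `generate`/`critic` interfaces below are assumptions for illustration, not the cited paper's actual integration:

```python
def best_of_n(generate, critic, n=8):
    """Sample-and-rerank: call the generator n times and return the candidate
    with the highest critic score. generate() -> motion, critic(motion) -> float
    are assumed interfaces; any quality metric can be plugged in as `critic`."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=critic)
```

This trades n times the sampling cost for quality, and needs no retraining; a tighter integration would instead use the critic's score as a training signal or guidance term.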
- Experiments show that our approach outperforms existing state-of-the-art models on various motion generation tasks, including text-to-motion generation, compositional motion generation, and multi-concept motion generation (Dec 19, 2024).
- Gesture generation is the process of generating gestures from speech.
- Spatio-Temporal (Video) Scene Graph Generation.
- DiffusionPhase: Motion Diffusion in Frequency Domain, Wan et al., Arxiv 2023.
- Whole-Body Dynamic Throwing with Legged Manipulators, arXiv 2024.10.
- README.md at main · cwchenwang/awesome-3d-diffusion
- In this work, we propose a controllable video generation framework, dubbed MimicMotion, which can generate high-quality videos of arbitrary length with any motion guidance (Jul 1, 2024).
- Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation (Jan 2024).