ChatHuman: Language-driven 3D Human Understanding
with Retrieval-Augmented Tool Reasoning

1Max Planck Institute for Intelligent Systems, 2ETH Zürich, 3Meshcapade,
4Tsinghua University, 5University of Cambridge, *Equal Contribution

ChatHuman is a multimodal LLM specialized in understanding humans with the assistance of tools.

Abstract

Numerous methods have been proposed to detect, estimate, and analyze properties of people in images, including the estimation of 3D pose, shape, contact, human-object interaction, emotion, and more. Each of these methods works in isolation rather than synergistically. Here we address this problem and build a language-driven human understanding system that combines and integrates the skills of many different methods. To do so, we fine-tune a Large Language Model (LLM) to select and use a wide variety of existing tools in response to user inputs. As a result, ChatHuman is able to combine information from multiple tools to solve problems more accurately than the individual tools themselves and to leverage tool output to improve its ability to reason about humans. The novel features of ChatHuman include leveraging academic publications to guide the application of 3D human-related tools, employing a retrieval-augmented generation model to produce in-context-learning examples for handling new tools, and discriminating and integrating tool results to enhance 3D human understanding. Extensive quantitative evaluation shows that ChatHuman outperforms existing models in both tool selection accuracy and performance across multiple 3D human-related tasks. ChatHuman is a step towards consolidating diverse methods for human analysis into a single, powerful system for reasoning about people in 3D.

Method

ChatHuman takes as input text queries, images, or other 3D human-related modalities, such as SMPL pose vectors. Based on the user query, ChatHuman uses a paper-based retrieval-augmented generation mechanism to generate a textual response describing which tools to use, and then calls those tools. Finally, the tool results are transformed into a textual or visual format and fed back into the multimodal LLM-based agent, which integrates them with its general world knowledge to generate a response in the form of text, images, or other 3D human-related modalities.
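The overall loop can be illustrated with the minimal Python sketch below. It is not the released implementation: the names `retrieve_paper_snippets`, `TOOL_REGISTRY`, `mllm_generate`, and `chathuman_respond` are hypothetical placeholders standing in for the paper-based retrieval step, the tool execution step, and the final multimodal reasoning step described above.

```python
# Hypothetical sketch of the ChatHuman tool-reasoning loop.
# All names here (retrieve_paper_snippets, TOOL_REGISTRY, mllm_generate)
# are illustrative placeholders, not a released API.

from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class ToolResult:
    tool_name: str
    text: str              # textual summary of the tool output
    visual: Any = None     # optional rendered result (e.g., a pose overlay)


# Registry of 3D-human tools, e.g., pose/shape estimators, contact or emotion models.
TOOL_REGISTRY: Dict[str, Callable[[Any], ToolResult]] = {}


def retrieve_paper_snippets(query: str, k: int = 3) -> List[str]:
    """Paper-based RAG step: fetch passages from tool publications that
    describe when and how each tool applies (placeholder)."""
    raise NotImplementedError


def mllm_generate(prompt: str, image: Any = None) -> str:
    """Call the fine-tuned multimodal LLM agent (placeholder)."""
    raise NotImplementedError


def chathuman_respond(user_query: str, image: Any = None) -> str:
    # 1. Retrieve paper snippets to build in-context examples for tool selection.
    snippets = retrieve_paper_snippets(user_query)
    selection_prompt = (
        "Tool descriptions from their papers:\n" + "\n".join(snippets)
        + f"\n\nUser query: {user_query}\nWhich tool(s) should be called?"
    )
    tool_plan = mllm_generate(selection_prompt, image)

    # 2. Execute the selected tools (one tool name per line of the plan, in this sketch).
    results: List[ToolResult] = []
    for name in tool_plan.splitlines():
        tool = TOOL_REGISTRY.get(name.strip())
        if tool is not None:
            results.append(tool(image))

    # 3. Feed the tool outputs back to the agent, which weighs them against
    #    each other and integrates them with its general world knowledge.
    feedback = "\n".join(f"[{r.tool_name}] {r.text}" for r in results)
    final_prompt = (
        f"User query: {user_query}\nTool results:\n{feedback}\n"
        "Answer the query, using the tool results where they are reliable."
    )
    return mllm_generate(final_prompt, image)
```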

Real World Application

Gradio interface demo for ChatHuman.
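A minimal sketch of how such a Gradio demo could be wired up is shown below, assuming a `chathuman_respond(query, image)` backend like the one sketched in the Method section; the layout and backend of the actual demo may differ.

```python
# Minimal Gradio wrapper sketch. The backend function is a placeholder;
# the real demo's interface and implementation are not part of this page.
import gradio as gr


def chathuman_respond(query, image):
    """Placeholder for the ChatHuman backend (see the pipeline sketch above)."""
    return "ChatHuman backend not wired up in this sketch."


def answer(image, query):
    # Delegate to the (hypothetical) ChatHuman backend.
    return chathuman_respond(query, image)


demo = gr.Interface(
    fn=answer,
    inputs=[gr.Image(type="pil", label="Input image"),
            gr.Textbox(label="Question about the person")],
    outputs=gr.Textbox(label="ChatHuman response"),
    title="ChatHuman demo",
)

if __name__ == "__main__":
    demo.launch()
```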

BibTeX


    @misc{lin2024chathuman,
        title={{ChatHuman}: Language-driven {3D} Human Understanding with Retrieval-Augmented Tool Reasoning},
        author={Jing Lin and Yao Feng and Weiyang Liu and Michael J. Black},
        year={2024},
        eprint={2405.04533},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
    }

Acknowledgements

We thank Naureen Mahmood, Nicolas Keller, and Nicolas Heron for their support of the data collection.
Disclosure. MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon and Meshcapade GmbH. While MJB is a co-founder and Chief Scientist at Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.