ChatHuman: Language-driven 3D Human Understanding with Tools

1Max Planck Institute for Intelligent Systems, 2Stanford University, 3Meshcapade,
4Tsinghua University, 5University of Cambridge, *Equal Contribution

ChatHuman is a multi-modal LLM specialized for understanding humans with the assistance of tools.

Abstract

Numerous methods have been proposed to detect, estimate, and analyze properties of people in images, including 3D pose, shape, contact, human-object interaction, and emotion. While widely applicable in vision and other areas, such methods require expert knowledge to select, use, and interpret the results. To address this, we introduce ChatHuman, a language-driven system that integrates the capabilities of specialized methods into a unified framework. ChatHuman functions as an assistant proficient in utilizing, analyzing, and interacting with tools specific to 3D human tasks, adeptly discussing and resolving related challenges. Built on a Large Language Model (LLM) framework, ChatHuman is trained to autonomously select, apply, and interpret a diverse set of tools in response to user inputs. Our approach overcomes significant hurdles in adapting LLMs to 3D human tasks, including the need for domain-specific knowledge and the ability to interpret complex 3D outputs. The innovations of ChatHuman include leveraging academic publications to instruct the LLM on tool usage, employing a retrieval-augmented generation model to create in-context learning examples for managing new tools, and effectively discriminating between and integrating tool results by transforming specialized 3D outputs into comprehensible formats. Experiments demonstrate that ChatHuman surpasses existing models in both tool selection accuracy and overall performance across various 3D human tasks, and it supports interactive chatting with users. ChatHuman represents a significant step toward consolidating diverse analytical methods into a unified, robust system for 3D human tasks.

Method

ChatHuman takes as input text queries, images, or other 3D-human-related modalities such as SMPL pose vectors. Based on the user query, ChatHuman uses a paper-based retrieval-augmented generation mechanism to produce a textual response describing which tools to use, and then calls those tools. Finally, the tool results are transformed into a textual or visual format and fed to the multimodal-LLM-based agent, which combines them with its general world knowledge to generate a response in the form of text, images, or other modalities related to 3D humans.
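The following is a minimal sketch (not the authors' released code) of the agent loop described above, assuming a tool registry, a paper-based retrieval step, and a result-conversion step. Names such as `retrieve_paper_examples`, `to_llm_format`, and the `llm.generate` interface are hypothetical placeholders.

```python
# Hypothetical sketch of a ChatHuman-style tool-use loop:
# 1) retrieve paper-derived in-context examples, 2) select and call a tool,
# 3) convert the specialized 3D output into text for the multimodal LLM.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Tool:
    name: str
    description: str          # e.g. distilled from the tool's academic paper
    run: Callable[..., Any]   # wraps the underlying 3D-human method


TOOLS: dict[str, Tool] = {}   # e.g. pose estimation, contact detection, emotion recognition


def retrieve_paper_examples(query: str, k: int = 3) -> list[str]:
    """Hypothetical RAG step: fetch paper-based usage examples for in-context learning."""
    ...


def select_tool(llm, query: str, image) -> str:
    """Ask the LLM which tool to use, conditioned on retrieved examples."""
    examples = retrieve_paper_examples(query)
    prompt = "\n".join(examples) + f"\nUser query: {query}\nWhich tool should be used?"
    return llm.generate(prompt, image=image).strip()


def to_llm_format(tool_name: str, result: Any) -> str:
    """Convert specialized 3D outputs (e.g. SMPL pose vectors) into LLM-readable text."""
    ...


def chat(llm, query: str, image=None) -> str:
    """One turn of the agent: select a tool, run it, and compose the final answer."""
    tool_name = select_tool(llm, query, image)
    result = TOOLS[tool_name].run(image=image, query=query)
    context = to_llm_format(tool_name, result)
    return llm.generate(f"Tool result: {context}\nAnswer the user: {query}", image=image)
```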

Real World Application

Gradio interface demo for ChatHuman.
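As a rough illustration, a chat-style Gradio interface for such a system could be wired as below. This is a hypothetical minimal sketch, not the released demo; `chathuman_respond` is a placeholder standing in for the actual model.

```python
# Hypothetical minimal Gradio front end: a chat box plus an image upload,
# routed to a placeholder response function.
import gradio as gr


def chathuman_respond(message, history, image):
    # Placeholder: the real system would select a tool, run it, and compose a reply.
    return history + [(message, "ChatHuman response would appear here.")]


with gr.Blocks() as demo:
    chatbot = gr.Chatbot(label="ChatHuman")
    image = gr.Image(type="filepath", label="Input image")
    msg = gr.Textbox(label="Ask about the person in the image")
    msg.submit(chathuman_respond, inputs=[msg, chatbot, image], outputs=chatbot)

demo.launch()
```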

BibTeX


@misc{lin2024chathuman,
    title={{ChatHuman}: Language-driven {3D} Human Understanding with Tools},
    author={Jing Lin and Yao Feng and Weiyang Liu and Michael J. Black},
    year={2024},
    eprint={2405.04533},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Acknowledgements

We thank Naureen Mahmood, Nicolas Keller, and Nicolas Heron for their support with data collection.
Disclosure. MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon and Meshcapade GmbH. While MJB is a co-founder and Chief Scientist at Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.