In this work, our goal is to autonomously rearrange various kinds of large objects, such as chairs, tables, and bins, to their target positions and orientations using a legged manipulator. To this end, the complete large-object rearrangement system is shown in (a); it mainly consists of a tailored perception module, task and motion planning (TAMP), a low-level whole-body controller (WBC), and a high-level object velocity controller. First, given the objects' target poses, the task planning algorithm generates a task sequence (S1 → S2 → · · · → Si), comprising the object visitation order and the object-to-target assignment. Next, a coarse-to-fine robot trajectory for each subtask Si is computed from the estimated object pose and tracked by the low-level WBC. Once the robot reaches its target pose, it grasps the object, and a collision-free trajectory is planned for the object. This trajectory is tracked by the high-level object controller until the object is released at its goal pose. The process is repeated until all subtasks are finished. Throughout the task, the perception module provides the global poses of the robot and the objects. Similar to prior large-object rearrangement settings, we assume that the approximate positions of target objects are available (precise poses are not required) and that feasible grasp regions are provided. This allows us to focus on the core challenge, i.e., robust post-grasp interaction control across diverse large objects.
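To make the task-planning step concrete, the sketch below illustrates one simple way to produce a task sequence that covers both the object visitation order and the object-to-target assignment. The source does not specify the planning algorithm, so the greedy nearest-first strategy, the function name `plan_task_sequence`, and the 2-D position representation are all illustrative assumptions, not the system's actual method.

```python
import math

def plan_task_sequence(robot_xy, objects, targets):
    """Hypothetical greedy task planner (a sketch, not the paper's algorithm):
    repeatedly visit the object nearest to the robot's current position and
    assign it to the free target nearest to that object.
    Returns a list of (object_id, target_id) subtasks S1, S2, ..., Si."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    remaining = dict(objects)   # object_id -> approximate (x, y) position
    free = dict(targets)        # target_id -> (x, y) goal position
    pos = robot_xy
    plan = []
    while remaining:
        # Object visitation order: nearest remaining object first.
        obj = min(remaining, key=lambda o: dist(pos, remaining[o]))
        # Object-to-target assignment: nearest free target for that object.
        tgt = min(free, key=lambda t: dist(remaining[obj], free[t]))
        plan.append((obj, tgt))
        pos = free[tgt]         # the robot ends a subtask at the target pose
        del remaining[obj], free[tgt]
    return plan

subtasks = plan_task_sequence(
    robot_xy=(0.0, 0.0),
    objects={"chair": (1.0, 0.0), "table": (5.0, 0.0)},
    targets={"A": (2.0, 0.0), "B": (6.0, 0.0)},
)
```

In a full pipeline, each `(object_id, target_id)` pair would then trigger the coarse-to-fine trajectory computation, WBC tracking, grasping, and high-level object-trajectory tracking described above.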