
Prediction seems inaccurate in Embodied Object Understanding 😭 #19

@Bokai-Ji

Description

[image: rendered prediction]

Thank you for the excellent work!
I've run into a problem: I can't get accurate predictions for Embodied Object Understanding.

Here is an example of the prediction:

  • USER: Where I can grasp to open the upper drawer?
  • ASSISTANT: To open the upper drawer, you can grasp the round fixed handle at the bounding box [[-0.55, -0.07, 0.45], [-0.55, -0.1, 0.45], [-0.55, -0.1, 0.42], [-0.55, -0.07, 0.42], [-0.51, -0.07, 0.45], [-0.51, -0.1, 0.45], [-0.51, -0.1, 0.42], [-0.51, -0.07, 0.42]].

The rendered result is shown in the figure above; the predicted bounding box is far from the ground-truth position.
I tried several objects from the PartNet-Mobility dataset, and none of the predictions are even close to the ground truth.
Could this be caused by a mismatch between the axes of the point cloud and those of the predicted bounding boxes?
I'm currently using the preprocessing code in mm_utils.py:

import numpy as np

def pc_norm(pc):
    """Normalize a point cloud to zero mean and unit max radius.

    pc: (N, C) array; returns an (N, C) array.
    """
    centroid = np.mean(pc, axis=0)
    pc = pc - centroid
    m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
    if m < 1e-6:
        # Degenerate cloud: all points coincide with the centroid.
        pc = np.zeros_like(pc)
    else:
        pc = pc / m
    return pc
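One thing worth checking (this is an assumption on my side, not something I can confirm from the repo): if the model predicts boxes in the *normalized* frame that pc_norm produces, the predicted corners have to be mapped back with the same centroid and scale before rendering them against the original mesh, or they will land far from the ground truth. A minimal sketch, with hypothetical helper names `pc_norm_with_stats` and `denorm_box`:

```python
import numpy as np

def pc_norm_with_stats(pc):
    # Same normalization as pc_norm, but also return the centroid
    # and scale so predictions can be mapped back afterwards.
    centroid = np.mean(pc, axis=0)
    pc = pc - centroid
    m = np.max(np.sqrt(np.sum(pc ** 2, axis=1)))
    pc = pc / m if m >= 1e-6 else np.zeros_like(pc)
    return pc, centroid, m

def denorm_box(box_corners, centroid, m):
    # Map predicted corners from the normalized frame back to the
    # original point-cloud frame (inverse of the normalization).
    return np.asarray(box_corners) * m + centroid
```

If the rendered box is still offset after de-normalizing, the remaining suspect would indeed be an axis-convention mismatch (e.g. y-up vs. z-up) between the renderer and the training data.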

I'd appreciate any support!
