Multi-GraspLLM: A Multimodal LLM for Multi-Hand Semantic Guided Grasp Generation
Paper: [arXiv:2412.08468](https://arxiv.org/abs/2412.08468)
We introduce Multi-GraspSet, the first large-scale multi-hand grasp dataset enriched with automatic contact annotations.
*Figure: The construction process of Multi-GraspSet.*

*Figure: Visualization of Multi-GraspSet with contact annotations.*
Follow these steps to set up the evaluation environment:
**1. Create the Environment**

```bash
conda create --name eval python=3.9
```
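After the environment is created, activate it (e.g. `conda activate eval`) so that the packages installed in the following steps land in this environment rather than your base one.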
**2. Install PyTorch and Dependencies**

```bash
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
```

⚠️ Ensure your CUDA toolkit version matches the CUDA build of the installed PyTorch packages.
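A quick way to verify the match is to check, from inside the new environment, which CUDA version PyTorch was built against and whether a GPU is visible:

```python
# Sanity check: confirm the installed PyTorch build and its CUDA support.
import torch

print("PyTorch version:", torch.__version__)         # expected 2.0.1
print("Built with CUDA:", torch.version.cuda)         # CUDA version PyTorch was compiled against
print("CUDA available:", torch.cuda.is_available())   # True if a usable GPU + driver is found
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```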
**3. Install PyTorch Kinematics**

```bash
cd ./pytorch_kinematics
pip install -e .
```
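To confirm the editable install works, you can run a small forward-kinematics smoke test. This is a minimal sketch, not part of the evaluation code: the URDF path is a hypothetical placeholder, and the exact entry points may differ slightly across pytorch_kinematics versions.

```python
# Smoke test: build a kinematic chain from a URDF and run batched forward kinematics.
# "assets/hand.urdf" is a hypothetical path -- substitute the hand model you use.
import torch
import pytorch_kinematics as pk

with open("assets/hand.urdf", "rb") as f:
    chain = pk.build_chain_from_urdf(f.read())
chain = chain.to(dtype=torch.float32)

joint_names = chain.get_joint_parameter_names()
print("Actuated joints:", joint_names)

# Batched joint angles of shape (batch, num_joints); all zeros here.
th = torch.zeros(1, len(joint_names))

# forward_kinematics returns a dict mapping link names to Transform3d poses.
fk = chain.forward_kinematics(th)
for link, tf in fk.items():
    print(link, tf.get_matrix().shape)  # (1, 4, 4) homogeneous transforms
```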
**4. Install Remaining Requirements**

```bash
pip install -r requirements_eval.txt
```
**5. Run the Visualization Code**

Open and run `vis_mid_dataset.ipynb` to visualize the dataset.
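If you want to prototype outside the notebook, the sketch below shows one generic way to render an object point cloud colored by contact labels. The file name, array keys, and data layout here are hypothetical placeholders and do not reflect the actual Multi-GraspSet format; refer to `vis_mid_dataset.ipynb` for the real loading and rendering code.

```python
# Hedged sketch: scatter-plot an object point cloud colored by contact annotations.
# File name and array keys are hypothetical; see vis_mid_dataset.ipynb for the
# actual Multi-GraspSet loading and visualization code.
import numpy as np
import matplotlib.pyplot as plt

data = np.load("sample_grasp.npz")      # hypothetical sample file
points = data["object_points"]          # (N, 3) object point cloud (assumed key)
contact = data["contact_labels"]        # (N,) 1 where the hand contacts the object (assumed key)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(points[:, 0], points[:, 1], points[:, 2],
           c=contact, cmap="coolwarm", s=2)
ax.set_title("Object points colored by contact annotation")
plt.show()
```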