collect_sensor_data()
📖 Method Description
Collects vision-based tactile data from the sensor, including images, depth maps, Marker displacements, slip state, and other multi-dimensional information. This is the core method for obtaining sensor data.
📝 Syntax
```python
data = sensor.collect_sensor_data(*data_types, frame=None)
```
🔧 Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `*data_types` | `GFDataType` | - | One or more `GFDataType` enum values specifying the data types to collect |
| `frame` | `np.ndarray` or `None` | `None` | Input image for offline processing, shape `(H, W, 3)`; set to `None` in online mode |
📤 Return Type
`dict` - Dictionary mapping each requested data type to its corresponding data
Available Data Types (GFDataType)
| Enum Value | Data Type | Shape | Description |
|---|---|---|---|
| `TIME_STAMP` | int | - | Millisecond-level timestamp |
| `CALIBRATE_IMG` | np.ndarray, uint8 | (H, W, 3) | Calibration image |
| `RAW_IMG` | np.ndarray, uint8 | (H, W, 3) | Raw captured image |
| `WARPED_IMG` | np.ndarray, uint8 | (H, W, 3) | Distortion-corrected image |
| `DIFF_IMG` | np.ndarray, uint8 | (H, W, 3) | Difference image |
| `MARKER_IMG` | np.ndarray, uint8 | (H, W, 3) | Marker visualization image |
| `DEPTH_MAP` | np.ndarray, float64 | (H, W) | Depth map |
| `MARKER_ORIGIN_VECTOR` | np.ndarray, float64 | (N, M, 2) | Initial Marker coordinates |
| `MARKER_CURRENT_VECTOR` | np.ndarray, float64 | (N, M, 2) | Current Marker coordinates |
| `MARKER_OFFSET_VECTOR` | np.ndarray, float64 | (N, M, 2) | Marker displacement vector |
| `XYZ_VECTOR` | np.ndarray, float64 | (N, M, 3) | 3D coordinate data |
| `FORCE6D_VECTOR` | np.ndarray, float64 | (6,) | 6-axis force/torque data: Fx, Fy, Fz, Mx, My, Mz |
| `SLIP_STATE` | IntEnum | - | Slip state |
Slip States (SLIP_STATE)
| Enum Value | Description |
|---|---|
| `NO_OBJ` | No object contact |
| `CONTACT` | Initial contact |
| `STEADY_HOLD` | Steady hold |
| `INCIPIENT_SLIP` | Incipient slip |
| `PARTIAL_SLIP` | Partial slip |
| `COMPLETE_SLIP` | Complete slip |
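The slip states above lend themselves to a simple reactive grasp policy. The sketch below uses a hypothetical local `IntEnum` mirror of the states and an illustrative `suggest_grip_action` helper (neither is part of pyvitaisdk); in real code you would branch on the `SLIP_STATE` value returned by `collect_sensor_data()`.

```python
from enum import IntEnum


# Hypothetical local mirror of the SDK's slip states, for illustration only;
# the actual integer values are defined by pyvitaisdk, not here.
class SlipState(IntEnum):
    NO_OBJ = 0
    CONTACT = 1
    STEADY_HOLD = 2
    INCIPIENT_SLIP = 3
    PARTIAL_SLIP = 4
    COMPLETE_SLIP = 5


def suggest_grip_action(state: SlipState) -> str:
    """Map a slip state to a simple gripper reaction (illustrative policy)."""
    if state in (SlipState.INCIPIENT_SLIP, SlipState.PARTIAL_SLIP):
        # Slip is starting: tighten the grip before the object is lost
        return "increase_force"
    if state == SlipState.COMPLETE_SLIP:
        # Object has fully slipped: release and attempt a new grasp
        return "regrasp"
    # NO_OBJ, CONTACT, STEADY_HOLD: no corrective action needed
    return "hold"


print(suggest_grip_action(SlipState.INCIPIENT_SLIP))  # increase_force
```

In a control loop, this function would be called on every `SLIP_STATE` sample, with the returned action forwarded to the gripper controller.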
💡 Example Code
Collect Single Data Type
```python
from pyvitaisdk import GF225, VTSDeviceFinder, GFDataType

# Initialize sensor
finder = VTSDeviceFinder()
devices = finder.get_devices()
sensor = GF225(devices[0])
sensor.calibrate()

# Collect corrected image
data = sensor.collect_sensor_data(GFDataType.WARPED_IMG)
warped_img = data[GFDataType.WARPED_IMG]
print(f"Image size: {warped_img.shape}")

# Release resources
sensor.release()
```
Collect Multiple Data Types
```python
from pyvitaisdk import GF225, VTSDeviceFinder, GFDataType
import cv2

finder = VTSDeviceFinder()
devices = finder.get_devices()
sensor = GF225(devices[0])
sensor.calibrate()

datatypes = [
    GFDataType.WARPED_IMG,
    GFDataType.DEPTH_MAP,
    GFDataType.SLIP_STATE,
    GFDataType.MARKER_OFFSET_VECTOR,
]

# Collect multiple data types simultaneously
data = sensor.collect_sensor_data(*datatypes)

# Access different data
warped_img = data[GFDataType.WARPED_IMG]
depth_map = data[GFDataType.DEPTH_MAP]
slip_state = data[GFDataType.SLIP_STATE]
marker_offset = data[GFDataType.MARKER_OFFSET_VECTOR]

print(f"Corrected image size: {warped_img.shape}")
print(f"Depth map size: {depth_map.shape}")
print(f"Slip state: {slip_state.name}")
print(f"Marker offset vector: {marker_offset.shape}")

# Visualization
cv2.imshow("warped_img", warped_img)
cv2.waitKey(1)

sensor.release()
```
Offline Data Processing
```python
import cv2
from pyvitaisdk import GF225, GFDataType

# Initialize in offline mode
sensor = GF225(config=None)

# Load calibration image and calibrate
calib_img = cv2.imread("calib.jpg")
sensor.calibrate(calib_img=calib_img)

# Load image to process
frame = cv2.imread("test_frame.jpg")

# Offline processing
data = sensor.collect_sensor_data(
    GFDataType.DEPTH_MAP,
    GFDataType.MARKER_OFFSET_VECTOR,
    frame=frame,
)

depth_map = data[GFDataType.DEPTH_MAP]
print(f"Depth map generated: {depth_map.shape}")

sensor.release()
```
⚠️ Notes
Prerequisites
- Must call `calibrate()` first to complete calibration
- Ensure the sensor is properly initialized
Performance Optimization
- Only request needed data types to avoid unnecessary computation
- For high frame rate applications, consider reducing the number of simultaneously requested data types
Data Description
- Depth Map: Larger values indicate greater deformation (greater contact pressure)
- Marker Offset: Unit is pixels, representing Marker point displacement
- Slip State: Real-time detection of object contact and slip status
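To make the data descriptions above concrete, the sketch below computes a contact area from a depth map and per-Marker displacement magnitudes with NumPy. It uses synthetic arrays as stand-ins for `DEPTH_MAP` (shape `(H, W)`) and `MARKER_OFFSET_VECTOR` (shape `(N, M, 2)`); the deformation threshold of 0.1 is an assumed example value, not an SDK constant.

```python
import numpy as np

# Synthetic stand-ins for sensor output; real data comes from
# collect_sensor_data() with the corresponding GFDataType keys.
depth_map = np.zeros((240, 320))
depth_map[100:140, 150:200] = 0.8      # simulated contact patch

marker_offset = np.zeros((7, 9, 2))
marker_offset[3, 4] = [2.0, -1.5]      # one displaced Marker, in pixels

# Contact area: count pixels whose deformation exceeds a threshold
# (0.1 here is an arbitrary illustrative value)
contact_pixels = int((depth_map > 0.1).sum())

# Per-Marker displacement magnitude in pixels
magnitudes = np.linalg.norm(marker_offset, axis=-1)
max_shift = float(magnitudes.max())

print(f"Contact area: {contact_pixels} px, max Marker shift: {max_shift:.2f} px")
# Contact area: 2000 px, max Marker shift: 2.50 px
```

The same two quantities, computed on live sensor output, are a common starting point for contact detection and shear estimation.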
Offline Mode Notes
- Must provide the `frame` parameter for offline processing
- Ensure input images were captured under the same collection conditions as the calibration images