Copy stuff #1
base: master
Changes from all commits
@@ -0,0 +1,23 @@
catkin_ws/build
catkin_ws/devel

backup/*
logs/*
downloads/server
downloads/*.pth

real/camera_depth_scale.txt
real/camera_pose.txt

realsense/CMakeFiles/*
realsense/CMakeCache.txt
realsense/Makefile
realsense/cmake_install.cmake
realsense/realsense

*.pyc
*__pycache__*

commands.txt
visualization.push.png
visualization.grasp.png
This file was deleted.
@@ -0,0 +1,187 @@
#!/usr/bin/env python

import matplotlib.pyplot as plt
import numpy as np
import time
import cv2
from real.camera import Camera
from robot import Robot
from scipy import optimize
from mpl_toolkits.mplot3d import Axes3D


# User options (change me)
# --------------- Setup options ---------------
tcp_host_ip = '100.127.7.223' # IP and port to robot arm as TCP client (UR5)
tcp_port = 30002
rtc_host_ip = '100.127.7.223' # IP and port to robot arm as real-time client (UR5)
rtc_port = 30003
workspace_limits = np.asarray([[0.3, 0.748], [0.05, 0.4], [-0.2, -0.1]]) # Cols: min max, Rows: x y z (define workspace limits in robot coordinates)
calib_grid_step = 0.05
checkerboard_offset_from_tool = [0, -0.13, 0.02]
tool_orientation = [-np.pi/2, 0, 0] # [0,-2.22,2.22] # [2.22,2.22,0]
# ---------------------------------------------


# Construct 3D calibration grid across workspace
# (np.linspace expects an integer sample count, so cast the computed count)
gridspace_x = np.linspace(workspace_limits[0][0], workspace_limits[0][1], int(1 + (workspace_limits[0][1] - workspace_limits[0][0])/calib_grid_step))
gridspace_y = np.linspace(workspace_limits[1][0], workspace_limits[1][1], int(1 + (workspace_limits[1][1] - workspace_limits[1][0])/calib_grid_step))
gridspace_z = np.linspace(workspace_limits[2][0], workspace_limits[2][1], int(1 + (workspace_limits[2][1] - workspace_limits[2][0])/calib_grid_step))
calib_grid_x, calib_grid_y, calib_grid_z = np.meshgrid(gridspace_x, gridspace_y, gridspace_z)
num_calib_grid_pts = calib_grid_x.shape[0]*calib_grid_x.shape[1]*calib_grid_x.shape[2]
calib_grid_x.shape = (num_calib_grid_pts, 1)
calib_grid_y.shape = (num_calib_grid_pts, 1)
calib_grid_z.shape = (num_calib_grid_pts, 1)
calib_grid_pts = np.concatenate((calib_grid_x, calib_grid_y, calib_grid_z), axis=1)

measured_pts = []  # checkerboard centers in robot coordinates
observed_pts = []  # checkerboard centers in camera coordinates
observed_pix = []  # checkerboard centers in image pixels

# Move robot to home pose
print('Connecting to robot...')
robot = Robot(False, None, None, workspace_limits,
              tcp_host_ip, tcp_port, rtc_host_ip, rtc_port,
              False, None, None)
robot.open_gripper()

# Slow down robot
robot.joint_acc = 1.4
robot.joint_vel = 1.05

# Make robot gripper point upwards
robot.move_joints([-np.pi, -np.pi/2, np.pi/2, 0, np.pi/2, np.pi])

# Move robot to each calibration point in workspace
print('Collecting data...')
for calib_pt_idx in range(num_calib_grid_pts):
    tool_position = calib_grid_pts[calib_pt_idx, :]
    robot.move_to(tool_position, tool_orientation)
    time.sleep(1)

    # Find checkerboard center
    checkerboard_size = (3, 3)
    refine_criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    camera_color_img, camera_depth_img = robot.get_camera_data()
    bgr_color_data = cv2.cvtColor(camera_color_img, cv2.COLOR_RGB2BGR)
    gray_data = cv2.cvtColor(bgr_color_data, cv2.COLOR_BGR2GRAY) # input is BGR at this point, so use BGR2GRAY
    checkerboard_found, corners = cv2.findChessboardCorners(gray_data, checkerboard_size, None, cv2.CALIB_CB_ADAPTIVE_THRESH)
    if checkerboard_found:
        corners_refined = cv2.cornerSubPix(gray_data, corners, (3, 3), (-1, -1), refine_criteria)

        # Get observed checkerboard center 3D point in camera space
        checkerboard_pix = np.round(corners_refined[4, 0, :]).astype(int)
        checkerboard_z = camera_depth_img[checkerboard_pix[1]][checkerboard_pix[0]]
        checkerboard_x = np.multiply(checkerboard_pix[0] - robot.cam_intrinsics[0][2], checkerboard_z/robot.cam_intrinsics[0][0])
        checkerboard_y = np.multiply(checkerboard_pix[1] - robot.cam_intrinsics[1][2], checkerboard_z/robot.cam_intrinsics[1][1])
        if checkerboard_z == 0: # skip points with missing depth
            continue

        # Save calibration point and observed checkerboard center
        observed_pts.append([checkerboard_x, checkerboard_y, checkerboard_z])
        # tool_position[2] += checkerboard_offset_from_tool
        tool_position = tool_position + checkerboard_offset_from_tool

Review comment: What is tool_position, or more specifically, what is the tool they are referring to?

Reply: I believe tool_position refers to the gripper. The code sets tool_position to calib_grid_pts, which drives the camera/gripper calibration. From the paper: "The camera is localized with respect to the robot base by an automatic calibration procedure, during which the camera tracks the location of a checkerboard pattern taped onto the gripper. The calibration optimizes for extrinsics as the robot moves the gripper over a grid of 3D locations (predefined with respect to robot coordinates) within the camera field of view."

        measured_pts.append(tool_position)
        observed_pix.append(checkerboard_pix)

        # Draw and display the corners
        # vis = cv2.drawChessboardCorners(robot.camera.color_data, checkerboard_size, corners_refined, checkerboard_found)
        vis = cv2.drawChessboardCorners(bgr_color_data, (1, 1), corners_refined[4, :, :], checkerboard_found)
        cv2.imwrite('%06d.png' % len(measured_pts), vis)
        cv2.imshow('Calibration', vis)
        cv2.waitKey(10)
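
Reviewer note: the checkerboard 3D point above comes from the standard pinhole back-projection, x = (u - cx) * z / fx and y = (v - cy) * z / fy. A minimal standalone sketch of the same math, with made-up intrinsics (not part of the PR):

import numpy as np

K = np.array([[615.0,   0.0, 320.0],   # [fx, 0, cx] -- hypothetical intrinsics
              [  0.0, 615.0, 240.0],   # [0, fy, cy]
              [  0.0,   0.0,   1.0]])

def backproject(u, v, z, K):
    # Pixel (u, v) with depth z (meters) -> 3D point in camera coordinates
    x = (u - K[0][2]) * z / K[0][0]
    y = (v - K[1][2]) * z / K[1][1]
    return np.array([x, y, z])

print(backproject(400, 300, 0.5, K))  # -> approx [0.065, 0.024, 0.5]
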

# Move robot back to home pose
robot.go_home()

measured_pts = np.asarray(measured_pts)
observed_pts = np.asarray(observed_pts)
observed_pix = np.asarray(observed_pix)
world2camera = np.eye(4)

# Estimate rigid transform with SVD (from Nghia Ho)
def get_rigid_transform(A, B):
    assert len(A) == len(B)
    N = A.shape[0] # Total points
    centroid_A = np.mean(A, axis=0)
    centroid_B = np.mean(B, axis=0)
    AA = A - np.tile(centroid_A, (N, 1)) # Centre the points
    BB = B - np.tile(centroid_B, (N, 1))
    H = np.dot(np.transpose(AA), BB) # Dot is matrix multiplication for arrays
    U, S, Vt = np.linalg.svd(H)
    R = np.dot(Vt.T, U.T)
    if np.linalg.det(R) < 0: # Special reflection case
        Vt[2, :] *= -1
        R = np.dot(Vt.T, U.T)
    t = np.dot(-R, centroid_A.T) + centroid_B.T
    return R, t
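
Reviewer note: a quick self-contained sanity check for get_rigid_transform (synthetic data, not part of the PR). Run after the function definition; it should recover a known rotation and translation:

theta = np.pi/6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
t_true = np.array([0.1, -0.2, 0.3])
A = np.random.rand(20, 3)              # random source points
B = np.dot(A, R_true.T) + t_true       # B_i = R_true * A_i + t_true
R_est, t_est = get_rigid_transform(A, B)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
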

def get_rigid_transform_error(z_scale):
    global measured_pts, observed_pts, observed_pix, world2camera, camera

    # Apply z offset and compute new observed points using camera intrinsics
    observed_z = observed_pts[:, 2:] * z_scale
    observed_x = np.multiply(observed_pix[:, [0]] - robot.cam_intrinsics[0][2], observed_z/robot.cam_intrinsics[0][0])
    observed_y = np.multiply(observed_pix[:, [1]] - robot.cam_intrinsics[1][2], observed_z/robot.cam_intrinsics[1][1])
    new_observed_pts = np.concatenate((observed_x, observed_y, observed_z), axis=1)

    # Estimate rigid transform between measured points and new observed points
    R, t = get_rigid_transform(np.asarray(measured_pts), np.asarray(new_observed_pts))
    t.shape = (3, 1)
    world2camera = np.concatenate((np.concatenate((R, t), axis=1), np.array([[0, 0, 0, 1]])), axis=0)

    # Compute rigid transform error
    registered_pts = np.dot(R, np.transpose(measured_pts)) + np.tile(t, (1, measured_pts.shape[0]))
    error = np.transpose(registered_pts) - new_observed_pts
    error = np.sum(np.multiply(error, error))
    rmse = np.sqrt(error/measured_pts.shape[0])
    return rmse

# Optimize z scale w.r.t. rigid transform error
print('Calibrating...')
z_scale_init = 1
optim_result = optimize.minimize(get_rigid_transform_error, np.asarray(z_scale_init), method='Nelder-Mead')
camera_depth_offset = optim_result.x

# Save camera optimized offset and camera pose
print('Saving...')
np.savetxt('real/camera_depth_scale.txt', camera_depth_offset, delimiter=' ')
get_rigid_transform_error(camera_depth_offset)
camera_pose = np.linalg.inv(world2camera)
np.savetxt('real/camera_pose.txt', camera_pose, delimiter=' ')
print('Done.')
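
Reviewer note: a hedged sketch of how downstream code could consume the two saved files (the file names come from this script; the usage pattern is my assumption):

cam_pose = np.loadtxt('real/camera_pose.txt', delimiter=' ')        # 4x4 camera-to-robot transform
cam_depth_scale = float(np.loadtxt('real/camera_depth_scale.txt'))  # scalar depth correction

# Map a point observed in camera coordinates into robot coordinates,
# rescaling the raw depth by the optimized scale first
pt_cam = np.array([0.02, -0.05, 0.6 * cam_depth_scale, 1.0])        # homogeneous point
pt_robot = np.dot(cam_pose, pt_cam)[0:3]
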

# DEBUG CODE -----------------------------------------------------------------------------------

# np.savetxt('measured_pts.txt', np.asarray(measured_pts), delimiter=' ')
# np.savetxt('observed_pts.txt', np.asarray(observed_pts), delimiter=' ')
# np.savetxt('observed_pix.txt', np.asarray(observed_pix), delimiter=' ')
# measured_pts = np.loadtxt('measured_pts.txt', delimiter=' ')
# observed_pts = np.loadtxt('observed_pts.txt', delimiter=' ')
# observed_pix = np.loadtxt('observed_pix.txt', delimiter=' ')

# fig = plt.figure()
# ax = fig.add_subplot(111, projection='3d')
# ax.scatter(measured_pts[:,0], measured_pts[:,1], measured_pts[:,2], c='blue')

# print(camera_depth_offset)
# R, t = get_rigid_transform(np.asarray(measured_pts), np.asarray(observed_pts))
# t.shape = (3,1)
# camera_pose = np.concatenate((np.concatenate((R, t), axis=1), np.array([[0, 0, 0, 1]])), axis=0)
# camera2robot = np.linalg.inv(camera_pose)
# t_observed_pts = np.transpose(np.dot(camera2robot[0:3,0:3], np.transpose(observed_pts)) + np.tile(camera2robot[0:3,3:], (1, observed_pts.shape[0])))

# ax.scatter(t_observed_pts[:,0], t_observed_pts[:,1], t_observed_pts[:,2], c='red')

# new_observed_pts = observed_pts.copy()
# new_observed_pts[:,2] = new_observed_pts[:,2] * camera_depth_offset[0]
# R, t = get_rigid_transform(np.asarray(measured_pts), np.asarray(new_observed_pts))
# t.shape = (3,1)
# camera_pose = np.concatenate((np.concatenate((R, t), axis=1), np.array([[0, 0, 0, 1]])), axis=0)
# camera2robot = np.linalg.inv(camera_pose)
# t_new_observed_pts = np.transpose(np.dot(camera2robot[0:3,0:3], np.transpose(new_observed_pts)) + np.tile(camera2robot[0:3,3:], (1, new_observed_pts.shape[0])))

# ax.scatter(t_new_observed_pts[:,0], t_new_observed_pts[:,1], t_new_observed_pts[:,2], c='green')

# plt.show()
@@ -0,0 +1,44 @@
#!/usr/bin/env python

import matplotlib.pyplot as plt
import numpy as np
import scipy as sc
import time
import cv2
import os
import random
from robot import Robot
import utils


# User options (change me)
# --------------- Setup options ---------------
obj_mesh_dir = os.path.abspath('objects/blocks')
num_obj = 10
random_seed = 1234
workspace_limits = np.asarray([[-0.724, -0.276], [-0.224, 0.224], [-0.0001, 0.4]]) # Cols: min max, Rows: x y z (define workspace limits in robot coordinates)
# ---------------------------------------------

# Set random seed
np.random.seed(random_seed)

# Initialize robot simulation
robot = Robot(True, obj_mesh_dir, num_obj, workspace_limits,
              None, None, None, None,
              True, False, None)

test_case_file_name = raw_input("Enter the name of the file: ") # e.g. test-10-obj-00.txt (raw_input is Python 2; under Python 3 use input())

# Fetch object poses
obj_positions, obj_orientations = robot.get_obj_positions_and_orientations()

# Save object information to file
file = open(test_case_file_name, 'w')
for object_idx in range(robot.num_obj):
    # curr_mesh_file = os.path.join(robot.obj_mesh_dir, robot.mesh_list[robot.obj_mesh_ind[object_idx]]) # Use absolute paths
    curr_mesh_file = os.path.join(robot.mesh_list[robot.obj_mesh_ind[object_idx]])
    file.write('%s %.18e %.18e %.18e %.18e %.18e %.18e %.18e %.18e %.18e\n' % (curr_mesh_file,
               robot.obj_mesh_color[object_idx][0], robot.obj_mesh_color[object_idx][1], robot.obj_mesh_color[object_idx][2],
               obj_positions[object_idx][0], obj_positions[object_idx][1], obj_positions[object_idx][2],
               obj_orientations[object_idx][0], obj_orientations[object_idx][1], obj_orientations[object_idx][2]))
file.close()
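
Reviewer note: each written line therefore holds the mesh file name, RGB color (3 floats), position (3 floats), and orientation (3 floats), space-separated. A hypothetical reader for such a file (my sketch, not part of the PR):

with open('test-10-obj-00.txt') as f:
    for line in f:
        fields = line.split()
        mesh_file = fields[0]
        color = [float(v) for v in fields[1:4]]         # RGB
        position = [float(v) for v in fields[4:7]]      # x, y, z
        orientation = [float(v) for v in fields[7:10]]  # rotation about x, y, z
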
@@ -0,0 +1,39 @@
#!/usr/bin/env python

import numpy as np
import time
from robot import Robot


# User options (change me)
# --------------- Setup options ---------------
tcp_host_ip = '100.127.7.223' # IP and port to robot arm as TCP client (UR5)
tcp_port = 30002
workspace_limits = np.asarray([[0.3, 0.748], [-0.224, 0.224], [-0.255, -0.1]]) # Cols: min max, Rows: x y z (define workspace limits in robot coordinates)
# ---------------------------------------------

# Initialize robot and move to home pose
robot = Robot(False, None, None, workspace_limits,
              tcp_host_ip, tcp_port, None, None,
              False, None, None)

# Repeatedly grasp at middle of workspace
grasp_position = np.sum(workspace_limits, axis=1)/2
grasp_position[2] = -0.25

Review comment: Lines 22-25, where did they get these numbers from, and what are these numbers for, with respect to what?

# Override with a specific grasp target (these appear to convert a pixel index
# into robot coordinates; see the reviewer note below)
grasp_position[0] = 86 * 0.002 + workspace_limits[0][0]
grasp_position[1] = 120 * 0.002 + workspace_limits[1][0]
grasp_position[2] = workspace_limits[2][0]
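
Reviewer note (re: the question above): my reading, which is an assumption, is that 0.002 m is the heightmap resolution used elsewhere in the project, so these lines convert a heightmap pixel index (86, 120) into robot coordinates anchored at the workspace minimum:

# Hypothetical helper expressing the same conversion
def pixel_to_robot_xy(pix_x, pix_y, workspace_limits, heightmap_resolution=0.002):
    x = pix_x * heightmap_resolution + workspace_limits[0][0]
    y = pix_y * heightmap_resolution + workspace_limits[1][0]
    return x, y

# pixel_to_robot_xy(86, 120, workspace_limits) reproduces the hard-coded target above
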

while True:
    robot.grasp(grasp_position, 11*np.pi/8, workspace_limits)
    # robot.push(push_position, 0, workspace_limits)
    # robot.restart_real()
    time.sleep(1)

# Repeatedly move to workspace corners
# while True:
#     robot.move_to([workspace_limits[0][0], workspace_limits[1][0], workspace_limits[2][0]], [2.22,-2.22,0])
#     robot.move_to([workspace_limits[0][0], workspace_limits[1][1], workspace_limits[2][0]], [2.22,-2.22,0])
#     robot.move_to([workspace_limits[0][1], workspace_limits[1][1], workspace_limits[2][0]], [2.22,-2.22,0])
#     robot.move_to([workspace_limits[0][1], workspace_limits[1][0], workspace_limits[2][0]], [2.22,-2.22,0])
@@ -0,0 +1,65 @@
#!/usr/bin/env python

import os
import argparse
import numpy as np
import matplotlib.pyplot as plt


# Parse session directories
parser = argparse.ArgumentParser(description='Plot performance of a session over training time.')
parser.add_argument('--session_directory', dest='session_directory', action='store', type=str, help='path to session directory for which to measure performance')
parser.add_argument('--method', dest='method', action='store', type=str, help='set to \'reactive\' (supervised learning) or \'reinforcement\' (reinforcement learning, i.e. Q-learning)')
parser.add_argument('--num_obj_complete', dest='num_obj_complete', action='store', type=int, help='number of objects picked before considering task complete')
args = parser.parse_args()
session_directory = args.session_directory
method = args.method
num_obj_complete = args.num_obj_complete
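
Reviewer note: a sample invocation (the script name and session directory are assumptions):

python evaluate.py --session_directory logs/YYYY-MM-DD.HH.MM.SS --method reinforcement --num_obj_complete 10
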

# Parse data from session (action executed, reward values)
# NOTE: reward_value_log just stores some value which is indicative of successful grasping, which could be a class ID (reactive) or an actual reward value (from MDP, reinforcement)
transitions_directory = os.path.join(session_directory, 'transitions')
executed_action_log = np.loadtxt(os.path.join(transitions_directory, 'executed-action.log.txt'), delimiter=' ')
max_iteration = executed_action_log.shape[0]
executed_action_log = executed_action_log[0:max_iteration, :]
reward_value_log = np.loadtxt(os.path.join(transitions_directory, 'reward-value.log.txt'), delimiter=' ')
reward_value_log = reward_value_log[0:max_iteration]
clearance_log = np.loadtxt(os.path.join(transitions_directory, 'clearance.log.txt'), delimiter=' ')

Review comment: Did we copy over the clearance.log.txt file? What did they put in this file?

Reply: I think all the log files are generated during testing/training.

max_trials = len(clearance_log)
clearance_log = np.concatenate((np.asarray([0]), clearance_log), axis=0).astype(int)

# Count number of pushing/grasping actions before completion
num_actions_before_completion = clearance_log[1:(max_trials+1)] - clearance_log[0:(max_trials)]
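
Reviewer note: to make this bookkeeping concrete, a toy example with invented numbers. clearance_log records the iteration index at which each trial ended, so adjacent differences give the number of actions per trial:

toy_clearance_log = np.asarray([0, 12, 30])                        # 0 prepended as above
toy_num_actions = toy_clearance_log[1:] - toy_clearance_log[0:-1]  # -> array([12, 18])
# trial 1 spans iterations 0..11 (12 actions), trial 2 spans 12..29 (18 actions)
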

grasp_success_rate = np.zeros((max_trials))
grasp_num_success = np.zeros((max_trials))
grasp_to_push_ratio = np.zeros(max_trials)
for trial_idx in range(1, len(clearance_log)):

    # Get actions and reward values for current trial
    tmp_executed_action_log = executed_action_log[clearance_log[trial_idx-1]:clearance_log[trial_idx], 0]
    tmp_reward_value_log = reward_value_log[clearance_log[trial_idx-1]:clearance_log[trial_idx]]

    # Get indices of pushing and grasping actions for current trial
    tmp_grasp_attempt_ind = np.argwhere(tmp_executed_action_log == 1)
    tmp_push_attempt_ind = np.argwhere(tmp_executed_action_log == 0)

    grasp_to_push_ratio[trial_idx-1] = float(len(tmp_grasp_attempt_ind))/float(len(tmp_executed_action_log))

    # Count number of times grasp attempts were successful
    if method == 'reactive':

Review comment: This was mentioned in the paper: the different point values for reactive and reinforcement.

        tmp_num_grasp_success = np.sum(tmp_reward_value_log[tmp_grasp_attempt_ind] == 0) # Class ID for successful grasping is 0 (reactive policy)
    elif method == 'reinforcement':
        tmp_num_grasp_success = np.sum(tmp_reward_value_log[tmp_grasp_attempt_ind] >= 0.5) # Reward value for successful grasping is anything larger than 0.5 (reinforcement policy)

    grasp_num_success[trial_idx-1] = tmp_num_grasp_success
    grasp_success_rate[trial_idx-1] = float(tmp_num_grasp_success)/float(len(tmp_grasp_attempt_ind))

# Which trials reached task completion?
valid_clearance = grasp_num_success >= num_obj_complete

# Display results
print('Average %% clearance: %2.1f' % (float(np.sum(valid_clearance))/float(max_trials)*100))
print('Average %% grasp success per clearance: %2.1f' % (np.mean(grasp_success_rate[valid_clearance])*100))
print('Average %% action efficiency: %2.1f' % (100*np.mean(np.divide(float(num_obj_complete), num_actions_before_completion[valid_clearance]))))
print('Average grasp to push ratio: %2.1f' % (np.mean(grasp_to_push_ratio[valid_clearance])*100))

Review comment: The part of the paper that links to this code: "the location of a checkerboard pattern taped onto the gripper. The calibration optimizes for extrinsics as the robot moves the gripper over a grid of 3D locations (predefined with respect to robot coordinates) within the camera field of view"