Example experiments: 22-004 (Fairfield), 22-025 (SmartSolo). Both are single-deployment experiments (no instruments were moved to a different station). These cases were found during the PH5 to TileDB experiment analysis review. For 22-004, the Das_t information comes from the PH5 to TileDB analysis detail CSV generated by Matt Briggs, which is derived from the Das_t in the DMC experiment holdings. For 22-025, the Das_t information was looked up with kefedit in the experiment's Das_t at EPIC. To my knowledge, these are equally accurate methods for determining the extent of a given Das_t.
22-004 example:
Station 3011 has a pickup time of 23:39:00 on January 13 in Array_t. The Das_t for this station (1X3011) continues until approximately 07:00:00 on January 15, i.e. roughly 31 hours of data exist after the pickup time. The ph5_validate warning for this station is:
WARNING: Data exists after pickup time: 1352 seconds.
22-025 example:
Station 9074 was not turned off when intended. The Array_t pickup time is 17:00:00 on August 20. The Das_t for this station (9X9074) continues until approximately 16:00:00 on August 25, i.e. roughly five days of data exist after the pickup time. The ph5_validate warning for this station is:
WARNING: Data exists after pickup time: 1800 seconds.
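For reference, the actual data-after-pickup durations implied by the times above can be computed directly (a quick sketch; the year is assumed from the experiment codes and the Das_t end times are approximate, so the figures are rough):

```python
from datetime import datetime

# 22-004, station 3011: Array_t pickup vs. approximate Das_t end
pickup = datetime(2022, 1, 13, 23, 39, 0)
das_end = datetime(2022, 1, 15, 7, 0, 0)
print((das_end - pickup).total_seconds())  # ~112860 s (~31 hours) vs. 1352 s reported

# 22-025, station 9074: Array_t pickup vs. approximate Das_t end
pickup = datetime(2022, 8, 20, 17, 0, 0)
das_end = datetime(2022, 8, 25, 16, 0, 0)
print((das_end - pickup).total_seconds())  # ~428400 s (~5 days) vs. 1800 s reported
```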
This could be considered a bug, but the check is only a warning. It means that if pickup times are adjusted manually in Array_t, the user must be careful: these cases suggest ph5_validate will not reveal when a large chunk of data has accidentally been orphaned. If there is a conflict between addressing this and retaining ph5_validate's ability to handle cases where a Texan is rolled to different stations, I recommend the latter take priority.
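For comparison, here is a minimal sketch of the check I would expect (hypothetical names, not the actual ph5_validate implementation): take the latest window end time in the station's Das_t and report the full difference from the Array_t pickup time.

```python
def seconds_of_data_after_pickup(pickup_epoch, das_window_end_epochs):
    """Hypothetical check: how much Das_t data exists after the Array_t pickup time.

    pickup_epoch          -- Array_t pickup time for the station, in epoch seconds
    das_window_end_epochs -- end times (epoch seconds) of every Das_t data window
                             belonging to that station
    """
    last_sample_end = max(das_window_end_epochs)
    return max(0, last_sample_end - pickup_epoch)

# For the two stations above this would report ~112860 s and ~428400 s,
# rather than the 1352 s and 1800 s that ph5_validate currently prints.
```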
Does this affect pre-deploy as well? Unknown; we may need to find a case with data more than one recording window in length to determine this.