For this section we wish to highlight how tskit allows efficient processing of large single trees.
So far I have converted the trees from http://hgdownload.soe.ucsc.edu/goldenPath/wuhCor1/UShER_SARS-CoV-2/ to tskit (thanks @jeromekelleher for the starter code here). This gives a tree with 799,318 nodes weighing in at 120 MB (half of which is metadata). Running map_mutations for each of the 27,754 sites takes an average of 19 ms per site.
Next steps:
Performance as a function of the number of nodes/samples in the tree?
Measure performance of subsetting operations based on metadata.
Possible comparison with UShER?
I'm imagining this section as a narrative showing that we can do real-world things easily and efficiently with tskit, using the Python API. So, we say we can load the trees into memory (x ms), then identify identical samples. I guess a reasonable thing to aim for would be to duplicate some matUtils/UShER operations using the Python API, and report the relative timings of these.