Combine batches of successive roles for the same nodes
We can speed up the application of the (n+1)th role when both the nth and
(n+1)th roles are applied on the same node. This speeds up the deployment of
ceilometer by at least 1m20s (measured: 90s) and of swift by ~20s.

e.g. in our 2-node deployment, ceilometer{server,central} are always
applied on the same node; since they have different priorities, they would
otherwise be applied one after the other.

This does not violate any ordering constraints, as the application of the
(n+1)th role is transparent to the nth role.
Sumit Jamgade committed Oct 16, 2017
1 parent 264b1f1 commit f038c32
31 changes: 31 additions & 0 deletions crowbar_framework/app/models/service_object.rb
@@ -919,6 +919,35 @@ def self.proposal_to_role(proposal, bc_name)
RoleObject.new role
end

# We can speed up the application of the (n+1)th role when both the nth
# and (n+1)th roles are applied on the same node.
#
# e.g. in our 2-node deployment, ceilometer{server,central} are always
# applied on the same node; since they have different priorities, they
# would otherwise be applied one after the other.
#
# In other words, this reduces the number of times chef-client is run,
# by merging batches together, rather than speeding up any single run.
#
# a batch is [roles, nodes]
def mergebatches(batches)
  merged_batches = []
  unless batches.empty?
    current_batch = batches[0]
    batches[1..-1].each do |next_batch|
      # Same node list as the previous batch: fold the role lists
      # together so chef-client runs once for the combined set.
      if next_batch[1] == current_batch[1] && !current_batch[0].nil?
        current_batch[0].concat(next_batch[0])
        next
      end
      merged_batches << current_batch
      current_batch = next_batch
    end
    merged_batches << current_batch
  end
  merged_batches
end
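As a standalone sketch, the merging logic can be exercised outside the Rails app. The role and node names below are hypothetical examples, not values from an actual deployment:

```ruby
# Standalone sketch of the batch-merging logic; a batch is [roles, nodes].
def mergebatches(batches)
  merged_batches = []
  unless batches.empty?
    current_batch = batches[0]
    batches[1..-1].each do |next_batch|
      # Same node list as the previous batch: fold the role lists together.
      if next_batch[1] == current_batch[1] && !current_batch[0].nil?
        current_batch[0].concat(next_batch[0])
        next
      end
      merged_batches << current_batch
      current_batch = next_batch
    end
    merged_batches << current_batch
  end
  merged_batches
end

# Hypothetical batches: the first two target the same node and get merged.
batches = [
  [["ceilometer-server"],  ["node1"]],
  [["ceilometer-central"], ["node1"]],
  [["swift-proxy"],        ["node2"]]
]
p mergebatches(batches)
# => [[["ceilometer-server", "ceilometer-central"], ["node1"]],
#     [["swift-proxy"], ["node2"]]]
```

Only adjacent batches are merged, so the relative order of batches (and thus role priorities) is preserved.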

#
# After validation, this is where the role is applied to the system. The old
# instance (if one exists) is compared with the new instance. Roles are
@@ -1171,6 +1200,8 @@ def apply_role(role, inst, in_queue, bootstrap = false)

batches << [roles, nodes_in_batch] unless nodes_in_batch.empty?
end

batches = mergebatches(batches)
Rails.logger.debug "batches: #{batches.inspect}"

# Cache attributes that are useful later on