# GitHub Repository: yangliu28/swarm_formation_sim
# Path: blob/master/trigridnet_probabilistic_consensus.py
# This simulation tests the consensus achievement algorithm (collective decision making
# algorithm) on randomly generated 2D triangle grid networks. The algorithm is extracted
# from previous loop reshape simulations for further testing, so that it can be generalized
# to 2D random network topologies. The distribution evolution method is similar to the
# loop reshape simulation, except that more than three nodes are involved in the weighted
# averaging process.

# input arguments:
# '-f': filename of the triangle grid network
# '-d': number of decisions each node can choose from
# '-r': number of times to repeat the simulation, each with a different initial random
#       distribution; default=1
# '--nobargraph': option to skip the bar graph visualization

# Pygame will be used to animate the dynamic group changes in the network;
# Matplotlib will be used to draw the unipolarity in a 3D bar graph; the whole decision
# distribution is not necessary.

# Algorithm to update the groups in the 2D triangle grid network:
# A pool of indices is kept for the nodes that have not been grouped; nodes labeled into a
# group are removed from the pool. As long as the pool is not empty, a while loop keeps
# searching for groups one by one. In the loop, it initializes a new group with the first
# node in the pool. It also initializes a FIFO-like variable (p_members) for the potential
# members of this group, and an index variable (p_index) for iterating through the
# potential members. If p_members[p_index] doesn't share the same decision with the first
# node, p_index increases by 1, and the next value in p_members is checked. If it does, a
# new member for this group has been found; it is removed from the pool and from p_members,
# and added to the current group. The new group member also introduces its neighbors as new
# potential members, provided they are not already in p_members or the current group, and
# are still in the node pool. The member search for this group ends when p_index reaches
# the end of p_members.
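# The grouping pass described above can be exercised as a standalone sketch. This is a
# hedged, illustrative function (not part of the original program); 'decisions' and
# 'neighbor_lists' are hypothetical stand-ins for the deci_domi and connection_lists
# variables used later in this file.
def _group_sketch(decisions, neighbor_lists):
    pool = list(range(len(decisions)))  # nodes not yet assigned to a group
    groups = []
    while pool:
        first = pool.pop(0)  # seed a new group with the first pooled node
        group = [first]
        # potential members: ungrouped neighbors of the seed node
        p_members = [n for n in neighbor_lists[first] if n in pool]
        p_index = 0
        while p_index < len(p_members):
            cand = p_members[p_index]
            if decisions[cand] == decisions[first]:
                p_members.pop(p_index)  # remove it; do not advance p_index
                pool.remove(cand)
                group.append(cand)
                for n in neighbor_lists[cand]:
                    # new potential members must not already be queued or grouped
                    if n not in p_members and n not in group and n in pool:
                        p_members.append(n)
            else:
                p_index += 1  # boundary node with a different decision; skip over it
        groups.append(group)
    return groups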
# 01/19/2018
# Testing an invented concept called "holistic dependency", for measuring how much the most
# depended-on node is depended on more than the others for maintaining the network
# connectivity in the holistic view. The final name of the concept may change.

# 01/29/2018
# Dr. Lee suggested adding colors to the different groups. A flashy idea, I thought at
# first, but after reconsidering, it has a much better visual effect.
# Link to a list of 20 distinct colors:
# https://sashat.me/2017/01/11/list-of-20-simple-distinct-colors/

# 02/06/2018
# Adding another experiment scenario: for 100 nodes, at the beginning, forcing 10 closely
# located nodes to have a specific decision (colored in blue), and running the simulation.

# 02/12/2018
# Adding another experiment scenario: for 100 nodes, forcing 10 closely located nodes to
# have a specific decision (colored in blue), and, halfway toward consensus, seeding
# 20 "stubborn" nodes that won't change their mind at all. Ideally the collective decision
# will reverse to the one insisted on by the new 20 nodes.
# (The program is really messy now.)

# 02/13/2018
# Replaced the unipolarity with discrete entropy for evaluating the decisiveness of the
# preference distributions. Adding the same colors as the consensus groups to the bar graph.
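# A hedged, standalone sketch of the discrete (Shannon) entropy mentioned above (not part
# of the original program); 'dist' is a hypothetical preference distribution summing to
# 1.0. A decisive distribution (one probability near 1.0) has entropy near 0 bits.
def _discrete_entropy_sketch(dist):
    import math  # local import, since the module imports appear further below
    ent = 0.0
    for p in dist:
        if p > 0:  # skip zero probabilities; the limit of p*log(p) at 0 is 0
            ent = ent - p * math.log(p, 2)
    return ent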
# 02/26/2018
# (an excursion of thought)
# Writing a paper and continually revising it totally ruins the fun of developing this
# simulation. Writing papers is boring to many people I know, but it is especially painful
# to me in a subconscious way. I have a very strong personal style which runs through every
# aspect of my life, and the style does not mix well with academia. I love simplicity,
# elegance, and truth. I rarely claim something as my belief, but these I am very proud to
# have, and I have no intention of compromising them for anything. An academic paper, as a
# form to present the state of the art and to communicate thought, should contain a
# succinct and honest description of the work. However, the two advisors I have worked with
# (whom I respect from the depths of my heart) both revised my papers toward being more
# acceptable to the publishers. Most of the changes, like phrasing and some of the
# restructuring, are very good, reflecting their years of academic training. They make the
# paper more explicit and easier to understand. Other changes, IMHO, simply try to elevate
# the paper, so that on a first read people would think it is a very good one because of
# those abstract words. This is what makes me suffer. These changes steer the paper toward
# the opposite of succinctness and honesty. For example, if it is only a method to solve a
# specific problem, saying it is a systematic approach to solve a series of problems, and
# using abstract words for the technical terms, does not make the work any better. It would
# only confuse the readers and make them spend more time digging out the idea. I could
# never fight my advisors on these changes; they would say it is just a style of writing,
# and it brings out the potential of the work. This is like saying: our presented
# technology can do this, can do that, and potentially it could do much more amazing
# things. I totally hate the last part! It just gives people a chance to brag about things
# they would never reach. As I have found, quite a few published papers that I believe I
# understand every bit of try to elevate their content to a level they don't deserve. If I
# were to rewrite them, I would cut all the crap and leave only the naked truth and the
# idea. Many researchers pick up this habit in their training with publishing papers, like
# an untold rule. As for my paper, I really wish my work were good enough to be entitled to
# those words, but it is not. Academia is a sublime place in many people's thinking, as in
# mine. I have very high standards for the graduation of a PhD. If I kept following my
# style, and stayed honest with myself, I knew long ago I would never graduate. I am loving
# it too much to hate it.
# (addition to the simulation)
# As required by the paper, when visualizing the bar graph for the discrete entropy, also
# show the summation of all the discrete entropies, to show how the entropy reduction works.

# 02/28/2018
# Removed the colors grey, beige, and coral from the distinct color set, to avoid blending
# with the white background. Made the node size larger.

import pygame
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from trigridnet_generator import *
from formation_functions import *
import math, sys, os, getopt, time
import numpy as np

net_size = 30  # default size of the triangle grid network
net_folder = 'trigrid-networks'  # folder for triangle grid network files
net_filename = '30-1'  # default filename of the network file, if no input
net_filepath = os.path.join(os.getcwd(), net_folder, net_filename)  # corresponding filepath

deci_num = 30  # default number of decisions each node can choose from

repeat_times = 1  # default number of times to run the program repeatedly

nobargraph = False  # option as to whether or not to skip the 3D bar graph

# read command line options
try:
    opts, args = getopt.getopt(sys.argv[1:], 'f:d:r:', ['nobargraph'])
    # The colon after 'f' means '-f' requires an argument; getopt raises an error if no
    # argument follows '-f'. But if '-f' is absent from the arguments, no error is raised,
    # so it is necessary to define the default network filename.
except getopt.GetoptError as err:
    print str(err)
    sys.exit()
for opt,arg in opts:
    if opt == '-f':
        # get the filename of the network
        net_filename = arg
        # check if this file exists
        net_filepath = os.path.join(os.getcwd(), net_folder, net_filename)
        if not os.path.isfile(net_filepath):
            print "{} does not exist".format(net_filename)
            sys.exit()
        # parse the network size
        net_size = int(net_filename.split('-')[0])  # the number before the first dash
    elif opt == '-d':
        # get the number of decisions
        deci_num = int(arg)
    elif opt == '-r':
        # get the number of repetitions
        repeat_times = int(arg)
    elif opt == '--nobargraph':
        nobargraph = True

# read the network from file
nodes = []  # integers only are necessary to describe the network's node positions
f = open(net_filepath, 'r')
new_line = f.readline()
while len(new_line) != 0:  # not at the end of the file yet
    pos_str = new_line[0:-1].split(' ')  # get rid of '\n' at the end
    pos = [int(pos_str[0]), int(pos_str[1])]  # convert to integers
    nodes.append(pos)
    new_line = f.readline()
f.close()

# generate the connection matrix, 0 for not connected, 1 for connected
connections = [[0 for j in range(net_size)] for i in range(net_size)]  # all zeros
for i in range(net_size):
    for j in range(i+1, net_size):
        # find if nodes[i] and nodes[j] are neighbors
        diff_x = nodes[i][0] - nodes[j][0]
        diff_y = nodes[i][1] - nodes[j][1]
        if abs(diff_x) + abs(diff_y) == 1 or diff_x * diff_y == -1:
            # condition 1: one of the axis value differences is 1, the other is 0
            # condition 2: one of the axis value differences is 1, the other is -1
            connections[i][j] = 1
            connections[j][i] = 1
# another list type variable for easy indexing from the nodes
# converted from the above connection matrix variable
connection_lists = []  # the lists of connected nodes for each node
for i in range(net_size):
    connection_list_temp = []
    for j in range(net_size):
        if connections[i][j]: connection_list_temp.append(j)
    connection_lists.append(connection_list_temp)
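# The trigrid neighbor test used above can be exercised in isolation. This is a hedged,
# standalone sketch (not part of the original program), assuming the same axial triangle
# grid coordinates as the network files; 'a' and 'b' are hypothetical node positions.
def _is_trigrid_neighbor_sketch(a, b):
    diff_x = a[0] - b[0]
    diff_y = a[1] - b[1]
    # condition 1: one axis differs by 1, the other by 0
    # condition 2: the axes differ by (1,-1) or (-1,1)
    return abs(diff_x) + abs(diff_y) == 1 or diff_x * diff_y == -1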

# until here, the network information has been read and interpreted completely

# calculate the "holistic dependency"
calculate_h_dependency = False  # option for calculating holistic dependency
# this computation takes significant time when net_size is above 50
# the algorithm below has been optimized to the best efficiency I can achieve
if calculate_h_dependency:
    dependencies = [0.0 for i in range(net_size)]  # individual dependency for each robot
    holistic_dependency_abs = 0.0  # absolute holistic dependency
    holistic_dependency_rel = 0.0  # relative holistic dependency
    # Search the shortest path for every pair of nodes i and j.
    for i in range(net_size-1):  # i is the starting node
        for j in range(i+1, net_size):  # j is the target node
            # (There might be multiple paths sharing the shortest length, which is totally
            # fine; all such paths should count.)
            # Some useful tricks have been used to make the search efficient.
            path_potential = [[i]]  # the paths in progress, potentially the shortest
            path_succeed = []  # the shortest paths
            # start searching
            search_end = False  # flag indicating if the shortest path is found
            nodes_reached = {i}  # dynamic pool for nodes in at least one of the paths;
                                 # key to speeding up this search algorithm
            while not search_end:
                # advance one step for all paths in path_potential
                path_potential2 = []  # for the paths extended one step from path_potential
                nodes_reached_add = set()  # to be added to nodes_reached
                node_front_pool = dict()  # cached next qualified nodes for each front node
                for path in path_potential:
                    node_front = path[-1]  # front node of this current path
                    nodes_next = []  # for nodes qualified as the next one on the path
                    if node_front in node_front_pool.keys():
                        nodes_next = node_front_pool[node_front]
                    else:
                        for node_n in connection_lists[node_front]:  # neighbor node
                            if node_n == j:  # the shortest path is found at this many steps
                                nodes_next.append(node_n)
                                search_end = True
                                continue
                            if node_n not in nodes_reached:
                                nodes_reached_add.add(node_n)
                                nodes_next.append(node_n)
                        node_front_pool[node_front] = nodes_next[:]  # cache the new solution
                    for node_next in nodes_next:
                        path_potential2.append(path + [node_next])
                for node in nodes_reached_add:
                    nodes_reached.add(node)
                # empty the old potential paths
                path_potential = []
                # assign the new potential paths, copied with a list comprehension
                path_potential = [[node for node in path] for path in path_potential2]
            # if here, the shortest paths have been found; locate them
            for path in path_potential:
                if path[-1] == j:
                    path_succeed.append(path)
            # distribute the dependency value evenly over all shortest paths
            d_value = 1.0 / len(path_succeed)
            for path in path_succeed:
                for node in path[1:-1]:  # exclude start and end nodes
                    dependencies[node] = dependencies[node] + d_value
    # print(dependencies)
    dependency_mean = sum(dependencies)/net_size
    node_max = dependencies.index(max(dependencies))
    holistic_dependency_abs = dependencies[node_max] - dependency_mean
    holistic_dependency_rel = dependencies[node_max] / dependency_mean
    print "absolute holistic dependency {}".format(holistic_dependency_abs)
    print "relative holistic dependency {}".format(holistic_dependency_rel)
# Also uncomment two lines somewhere below to highlight the node with maximum individual
# dependency, and halt the program after drawing the network.

# plot the network as dots and lines in a pygame window
pygame.init()  # initialize pygame
# find an appropriate window size from the current network
# convert node positions from triangle grid to Cartesian, for plotting
nodes_plt = np.array([trigrid_to_cartesian(pos) for pos in nodes])
(xmin, ymin) = np.amin(nodes_plt, axis=0)
(xmax, ymax) = np.amax(nodes_plt, axis=0)
world_size = (xmax-xmin + 5.0, ymax-ymin + 2.0)  # extra length for clearance
pixels_per_length = 50  # resolution for converting from world to display
# display origin is at the top left corner
screen_size = (int(round(world_size[0] * pixels_per_length)),
               int(round(world_size[1] * pixels_per_length)))
color_white = (255,255,255)
color_black = (0,0,0)
# a set of 20 distinct colors (black and white excluded)
# distinct_color_set = ((230,25,75), (60,180,75), (255,225,25), (0,130,200), (245,130,48),
#                       (145,30,180), (70,240,240), (240,50,230), (210,245,60), (250,190,190),
#                       (0,128,128), (230,190,255), (170,110,40), (255,250,200), (128,0,0),
#                       (170,255,195), (128,128,0), (255,215,180), (0,0,128), (128,128,128))
distinct_color_set = ((230,25,75), (60,180,75), (255,225,25), (0,130,200), (245,130,48),
                      (145,30,180), (70,240,240), (240,50,230), (210,245,60), (250,190,190),
                      (0,128,128), (230,190,255), (170,110,40), (128,0,0),
                      (170,255,195), (128,128,0), (0,0,128))
color_quantity = 17  # grey and two other colors have been removed
# convert the color set to floats, for use in matplotlib
distinct_color_set_float = tuple([tuple([color[0]/255.0, color[1]/255.0, color[2]/255.0])
                                  for color in distinct_color_set])
node_size = 10  # a node is modeled as a dot; number of pixels for the radius
group_line_width = 5  # width of the connecting lines inside the groups
norm_line_width = 2  # width of other connections
# set up the simulation window and surface object
icon = pygame.image.load("icon_geometry_art.jpg")
pygame.display.set_icon(icon)
screen = pygame.display.set_mode(screen_size)
pygame.display.set_caption("Probabilistic Consensus of 2D Triangle Grid Network")

# shift the node display positions to the middle of the window
centroid_temp = np.mean(nodes_plt, axis=0)
nodes_plt = nodes_plt - centroid_temp + (world_size[0]/2.0, world_size[1]/2.0)
nodes_disp = [world_to_display(nodes_plt[i], world_size, screen_size)
              for i in range(net_size)]

# draw the network for the first time
screen.fill(color_white)  # fill the background
# draw the connecting lines
for i in range(net_size):
    for j in range(i+1, net_size):
        if connections[i][j]:
            pygame.draw.line(screen, color_black, nodes_disp[i], nodes_disp[j], norm_line_width)
# draw the nodes as dots
for i in range(net_size):
    pygame.draw.circle(screen, color_black, nodes_disp[i], node_size, 0)
# highlight the node with maximum individual dependency
# pygame.draw.circle(screen, color_black, nodes_disp[node_max], node_size*2, 2)
pygame.display.update()

# # use the following to find the indices of the nodes on the graph
# for i in range(net_size):
#     pygame.draw.circle(screen, color_black, nodes_disp[i], node_size*2, 1)
#     pygame.display.update()
#     print "node {} is drawn".format(i)
#     raw_input("<Press enter to continue>")

# hold the program here to check the network
# raw_input("Press the <ENTER> key to continue")
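# A hedged, standalone sketch of the group-size-weighted averaging performed inside the
# simulation loop below (not part of the original program): each node replaces its
# preference distribution with the weighted average of its own and its neighbors'
# distributions, then renormalizes so the probabilities sum to 1.0. The argument names
# ('dists', 'neighbor_lists', 'weights') are hypothetical stand-ins for deci_dist,
# connection_lists, and group_sizes.
def _weighted_average_sketch(dists, neighbor_lists, weights):
    new_dists = []
    for i in range(len(dists)):
        # start with the host's own distribution, scaled by its weight
        acc = [p * weights[i] for p in dists[i]]
        for n in neighbor_lists[i]:
            # accumulate each neighbor's distribution, scaled by the neighbor's weight
            acc = [a + p * weights[n] for a, p in zip(acc, dists[n])]
        total = sum(acc)
        new_dists.append([a / total for a in acc])  # normalize to sum 1.0
    return new_dists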
############### the probabilistic consensus ###############

# for network 100-3
# the closely located 10 nodes to command at the beginning of the simulation
# command_nodes_10 = [0,1,2,5,7,10,11,12,13,35]  # in the middle of the network
# command_nodes_10 = [22,34,36,50,57,61,72,87,91,92]  # top right corner
# command_nodes_10 = [87,91,72,92,61,57,36,50,34,22]  # top right corner, sorted from boundary inward
# # the closely located 20 nodes to command during the simulation
# command_nodes_20 = [8,9,20,22,24,27,34,36,44,45,
#                     50,52,57,61,67,72,77,87,91,92]
# iter_cutin = 5  # the 20 nodes command at this time stamp

all_steps = [0 for i in range(repeat_times)]  # steps taken to converge, for all simulations
all_deci_orders = [0 for i in range(repeat_times)]  # the order of the final decisions
steps_seed = []  # number of steps for converged simulations with seed robots
for sim_index in range(repeat_times):  # repeat the simulation this many times

    print("\n{}th simulation".format(sim_index))

    # variable for the decision distributions of all individuals
    deci_dist = np.random.rand(net_size, deci_num)
    # # tweak the decision distribution for the command nodes
    # for i in command_nodes_10:
    #     deci_dist[i][0] = 1.0  # force the first probability to be the largest
    # normalize the random numbers such that the sum is 1.0
    sum_temp = np.sum(deci_dist, axis=1)
    for i in range(net_size):
        deci_dist[i][:] = deci_dist[i][:] / sum_temp[i]
    # calculate the average decision distribution
    mean_temp = np.mean(deci_dist, axis=0)
    avg_dist_sort = np.sort(mean_temp)[::-1]
    avg_dist_id_sort = np.argsort(mean_temp)[::-1]
    # the dominant decision of all nodes
    deci_domi = np.argmax(deci_dist, axis=1)
    print deci_domi
    # only an adjacent block of nodes sharing the same dominant decision belongs to the same group
    groups = []  # put the nodes into groups by their local consensus
    group_sizes = [0 for i in range(net_size)]  # the size of the group each node belongs to
    color_initialized = False  # whether the color assignment has been done for the first time
    deci_colors = [-1 for i in range(deci_num)]  # color index for each exhibited decision;
                                                 # -1 for not assigned
    color_assigns = [0 for i in range(color_quantity)]  # number of assignments for each color
    group_colors = []  # colors for the groups
    node_colors = [0 for i in range(net_size)]  # colors for the nodes
    # The difference of two distributions is the sum of the absolute values of the
    # differences of all the individual probabilities.
    # Overflow threshold for the distribution difference. A distribution difference larger
    # than this means the neighbors do not quite agree with each other, so no further
    # improvement of unipolarity will be performed. If the distribution difference is lower
    # than the threshold, a linear multiplier will be used to improve the unipolarity of the
    # resulting distribution.
    dist_diff_thres = 0.3
    # Variable for the ratio of the distribution difference to the distribution difference
    # threshold. The ratio is in the range [0,1]; it will be used for constructing the
    # corresponding linear multiplier. On one hand, the smaller the ratio, the smaller the
    # distribution difference, and the faster the unipolarity should increase. On the other
    # hand, the ratio is positively related to the small end of the linear multiplier: the
    # smaller the ratio gets, the steeper the linear multiplier will be, and the faster the
    # unipolarity increases.
    dist_diff_ratio = [0.0 for i in range(net_size)]
    # Exponent of a power function to map the distribution difference ratio to a larger
    # value, and therefore slow down the growth rate.
    dist_diff_power = 0.3
372
373
# start the matplotlib window first before the simulation cycle
374
fig = plt.figure()
375
# fig.canvas.set_window_title('Unipolarity of 2D Triangle Grid Network')
376
fig.canvas.set_window_title('Discrete Entropy of the Preference Distributions')
377
ax = fig.add_subplot(111, projection='3d')
378
x_pos = [pos[0] for pos in nodes_plt] # the positions of the bars
379
y_pos = [pos[1] for pos in nodes_plt]
380
z_pos = np.zeros(net_size)
381
dx = 0.5 * np.ones(net_size) # the sizes of the bars
382
dy = 0.5 * np.ones(net_size)
383
dz = np.zeros(net_size)
384
if not nobargraph:
385
fig.canvas.draw()
386
fig.show()
387
388
# the simulation cycle
389
sim_exit = False # simulation exit flag
390
sim_pause = False # simulation pause flag
391
iter_count = 0
392
time_now = pygame.time.get_ticks() # return milliseconds
393
time_last = time_now # reset as right now
394
time_period = 500 # simulation frequency control, will jump the delay if overflow
395
skip_speed_control = True # if skip speed control, run as fast as it can
396
while not sim_exit:
397
# exit the program by close window button, or Esc or Q on keyboard
398
for event in pygame.event.get():
399
if event.type == pygame.QUIT:
400
sim_exit = True # exit with the close window button
401
if event.type == pygame.KEYUP:
402
if event.key == pygame.K_SPACE:
403
sim_pause = not sim_pause # reverse the pause flag
404
if (event.key == pygame.K_ESCAPE) or (event.key == pygame.K_q):
405
sim_exit = True # exit with ESC key or Q key
406
407
# skip the rest if paused
408
if sim_pause: continue
409
410
# # the 20 nodes start to cut in here
411
# if iter_count == iter_cutin:
412
# for i in command_nodes_20:
413
# deci_dist[i][1] = 1.0 # force the second probability to be the largest
414
# # normalize again
415
# sum_temp = np.sum(deci_dist[i])
416
# deci_dist[i][:] = deci_dist[i][:] / sum_temp
417
418
# prepare information for the decision distribution evolution
419
# including 1.dominant decisions, 2.groups, and 3.group sizes
420
421
# 1.update the dominant decision for all nodes
422
deci_domi = np.argmax(deci_dist, axis=1)
423
424
# 2.update the groups
425
groups = [] # empty the group container
426
group_deci = [] # the exhibited decision of the groups
427
# a diminishing global pool for node indices, for nodes not yet assigned into groups
428
n_pool = range(net_size)
429
# start searching groups one by one from the global node pool
430
while len(n_pool) != 0:
431
# start a new group, with first node in the n_pool
432
first_member = n_pool[0] # first member of this group
433
group_temp = [first_member] # current temporary group
434
n_pool.pop(0) # pop out first node in the pool
435
# a list of potential members for current group
436
# this list may increase when new members of group are discovered
437
p_members = connection_lists[first_member][:]
438
# an index for iterating through p_members, in searching group members
439
p_index = 0 # if it climbs to the end, the searching ends
440
# index of dominant decision for current group
441
current_domi = deci_domi[first_member]
442
# dynamically iterating through p_members with p_index
443
while p_index < len(p_members): # index still in valid range
444
if deci_domi[p_members[p_index]] == current_domi:
445
# a new member has been found
446
new_member = p_members[p_index] # get index of the new member
447
p_members.remove(new_member) # remove it from p_members list
448
# but not increase p_index, because new value in p_members will flush in
449
n_pool.remove(new_member) # remove it from the global node pool
450
group_temp.append(new_member) # add it to current group
451
# check if new potential members are available, due to new node discovery
452
p_members_new = connection_lists[new_member] # new potential members
453
for member in p_members_new:
454
if member not in p_members: # should not already in p_members
455
if member not in group_temp: # should not in current group
456
if member in n_pool: # should be available in global pool
457
# if conditions satisfied, it is qualified as a potential member
458
p_members.append(member) # append at the end
459
else:
460
# a boundary node(share different decision) has been met
461
# leave it in p_members, will help to avoid checking back again on this node
462
p_index = p_index + 1 # shift right one position
463
# all connected members for this group have been located
464
groups.append(group_temp) # append the new group
465
group_deci.append(deci_domi[first_member]) # append new group's exhibited decision
466
# update the colors for the exhibited decisions
467
if not color_initialized:
468
color_initialized = True
469
select_set = range(color_quantity) # the initial selecting set
470
all_deci_set = set(group_deci) # put all exhibited decisions in a set
471
for deci in all_deci_set: # avoid checking duplicate decisions
472
if len(select_set) == 0:
473
select_set = range(color_quantity) # start a new set to select from
474
# # force color blue for first decision, color orange for second decision
475
# if deci == 0:
476
# chosen_color = 3 # color blue
477
# # elif deci == 1:
478
# # chosen_color = 4 # color orange
479
# else:
480
# chosen_color = np.random.choice(select_set)
481
chosen_color = np.random.choice(select_set)
482
select_set.remove(chosen_color)
483
deci_colors[deci] = chosen_color # assign the chosen color to decision
484
# increase the assignments of chosen color by 1
485
color_assigns[chosen_color] = color_assigns[chosen_color] + 1
486
else:
487
# remove the color for a decision, if it's no longer the decision of any group
488
all_deci_set = set(group_deci)
489
for i in range(deci_num):
490
if deci_colors[i] != -1: # there was a color assigned before
491
if i not in all_deci_set:
492
# decrease the assignments of chosen color by 1
493
color_assigns[deci_colors[i]] = color_assigns[deci_colors[i]] - 1
494
deci_colors[i] = -1 # remove the assigned color
495
# assign color for an exhibited decision if not assigned
496
select_set = [] # set of colors to select from, start from empty
497
for i in range(len(groups)):
498
if deci_colors[group_deci[i]] == -1:
499
if len(select_set) == 0:
500
# construct a new select_set
501
color_assigns_min = min(color_assigns)
502
color_assigns_temp = [j - color_assigns_min for j in color_assigns]
503
select_set = range(color_quantity)
504
for j in range(color_quantity):
505
if color_assigns_temp[j] != 0:
506
select_set.remove(j)
507
# if here, the select_set is good to go
508
# # force color orange for second decision
509
# if group_deci[i] == 1:
510
# chosen_color = 4 # color orange
511
# else:
512
# chosen_color = np.random.choice(select_set)
513
# if chosen_color in select_set:
514
# select_set.remove(chosen_color)
515
chosen_color = np.random.choice(select_set)
516
select_set.remove(chosen_color)
517
deci_colors[group_deci[i]] = chosen_color # assign the chosen color
518
# increase the assignments of chosen color by 1
519
color_assigns[chosen_color] = color_assigns[chosen_color] + 1
520
# update the colors for the groups
521
group_colors = []
522
for i in range(len(groups)):
523
group_colors.append(deci_colors[group_deci[i]])
524
525
# 3.update the group size for each node
526
for i in range(len(groups)):
527
size_temp = len(groups[i])
528
color_temp = group_colors[i]
529
for node in groups[i]:
530
group_sizes[node] = size_temp
531
node_colors[node] = color_temp # update the color for each node
532
533
# the decision distribution evolution
534
converged_all = True # flag for convergence of entire network
535
deci_dist_t = np.copy(deci_dist) # deep copy of the 'deci_dist'
536
for i in range(net_size):
537
# # skip updating the 20 commanding nodes, stubborn in their decisions
538
# if i in command_nodes_20:
539
# if iter_count >= iter_cutin:
540
# continue
541
host_domi = deci_domi[i]
542
converged = True
543
for neighbor in connection_lists[i]:
544
if host_domi != deci_domi[neighbor]:
545
converged = False
546
break
547
# action based on convergence of dominant decision
548
if converged: # all neighbors have converged with host
549
# step 1: take equally weighted average on all distributions
550
# including host and all neighbors
551
deci_dist[i] = deci_dist_t[i]*1.0 # start with host itself
552
for neighbor in connection_lists[i]:
553
# accumulate neighbor's distribution
554
deci_dist[i] = deci_dist[i] + deci_dist_t[neighbor]
555
# normalize the distribution such that sum is 1.0
556
sum_temp = np.sum(deci_dist[i])
557
deci_dist[i] = deci_dist[i] / sum_temp
558
# step 2: increase the unipolarity by applying the linear multiplier
559
# (this step is only for when all neighbors are converged)
560
# First find the largest difference between two distributions in this block
561
# of nodes, including the host and all its neighbors.
562
comb_pool = [i] + connection_lists[i] # add host to a pool with its neighbors
563
# will be used to form combinations from this pool
564
comb_pool_len = len(comb_pool)
565
dist_diff = []
566
for j in range(comb_pool_len):
567
for k in range(j+1, comb_pool_len):
568
j_node = comb_pool[j]
569
k_node = comb_pool[k]
570
dist_diff.append(np.sum(abs(deci_dist[j_node] - deci_dist[k_node])))
571
dist_diff_max = max(dist_diff) # maximum distribution difference of all
572
if dist_diff_max < dist_diff_thres:
573
# distribution difference is small enough,
574
# that linear multiplier should be applied to increase unipolarity
575
dist_diff_ratio = dist_diff_max/dist_diff_thres
576
# Linear multiplier is generated from value of smaller and larger ends, the
577
# smaller end is positively related with dist_diff_ratio. The smaller the
578
# maximum distribution difference, the smaller the dist_diff_ratio, and the
579
# steeper the linear multiplier.
580
# '1.0/deci_num' is the average value of the linear multiplier
581
small_end = 1.0/deci_num * np.power(dist_diff_ratio, dist_diff_power)
582
large_end = 2.0/deci_num - small_end
583
# sort the magnitude of the current distribution
584
dist_temp = np.copy(deci_dist[i]) # temporary distribution
585
sort_index = range(deci_num)
586
for j in range(deci_num-1): # bubble sort, ascending order
587
for k in range(deci_num-1-j):
588
if dist_temp[k] > dist_temp[k+1]:
589
# exchange values in 'dist_temp'
590
temp = dist_temp[k]
591
dist_temp[k] = dist_temp[k+1]
592
dist_temp[k+1] = temp
593
# exchange values in 'sort_index'
594
temp = sort_index[k]
595
sort_index[k] = sort_index[k+1]
596
sort_index[k+1] = temp
597
# applying the linear multiplier
598
for j in range(deci_num):
599
multiplier = small_end + float(j)/(deci_num-1) * (large_end-small_end)
600
deci_dist[i][sort_index[j]] = deci_dist[i][sort_index[j]] * multiplier
601
# normalize the distribution such that sum is 1.0
602
sum_temp = np.sum(deci_dist[i])
603
deci_dist[i] = deci_dist[i]/sum_temp
604
else:
605
# not applying linear multiplier when distribution difference is large
606
pass
607
else: # at least one neighbor holds a different opinion from the host
converged_all = False # the network is not converged
# use unequal weights in the averaging process based on group sizes
deci_dist[i] = deci_dist_t[i]*group_sizes[i] # start with the host itself
for neighbor in connection_lists[i]:
# accumulate the neighbor's distribution
deci_dist[i] = deci_dist[i] + deci_dist_t[neighbor]*group_sizes[neighbor]
# normalize the distribution
sum_temp = np.sum(deci_dist[i])
deci_dist[i] = deci_dist[i] / sum_temp
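In isolation, the group-size-weighted averaging above amounts to the following; the host and neighbor distributions and group sizes are made-up values for illustration:

```python
import numpy as np

# made-up host/neighbor distributions and group sizes, for illustration only
host_dist = np.array([0.6, 0.2, 0.2])
host_group_size = 3
neighbors = [(np.array([0.2, 0.5, 0.3]), 1),
             (np.array([0.3, 0.3, 0.4]), 2)]

# weight each distribution by its node's group size, then sum
new_dist = host_dist * host_group_size
for n_dist, n_size in neighbors:
    new_dist = new_dist + n_dist * n_size
new_dist = new_dist / np.sum(new_dist)  # normalize so the sum is 1.0
```

Nodes in larger groups pull the average harder, so established consensus clusters dominate their undecided neighbors.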
617

# graphics animation, both pygame window and matplotlib window
# 1.pygame window for dynamics of the network's groups
screen.fill(color_white)
# draw the regular connecting lines
for i in range(net_size):
for j in range(i+1, net_size):
if connections[i][j]:
pygame.draw.line(screen, color_black,
nodes_disp[i], nodes_disp[j], norm_line_width)
# draw the connecting lines marking the groups
for i in range(len(groups)):
group_len = len(groups[i])
for j in range(group_len):
for k in range(j+1, group_len):
j_node = groups[i][j]
k_node = groups[i][k]
# check if two nodes in the same group are connected
if connections[j_node][k_node]:
# wider lines for group connections
pygame.draw.line(screen, distinct_color_set[group_colors[i]],
nodes_disp[j_node], nodes_disp[k_node], group_line_width)
# draw the nodes as dots
for i in range(net_size):
pygame.draw.circle(screen, distinct_color_set[node_colors[i]],
nodes_disp[i], node_size, 0)
pygame.display.update()
644
# # 2.matplotlib window for 3D bar graph of unipolarity of decision distribution
# if not nobargraph:
# dz = [deci_dist[i][deci_domi[i]] for i in range(net_size)]
# ax.bar3d(x_pos, y_pos, z_pos, dx, dy, dz, color='b')
# fig.canvas.draw()
# fig.show()
# 2.matplotlib window for 2D bar graph of discrete entropy
if not nobargraph:
# calculate the discrete entropy of all distributions
ent_sum = 0
for i in range(net_size):
ent = 0
for j in range(deci_num):
if deci_dist[i][j] == 0: continue
ent = ent - deci_dist[i][j] * math.log(deci_dist[i][j],2)
dz[i] = ent
ent_sum = ent_sum + ent
print("summation of entropy of all distributions: {}".format(ent_sum))
# draw the bar graph
# the old bars somehow overlap the current ones; as a temporary solution,
# clear the figure first and rebuild the bar graph
fig.clear()
ax = fig.add_subplot(111, projection='3d')
bar_colors = [distinct_color_set_float[node_colors[i]] for i in range(net_size)]
barlist = ax.bar3d(x_pos, y_pos, z_pos, dx, dy, dz, bar_colors)
fig.canvas.draw()
fig.show()
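The per-node entropy computation above, factored into a small helper for clarity (the function name is ours, not from this script):

```python
import math

def discrete_entropy(dist):
    # Shannon entropy in bits of a discrete distribution;
    # zero-probability entries contribute nothing
    ent = 0.0
    for p in dist:
        if p == 0:
            continue
        ent = ent - p * math.log(p, 2)
    return ent
```

A uniform distribution over `deci_num` decisions maximizes the entropy at `log2(deci_num)` bits, while a fully converged one-hot distribution gives 0, so the summed entropy shrinking toward zero tracks network-wide convergence.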
671

# simulation speed control
if not skip_speed_control:
# simulation updating frequency control
time_now = pygame.time.get_ticks()
time_past = time_now - time_last # time elapsed since last time_last
# delay a bit more if the elapsed time has not reached the desired period;
# skip the delay if time is overdue
if time_now - time_last < time_period:
pygame.time.delay(time_period-time_past)
time_last = pygame.time.get_ticks() # reset time_last
682

# iteration count
print("iteration {}".format(iter_count))
iter_count = iter_count + 1
# hold the program to check the network
# raw_input("<Press Enter to continue>")
688
689
# exit as soon as the network is converged
690
if converged_all:
691
print("steps to converge: {}".format(iter_count-1))
692
print("converged to the decision: {} ({} in order)".format(deci_domi[0],
693
list(avg_dist_id_sort).index(deci_domi[0]) + 1))
694
print("the order of average initial decision:")
695
print(avg_dist_id_sort)
696
break
697

# record result of this simulation
all_steps[sim_index] = iter_count - 1
all_deci_orders[sim_index] = list(avg_dist_id_sort).index(deci_domi[0]) + 1
if deci_domi[0] == 0:
steps_seed.append(iter_count - 1)
703

# report statistic results if the simulation runs more than once
if repeat_times > 1:
print("\nstatistics\nsteps to converge: {}".format(all_steps))
print("final decision in order: {}".format(all_deci_orders))
print("average steps: {}".format(np.mean(np.array(all_steps))))
print("maximum steps: {}".format(max(all_steps)))
print("minimum steps: {}".format(min(all_steps)))
print("std dev of steps: {}".format(np.std(np.array(all_steps))))
# # statistics for simulations with seed robots
# print("{} out of {} trials follow command from seed robots".format(
# len(steps_seed), repeat_times))
# if len(steps_seed) != 0:
# print(steps_seed)
# print("\ton average of {} steps".format(np.mean(steps_seed)))
718