Path: blob/master/trigridnet_probabilistic_consensus.py
# This simulation tests the consensus achievement algorithm (collective decision making
# algorithm) on randomly generated 2D triangle grid networks. The algorithm is extracted
# from previous loop reshape simulations for further testing, so that it can be generalized
# to 2D random network topologies. The distribution evolution method is similar to the
# loop reshape simulation, except that more than three nodes are involved in the weighted
# averaging process.

# input arguments:
# '-f': filename of the triangle grid network
# '-d': number of decisions each node can choose from
# '-r': repeat times of simulation, with different initial random distribution; default=0
# '--nobargraph': option to skip the bar graph visualization

# Pygame will be used to animate the dynamic group changes in the network;
# Matplotlib will be used to draw the unipolarity in a 3D bar graph; the whole decision
# distribution is not necessary.

# Algorithm to update the groups in the 2D triangle grid network:
# Use a pool of indices for nodes that have not yet been grouped; those labeled into a
# group are removed from the pool. As long as the pool is not empty, a while loop
# continues searching for groups one by one. In the loop, it initializes a new group
# with the first node in the pool. It also initializes a FIFO-like variable (p_members) for
# potential members of this group, and an index variable (p_index) for iterating through
# the potential members. If p_members[p_index] doesn't share the same decision with the
# first node, p_index increases by 1 to check the next value in p_members. If it does, then
# a new member for this group has been found; it is removed from the pool and p_members,
# and added to the current group. The new group member also introduces its neighbors as new
# potential members, but they should not already be in p_members or the current group, and
# should still be in the node pool. The member search for this group ends when p_index
# reaches the end of p_members.

# 01/19/2018
# Testing an invented concept called "holistic dependency", for measuring how much the most
# depended node is being depended on more than others for maintaining the network
# connectivity in the holistic view. The final name of the concept may change.

# 01/29/2018
# Dr. Lee suggested adding colors to different groups. A flashy idea, I thought at first,
# but after reconsidering, it has a much better visual effect.
# Link to a list of 20 distinct colors:
# https://sashat.me/2017/01/11/list-of-20-simple-distinct-colors/

# 02/06/2018
# Adding another experiment scenario: for 100 nodes, at the beginning, forcing 10 closely
# located nodes to have a specific decision (colored in blue), and running the simulation.

# 02/12/2018
# Adding another experiment scenario: for 100 nodes, forcing 10 closely located nodes to
# have a specific decision (colored in blue), and halfway toward consensus, seeding
# 20 "stubborn" nodes that won't change their mind at all. Ideally the collective decision
# will reverse to the one insisted on by the new 20 nodes.
# (The program is really messy now.)

# 02/13/2018
# Replaced the unipolarity with discrete entropy for evaluating the decisiveness of the
# preference distributions. Added the same colors of the consensus groups to the bar graph.

# 02/26/2018
# (an excursion of thought)
# Writing a paper and continually revising it totally ruins the fun of developing this
# simulation. Writing papers is boring to many people I know, but it is especially painful
# to me in a subconscious way. I have a very strong personal style which runs through every
# aspect of my life, and the style does not mix well with academia. I love simplicity,
# elegance, and truth. I rarely claim something as my belief, but these I am very proud to
# have, and have no intention of compromising them for anything. An academic paper, as a
# form to present the state of the art and to communicate thought, should contain a
# succinct and honest description of the work. However, the two advisors I have worked with
# (whom I respect from the bottom of my heart) both revised my papers toward being more
# acceptable to the publishers. Most of the changes, like phrasing and some of the
# restructuring, are very good, reflecting their years of academic training. They make the
# paper more explicit and easier to understand. Other changes, IMHO, simply try to elevate
# the paper, so that on first read people would think it is a very good one because of
# those abstract words. This is what makes me suffer. These changes steer the paper toward
# the opposite of succinctness and honesty. For example, if it is only a method to solve a
# specific problem, saying it is a systematic approach to solve a series of problems, and
# using abstract words for the technical terms, does not make the work any better. It only
# confuses the readers and makes them spend more time digging out the idea. I could never
# fight my advisors on these changes; they would say it is just a style of writing, and it
# brings out the potential of the work. This is like saying: our presented technology can
# do this, can do that, and potentially it could do much more amazing things. I totally
# hate the last part! It just gives people a chance to brag about things they will never
# reach. As I have found, quite a few published papers that I believe I understand every
# bit of are trying to elevate their content to a level they don't deserve. If I were to
# rewrite them, I would cut all the crap and leave only the naked truth and idea. Many
# researchers picked up this habit in their training with publishing papers, like an untold
# rule. As for my paper, I really wish my work were good enough to be entitled to those
# words, but it is not. Academia is a sublime place in many people's thinking, as in mine.
# I have very high standards for the graduation of a PhD. If I kept following my style and
# being honest with myself, I knew long ago I would never graduate. I am loving it too much
# to hate it.
# (addition to the simulation)
# Requested by the paper: when visualizing the bar graph for the discrete entropy, add the
# summation of all discrete entropies to show how the entropy reduction works.

# 02/28/2018
# Removed colors grey, beige, and coral from the distinct color set, to avoid blending with
# the white background. Made the node size larger.


import pygame
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from trigridnet_generator import *
from formation_functions import *
import math, sys, os, getopt, time
import numpy as np

net_size = 30  # default size of the triangle grid network
net_folder = 'trigrid-networks'  # folder for triangle grid network files
net_filename = '30-1'  # default filename of the network file, if no input
net_filepath = os.path.join(os.getcwd(), net_folder, net_filename)  # corresponding filepath

deci_num = 30  # default number of decisions each node can choose from

repeat_times = 1  # default times of running the program repeatedly

nobargraph = False  # option as to whether or not to skip the 3D bar graph

# read command line options
try:
    opts, args = getopt.getopt(sys.argv[1:], 'f:d:r:', ['nobargraph'])
    # The colon after 'f' means '-f' requires an argument; it will raise an error if no
    # argument follows '-f'. But if '-f' is not in the arguments at all, this won't raise
    # an error. So it's necessary to define the default network filename.
except getopt.GetoptError as err:
    print str(err)
    sys.exit()
for opt,arg in opts:
    if opt == '-f':
        # get the filename of the network
        net_filename = arg
        # check if this file exists
        net_filepath = os.path.join(os.getcwd(), net_folder, net_filename)
        if not os.path.isfile(net_filepath):
            print "{} does not exist".format(net_filename)
            sys.exit()
        # parse the network size
        net_size = int(net_filename.split('-')[0])  # the number before the first dash
    elif opt == '-d':
        # get the number of decisions
        deci_num = int(arg)
    elif opt == '-r':
        # get the number of repetitions
        repeat_times = int(arg)
    elif opt == '--nobargraph':
        nobargraph = True

# read the network from file
nodes = []  # integers only are necessary to describe the network's node positions
f = open(net_filepath, 'r')
new_line = f.readline()
while len(new_line) != 0:  # not the end of the file yet
    pos_str = new_line[0:-1].split(' ')  # get rid of '\n' at the end
    pos = [int(pos_str[0]), int(pos_str[1])]  # convert to integers
    nodes.append(pos)
    new_line = f.readline()
f.close()

# generate the connection matrix, 0 for not connected, 1 for connected
connections = [[0 for j in range(net_size)] for i in range(net_size)]  # all zeros
for i in range(net_size):
    for j in range(i+1, net_size):
        # find if nodes[i] and nodes[j] are neighbors
        diff_x = nodes[i][0] - nodes[j][0]
        diff_y = nodes[i][1] - nodes[j][1]
        if abs(diff_x) + abs(diff_y) == 1 or diff_x * diff_y == -1:
            # condition 1: one of the axis value differences is 1, the other is 0
            # condition 2: one of the axis value differences is 1, the other is -1
            connections[i][j] = 1
            connections[j][i] = 1
# another list type variable for easy indexing from the nodes
# converted from the above connection matrix variable
connection_lists = []  # the lists of connected nodes for each node
for i in range(net_size):
    connection_list_temp = []
    for j in range(net_size):
        if connections[i][j]: connection_list_temp.append(j)
    connection_lists.append(connection_list_temp)

# until here, the network information has been read and interpreted completely
# calculate the "holistic dependency"
calculate_h_dependency = False  # option for calculating holistic dependency
# this computation takes significant time when net_size is above 50
# the algorithm below has been optimized to the best efficiency I can achieve
if calculate_h_dependency:
    dependencies = [0.0 for i in range(net_size)]  # individual dependency for each robot
    holistic_dependency_abs = 0.0  # absolute holistic dependency
    holistic_dependency_rel = 0.0  # relative holistic dependency
    # Search the shortest path for every pair of nodes i and j.
    for i in range(net_size-1):  # i is the starting node
        for j in range(i+1, net_size):  # j is the target node
            # (There might be multiple paths sharing the shortest length, which is totally
            # fine; all such paths should count.)
            # Some useful tricks have been used to make the search efficient.
            path_potential = [[i]]  # paths in progress, potentially the shortest
            path_succeed = []  # the shortest paths
            # start searching
            search_end = False  # flag indicating if the shortest path is found
            nodes_reached = {i}  # dynamic pool for nodes in at least one of the paths
                # key to speeding up this search algorithm
            while not search_end:
                # advance one step for all paths in path_potential
                path_potential2 = []  # the paths one step ahead of path_potential
                nodes_reached_add = set()  # to be added to nodes_reached
                node_front_pool = dict()  # solutions for next qualified nodes of a front node
                for path in path_potential:
                    node_front = path[-1]  # front node in this current path
                    nodes_next = []  # for nodes qualified as next one on the path
                    if node_front in node_front_pool.keys():
                        nodes_next = node_front_pool[node_front]
                    else:
                        for node_n in connection_lists[node_front]:  # neighbor node
                            if node_n == j:  # the shortest path found, only these many steps needed
                                nodes_next.append(node_n)
                                search_end = True
                                continue
                            if node_n not in nodes_reached:
                                nodes_reached_add.add(node_n)
                                nodes_next.append(node_n)
                        node_front_pool[node_front] = nodes_next[:]  # add the new solution
                    for node_next in nodes_next:
                        path_potential2.append(path + [node_next])
                for node in nodes_reached_add:
                    nodes_reached.add(node)
                # empty the old potential paths
                path_potential = []
                # assign the new potential paths, copied with a list comprehension
                path_potential = [[node for node in path] for path in path_potential2]
            # if here, the shortest paths have been found; locate them
            for path in path_potential:
                if path[-1] == j:
                    path_succeed.append(path)
            # distribute the dependency value evenly over the shortest paths
            d_value = 1.0 / len(path_succeed)
            for path in path_succeed:
                for node in path[1:-1]:  # exclude start and end nodes
                    dependencies[node] = dependencies[node] + d_value
    # print(dependencies)
    dependency_mean = sum(dependencies)/net_size
    node_max = dependencies.index(max(dependencies))
    holistic_dependency_abs = dependencies[node_max] - dependency_mean
    holistic_dependency_rel = dependencies[node_max] / dependency_mean
    print "absolute holistic dependency {}".format(holistic_dependency_abs)
    print "relative holistic dependency {}".format(holistic_dependency_rel)
# Also uncomment two lines somewhere below to highlight the node with maximum individual
# dependency, and halt the program after drawing the network.

# plot the network as dots and lines in a pygame window
pygame.init()  # initialize pygame
# find an appropriate window size for the current network
# convert node positions from triangle grid to Cartesian, for plotting
nodes_plt = np.array([trigrid_to_cartesian(pos) for pos in nodes])
(xmin, ymin) = np.amin(nodes_plt, axis=0)
(xmax, ymax) = np.amax(nodes_plt, axis=0)
world_size = (xmax-xmin + 5.0, ymax-ymin + 2.0)  # extra length for clearance
pixels_per_length = 50  # resolution for converting from world to display
# display origin is at the top left corner
screen_size = (int(round(world_size[0] * pixels_per_length)),
               int(round(world_size[1] * pixels_per_length)))
color_white = (255,255,255)
color_black = (0,0,0)
# a set of 20 distinct colors (black and white excluded)
# distinct_color_set = ((230,25,75), (60,180,75), (255,225,25), (0,130,200), (245,130,48),
#                       (145,30,180), (70,240,240), (240,50,230), (210,245,60), (250,190,190),
#                       (0,128,128), (230,190,255), (170,110,40), (255,250,200), (128,0,0),
#                       (170,255,195), (128,128,0), (255,215,180), (0,0,128), (128,128,128))
distinct_color_set = ((230,25,75), (60,180,75), (255,225,25), (0,130,200), (245,130,48),
                      (145,30,180), (70,240,240), (240,50,230), (210,245,60), (250,190,190),
                      (0,128,128), (230,190,255), (170,110,40), (128,0,0),
                      (170,255,195), (128,128,0), (0,0,128))
color_quantity = 17  # grey is removed, along with two others
# convert the color set to floats, for use in matplotlib
distinct_color_set_float = tuple([tuple([color[0]/255.0, color[1]/255.0, color[2]/255.0])
                                  for color in distinct_color_set])
node_size = 10  # node modeled as a dot, number of pixels for the radius
group_line_width = 5  # width of the connecting lines inside the groups
norm_line_width = 2  # width of other connections
# set up the simulation window and surface object
icon = pygame.image.load("icon_geometry_art.jpg")
pygame.display.set_icon(icon)
screen = pygame.display.set_mode(screen_size)
pygame.display.set_caption("Probabilistic Consensus of 2D Triangle Grid Network")

# shift the node display positions to the middle of the window
centroid_temp = np.mean(nodes_plt, axis=0)
nodes_plt = nodes_plt - centroid_temp + (world_size[0]/2.0, world_size[1]/2.0)
nodes_disp = [world_to_display(nodes_plt[i], world_size, screen_size)
              for i in range(net_size)]

# draw the network for the first time
screen.fill(color_white)  # fill the background
# draw the connecting lines
for i in range(net_size):
    for j in range(i+1, net_size):
        if connections[i][j]:
            pygame.draw.line(screen, color_black, nodes_disp[i], nodes_disp[j], norm_line_width)
# draw the nodes as dots
for i in range(net_size):
    pygame.draw.circle(screen, color_black, nodes_disp[i], node_size, 0)
# highlight the node with maximum individual dependency
# pygame.draw.circle(screen, color_black, nodes_disp[node_max], node_size*2, 2)
pygame.display.update()

# # use the following to find the indices of the nodes on the graph
# for i in range(net_size):
#     pygame.draw.circle(screen, color_black, nodes_disp[i], node_size*2, 1)
#     pygame.display.update()
#     print "node {} is drawn".format(i)
#     raw_input("<Press enter to continue>")

# hold the program here to check the network
# raw_input("Press the <ENTER> key to continue")

############### the probabilistic consensus ###############

# for network 100-3
# the closely located 10 nodes to command at the beginning of the simulation
# command_nodes_10 = [0,1,2,5,7,10,11,12,13,35]  # in the middle of the network
# command_nodes_10 = [22,34,36,50,57,61,72,87,91,92]  # top right corner
# command_nodes_10 = [87,91,72,92,61,57,36,50,34,22]  # top right corner, sorted from boundary inward
# # the closely located 20 nodes to command during the simulation
# command_nodes_20 = [8,9,20,22,24,27,34,36,44,45,
#                     50,52,57,61,67,72,77,87,91,92]
# iter_cutin = 5  # the 20 nodes cut in at this time stamp

all_steps = [0 for i in range(repeat_times)]  # steps taken to converge for all simulations
all_deci_orders = [0 for i in range(repeat_times)]  # the order of the final decisions
steps_seed = []  # number of steps for converged simulations with seed robots
for sim_index in range(repeat_times):  # repeat the simulation this many times

    print("\n{}th simulation".format(sim_index))

    # variable for the decision distribution of all individuals
    deci_dist = np.random.rand(net_size, deci_num)
    # # tweak the decision distribution for the command nodes
    # for i in command_nodes_10:
    #     deci_dist[i][0] = 1.0  # force the first probability to be the largest
    # normalize the random numbers such that the sum is 1.0
    sum_temp = np.sum(deci_dist, axis=1)
    for i in range(net_size):
        deci_dist[i][:] = deci_dist[i][:] / sum_temp[i]
    # calculate the average decision distribution
    mean_temp = np.mean(deci_dist, axis=0)
    avg_dist_sort = np.sort(mean_temp)[::-1]
    avg_dist_id_sort = np.argsort(mean_temp)[::-1]
    # the dominant decision of all nodes
    deci_domi = np.argmax(deci_dist, axis=1)
    print deci_domi
    # only adjacent blocks of nodes sharing the same dominant decision belong to the same group
    groups = []  # put nodes in groups by their local consensus
    group_sizes = [0 for i in range(net_size)]  # the size of the group each node belongs to
    color_initialized = False  # whether the color assignment has been done for the first time
    deci_colors = [-1 for i in range(deci_num)]  # color index for each exhibited decision
        # -1 for not assigned
    color_assigns = [0 for i in range(color_quantity)]  # number of assignments for each color
    group_colors = []  # color for the groups
    node_colors = [0 for i in range(net_size)]  # color for the nodes
    # The difference of two distributions is the sum of the absolute values of the
    # differences of all individual probabilities.
    # Overflow threshold for the distribution difference. A distribution difference larger
    # than this means the neighbors do not quite agree with each other, so no further
    # improvement of unipolarity will be performed. If the distribution difference is lower
    # than the threshold, a linear multiplier will be used to improve the unipolarity of
    # the resulting distribution.
    dist_diff_thres = 0.3
    # Variable for the ratio of the distribution difference to the distribution difference
    # threshold. The ratio is in the range [0,1]; it will be used for constructing the
    # corresponding linear multiplier. On one side, the smaller the ratio, the smaller the
    # distribution difference, and the faster the unipolarity should increase. On the other
    # side, the ratio is positively related to the small end of the linear multiplier. The
    # smaller the ratio gets, the steeper the linear multiplier will be, and the faster the
    # unipolarity increases.
    dist_diff_ratio = [0.0 for i in range(net_size)]
    # Exponent of a power function to map the distribution difference ratio to a larger
    # value, and therefore slow down the growth rate.
    dist_diff_power = 0.3

    # start the matplotlib window first, before the simulation cycle
    fig = plt.figure()
    # fig.canvas.set_window_title('Unipolarity of 2D Triangle Grid Network')
    fig.canvas.set_window_title('Discrete Entropy of the Preference Distributions')
    ax = fig.add_subplot(111, projection='3d')
    x_pos = [pos[0] for pos in nodes_plt]  # the positions of the bars
    y_pos = [pos[1] for pos in nodes_plt]
    z_pos = np.zeros(net_size)
    dx = 0.5 * np.ones(net_size)  # the sizes of the bars
    dy = 0.5 * np.ones(net_size)
    dz = np.zeros(net_size)
    if not nobargraph:
        fig.canvas.draw()
        fig.show()

    # the simulation cycle
    sim_exit = False  # simulation exit flag
    sim_pause = False  # simulation pause flag
    iter_count = 0
    time_now = pygame.time.get_ticks()  # returns milliseconds
    time_last = time_now  # reset as right now
    time_period = 500  # simulation frequency control, will skip the delay if overdue
    skip_speed_control = True  # if skipping speed control, run as fast as possible
    while not sim_exit:
        # exit the program with the close window button, or Esc or Q on the keyboard
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                sim_exit = True  # exit with the close window button
            if event.type == pygame.KEYUP:
                if event.key == pygame.K_SPACE:
                    sim_pause = not sim_pause  # toggle the pause flag
                if (event.key == pygame.K_ESCAPE) or (event.key == pygame.K_q):
                    sim_exit = True  # exit with the ESC key or Q key

        # skip the rest if paused
        if sim_pause: continue

        # # the 20 nodes start to cut in here
        # if iter_count == iter_cutin:
        #     for i in command_nodes_20:
        #         deci_dist[i][1] = 1.0  # force the second probability to be the largest
        #         # normalize again
        #         sum_temp = np.sum(deci_dist[i])
        #         deci_dist[i][:] = deci_dist[i][:] / sum_temp

        # prepare information for the decision distribution evolution
        # including 1.dominant decisions, 2.groups, and 3.group sizes

        # 1.update the dominant decision for all nodes
        deci_domi = np.argmax(deci_dist, axis=1)

        # 2.update the groups
        groups = []  # empty the group container
        group_deci = []  # the exhibited decision of the groups
        # a diminishing global pool of node indices, for nodes not yet assigned to groups
        n_pool = range(net_size)
        # start searching for groups one by one from the global node pool
        while len(n_pool) != 0:
            # start a new group with the first node in n_pool
            first_member = n_pool[0]  # first member of this group
            group_temp = [first_member]  # current temporary group
            n_pool.pop(0)  # pop out the first node in the pool
            # a list of potential members for the current group
            # this list may grow when new members of the group are discovered
            p_members = connection_lists[first_member][:]
            # an index for iterating through p_members in search of group members
            p_index = 0  # if it climbs to the end, the search ends
            # index of the dominant decision of the current group
            current_domi = deci_domi[first_member]
            # dynamically iterate through p_members with p_index
            while p_index < len(p_members):  # index still in the valid range
                if deci_domi[p_members[p_index]] == current_domi:
                    # a new member has been found
                    new_member = p_members[p_index]  # get the index of the new member
                    p_members.remove(new_member)  # remove it from the p_members list
                        # but don't increase p_index, because a new value will flush in
                    n_pool.remove(new_member)  # remove it from the global node pool
                    group_temp.append(new_member)  # add it to the current group
                    # check if new potential members are available, due to the new node
                    p_members_new = connection_lists[new_member]  # new potential members
                    for member in p_members_new:
                        if member not in p_members:  # should not already be in p_members
                            if member not in group_temp:  # should not be in the current group
                                if member in n_pool:  # should be available in the global pool
                                    # if the conditions are satisfied, it qualifies as a potential member
                                    p_members.append(member)  # append at the end
                else:
                    # a boundary node (sharing a different decision) has been met
                    # leave it in p_members; it helps avoid checking this node again
                    p_index = p_index + 1  # shift right one position
            # all connected members of this group have been located
            groups.append(group_temp)  # append the new group
            group_deci.append(deci_domi[first_member])  # append the new group's exhibited decision
        # update the colors for the exhibited decisions
        if not color_initialized:
            color_initialized = True
            select_set = range(color_quantity)  # the initial set to select from
            all_deci_set = set(group_deci)  # put all exhibited decisions in a set
            for deci in all_deci_set:  # avoid checking duplicate decisions
                if len(select_set) == 0:
                    select_set = range(color_quantity)  # start a new set to select from
                # # force color blue for the first decision, color orange for the second
                # if deci == 0:
                #     chosen_color = 3  # color blue
                # # elif deci == 1:
                # #     chosen_color = 4  # color orange
                # else:
                #     chosen_color = np.random.choice(select_set)
                chosen_color = np.random.choice(select_set)
                select_set.remove(chosen_color)
                deci_colors[deci] = chosen_color  # assign the chosen color to this decision
                # increase the assignments of the chosen color by 1
                color_assigns[chosen_color] = color_assigns[chosen_color] + 1
        else:
            # remove the color for a decision if it's no longer the decision of any group
            all_deci_set = set(group_deci)
            for i in range(deci_num):
                if deci_colors[i] != -1:  # there was a color assigned before
                    if i not in all_deci_set:
                        # decrease the assignments of the chosen color by 1
                        color_assigns[deci_colors[i]] = color_assigns[deci_colors[i]] - 1
                        deci_colors[i] = -1  # remove the assigned color
            # assign a color for an exhibited decision if not yet assigned
            select_set = []  # set of colors to select from, start empty
            for i in range(len(groups)):
                if deci_colors[group_deci[i]] == -1:
                    if len(select_set) == 0:
                        # construct a new select_set
                        color_assigns_min = min(color_assigns)
                        color_assigns_temp = [j - color_assigns_min for j in color_assigns]
                        select_set = range(color_quantity)
                        for j in range(color_quantity):
                            if color_assigns_temp[j] != 0:
                                select_set.remove(j)
                    # if here, the select_set is good to go
                    # # force color orange for the second decision
                    # if group_deci[i] == 1:
                    #     chosen_color = 4  # color orange
                    # else:
                    #     chosen_color = np.random.choice(select_set)
                    # if chosen_color in select_set:
                    #     select_set.remove(chosen_color)
                    chosen_color = np.random.choice(select_set)
                    select_set.remove(chosen_color)
                    deci_colors[group_deci[i]] = chosen_color  # assign the chosen color
                    # increase the assignments of the chosen color by 1
                    color_assigns[chosen_color] = color_assigns[chosen_color] + 1
        # update the colors for the groups
        group_colors = []
        for i in range(len(groups)):
            group_colors.append(deci_colors[group_deci[i]])

        # 3.update the group size for each node
        for i in range(len(groups)):
            size_temp = len(groups[i])
            color_temp = group_colors[i]
            for node in groups[i]:
                group_sizes[node] = size_temp
                node_colors[node] = color_temp  # update the color for each node

        # the decision distribution evolution
        converged_all = True  # flag for convergence of the entire network
        deci_dist_t = np.copy(deci_dist)  # deep copy of 'deci_dist'
        for i in range(net_size):
            # # skip updating the 20 commanding nodes, stubborn in their decisions
            # if i in command_nodes_20:
            #     if iter_count >= iter_cutin:
            #         continue
            host_domi = deci_domi[i]
            converged = True
            for neighbor in connection_lists[i]:
                if host_domi != deci_domi[neighbor]:
                    converged = False
                    break
            # action based on convergence of the dominant decision
            if converged:  # all neighbors have converged with the host
                # step 1: take an equally weighted average over all distributions,
                # including the host and all neighbors
                deci_dist[i] = deci_dist_t[i]*1.0  # start with the host itself
                for neighbor in connection_lists[i]:
                    # accumulate the neighbor's distribution
                    deci_dist[i] = deci_dist[i] + deci_dist_t[neighbor]
                # normalize the distribution such that the sum is 1.0
                sum_temp = np.sum(deci_dist[i])
                deci_dist[i] = deci_dist[i] / sum_temp
                # step 2: increase the unipolarity by applying the linear multiplier
                # (this step only runs when all neighbors have converged)
                # First find the largest difference between two distributions in this block
                # of nodes, including the host and all its neighbors.
                comb_pool = [i] + connection_lists[i]  # pool with the host and its neighbors
                    # will be used to form combinations from this pool
                comb_pool_len = len(comb_pool)
                dist_diff = []
                for j in range(comb_pool_len):
                    for k in range(j+1, comb_pool_len):
                        j_node = comb_pool[j]
                        k_node = comb_pool[k]
                        dist_diff.append(np.sum(abs(deci_dist[j_node] - deci_dist[k_node])))
                dist_diff_max = max(dist_diff)  # maximum distribution difference of all
                if dist_diff_max < dist_diff_thres:
                    # the distribution difference is small enough that the linear
                    # multiplier should be applied to increase unipolarity
                    dist_diff_ratio = dist_diff_max/dist_diff_thres
                    # The linear multiplier is generated from the values of its smaller and
                    # larger ends; the smaller end is positively related to dist_diff_ratio.
                    # The smaller the maximum distribution difference, the smaller the
                    # dist_diff_ratio, and the steeper the linear multiplier.
                    # '1.0/deci_num' is the average value of the linear multiplier
                    small_end = 1.0/deci_num * np.power(dist_diff_ratio, dist_diff_power)
                    large_end = 2.0/deci_num - small_end
                    # sort the magnitudes of the current distribution
                    dist_temp = np.copy(deci_dist[i])  # temporary distribution
                    sort_index = range(deci_num)
                    for j in range(deci_num-1):  # bubble sort, ascending order
                        for k in range(deci_num-1-j):
                            if dist_temp[k] > dist_temp[k+1]:
                                # exchange values in 'dist_temp'
                                temp = dist_temp[k]
                                dist_temp[k] = dist_temp[k+1]
                                dist_temp[k+1] = temp
                                # exchange values in 'sort_index'
                                temp = sort_index[k]
                                sort_index[k] = sort_index[k+1]
                                sort_index[k+1] = temp
                    # apply the linear multiplier
                    for j in range(deci_num):
                        multiplier = small_end + float(j)/(deci_num-1) * (large_end-small_end)
                        deci_dist[i][sort_index[j]] = deci_dist[i][sort_index[j]] * multiplier
                    # normalize the distribution such that the sum is 1.0
                    sum_temp = np.sum(deci_dist[i])
                    deci_dist[i] = deci_dist[i]/sum_temp
                else:
                    # don't apply the linear multiplier when the distribution difference is large
                    pass
            else:  # at least one neighbor holds a different opinion from the host
                converged_all = False  # the network has not converged
                # take unequal weights in the averaging process based on group sizes
                deci_dist[i] = deci_dist_t[i]*group_sizes[i]  # start with the host itself
                for neighbor in connection_lists[i]:
                    # accumulate the neighbor's distribution
                    deci_dist[i] = deci_dist[i] + deci_dist_t[neighbor]*group_sizes[neighbor]
                # normalize the distribution
                sum_temp = np.sum(deci_dist[i])
                deci_dist[i] = deci_dist[i] / sum_temp

        # graphics animation, both the pygame window and the matplotlib window
        # 1.pygame window for the dynamics of the network's groups
        screen.fill(color_white)
        # draw the regular connecting lines
        for i in range(net_size):
            for j in range(i+1, net_size):
                if connections[i][j]:
                    pygame.draw.line(screen, color_black,
                                     nodes_disp[i], nodes_disp[j], norm_line_width)
        # draw the connecting lines marking the groups
        for i in range(len(groups)):
            group_len = len(groups[i])
            for j in range(group_len):
                for k in range(j+1, group_len):
                    j_node = groups[i][j]
                    k_node = groups[i][k]
                    # check if two nodes in one group are connected
                    if connections[j_node][k_node]:
                        # wider lines for group connections
                        pygame.draw.line(screen, distinct_color_set[group_colors[i]],
                                         nodes_disp[j_node], nodes_disp[k_node], group_line_width)
        # draw the nodes as dots
        for i in range(net_size):
            pygame.draw.circle(screen, distinct_color_set[node_colors[i]],
                               nodes_disp[i], node_size, 0)
        pygame.display.update()
        # # 2.matplotlib window for a 3D bar graph of the unipolarity of the decision distribution
        # if not nobargraph:
        #     dz = [deci_dist[i][deci_domi[i]] for i in range(net_size)]
        #     ax.bar3d(x_pos, y_pos, z_pos, dx, dy, dz, color='b')
        #     fig.canvas.draw()
        #     fig.show()
        # 2.matplotlib window for a 3D bar graph of the discrete entropy
        if not nobargraph:
            # calculate the discrete entropy for all distributions
            ent_sum = 0
            for i in range(net_size):
                ent = 0
                for j in range(deci_num):
                    if deci_dist[i][j] == 0: continue
                    ent = ent - deci_dist[i][j] * math.log(deci_dist[i][j],2)
                dz[i] = ent
                ent_sum = ent_sum + ent
            print("summation of entropy of all distributions: {}".format(ent_sum))
            # draw the bar graph
            # somehow the old bars overlap the current ones; have to clear the
            # figure first and rebuild the bar graph, as a temporary solution
            fig.clear()
            ax = fig.add_subplot(111, projection='3d')
            bar_colors = [distinct_color_set_float[node_colors[i]] for i in range(net_size)]
            barlist = ax.bar3d(x_pos, y_pos, z_pos, dx, dy, dz, bar_colors)
            fig.canvas.draw()
            fig.show()

        # simulation speed control
        if not skip_speed_control:
            # simulation updating frequency control
            time_now = pygame.time.get_ticks()
            time_past = time_now - time_last  # time passed since last time_last
            # needs to delay a bit more if the time passed has not reached the desired
            # period; will skip if time is overdue
            if time_now - time_last < time_period:
                pygame.time.delay(time_period-time_past)
            time_last = pygame.time.get_ticks()  # reset time_last

        # iteration count
        print "iteration {}".format(iter_count)
        iter_count = iter_count + 1
        # hold the program to check the network
        # raw_input("<Press Enter to continue>")

        # exit as soon as the network has converged
        if converged_all:
            print("steps to converge: {}".format(iter_count-1))
            print("converged to the decision: {} ({} in order)".format(deci_domi[0],
                list(avg_dist_id_sort).index(deci_domi[0]) + 1))
            print("the order of the average initial decision:")
            print(avg_dist_id_sort)
            break

    # record the result of this simulation
    all_steps[sim_index] = iter_count - 1
    all_deci_orders[sim_index] = list(avg_dist_id_sort).index(deci_domi[0]) + 1
    if deci_domi[0] == 0:
        steps_seed.append(iter_count - 1)

# report statistical results if the simulation runs more than once
if repeat_times > 1:
    print("\nstatistics\nsteps to converge: {}".format(all_steps))
    print("final decision in order: {}".format(all_deci_orders))
    print("average steps: {}".format(np.mean(np.array(all_steps))))
    print("maximum steps: {}".format(max(all_steps)))
    print("minimum steps: {}".format(min(all_steps)))
    print("std dev of steps: {}".format(np.std(np.array(all_steps))))
    # # statistics for simulations with seed robots
    # print("{} out of {} trials follow command from seed robots".format(
    #     len(steps_seed), repeat_times))
    # if len(steps_seed) != 0:
    #     print(steps_seed)
    #     print("\ton average of {} steps".format(np.mean(steps_seed)))
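For reference, the per-node decisiveness measure plotted in the bar graph above is the base-2 Shannon entropy of a preference distribution, with zero-probability entries skipped exactly as in the loop above. A minimal standalone sketch (the helper name `discrete_entropy` is illustrative, not part of the simulation):

```python
import math

def discrete_entropy(dist):
    # Base-2 Shannon entropy of a preference distribution; entries with zero
    # probability contribute nothing, matching the in-loop computation above.
    return sum(-p * math.log(p, 2) for p in dist if p > 0)

# A uniform distribution over 4 decisions has maximal entropy, log2(4) = 2 bits;
# a fully decided (unipolar) distribution has zero entropy.
print(discrete_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
print(discrete_entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0
```

As consensus proceeds, each node's distribution becomes more unipolar, so the summed entropy printed every iteration should decrease toward zero.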