9/29/23

Introduction

In my earlier article Can Organizations Think Better Than Their Members? Simulating Aggregate Intelligence, I introduced the idea of aggregate intelligence: connecting workers into a network can, in theory, combine their individual intelligence so that the group solves problems no single member could solve alone. I built a simulator that configures an aggregate intelligence in different ways and measures its success at solving problems, and showed evidence that the connectivity between members affects the overall intelligence. In this article I use the simulator to explore whether a partially connected or a fully-connected network solves more problems in a given time (i.e., is “smarter”), and show that the fully-connected network performs better. The implication for organizational design, and for multi-node AI architectures, is that, at least at small scales, a “flatter” organization whose members can easily reach any other member outperforms other structures, all other factors being equal.
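To make the distinction between the two topologies concrete, here is a minimal sketch of each as an adjacency list; the function names and the ring shape of the partial network are illustrative assumptions, not the simulator's actual code.

```python
# Illustrative sketch only: function names and the ring-shaped "partial"
# topology are assumptions, not the simulator's actual implementation.

def fully_connected(n):
    """Every worker can reach every other worker directly."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def partially_connected_ring(n):
    """Each worker can reach only its two neighbors on a ring."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

print(fully_connected(4))           # {0: [1, 2, 3], 1: [0, 2, 3], ...}
print(partially_connected_ring(4))  # {0: [3, 1], 1: [0, 2], ...}
```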

Improvements in experimental structure

Since I first introduced the simulator, I have improved it to make it a better experimental tool. I describe these changes here to help others learn to build effective computational simulation experiments; readers not interested in that aspect can skip this section. The improvements fall into two classes: those that make the simulator better suited to running experiments at scale, and those that separate variables so they can be changed independently.

Improvements for handling experiments at scale

The early stages of building out a simulator often involve evolving it to better manage data across many experimental runs. To improve data management, the simulator now:

Separating variables

In performing experiments it is important to separate all variables so that each can be varied and measured independently, identifying which factor or factors cause which effect. Note that in experiments where resources are limited, such as physical experiments or computational experiments that require very large computational capacity, it is possible to vary several parameters at once and then analytically extract which parameters cause which effects through a technique called “design of experiments.” The aggregate intelligence simulation, however, does not require significant resources for its runs, so I can simply vary parameters one by one.
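Because runs are cheap, a one-factor-at-a-time sweep suffices. As a sketch (the parameter names and the run_simulation stub are hypothetical, not the simulator's real interface), each batch of runs varies a single parameter while holding a baseline configuration fixed:

```python
# Hypothetical one-factor-at-a-time sweep; parameter names and the
# run_simulation stub are illustrative, not the simulator's real interface.

BASELINE = {"num_workers": 10, "connectivity": "full", "injection_rate": 1.0}

SWEEPS = {
    "num_workers": [5, 10, 20],
    "connectivity": ["full", "partial"],
    "injection_rate": [0.5, 1.0, 2.0],
}

def run_simulation(config):
    """Stand-in for a real simulator run; returns placeholder metrics."""
    return {"completed": 0, "wip": 0}

results = []
for param, values in SWEEPS.items():
    for value in values:
        config = dict(BASELINE, **{param: value})  # vary exactly one parameter
        metrics = run_simulation(config)
        results.append({"varied": param, "value": value, **metrics})
```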

To better separate variables:

Key measurement metrics

When performing computational experiments, the experimenter must define the key metrics to measure, and selecting those metrics requires some thought. The aggregate intelligence simulator is designed to process tasks, so our primary metrics are the number of tasks completed and the number remaining in process (“work in process” or “WIP”). Since many process improvement methodologies, such as lean manufacturing and agile, focus on reducing WIP to boost efficiency, it is important to measure WIP accurately in our experiments. In calibration testing, I verified that all work injected into the network could be accounted for (completed tasks + WIP = tasks injected), finding and fixing any defects in WIP accounting. Switching to a pre-determined task injection rate via a worker process ensured that all runs for a given configuration had the same number of tasks injected, making WIP tracking issues easier to identify.
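That calibration check is a simple conservation rule on tasks. A minimal sketch of how such a check might be asserted after each run (the function and field names are hypothetical):

```python
# Task-conservation check from the text: completed tasks + WIP = tasks injected.
# Function and argument names are hypothetical, not the simulator's actual API.

def check_task_accounting(tasks_injected, tasks_completed, wip):
    """Every injected task must be either completed or still in process."""
    assert tasks_completed + wip == tasks_injected, (
        f"WIP accounting error: {tasks_completed} completed + {wip} WIP "
        f"!= {tasks_injected} injected"
    )

# Example: a run that injected 100 tasks, completed 87, and left 13 in WIP.
check_task_accounting(tasks_injected=100, tasks_completed=87, wip=13)
```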

Partial vs. Fully-Connected Network Experiments