Project Proposal


TITLE

Parallel Multi-Agent Path Planning for Coordinated Robotic Manipulation
Dylan Sun (bdsun), Jongsun Park (jongsunp)

URL

https://bdyldo.github.io/418-final-project/

SUMMARY

We are going to study parallelization strategies for multi-agent path planning in a simplified robotic manipulation setting, where multiple fingers of a robotic hand must move from start nodes to designated goal nodes without colliding. Our baseline will use Conflict-Based Search (CBS) with A* for single-agent replanning, and we will compare several parallel variants to understand how much of the algorithm can be accelerated and what bottlenecks limit scaling.

A central theme of the project is the tension between correctness and scalability. MAPF algorithms must return paths that are collision-free, and an optimal CBS implementation also has to preserve a best-first search order over high-level conflict-tree nodes. These requirements make the problem more complex than simply running many A* searches in parallel, because some parallel choices can improve throughput while also changing the order in which conflicts are resolved or increasing the amount of redundant search.

BACKGROUND

A common robotics problem is coordinating multiple moving components that share the same workspace. In our motivating example, a robotic hand with multiple fingers must move toward a grasp configuration without finger-finger collisions. To simplify the motion-planning problem into something implementable within the scope of this class, we model the workspace as a graph or grid roadmap. Each finger is treated as an agent with a start node and a target node, and the goal is to compute collision-free paths for all fingers over time.

This problem is a form of multi-agent path finding (MAPF). A standard optimal approach is Conflict-Based Search (CBS). CBS separates the problem into two levels. At the low level, it uses single-agent A* search to compute a shortest path for one agent subject to a set of constraints. At the high level, it checks the resulting multi-agent solution for collisions. If a conflict is detected, CBS branches by adding constraints and replanning until it finds a conflict-free set of paths.

[Finger 1 Start/Goal]   [Finger 2 Start/Goal]   ...   [Finger N Start/Goal]
                \             |                       /
                 \            |                      /
                  ------  High-Level CBS Search  ------
                               |
                     Low-Level A* Replanning
                               |
                   Collision-Free Multi-Agent Paths
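The high-level structures implied by this diagram can be sketched as below. This is an illustrative sketch of the data we expect each conflict-tree node to carry and of the best-first ordering that CBS optimality depends on; the names (Constraint, CTNode, ByCost) and the paths[agent][t] encoding are our own choices for this proposal, not a fixed API from any existing MAPF solver, and edge constraints are omitted for brevity.

```cpp
#include <queue>
#include <vector>

// A constraint forbids one agent from occupying a vertex at a specific
// timestep. (A full CBS implementation also needs edge constraints.)
struct Constraint {
    int agent;
    int vertex;
    int t;
};

// One node of the high-level conflict tree: the accumulated constraint set,
// the per-agent paths planned under those constraints, and the sum-of-costs
// objective used to order the search.
struct CTNode {
    std::vector<Constraint> constraints;
    std::vector<std::vector<int>> paths;  // paths[agent][t] = vertex
    int cost = 0;                         // sum of individual path lengths
};

// Best-first ordering: the high level always expands the open node with the
// smallest cost. Preserving exactly this order is what makes CBS optimal,
// and is the ordering that parallel variants may relax.
struct ByCost {
    bool operator()(const CTNode& a, const CTNode& b) const {
        return a.cost > b.cost;  // min-heap via std::priority_queue
    }
};

using OpenList = std::priority_queue<CTNode, std::vector<CTNode>, ByCost>;
```

In the sequential baseline this open list is a single shared structure; much of the project is about what happens when many workers contend for it or when its strict ordering is relaxed.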

This problem is appealing for a parallel systems project because it contains meaningful algorithmic structure and nontrivial sources of parallelism. At the same time, the search process has strong ordering pressure: optimal planning prefers expanding the most promising state first, which introduces dependencies and synchronization challenges. This makes the problem a good fit for studying when parallelism helps, when it creates overhead, and what tradeoffs arise between strict search ordering and scalable execution.

The coupling between agents is what makes the problem difficult. A single-agent shortest path may be easy to compute, but once multiple agents share the same graph, one agent's path can invalidate another agent's path at a particular timestep. CBS handles this by adding constraints and replanning, but these constraints are generated dynamically from conflicts discovered during the search. As a result, the future work available to parallel threads depends on earlier conflicts, the order in which high-level nodes are expanded, and the structure of the current set of constraints.

THE CHALLENGE

The difficulty of this project is not simply implementing A* or CBS, but exposing useful parallelism in a search problem that is only partially parallelizable. In sequential CBS, the high-level search repeatedly selects the most promising conflict tree node, resolves one collision, and generates child nodes that require additional low-level replanning. This creates a nontrivial dependency structure: the expansion order affects runtime, and once that order is relaxed it can affect solution quality as well.

Several properties of the workload make the problem challenging. First, conflict rates depend heavily on the geometry of the environment and the amount of interference between agents. When conflicts are rare, planning is easier and some parallelization strategies may offer little benefit. When conflicts are frequent, the high-level tree can grow rapidly and replanning costs become dominant. Second, the search may become imbalanced: some CBS branches are cheap to evaluate while others require many low-level searches. Third, the algorithm relies on priority queues, constraint sets, and repeated state expansion, all of which can become synchronization bottlenecks under parallel execution.

From a systems perspective, we want to understand how to parallelize a search algorithm with both independent and dependent work. Some parts of the computation, such as initial per-agent pathfinding and certain child-node replans, can be done concurrently. Other parts, such as selecting globally promising CBS nodes, may resist parallelization or require relaxed ordering. This creates a real tradeoff between preserving optimal search behavior and obtaining useful speedup.

Correctness is also more subtle than avoiding data races. For this project, a correct planner must at minimum produce paths that obey all vertex and edge-collision constraints over time. If we are evaluating an optimal CBS baseline, correctness also includes expanding nodes in an order that preserves optimality. Parallel execution can threaten these properties in several ways: threads may evaluate stale constraint sets, duplicate work on the same conflict-tree region, publish child nodes in a different order, or require synchronization around shared priority queues that removes much of the expected speedup.
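The minimum correctness condition above, namely that no two agents share a vertex at the same timestep or swap across the same edge between consecutive timesteps, is mechanical enough to check directly, which is how we intend to validate every variant against the baseline. A minimal sketch of such a checker, assuming our own paths[agent][t] = vertex encoding and the common MAPF convention that an agent waits at its goal after its path ends:

```cpp
#include <algorithm>
#include <vector>

// Validity check for a multi-agent solution: reject vertex conflicts
// (two agents at the same vertex at the same timestep) and edge conflicts
// (two agents swapping across an edge between consecutive timesteps).
// Paths are assumed nonempty; this is a sketch, not a library API.
bool isCollisionFree(const std::vector<std::vector<int>>& paths) {
    std::size_t horizon = 0;
    for (const auto& p : paths) horizon = std::max(horizon, p.size());

    // Position of agent i at time t; agents wait at the goal after finishing.
    auto at = [&](std::size_t i, std::size_t t) {
        const auto& p = paths[i];
        return t < p.size() ? p[t] : p.back();
    };

    for (std::size_t t = 0; t < horizon; ++t) {
        for (std::size_t i = 0; i < paths.size(); ++i) {
            for (std::size_t j = i + 1; j < paths.size(); ++j) {
                if (at(i, t) == at(j, t)) return false;      // vertex conflict
                if (t + 1 < horizon &&
                    at(i, t) == at(j, t + 1) &&
                    at(j, t) == at(i, t + 1)) return false;  // edge swap
            }
        }
    }
    return true;
}
```

Because this check is independent of how the paths were produced, it applies equally to strict-order and relaxed-order parallel variants, whereas the optimality part of correctness can only be checked for variants that claim to preserve CBS ordering.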

This gives us an explicit correctness-versus-scalability question. A conservative parallel version can keep the exact same logical search order as the sequential baseline, which makes correctness and comparison easier but may scale poorly because workers often wait on a global queue or on high-level decisions. A more aggressive version can expose more parallel work by expanding multiple CBS nodes at once or by relaxing priority-queue ordering, but then we must carefully measure whether it still returns valid paths, whether it remains optimal or becomes bounded-suboptimal, and whether the extra search work is worth the parallel speedup.
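The most conservative parallel variant we have in mind can be sketched concretely: when the high level resolves a conflict it creates exactly two children, and each child's low-level replan is independent of the other, so the two replans can run concurrently while the high-level expansion order stays sequential. The sketch below illustrates that shape with std::async; replanStub is a placeholder we made up for illustration and stands in for a real constrained A* search.

```cpp
#include <future>
#include <utility>

// Placeholder for a low-level replan: a real implementation would run A*
// subject to the child node's constraint set and return a path. Here it
// just returns a dummy cost derived from the constraint count.
int replanStub(int constraintCount) {
    return 10 + constraintCount;
}

// Expand one conflict-tree node: launch both child replans in parallel and
// wait for both before the (still sequential) high-level search continues.
// This preserves the baseline's logical search order; the available
// parallelism is capped at two-way per expansion.
std::pair<int, int> expandNode(int leftConstraints, int rightConstraints) {
    auto left  = std::async(std::launch::async, replanStub, leftConstraints);
    auto right = std::async(std::launch::async, replanStub, rightConstraints);
    return {left.get(), right.get()};
}
```

The more aggressive variants would instead pull several open-list nodes at once, which is where stale constraint sets, duplicated expansions, and relaxed ordering enter the picture.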

RESOURCES

We plan to implement the project in C++ using the C++ standard library's threading and synchronization support on GHC and PSC multi-core machines. We are starting from scratch rather than building on an existing MAPF solver, since part of the project goal is to understand the algorithmic structure and implement the parallelization strategies ourselves.

Our starting point will be course material on parallel search, synchronization, and performance analysis, along with documentation and any relevant papers or writeups on CBS, MAPF, and parallel heuristic search that we consult during implementation. At present, we do not anticipate needing specialized hardware beyond access to multi-core CPUs for controlled experiments.

GOALS AND DELIVERABLES

Plan to Achieve

A successful project for us means producing a correct multi-agent planner, implementing at least two meaningful parallel variants beyond the sequential baseline, and carrying out a thoughtful evaluation that explains when parallelism helps and when global search dependencies dominate.

Correctness vs. Scalability

We will treat correctness as a first-class part of the evaluation rather than only reporting speedup. The sequential CBS baseline will serve as the reference implementation for valid and, when applicable, optimal paths. Parallel variants that preserve strict CBS ordering should match the baseline solution quality, while relaxed-order variants will be evaluated separately as scalability experiments that may trade optimality for more available parallelism. In all cases, we will validate that produced paths are collision-free and report any extra search work, duplicated expansions, or solution-quality changes caused by parallel execution.

Hope to Achieve

Fallback if Progress Is Slower

Questions We Hope to Answer

Demo

If time permits, we may show a live visualization of agents moving through the workspace, with conflicts highlighted and paths updated as the planner runs. However, our primary focus is on a strong experimental evaluation rather than a live demo.

PLATFORM CHOICE

A multi-core CPU platform such as the GHC and PSC machines is a natural fit for this workload because the planner consists of many interacting search tasks and repeated graph operations that share memory but do not map cleanly to SIMD or GPU execution. The main issues we want to study—parallel search, synchronization, task granularity, load imbalance, and contention on shared search structures—are all central CPU-side parallel systems problems.

C++ is also a good match because it gives us control over threads, work scheduling, and memory layout while allowing us to build a clean implementation of both sequential and parallel search strategies. Our project is not just about raw throughput; it is about understanding how search structure, coordination overhead, and algorithmic dependencies shape scaling on shared-memory machines.

SCHEDULE
