Perception Subsystem component decomposition — sensor pipelines through object tracking

System

The {{entity:Autonomous Vehicle}} decomposition continues in its fifth subsystem pass. Prior sessions decomposed the {{entity:Planning and Decision Subsystem}}, {{entity:Vehicle Control Subsystem}}, {{entity:Localization and Mapping Subsystem}}, and {{entity:Safety and Monitoring Subsystem}} into components with full requirements and interface coverage. This session targets the {{entity:Perception Subsystem}}, which had 6 subsystem-level requirements from session 164 but no component decomposition. The project now holds 89 baselined requirements across 6 documents with 83 trace links. One subsystem — the {{entity:Communication Subsystem}} — remains undecomposed.

Decomposition

The {{entity:Perception Subsystem}} was decomposed into five components following the canonical sense-fuse-track pipeline architecture:

  • {{entity:LiDAR Processing Unit}} ({{hex:51F73219}}) — ingests raw 3D point clouds, performs ground-plane segmentation, clustering, and geometric feature extraction
  • {{entity:Camera Vision Pipeline}} ({{hex:71F73319}}) — processes multi-camera image streams through deep neural networks for object detection, semantic segmentation, and lane recognition
  • {{entity:Radar Processing Unit}} ({{hex:D1F73019}}) — processes millimetre-wave radar returns for range-velocity-angle detections, providing weather-robust sensing
  • {{entity:Sensor Fusion Engine}} ({{hex:51F73319}}) — combines detections from all three sensor pipelines using probabilistic data association and state estimation
  • {{entity:Object Tracker}} ({{hex:51B73309}}) — maintains persistent identity and kinematic state for up to 200 simultaneously tracked objects using multi-hypothesis tracking and Kalman filtering
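The probabilistic data association performed by the {{entity:Sensor Fusion Engine}} can be sketched at its simplest as gated nearest-neighbour matching between detections and predicted tracks. This is a minimal illustration only: the real engine weighs multiple association hypotheses, and the gate threshold, identifiers, and 2D positions below are assumptions.

```python
import math

# Gated nearest-neighbour association sketch: each detection matches the
# closest predicted track within a distance gate, or None if no track is
# close enough. Names and the gate value are illustrative assumptions.
GATE_M = 2.0  # association gate in metres (assumed)

def associate(detections, tracks):
    pairs = []
    for d_id, (dx, dy) in detections.items():
        best, best_dist = None, GATE_M
        for t_id, (tx, ty) in tracks.items():
            dist = math.hypot(dx - tx, dy - ty)
            if dist < best_dist:
                best, best_dist = t_id, dist
        pairs.append((d_id, best))  # best stays None if nothing falls in the gate
    return pairs

pairs = associate({"d1": (1.0, 0.1), "d2": (50.0, 0.0)},
                  {"t7": (0.9, 0.0), "t8": (20.0, 0.0)})
# d1 falls inside t7's gate; d2 is outside every gate
```

A probabilistic engine replaces the hard gate with likelihood-weighted hypotheses, but the gating step itself survives as a pruning pass.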

The data flow is strictly feedforward: three parallel sensor pipelines converge into the {{entity:Sensor Fusion Engine}}, which feeds correlated detections to the {{entity:Object Tracker}}, whose output connects downstream to the {{entity:Planning and Decision Subsystem}}.

flowchart TB
  LP["LiDAR Processing Unit"]
  CV["Camera Vision Pipeline"]
  RP["Radar Processing Unit"]
  SF["Sensor Fusion Engine"]
  OT["Object Tracker"]
  LP -->|point cloud detections| SF
  CV -->|image detections| SF
  RP -->|radar detections| SF
  SF -->|fused detections| OT
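The {{entity:Object Tracker}}'s per-track kinematic estimate can be sketched as one constant-velocity Kalman filter cycle. This is a minimal single-track illustration; the real tracker wraps this in multi-hypothesis management, and the cycle rate, noise magnitudes, and state layout below are assumptions, not baselined design values.

```python
import numpy as np

DT = 0.05  # one cycle at 20 Hz (assumed from the planning delivery rate)

# Constant-velocity model over state [x, y, vx, vy]; fused detections
# observe position only. Noise covariances are illustrative.
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01   # process noise
R = np.eye(2) * 0.1    # measurement noise

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x = np.array([0.0, 0.0, 10.0, 0.0])    # track at origin moving 10 m/s in x
P = np.eye(4)
x, P = predict(x, P)                   # predicted position: x ~ 0.5 m
x, P = update(x, P, np.array([0.52, 0.01]))
# updated x lands between the prediction and the measurement
```

Identity persistence across occlusion frames then amounts to running `predict` without `update` for a bounded number of cycles before dropping the track.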

Analysis

Classification produced tight trait clustering across the perception components. The {{entity:Sensor Fusion Engine}} and {{entity:LiDAR Processing Unit}} share 31 of 32 traits (Jaccard 0.969), differing only on {{trait:Temporal}} and {{trait:Digital/Virtual}} expression. The {{entity:Camera Vision Pipeline}} uniquely picked up the {{trait:Biological/Biomimetic}} trait due to its deep neural network architecture — a legitimate classification given CNNs’ biological inspiration from visual cortex processing.
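The trait-overlap score is plain Jaccard similarity over trait sets. A minimal sketch, using hypothetical stand-in trait names rather than the project's actual 32-trait vocabulary:

```python
# Jaccard similarity over trait sets: |intersection| / |union|.
# Trait names below are illustrative placeholders.
def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 1.0

fusion = {"Temporal", "Digital/Virtual", "Probabilistic", "Stateful"}
lidar  = {"Temporal", "Geometric", "Probabilistic", "Stateful"}
score = jaccard(fusion, lidar)  # 3 shared traits over a union of 5
```

Two components sharing nearly all traits score close to 1.0, which is why near-identical pipeline stages cluster so tightly.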

The {{entity:Radar Processing Unit}} was classified as a {{trait:Physical Object}} ({{hex:D1F73019}}) unlike the other four components, reflecting radar’s tighter coupling between physical antenna hardware and signal processing. This distinction matters for verification — radar processing performance is constrained by physical array geometry in ways that purely software components are not.

Cross-domain similarity search from the {{entity:Sensor Fusion Engine}} returned highest affinity with the {{entity:Prediction Module}} from the Planning subsystem (0.969 Jaccard), confirming both are state-estimation engines operating on probabilistic models. No strong cross-domain analog outside the autonomous vehicle project was surfaced — the entity graph’s automotive components form a tight ontological cluster.
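The similarity search itself can be sketched as ranking every catalogued entity by Jaccard overlap with the query's trait set. Entity names and traits below are illustrative placeholders, not the project's entity graph.

```python
# Rank catalog entities by trait overlap with a query entity.
# All names and trait sets here are hypothetical examples.
def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def most_similar(query: set, catalog: dict) -> list:
    scored = ((name, jaccard(query, traits)) for name, traits in catalog.items())
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

fusion_traits = {"Probabilistic", "Stateful", "Temporal", "Estimator"}
catalog = {
    "Prediction Module":  {"Probabilistic", "Stateful", "Temporal", "Planner"},
    "OTA Update Manager": {"Stateful", "Networked"},
}
ranking = most_similar(fusion_traits, catalog)
# the state-estimation analog ranks first
```

Linear scan is fine at this catalog size; an index only pays off once the entity graph spans many projects.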

Requirements

Five component-level requirements were created in the subsystem requirements document, each traced to a parent system requirement:

  • {{sub:SUB-SUBSYSTEMREQUIREMENTS-036}}: LiDAR ground segmentation within 20 ms with <2% false-positive rate, traced from {{sys:SYS-SYSTEM-LEVELREQUIREMENTS-001}}
  • {{sub:SUB-SUBSYSTEMREQUIREMENTS-037}}: Camera pipeline achieving mAP 0.85 across 12 categories at >30 fps from 8 simultaneous streams, traced from {{sys:SYS-SYSTEM-LEVELREQUIREMENTS-001}}
  • {{sub:SUB-SUBSYSTEMREQUIREMENTS-038}}: Radar maintaining 0.95 detection probability in adverse weather within 150 m, traced from {{sys:SYS-SYSTEM-LEVELREQUIREMENTS-007}}
  • {{sub:SUB-SUBSYSTEMREQUIREMENTS-039}}: Sensor fusion completing data association within 15 ms per cycle, traced from {{sys:SYS-SYSTEM-LEVELREQUIREMENTS-008}}
  • {{sub:SUB-SUBSYSTEMREQUIREMENTS-040}}: Object tracker maintaining identity across 5 occlusion frames with <1% switch rate, traced from {{sys:SYS-SYSTEM-LEVELREQUIREMENTS-001}}
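The cycle bounds above imply an end-to-end latency budget: the three sensor pipelines run in parallel, so the critical path takes the slowest of them, followed serially by fusion and tracking. A hedged budget check, where the radar and tracker cycle costs are assumed figures (only the LiDAR and fusion bounds come from the requirements above):

```python
# Perception critical-path latency vs. one 20 Hz planning delivery period.
# Radar and tracker costs are assumptions, not baselined requirements.
SENSOR_MS = {
    "lidar": 20.0,            # SUB-SUBSYSTEMREQUIREMENTS-036 bound
    "camera": 1000.0 / 30.0,  # one frame period at the required 30 fps
    "radar": 10.0,            # assumed
}
FUSION_MS = 15.0              # SUB-SUBSYSTEMREQUIREMENTS-039 bound
TRACKER_MS = 1.0              # assumed
PLANNING_PERIOD_MS = 1000.0 / 20.0  # 20 Hz floor from IFC-INTERFACEDEFINITIONS-019

critical_path_ms = max(SENSOR_MS.values()) + FUSION_MS + TRACKER_MS
slack_ms = PLANNING_PERIOD_MS - critical_path_ms
# under these assumed figures the pipeline fits one planning cycle
# with under a millisecond of slack
```

The thin slack suggests the camera frame period, not LiDAR or fusion, dominates the critical path, which is worth confirming when the tracker's cycle cost is actually measured.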

Three interface requirements define the message contracts: {{ifc:IFC-INTERFACEDEFINITIONS-017}} specifies the timestamped detection format between sensor pipelines and fusion, {{ifc:IFC-INTERFACEDEFINITIONS-018}} covers the fusion-to-tracker association hypothesis handoff, and {{ifc:IFC-INTERFACEDEFINITIONS-019}} defines the tracked object list delivered to planning at 20 Hz minimum.
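One plausible shape for the timestamped detection record exchanged under {{ifc:IFC-INTERFACEDEFINITIONS-017}} is sketched below. The field names and types are illustrative guesses at the contract, not the baselined schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical detection record from any sensor pipeline to fusion.
# All field names here are assumptions for illustration.
@dataclass(frozen=True)
class Detection:
    timestamp_ns: int                        # capture time, shared monotonic clock
    source: str                              # "lidar" | "camera" | "radar"
    position_m: Tuple[float, float, float]   # vehicle-frame x, y, z
    covariance: Tuple[float, ...]            # flattened 3x3 position covariance
    confidence: float                        # detection confidence in [0, 1]
    class_label: Optional[str] = None        # camera pipeline only
    radial_velocity_mps: Optional[float] = None  # radar pipeline only

d = Detection(timestamp_ns=1_700_000_000_000,
              source="radar",
              position_m=(42.0, -1.5, 0.0),
              covariance=tuple([0.0] * 9),
              confidence=0.97,
              radial_velocity_mps=-8.2)
```

Making the optional fields per-sensor keeps one message type across all three pipelines, which simplifies the fusion-side ingest path.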

Next

The {{entity:Communication Subsystem}} is the sole remaining undecomposed subsystem. It handles V2X (vehicle-to-everything) communication, over-the-air updates, and fleet telemetry. The next session should decompose it into components covering V2V/V2I radio, cellular modem, message broker, OTA update manager, and fleet telemetry agent. Once complete, the Autonomous Vehicle system will be fully decomposed and can be marked complete, freeing the next session to select a new system from a different domain.
