# AnySurface: The Browser-Based Spatial Operating System

## 1. Core Philosophy: "Reality is the DOM"

AnySurface is a Hardware Abstraction Layer (HAL) for physical spaces. It decouples application logic from physical display hardware, making the **Surface**—not the screen or the projector—the atomic unit of computing.

> **Mission:** To transform any physical environment into a computational space using ad-hoc, commodity hardware (webcams, projectors, phones) without a central server.

---

## 2. System Architecture

### A. The Primitive: The Surface Object

Development targets logical surfaces, not physical pixels. The OS handles the translation between the virtual intention and the physical reality.

* **Logical Layer:** `const table = new Surface('sandtable-01');`
* **Physical Layer:** The OS maps this logical surface to *n* projectors and *m* cameras.
* **Compositing:**
  * **Stitching:** Merges output from multiple overlapping projectors into one seamless texture.
  * **Masking:** Prevents "spill" by strictly adhering to the surface geometry (e.g., projecting only onto the table, not the floor).

### B. The Runtime: The Ubiquitous Browser

The "OS" is a URL. No app stores, no drivers, no compile steps.

* **Rendering:** WebGL / WebGPU, drawing to a `<canvas>` element.
* **Input:** `getUserMedia` (WebRTC) for accessing cameras on phones, laptops, and drones.
* **Compute:** WASM / TensorFlow.js for in-browser computer vision.

### C. The Nervous System: Acequia

Replaces the central server with a decentralized state-synchronization layer.

* **State, Not Video:** Only lightweight JSON events travel the network.
* **Consensus:** Ensures all devices "hallucinate" the same simulation state simultaneously.
* **Event:** `{"surface": "wall-1", "event": "touch", "xyz": [1.2, 0.5, 0.0]}`

---

## 3. The "Driver" Layer: AI & Structured Light

How the system calibrates itself in an ad-hoc environment (Zero-Friction Setup).

### The Feedback Loop

1. **Project (Active Sonar):** Projectors emit structured-light patterns (Gray codes / phase shifting).
2. **Observe (Passive Sensor):** Cameras capture the distortion of these patterns over physical objects.
3. **Solve (The AI):**
   * **Pose Estimation:** Calculates the extrinsic matrix of every camera and projector relative to the Surface.
   * **Depth & Topology:** Generates a real-time mesh of the room (e.g., sand topography).
   * **Metric Scale:** Uses the structured-light constraints to solve for absolute scale, overcoming monocular ambiguity.

---

## 4. The Developer Contract

The developer writes for the **Interaction**, not the **Implementation**.

| Traditional Display | AnySurface OS |
| :--- | :--- |
| **Target:** Screen (1920x1080) | **Target:** Surface (Mesh/Geometry) |
| **Input:** Mouse (X, Y) | **Input:** Space (X, Y, Z) |
| **Calibration:** Fixed Hardware | **Calibration:** Continuous & Self-Healing |
| **Logic:** "Draw pixel at 50,50" | **Logic:** "Render fire at Global Coord [35.6, -105.9]" |

## 5. Summary

**AnySurface is to Physical Space what the Browser was to the Internet.** It allows distinct, disparate hardware to render a shared, interactive reality by agreeing on a common protocol—where the room itself becomes the computer.
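As a purely illustrative sketch of the developer contract, the TypeScript below models a `Surface` that consumes Acequia-style JSON events. Only the `new Surface('sandtable-01')` constructor and the event shape come from this document; the `SurfaceEvent` type and the `on`/`dispatch` methods are hypothetical names invented for this example, not a published AnySurface API.

```typescript
// Hypothetical sketch: SurfaceEvent, on(), and dispatch() are
// assumptions; only the constructor call and the JSON event shape
// appear in the document above.

type SurfaceEvent = {
  surface: string;
  event: string; // e.g. "touch"
  xyz: [number, number, number]; // surface-local coordinates
};

class Surface {
  private handlers = new Map<string, ((e: SurfaceEvent) => void)[]>();

  constructor(public readonly id: string) {}

  // Register a handler for a spatial event type ("touch", "hover", ...).
  on(event: string, handler: (e: SurfaceEvent) => void): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  // In the real system this would be fed by Acequia's JSON event
  // stream; here we dispatch locally for illustration.
  dispatch(e: SurfaceEvent): void {
    if (e.surface !== this.id) return; // ignore events for other surfaces
    for (const h of this.handlers.get(e.event) ?? []) h(e);
  }
}

// Usage: logic targets the surface, not pixels.
const table = new Surface("sandtable-01");
table.on("touch", (e) => {
  console.log(`render ripple at ${e.xyz.join(", ")}`);
});
table.dispatch({ surface: "sandtable-01", event: "touch", xyz: [1.2, 0.5, 0.0] });
// prints "render ripple at 1.2, 0.5, 0"
```

The point of the sketch is the contract from §4: the handler receives metric surface coordinates, never pixel coordinates, so the same logic runs whether one projector or five are compositing the surface.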
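The Gray-code half of the feedback loop (step 1, and its decoding in step 3) can be illustrated with a round trip: encode a projector column as a sequence of pattern bits, then decode the bit sequence a camera pixel would observe back into that column. The projector width, pattern count, and function names below are assumptions for this sketch; a real pipeline also projects inverse patterns and phase-shifted sinusoids for sub-pixel accuracy.

```typescript
// Illustrative sketch only: PROJECTOR_WIDTH and all function names are
// assumptions for this example, not AnySurface APIs.

const PROJECTOR_WIDTH = 1024;
// One projected pattern per bit of the column index.
const NUM_PATTERNS = Math.ceil(Math.log2(PROJECTOR_WIDTH)); // 10

// Binary-reflected Gray code of a projector column index.
function toGray(n: number): number {
  return n ^ (n >> 1);
}

// Invert the Gray code (standard prefix-XOR decode).
function fromGray(g: number): number {
  let n = 0;
  while (g > 0) {
    n ^= g;
    g >>= 1;
  }
  return n;
}

// Bit that pattern k (MSB first) lights up at projector column x.
function patternBit(x: number, k: number): number {
  return (toGray(x) >> (NUM_PATTERNS - 1 - k)) & 1;
}

// A camera pixel observes one bit per pattern; folding the bits back
// together recovers which projector column illuminated that pixel.
function decodeColumn(observedBits: number[]): number {
  let g = 0;
  for (const b of observedBits) g = (g << 1) | b;
  return fromGray(g);
}

// Round trip: simulate the bits a pixel under column 300 would see.
const bs = Array.from({ length: NUM_PATTERNS }, (_, k) => patternBit(300, k));
console.log(decodeColumn(bs)); // 300
```

Gray codes are used rather than plain binary because adjacent columns differ in exactly one bit, so a camera pixel straddling a stripe boundary decodes to an off-by-one column at worst. Once each camera pixel is matched to a projector column this way, the correspondences feed the pose-estimation and metric-scale solves in step 3.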