Devices & Cameras¶
Hardware management for live streaming and object detection during sessions.
Concepts¶
| Concept | Description |
|---|---|
| Device | Registered hardware (camera, sensor) |
| Camera | Video capture device |
| Stream | Live video feed via WebSocket |
| Detection | Real-time object detection on frames |
| Clip | Video segment captured from stream |
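The hook examples below reference a few shared record types. A minimal sketch of what these records might look like (field names are assumptions, not the exact backend schema):
// Hypothetical shapes for the core records referenced below;
// actual fields may differ from the backend schema.
interface Device {
  id: string;
  name: string;
  type: "camera" | "sensor";
  organizationId: string;
  deletedAt?: string;   // set by the soft-delete endpoint
}

interface Camera {
  id: string;
  name: string;
  connected: boolean;
  active: boolean;
  resolution?: string;
  fps?: number;
}

interface Clip {
  id: string;
  sessionId: string;
  cameraId: string;
  startTime: number;    // Unix timestamp (ms)
  endTime: number;
}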
Device Management¶
useDevices Hook¶
hooks/use-devices.ts
Admin-only device management.
const {
  devices,         // Device[]
  isLoading,
  registerDevice,  // (data) => Promise<Device>
  updateDevice,    // (id, data) => Promise<void>
  deleteDevice     // (id) => Promise<void>
} = useDevices();
API Calls (Admin):
- GET /api/v1/admin/devices - List devices
- POST /api/v1/admin/devices/register - Register device
- PATCH /api/v1/admin/devices/{id} - Update device
- PATCH /api/v1/admin/devices/{id}/delete - Soft delete
Logging:
// All device operations are logged
const registerDevice = async (data) => {
  try {
    const device = await fetchJSON("/devices/register", token, {
      method: "POST",
      body: data
    });
    logger.info("Device registered", { deviceId: device.id });
    return device;
  } catch (error) {
    logger.error("Device registration failed", { error });
    throw error;
  }
};
Camera Streaming¶
useCameraStream Hook¶
hooks/use-camera-stream.ts
Complete camera and streaming management.
const {
  // Camera queries
  cameras,                 // Camera[] - All cameras
  connectedCameras,        // Camera[] - Currently connected
  activeCamera,            // Camera - Currently active
  activeCameras,           // Camera[] - All active cameras
  // Detection status
  detectionStatus,         // { enabled: boolean, ... }
  // Streaming state
  selectedCameraId,        // string
  isStreaming,             // boolean
  frameData,               // FrameData - Current frame
  // Actions
  handleCameraChange,      // (cameraId) => void
  startStreaming,          // (cameraId) => void
  stopStreaming,           // () => void
  handleCreateClip,        // (data) => Promise<Clip>
  handleEnableDetection,   // () => Promise<void>
  handleDisableDetection   // () => Promise<void>
} = useCameraStream({
  organizationId,
  sessionId,
  serverHostURL
});
API Calls:
- GET /vision/cameras/list - All cameras
- GET /vision/cameras/connected - Connected cameras
- GET /vision/cameras/active - Active camera
- GET /vision/detection/status - Detection status
- POST /vision/cameras/change - Set active camera
- POST /vision/detection/enable - Enable detection
- POST /vision/detection/disable - Disable detection
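Inside the hook these endpoints are ordinary HTTP calls against the vision server. A minimal sketch, assuming the vision API is reachable at `http://${serverHostURL}:${VISION_PORT}` (the port constant, helper names, and response shapes are assumptions):
// Sketch of vision API helpers (paths from the list above;
// base URL construction and response typing are assumptions).
const visionBaseUrl = `http://${serverHostURL}:${VISION_PORT}`;

const getDetectionStatus = async (): Promise<{ enabled: boolean }> => {
  const res = await fetch(`${visionBaseUrl}/vision/detection/status`);
  if (!res.ok) throw new Error(`Vision API error: ${res.status}`);
  return res.json();
};

const handleEnableDetection = async (): Promise<void> => {
  await fetch(`${visionBaseUrl}/vision/detection/enable`, { method: "POST" });
};

const handleCameraChange = async (cameraId: string): Promise<void> => {
  await fetch(`${visionBaseUrl}/vision/cameras/change`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ camera_id: cameraId })
  });
};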
WebSocket Streaming¶
Frames are streamed via WebSocket:
const startStreaming = (cameraId: string) => {
  // Build the stream URL for the selected camera
  const wsUrl = `ws://${serverHostURL}:${WS_PORT}/ws/stream/${cameraId}`;
  const ws = new WebSocket(wsUrl);

  ws.onmessage = (event) => {
    const frameData: FrameData = JSON.parse(event.data);

    // Update frame state
    setFrameData(frameData);

    // Forward the base64-encoded frame to the backend for processing
    fetch(`${API_BASE_URL}/api/v1/actions/actions/send`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        streamName: "webcam",
        streamType: "frame",
        frame: frameData.frame,
        timestamp: frameData.timestamp,
        camera_id: cameraId
      })
    }).catch((error) => console.error("Frame forwarding failed:", error));
  };

  ws.onclose = () => setIsStreaming(false);
  ws.onerror = (error) => console.error("WebSocket error:", error);

  setIsStreaming(true);
};
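The matching `stopStreaming` action is not shown above. A minimal sketch, assuming the open socket is also stored in a React ref (the `wsRef` name is illustrative, and `startStreaming` would set `wsRef.current = ws`):
import { useRef } from "react";

// Inside the hook: keep a handle to the open socket so it can be closed later.
const wsRef = useRef<WebSocket | null>(null);

const stopStreaming = () => {
  wsRef.current?.close();   // triggers ws.onclose above, which sets isStreaming to false
  wsRef.current = null;
};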
Frame Data Format¶
interface FrameData {
  frame: string;        // Base64 JPEG
  timestamp: number;    // Unix timestamp
  camera_id: string;
  detections?: Detection[];
}

interface Detection {
  class: string;        // Object class (person, ball, etc.)
  confidence: number;   // 0-1 confidence score
  bbox: [number, number, number, number]; // x, y, width, height
}
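Before rendering, detections can be filtered, for example to drop low-confidence results. A small sketch (the threshold is an arbitrary example value):
// Keep only confident detections before drawing overlays.
const CONFIDENCE_THRESHOLD = 0.5; // arbitrary example value

const filterDetections = (frame: FrameData): Detection[] =>
  (frame.detections ?? []).filter((d) => d.confidence >= CONFIDENCE_THRESHOLD);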
Components¶
CameraSelector¶
components/stream/camera-selector.tsx
Dropdown to select camera:
<CameraSelector
  cameras={cameras}
  selectedId={selectedCameraId}
  onChange={handleCameraChange}
  disabled={isStreaming}
/>
StreamControls¶
components/stream/stream-controls.tsx
Streaming action buttons:
<StreamControls
  isStreaming={isStreaming}
  detectionEnabled={detectionStatus?.enabled}
  onStart={() => startStreaming(selectedCameraId)}
  onStop={stopStreaming}
  onEnableDetection={handleEnableDetection}
  onDisableDetection={handleDisableDetection}
  onCreateClip={handleCreateClip}
/>
VideoCanvas¶
components/stream/video-canvas.tsx
Renders video frame with detection overlays:
The implementation draws:
1. The decoded JPEG frame
2. Bounding boxes for each detection
3. Class labels with confidence scores
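A minimal sketch of that drawing loop, assuming the base64 frame is loaded into an `Image` and drawn onto a 2D canvas context (the helper name and styling are illustrative, not the component's actual internals):
const drawFrame = (canvas: HTMLCanvasElement, frameData: FrameData) => {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;

  const img = new Image();
  img.onload = () => {
    // 1. Decoded JPEG frame
    ctx.drawImage(img, 0, 0, canvas.width, canvas.height);

    // 2. Bounding boxes and 3. class labels with confidence
    for (const det of frameData.detections ?? []) {
      const [x, y, w, h] = det.bbox;
      ctx.strokeStyle = "lime";
      ctx.lineWidth = 2;
      ctx.strokeRect(x, y, w, h);
      ctx.fillStyle = "lime";
      ctx.font = "12px sans-serif";
      ctx.fillText(`${det.class} ${(det.confidence * 100).toFixed(0)}%`, x, y - 4);
    }
  };
  img.src = `data:image/jpeg;base64,${frameData.frame}`;
};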
CameraInfoCard¶
components/cards/camera-info-card.tsx
Camera metadata display:
- Camera name/ID
- Connection status
- Resolution
- FPS
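Example usage (the `camera` prop name is an assumption):
<CameraInfoCard camera={activeCamera} />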
Session Integration¶
Cameras are used within sessions:
// Session detail page - Streaming tab
const SessionStreamingTab = ({ session }) => {
  const { serverHostURL } = getOrgData();
  const stream = useCameraStream({
    organizationId: session.organizationId,
    sessionId: session.id,
    serverHostURL
  });
  return (
    <div>
      <CameraSelector
        cameras={stream.cameras}
        selectedId={stream.selectedCameraId}
        onChange={stream.handleCameraChange}
        disabled={stream.isStreaming}
      />
      <StreamControls
        isStreaming={stream.isStreaming}
        detectionEnabled={stream.detectionStatus?.enabled}
        onStart={() => stream.startStreaming(stream.selectedCameraId)}
        onStop={stream.stopStreaming}
        onEnableDetection={stream.handleEnableDetection}
        onDisableDetection={stream.handleDisableDetection}
        onCreateClip={stream.handleCreateClip}
      />
      <VideoCanvas frameData={stream.frameData} />
    </div>
  );
};
Clip Creation¶
Clips are captured segments from the stream:
const handleCreateClip = async () => {
  const clip = await createClipMutation({
    sessionId,
    cameraId: selectedCameraId,
    startTime: clipStartTime,
    endTime: Date.now()
  });
  toast.success("Clip created");
  return clip;
};
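`clipStartTime` is not defined in the snippet above. One way to track it (a sketch, not the hook's actual implementation) is to record a timestamp when streaming starts and reset it after each clip, using the same `useRef` pattern shown earlier:
// Track when the current clip segment began (assumed approach).
const clipStartTimeRef = useRef<number>(Date.now());

const markClipStart = () => {
  clipStartTimeRef.current = Date.now(); // call when streaming starts or after a clip is saved
};

const handleCreateClip = async () => {
  const clip = await createClipMutation({
    sessionId,
    cameraId: selectedCameraId,
    startTime: clipStartTimeRef.current,
    endTime: Date.now()
  });
  markClipStart(); // the next clip starts where this one ended
  toast.success("Clip created");
  return clip;
};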
Server Configuration¶
The WebSocket server URL comes from organization metadata:
const org = await authClient.organization.getFullOrganization();
const serverHostURL = org.metadata?.serverHostURL;
const wsPort = process.env.NEXT_PUBLIC_WS_PORT || "7103";
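Putting these together, the stream URL for a given camera can be derived as shown below (the helper name is illustrative):
// Build the WebSocket URL for a camera from org metadata + env config.
const buildStreamUrl = (serverHostURL: string, cameraId: string): string => {
  const wsPort = process.env.NEXT_PUBLIC_WS_PORT || "7103";
  return `ws://${serverHostURL}:${wsPort}/ws/stream/${cameraId}`;
};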
Detection Features¶
When detection is enabled:
- Frames sent to vision API
- Object detection runs on each frame
- Detections returned in frame data
- Bounding boxes rendered on canvas
Detection types may include:
- Players
- Ball
- Field markings
- Equipment
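For example, per-class counts can be derived from a frame's detections to drive simple overlays or stats (a sketch; class names come from the detection results):
// Count detections per class for the current frame (e.g. players vs. ball).
const countByClass = (frame: FrameData): Record<string, number> =>
  (frame.detections ?? []).reduce<Record<string, number>>((acc, det) => {
    acc[det.class] = (acc[det.class] ?? 0) + 1;
    return acc;
  }, {});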