Rather than longer times, what about short times? I did some work on fast fading, and you can see rapid fade swings on sub-5-second timescales. That is hard for automated systems to react to, so the usual fix is to increase the link margin. If you could predict these fades, you could reduce the margin needed. That could potentially be very valuable.
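To make the margin-saving idea concrete, here's a back-of-the-envelope sketch; every number in it is hypothetical, and real values depend on band, geometry, and climate:

```python
# Back-of-the-envelope: static vs. prediction-driven link margin.
# All numbers are made up; the 1-sigma model error and safety
# factor k are assumptions, not measured values.

import statistics

static_margin_db = 10.0   # design for near-worst-case fast fade at all times

# Predictive design: per-interval margin = predicted fade + k * sigma.
predicted_fades_db = [1.2, 0.8, 6.5, 2.1, 0.9]  # model output per <5 s window
sigma_db = 1.0                                   # assumed prediction error
k = 3.0                                          # safety factor

dynamic_margins = [f + k * sigma_db for f in predicted_fades_db]
print(f"static margin:          {static_margin_db:.1f} dB")
print(f"mean predictive margin: {statistics.mean(dynamic_margins):.1f} dB")
# The freed-up margin could be traded for throughput (higher MODCOD) or power.
```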
Pretty intriguing demo video. How do you ensure your telemetry ingestion works operationally? That will be a daunting task. The output will only be as good as your telemetry; any delay or break in the data and everything is bound to break.
Great point: telemetry reliability is the biggest hurdle for any mission-critical system. We address the "garbage in, garbage out" risk by prioritizing freshness; our pipeline treats latency as a failure.
We use three strategies (a rough sketch follows this list):

1. A "leaky" buffer: if data is too old to be actionable for a 3-minute forecast, we drop it so the models aren't lagging behind the physical reality of the link.

2. Graceful degradation: when telemetry is delayed or broken, the system automatically falls back to physics-only models (orbital propagation and ITU standards).

3. Edge validation: we validate and normalize data at the ingestion point; if a stream becomes corrupted or "noisy," the system flags that specific sensor as unreliable and adjusts the prediction confidence scores in real time.
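As a minimal sketch of how the three strategies fit together; all names, thresholds, and data shapes here are hypothetical, not our actual pipeline:

```python
# Hypothetical sketch of the three strategies above: leaky buffer,
# graceful degradation, and edge validation. Names and thresholds
# are illustrative assumptions only.

from dataclasses import dataclass

MAX_AGE_S = 30.0     # assumed staleness cutoff for a 3-minute forecast
NOISE_LIMIT = 0.2    # assumed bad-sample ratio before a sensor is flagged

@dataclass
class Sample:
    sensor_id: str
    timestamp: float  # unix seconds
    value: float

def ingest(sample: Sample, now: float, bad_ratio: dict[str, float]) -> Sample | None:
    """Leaky buffer + edge validation: keep only fresh samples from
    sensors that are currently trusted; drop everything else."""
    if now - sample.timestamp > MAX_AGE_S:
        return None   # leaky buffer: stale data is dropped, not queued
    if bad_ratio.get(sample.sensor_id, 0.0) > NOISE_LIMIT:
        return None   # edge validation: noisy sensor, don't feed the model
    return sample

def forecast(fresh: list[Sample]) -> tuple[float, float]:
    """Graceful degradation: physics-only fallback (with a lower
    confidence score) whenever telemetry is missing or was dropped."""
    if not fresh:
        return physics_only_db(), 0.5   # orbital propagation + ITU models
    return ml_forecast_db(fresh), 0.9   # telemetry-driven path

def physics_only_db() -> float:
    return 3.0   # placeholder for the physics-based attenuation estimate

def ml_forecast_db(samples: list[Sample]) -> float:
    return sum(s.value for s in samples) / len(samples)  # placeholder model
```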
Are you raising?
Very cool company! Are y’all hiring?
Do you plan to work on orbital weapon systems like Golden Dome?