
With the rise of next-gen wearables and smart home devices, AI systems are beginning to evolve beyond our screens and into the spaces around us. Across the industry, there’s a push to embed intelligence into the physical environments where we live and work. Ambient intelligence is emerging as a core concept in smart homes, enabling everyday devices to passively observe their surroundings and understand where people are, what they’re doing and what they might need — all without explicit user interaction.
While cameras have traditionally been used for monitoring, they raise ongoing concerns around privacy and data governance. Beyond privacy, they also suffer from practical limitations: they rely on good lighting, offer a limited field of view, are vulnerable to occlusion, and their data streams can be bandwidth-intensive. Cameras also increase the bill of materials (BOM) for device manufacturers, making them a costly component in large-scale deployments.
In response, researchers in our community are repurposing existing communication infrastructure to passively perceive motion, gestures and context, without relying on visual data. A key enabler for ambient intelligence is wireless sensing. Technologies like Wi-Fi CSI, mmWave and ultrasound radar, combined with edge computing, promise to deliver real-time and privacy-preserving intelligence. As consumer IoT deployments grow, our primary “computer” is poised to dissolve into our environment, no longer limited to a single device or screen.
The fusion of passive sensing and on-device AI can capture hidden information in our surroundings, moving us closer to a world where our homes and workplaces adapt to us and add a new layer of intuition to everyday interactions.
Wi-Fi CSI for sensing
Wi-Fi sensing leverages our existing Wi-Fi-enabled devices and their wireless signals to detect motion, presence and activity in the environment, without the need for dedicated sensors. To uncover how invisible wireless signals can capture such minute information, we must understand how Wi-Fi uses radio frequency (RF) signals emitted by a Wi-Fi access point (AP) — the kind installed in our homes by providers like Xfinity or Verizon. These RF waves scatter through the environment and get reflected off various surfaces — static objects like walls and furniture, as well as moving objects such as humans, pets and even home robots. A receiver device, such as a smart bulb or the AP itself, captures channel state information (CSI), which contains characteristics of the wireless channel, providing fine-grained data about how Wi-Fi signals propagate between devices.
The captured CSI data includes amplitude and phase shifts across multiple subcarriers and antennas, which allows us to analyze how the signal is reflected, scattered or blocked by people and objects. The static component of CSI can be learned when there is no motion in the environment, and it serves as a baseline to isolate variations in the channel caused by movement. These variations reveal detailed characteristics of the physical environment, such as movement patterns, breathing rates or even specific gestures.
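To make this concrete, here is a minimal sketch in Python (using NumPy) of the baseline idea: learn the per-subcarrier amplitude statistics of an empty room, then flag motion when a live CSI window deviates well beyond that noise floor. The array shapes, function names and threshold are illustrative assumptions, not references to any particular CSI toolkit.

```python
import numpy as np

def learn_baseline(empty_room_csi: np.ndarray):
    """Learn per-subcarrier mean amplitude and noise level from a capture
    taken while the room is empty (shape: time_samples x subcarriers)."""
    return empty_room_csi.mean(axis=0), empty_room_csi.std(axis=0)

def detect_motion(window: np.ndarray, mean: np.ndarray, noise: np.ndarray,
                  threshold: float = 3.0) -> bool:
    """Flag motion when a live CSI window strays from the static baseline
    by more than `threshold` times the learned noise floor."""
    deviation = np.abs(window - mean)   # distance from the motion-free mean
    score = deviation.mean()            # movement raises this across subcarriers and time
    return bool(score > threshold * noise.mean())

# Usage with synthetic amplitudes standing in for real CSI captures
rng = np.random.default_rng(0)
empty = rng.normal(10.0, 0.1, size=(500, 56))   # quiet room: small fluctuations
mean, noise = learn_baseline(empty)
live = rng.normal(10.0, 0.8, size=(100, 56))    # larger fluctuations, e.g. someone walking
print(detect_motion(live, mean, noise))         # -> True
```

Production systems replace the simple threshold with learned models and also exploit phase information and subcarrier correlations, but the baseline-subtraction principle is the same.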
CSI enables passive, privacy-preserving sensing that can operate through walls and in low-light conditions, making it ideal for smart home, security and health monitoring applications. Wi-Fi-based home monitoring leverages the ubiquity of wireless networks to create a sensing fabric across homes. Unlike cameras, Wi-Fi signals naturally penetrate walls and furniture, allowing motion detection across rooms without requiring additional hardware or wiring.
As noted in recent research, this contactless sensing model can be integrated into commodity devices like routers and smart bulbs, offering a cost-effective and unobtrusive solution for home awareness. In fact, a large-scale deployment spanning over 10 million routers and 100 million smart bulbs has shown that these methods can detect motion with over 92% accuracy in real homes. These systems also integrate seamlessly with our existing Wi-Fi data streams, avoiding any disruption to normal internet usage.
In one testbed, researchers distinguished a person walking from one standing still with over 90% accuracy using only CSI features. Recent deployments reduce false positives from pets and appliances by learning motion characteristics such as gait and speed. By treating our motion as a biomechanical signal rather than a mere fluctuation, these systems can distinguish between a walking person, a dog and a vacuum robot. Such capabilities make ambient sensing possible using infrastructure that already exists in homes today.
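A toy illustration of the idea: human gait produces quasi-periodic CSI fluctuations at roughly 1–2 Hz, while standing still leaves only a low noise floor. The sketch below extracts two such features, fluctuation energy and dominant frequency, and applies hand-picked thresholds; the numbers are assumptions for illustration, not values from the deployments described above.

```python
import numpy as np

def gait_features(csi_window: np.ndarray, sample_rate_hz: float = 100.0):
    """Two illustrative features from a CSI amplitude window
    (shape: time_samples x subcarriers)."""
    trace = csi_window.mean(axis=1)      # average across subcarriers
    trace = trace - trace.mean()         # remove the static (DC) component

    energy = float(np.var(trace))        # how strongly the channel is fluctuating

    # Dominant fluctuation frequency; human gait typically shows up around 1-2 Hz
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / sample_rate_hz)
    dominant_hz = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin

    return energy, dominant_hz

def classify(energy: float, dominant_hz: float) -> str:
    """Toy rule-based classifier; the thresholds are illustrative, not measured."""
    if energy < 0.05:
        return "standing still"
    if 0.5 <= dominant_hz <= 3.0:
        return "walking"
    return "other motion (pet, appliance, robot)"
```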
Radar technologies for smart homes
Another powerful frontier for passive wireless sensing is radar technology, built on a simple core principle: project RF signals into a space and interpret distance and motion from their reflections. This concept, long used in the automotive industry for proximity detection and parking assistance, is now being adapted for indoor environments. Technologies like mmWave, UWB and ultrasound offer radar implementations that can detect presence, gestures and even human vitals.
When RF signals bounce off moving objects, they undergo a Doppler shift that alters their frequency. This change enables radars to detect not just motion, but also its speed and direction. The Doppler effect is especially useful in applications like gesture recognition, breathing monitoring and vehicle tracking, where capturing subtle motion is critical. Unlike basic motion sensors, Doppler radar provides velocity information in real time, enabling sensing experiences with high temporal resolution.
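The relationship is straightforward: for a monostatic radar (co-located transmitter and receiver), the Doppler shift is f_d = 2·v·f_c / c, so radial velocity falls directly out of the measured shift. A small illustrative sketch:

```python
C = 3.0e8  # speed of light in m/s

def doppler_to_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Radial velocity for a monostatic radar: v = f_d * c / (2 * f_c)."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# Example: a 400 Hz Doppler shift seen by a 60 GHz radar corresponds to ~1 m/s
print(doppler_to_velocity(400.0, 60e9))  # -> 1.0
```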
Ultra-wideband (UWB) radar uses its large bandwidth to transmit very short pulses, enabling high-precision time-of-flight distance and Doppler velocity measurements. UWB is particularly effective for identifying human activity, including respiration and fall detection, while operating at low power, making it ideal for battery-powered devices. Operating in the 3–10 GHz spectrum, UWB typically supports a range of 10–15 meters and performs reliably through walls and common obstructions. It’s often used for indoor localization, human presence detection and monitoring of vital signs.
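Ranging itself reduces to timing: the pulse’s round-trip time, multiplied by the propagation speed and halved, gives the target distance. A minimal sketch (the 66.7 ns figure is just an illustrative echo delay):

```python
C = 3.0e8  # RF propagation speed in air, m/s

def tof_to_distance(round_trip_time_s: float) -> float:
    """UWB pulse ranging: the echo travels out and back, so halve the trip."""
    return C * round_trip_time_s / 2.0

# Example: an echo arriving ~66.7 ns after the pulse puts the target ~10 m away
print(tof_to_distance(66.7e-9))  # -> ~10.0
```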
Millimeter-wave (mmWave) radar operates at higher frequencies, such as 60 GHz, and offers even finer spatial and velocity resolution. It can detect micro-movements like finger gestures or breathing patterns, supports high update rates and can distinguish between multiple moving targets with precision. However, mmWave systems tend to be power-hungry, more expensive to integrate and suffer from poor wall penetration due to high signal attenuation. As a result, they’re best suited for short-range, line-of-sight applications like in-vehicle monitoring or smart living rooms. For example, some smart TVs use mmWave radars to detect presence, optimize visual and acoustic precision, automatically trigger low-power modes or enable gesture-based playback control.
Ultrasound radar uses high-frequency acoustic waves, inaudible to humans, to detect motion, presence and distance by measuring the time it takes for sound to bounce off objects and return. Because sound travels more slowly than radio waves, ultrasound enables accurate ranging with relatively simple hardware and lower processing demands. However, it is highly sensitive to environmental factors like airflow and temperature, and it struggles with non-line-of-sight scenarios where the sound path is obstructed. Smart speakers and thermostats often employ ultrasonic sensing to detect nearby users and adjust settings accordingly.
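The same round-trip arithmetic applies, only with the speed of sound, which is exactly why temperature matters: the propagation speed itself drifts with the room. A small sketch using the common linear approximation for the speed of sound in air:

```python
def speed_of_sound(temp_celsius: float) -> float:
    """Linear approximation for the speed of sound in air (m/s)."""
    return 331.3 + 0.606 * temp_celsius

def ultrasound_distance(echo_delay_s: float, temp_celsius: float = 20.0) -> float:
    """Distance from an ultrasonic echo's round-trip delay, in meters."""
    return speed_of_sound(temp_celsius) * echo_delay_s / 2.0

# The same 12 ms echo reads ~2.06 m at 20 C but ~2.10 m at 30 C:
print(ultrasound_distance(0.012, 20.0), ultrasound_distance(0.012, 30.0))
```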
Privacy considerations: The journey ahead
The end goal of these innovations isn’t just motion detection; it’s developing truly adaptive environments around us. Ambient intelligence refers to spaces that respond automatically and meaningfully to people’s presence and behavior. Imagine lights that turn on as we move through the house, HVAC systems that adjust based on our location or healthcare tools that monitor our well-being and sleep patterns.
Edge-based wireless sensing enables this vision at scale. These systems work quietly in the background, respect user privacy and blend into the fabric of our connected environments. However, challenges remain: fusing different sensor modalities, maintaining accuracy across diverse households and building layouts, and standardizing protocols to ensure interoperability across devices and platforms. The IEEE 802.11bf task group is working to standardize Wi-Fi sensing capabilities, including CSI-based sensing.
One exciting direction is multimodal fusion, which combines RF sensing with audio, inertial sensors and environmental signals to build more robust models of context. When an AI system can interpret not just movement but also sound, temperature and habitual patterns, it begins to anticipate needs before they’re even expressed. Alexa already offers Hunches, which learn user behaviors over time and initiate automated routines, like dimming the lights as we wind down for the night or toggling ventilation based on occupancy and behavior patterns.
To enable autonomous AI agents in the smart home, ambient sensing must be both invisible and accurate, building environmental awareness without intruding on our lives. As the industry shifts from smartphones to AI-enabled wearables, these systems will also face heightened privacy expectations. Many of these new wearables rely on active microphones and cameras to gather context or visually map their surroundings. While this convergence of user data with smart home infrastructure can enhance ambient intelligence, it also raises serious questions around data security and user trust.
As AI systems evolve to understand not just our language but also our presence, movement and intent, we must carefully consider how that awareness is enabled. Passive wireless sensing offers a compelling alternative, providing contextual and spatial awareness without relying on constant visual or audio capture. It is a solution that is technically elegant, inherently scalable and better aligned with privacy-by-design principles.
From fall detection in elder care and occupancy-based energy savings in office buildings to gesture-based interfaces in augmented reality, wireless sensing is quietly laying the foundation for a more responsive and respectful digital future.
This article is published as part of the Foundry Expert Contributor Network.
Source: Network World