What Makes a Button Good?

There’s just something about buttons. Perhaps it’s the fact that they interrupt an otherwise-smooth surface, marring our sense of the aesthetic. Perhaps it’s that we resent the fact that buttons are functionally static, requiring us to hunt for the tiny little thing that will turn on our brand-new widget. Or perhaps it’s simply that a switch is the ultimate relic in the age of electricity, the first (and, at one time, only) way mankind had to control the flow of mighty electricity.

Whatever the reason, there’s an ongoing war on buttons, as product engineers and industrial designers forever try to remove them from devices. It wasn’t always this way. Once, an array of buttons was considered futuristic. Now it’s considered dated and passé.

For better or worse, buttons are disappearing from, or being virtualized in, our devices, particularly smartphones. First, the iPhone eliminated the chiclet keyboard so prevalent in the BlackBerrys and Treos of the early 2000s, replacing it with a touchscreen. The next to go was the physical home button in the iPhone 7, replaced by a force sensor and a haptics motor that virtualized the button click. The iPhone X eliminated even that, replaced by swipe gestures from every angle imaginable. Google introduced the virtual “squeeze” button in the Pixel 2 to call up the virtual assistant. And now the last holdouts, the power and volume keys, appear ready to finally succumb.

Huawei has the Mate 30 Pro. Xiaomi has the Mi Mix Alpha. Vivo has the Apex and the Nex 3. At the risk of being branded Luddites, others have commented on the trend toward buttonless phones, lamenting that the buttons are being taken away. But lament itself is not the right reaction; what we should examine is why there’s lament at all. After all, shouldn’t the replacement of the archaic power and volume keys by some advanced sensor be a Good Thing?

If we’re to accept the buttonless phone, we must understand what’s needed to create a user experience that’s on par with a physical button. This turns out to be quite difficult; in an evolutionary sense, the button is like a shark—it has survived this long for a reason.

So why are buttons good? In a nutshell, they can be operated entirely by haptics. A user can find a button by feel and doesn’t require visual identification to operate it. Consider how often one operates a television remote in a dark room. Likewise, the acknowledgment is immediate and tactile as well—there is force feedback as the switch passes the activation threshold, along with an associated click.

The impact and speed of associated muscle memory can’t be overstated. After the first few times, operation becomes automatic. Changing the volume or waking up a phone is like opening a door: There’s no direct cognition of the action. The trend toward buttonless phones, while providing that sleek and futuristic industrial design, can’t come at the cost of the efficient and intuitive user experience afforded by physical buttons.


The first requirement involves locating the virtualized buttons entirely by feel. Obviously, the simplest thing would be to put a divot or a protrusion to indicate where the virtual button is located, but then one could ask: why bother removing the buttons in the first place? And in some of these futuristic industrial designs, such as a phone with an all-glass wraparound display like the Mi Mix Alpha, it’s not possible to put tactile fiducials on the glass housing.

In the case of many of the aforementioned “buttonless” phones, the button locations are indicated on the screen, but this fails this first requirement and forces the user to visually identify the button location. One more complex solution would be the introduction of haptic motors at the location of each virtual button. The motor would vibrate the housing as the user’s finger passed over each button region, providing some tactile identification. Though expensive and power-hungry, it does provide a workable solution.

Another possible solution is envisioned by the Huawei Mate 30 Pro—double-tap anywhere, and then slide up and down to control the volume. Because the initial tap can be done anywhere and the slide gesture is relative, this can be accomplished without visual location.

However, finger gymnastics doesn’t lend itself to rapid or efficient operation. Imagine a user trying to do a double-tap-and-slide to change the volume while jogging. So while haptic location of virtualized buttons is the first requirement, there are few viable options for meeting it.

The second requirement is the haptic feedback of the button click. Here, the existing system haptics driver in the phone can help; the main haptics motor just fires to indicate that the virtual button has been pressed.

A major problem arises, though: By the time a sensor detects that the virtual button has been pressed, notifies the phone’s application processor, and in turn triggers the haptics motor to provide the clicking sensation, it’s far too late. A latency greater than 30 ms results in cognitive dissonance for the user. Providing an ultra-low-latency path to haptics feedback is needed to meet this second requirement with any fidelity. Unfortunately, most buttonless phones today don’t yet have this capability.
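To see why this path is hard to hit, consider an illustrative latency budget. Every stage timing below is an assumption for the sake of the sketch; only the 30-ms threshold comes from the discussion above.

```python
# Hypothetical latency budget for a virtual-button press, in milliseconds.
# All stage values are illustrative assumptions, not measured figures;
# the 30-ms limit is the cognitive-dissonance threshold discussed above.
PERCEPTION_LIMIT_MS = 30.0

stages = {
    "sensor_scan": 10.0,    # sensor detects the press
    "bus_transfer": 2.0,    # interrupt + transfer to application processor
    "os_dispatch": 15.0,    # OS wakes the input pipeline
    "haptic_rampup": 8.0,   # haptics driver spins up the motor
}

total_ms = sum(stages.values())
print(f"total latency: {total_ms:.1f} ms")
print("click feels immediate" if total_ms <= PERCEPTION_LIMIT_MS
      else "click feels detached (exceeds 30 ms)")
```

With these plausible-looking stage times, the round trip through the application processor already overshoots the budget, which is why a dedicated low-latency path matters.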

Force Sensing

The last requirement is around the sensing of the virtual button itself, especially the force. A physical button has a spring inside to provide tactile resistance; a user must press with a large enough force to overcome the spring and activate the switch. This is what allows a finger to rest on a button without activating it. Furthermore, physical buttons offer an astounding level of reliability—buttons don’t randomly activate in response to temperature changes, or to sitting in a user’s pocket or purse.

Conceptually, virtualization of the button is equivalent to measuring the deformation of the surface (in this case, the physical housing of the phone) under pressure from the user’s finger. In an ideal world, there would be one sensor underneath the location for each desired button, effectively replacing the physical button with a simple sensor. Unfortunately, many complications emerge with this scheme.

For durability, smartphone housings are extremely stiff, and the user normally applies approximately 200 grams of force to comfortably activate a physical button. For a material like aluminum or glass, this loading translates to a minuscule 0.0001% deformation in the housing. This requires an extremely sensitive force sensor—one that’s highly sensitive to a finger pressing on the housing, but completely insensitive to any other tiny disturbance.
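The arithmetic behind that figure can be sketched with a simplistic pure-compression model. The contact area, the choice of alloy modulus, and the model itself are all assumptions for illustration; real housings deform in bending.

```python
# Back-of-the-envelope strain estimate for a finger press on an aluminum
# housing. Contact area and pure-compression model are simplifying
# assumptions; actual housing deformation is dominated by bending.
F = 0.200 * 9.81   # 200 grams-force converted to newtons
E = 69e9           # Young's modulus of aluminum, Pa (assumed alloy)
A = 30e-6          # assumed effective load area, m^2 (~30 mm^2)

stress = F / A     # pressure on the housing, Pa
strain = stress / E  # dimensionless deformation
print(f"strain ≈ {strain:.2e} (about {strain * 100:.5f}%)")
```

Even with generous assumptions, the result lands in the neighborhood of one part per million—the 0.0001% figure cited above.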

There are three main forms of force detection—resistive, inductive, and ultrasonic—each with its own pros and cons (see figure). The simplest solution is a resistive strain gauge, which employs resistors whose values modulate under strain. As the resistors change with strain, a minute voltage appears at the sensor’s output.
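To get a feel for just how minute that voltage is, here is a back-of-the-envelope quarter-bridge Wheatstone estimate. The gauge factor, excitation voltage, and strain value are illustrative assumptions.

```python
# Quarter-bridge Wheatstone estimate of strain-gauge output voltage.
# Values are illustrative assumptions for a metal-foil gauge.
GF = 2.0        # typical gauge factor for metal-foil strain gauges
V_EX = 3.3      # bridge excitation voltage, volts
strain = 1e-6   # ~0.0001% housing strain, per the text above

# Small-signal approximation for one active gauge in a four-resistor bridge.
v_out = V_EX * GF * strain / 4.0
print(f"bridge output ≈ {v_out * 1e6:.2f} µV")
```

An output on the order of a microvolt or two explains why any parasitic effect—temperature drift chief among them—competes directly with the signal.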

This table highlights the differences in force-sensing technologies. (Credit: Jessica Metcalfe)

The largest problem by far with strain-gauge solutions is temperature sensitivity. Since the resistors aren’t perfectly matched, small changes in temperature will also cause minute voltages to appear at the output, and thus false activation becomes a serious problem. Likewise, bending rejection of the resistive strain gauge is poor. If the phone is placed under bending load (for example, being in a user’s pocket or a backpack), it’s indistinguishable from the strain applied by a finger. Again, false activation is a serious problem here; a phone can’t be allowed to arbitrarily reset itself or change the volume by simply being in a pocket.

Inductive sensors utilize a resonant coil whose resonance point is determined by coupling to a nearby piece of metal (in this case, the housing of a smartphone). As the user presses on the housing, the minute gap between the coil and the housing wall changes, modulating the resonant point. Inductive sensors are far better in terms of temperature sensitivity than resistive gauges, but they remain subject to similar concerns around bending rejection. Furthermore, such sensors are difficult to manufacture and integrate, owing to sensitivity to the gap distance between the coil and the housing wall.
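The resonance shift can be sketched with the standard LC-tank formula. The component values and the assumed 1% inductance change under a press are illustrative, not taken from any real sensor.

```python
import math

# Resonant-frequency sketch for an inductive force sensor: pressing the
# housing narrows the coil-to-metal gap, which (via eddy-current coupling)
# reduces the effective inductance and shifts the tank resonance upward.
# Tank capacitance and inductance values are illustrative assumptions.
C = 100e-12  # tank capacitance, farads

def resonant_freq(L):
    """f = 1 / (2*pi*sqrt(L*C)) for a simple LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f_rest = resonant_freq(10e-6)    # coil at the nominal gap
f_press = resonant_freq(9.9e-6)  # assumed ~1% inductance drop under press
print(f"resonance shift ≈ {(f_press - f_rest) / 1e3:.1f} kHz")
```

A shift of tens of kilohertz on a multi-megahertz resonance is readily measurable, but the same math shows why the nominal gap must be tightly controlled in manufacturing: small assembly variations move the operating point just as the finger does.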

Ultrasonics represent the newest form of force sensing. These sensors utilize an ultrasound wave propagated in the housing. As a user touches the housing, the vibrational energy of the transmitted wave couples into the finger, so the receiver sees a modulation in the acoustic signal. Because this coupling is directly proportional to the applied force, it provides the mechanism for sensing force. Ultrasonic force sensors are far less sensitive to bending and temperature, and they’re easy to manufacture.

The difficulty here is that the force measurement is inferred from the ultrasonic coupling. If the user is wearing a glove, or there’s a protective case over the phone, the characteristics of the ultrasonic coupling are completely changed. Even the difference between wet hands and dry hands can be enough to alter the characteristics of ultrasonic force sensors. Sophisticated signal processing and tracking are required to correctly interpret the ultrasonic signal under changing touch conditions.
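One simple way to sketch such tracking—purely illustrative, not any vendor’s actual algorithm—is a slowly adapting baseline that absorbs gradual changes (gloves, cases, moisture) while a sudden amplitude drop relative to that baseline reads as finger coupling.

```python
# Minimal sketch of baseline tracking for an ultrasonic force channel.
# A slow moving average adapts to gradual environment changes, while a
# sudden amplitude drop below the baseline reads as a finger absorbing
# acoustic energy. Illustrative approach only; thresholds are assumptions.

def detect_presses(samples, alpha=0.02, drop_frac=0.15):
    baseline = samples[0]
    events = []
    for i, amp in enumerate(samples):
        if amp < baseline * (1.0 - drop_frac):
            events.append(i)  # sharp drop: treat as finger coupling
        else:
            # Slowly track the ambient level; frozen during a suspected
            # press so the press itself isn't absorbed into the baseline.
            baseline += alpha * (amp - baseline)
    return events

# Steady signal, then a press (amplitude drop), then recovery.
signal = [1.0] * 20 + [0.7] * 5 + [1.0] * 10
print(detect_presses(signal))  # → [20, 21, 22, 23, 24]
```

Freezing the baseline during a suspected press is the key design choice: without it, a long press would gradually be interpreted as the new normal and the event would vanish.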

For buttonless phones to be truly successful in the marketplace, they must provide the same quick, intuitive user experience afforded by physical buttons. That’s far more difficult than simply removing the buttons and replacing them with some form of sensor. Achieving the same reliability and haptic experience requires a multitude of new technologies and careful design of the sensor/haptic subsystem.

Samuel Sheng is Co-Founder and Chief Technology Officer of Sentons.
