The Amazon Bracelet: Is It Like Those Shock Collars Thingies?

December 24, 2020

A pair of Washington Post reviewers tell us exactly how they feel about the recent entry into the fitness-tracker market. Greenwich Time shares, “Amazon’s New Health Band Is the Most Invasive Tech We’ve Ever Tested.” Geoffrey A. Fowler and Heather Kelly write:

“Amazon has a new health-tracking bracelet with a microphone and an app that tells you everything that’s wrong with you. You haven’t exercised or slept enough, reports Amazon’s $65 Halo Band. Your body has too much fat, the Halo’s app shows in a 3-D rendering of your near-naked body. And even: Your tone of voice is ‘overbearing’ or ‘irritated,’ the Halo determines, after listening through its tiny microphone on your wrist. Hope our tone is clear here: We don’t need this kind of criticism from a computer. The Halo collects the most intimate information we’ve seen from a consumer health gadget – and makes the absolute least use of it. This wearable is much better at helping Amazon gather data than at helping you get healthy and happy.”

Yes, in addition to basics like heart rate, skin temperature, activity, and sleep, this late entry to the market collects information its rivals do not—body photos and voice recordings. Despite that, it offers surprisingly little in the way of personalized advice. Are its users simply paying for the privilege of feeding Amazon’s machine-learning databases? The reviewers also found that, compared to competitors, the device seems less accurate in its measurements. Furthermore, the band has no display—a companion phone app is the only way to receive feedback. It also scores one’s progress on what appears to be an arbitrary 150-point scale that did little to motivate these reviewers.

And what of that tone-of-voice functionality? Apparently having the AI divine one’s mood is supposed to help the user somehow, but the reviewers found it to be mostly a judgmental downer. Not only that, but like many algorithms, it may have a gender-bias problem. We’re told:

“The terms diverged when we filtered just for ones with negative connotations. In declining order of frequency, the Halo described Geoffrey’s tone as ‘sad,’ ‘opinionated,’ ‘stern,’ and ‘hesitant.’ Heather, on the other hand, got ‘dismissive,’ ‘stubborn,’ ‘stern’ and ‘condescending.’ She doesn’t dispute she might have sounded like that, especially while talking to her children. But some of the terms, including ‘overbearing’ and ‘opinionated,’ hit Heather differently than they might a male user. The very existence of a tone-policing AI that makes judgment calls in those terms feels sexist. Amazon has created an automated system that essentially says, ‘Hey sweetie, why don’t you smile more?’”

Perhaps Amazon should go back to the drawing board with this one. That is, if it is as interested in serving its customers as in feeding its algorithms.

Cynthia Murrell, December 24, 2020

