When Machine Learning Goes Off the Rails
Author(s) Babic, Boris; Cohen, I. Glenn; Evgeniou, Theodoros; Gerke, Sara
Publication year 2021
Publication name Harvard Business Review
Material type Periodical

Products and services that rely on machine learning—computer programs that constantly absorb new data and adapt their decisions in response—don’t always make ethical or accurate choices. Sometimes they cause investment losses, for instance, or biased hiring or car accidents. And as such offerings proliferate across markets, the companies creating them face major new risks. Executives need to understand and mitigate the technology’s potential downside. Machine learning can go wrong in a number of ways. Because the systems make decisions based on probabilities, some errors are always possible. Their environments may evolve in unanticipated ways, creating disconnects between the data they were trained with and the data they’re currently fed. And their complexity can make it hard to determine whether or why they made a mistake. A key question executives must answer is whether it’s better to allow smart offerings to continuously evolve or to “lock” their algorithms and periodically update them. In addition, every offering will need to be appropriately tested before and after rollout and regularly monitored to make sure it’s performing as intended.

Keyword(s) Business enterprises, Algorithms, Executives, Decision making in business, Machine learning
Language(s) English
ISSN 0017-8012
