The Global Hunt for the Next Decade’s Fastest-Growing Companies

IT’S A DISCLAIMER FAMILIAR TO ANY INVESTOR, but it’s perhaps truer now than ever: Past performance is no guarantee of future success. Technological change has accelerated the pace of competition: New challengers are rising faster than ever, and incumbent leaders are falling just as fast. At BCG, our research shows that for large companies, there is now less correlation than there used to be between past and future financial and competitive performance over multiple years.

The tools business leaders use, however, have not yet caught up to this reality. CEOs may pride themselves on being “forward-looking” (and the very best ones are called “visionaries”), yet the metrics commonly used to judge the state of a business (for example, profitability, revenue growth, and stock performance) are inherently retrospective. In other words, CEOs are looking into the proverbial rearview mirror when they really need binoculars.

That’s why last year BCG and Fortune created the Future 50. Our index is forward-looking, in the sense that it aims to measure vitality—a company’s capacity to reinvent its business and sustain revenue growth. Over long periods, the majority of shareholder returns of high performers are driven by such growth. However, consistently delivering growth is especially challenging for larger companies, which can no longer rely on startup-like momentum to sustain their performance. Our goal is to create a new tool for managers to measure and shape growth potential.

Signs of future strength

The Future 50 are the exceptions—the established public companies with the best long-term growth outlook. Our index is based on two pillars: a “top-down” market view of growth potential, and a “bottom-up” assessment of a firm’s capacity to deliver growth.

To assess capacity, we focused on four dimensions: strategy, technology and investments, people, and structure. We identified dozens of theories that predict long-term performance, based on research and academic study. Then we tested them, leveraging a wide range of financial and non-financial data. In the non-financial realm, we used natural language processing algorithms to parse companies’ annual reports and SEC filings, searching for indicators of a firm’s strategic thinking on dimensions such as long-term focus; a broader sense of purpose beyond financial returns; and “biological thinking”—for example, embracing complexity and being adaptive. Finally, we used a machine-learning model to test the predictive power of these factors, retaining only those with a demonstrated impact on long-term growth.

Vitality operates over long periods and may not be reflected in short-term performance. That said, early results are encouraging: Since their selection last October, the 2017 Future 50 have achieved average revenue growth of 18% and total shareholder returns of 35%—outperforming both the overall market and growth-focused stock indexes.

Global patterns of vitality

Our 2017 ranking assessed only U.S. companies. This year we expanded our scope to include the largest public companies worldwide. We found a bipolar landscape. The vast majority of this year’s Future 50 are headquartered in two countries: 42% each in Greater China (including Hong Kong) and the U.S.

This distribution may seem extreme, but it is in line with growth trends. Of the fastest-growing large companies over the five years through 2017, 54% were based in China, and 28% in the U.S. Growth-focused investment follows a similar pattern: In the first half of 2018, approximately 80% of venture capital funding went to those two countries, according to Crunchbase.

Vitality is also unevenly distributed by sector. In the U.S. and other developed markets, the vast majority of vital companies are tech players. But in China and other emerging markets, the picture is more varied, thanks in part to rising consumer demand from the growing middle class. While digital leaders such as Alibaba, Baidu, and Tencent rise to the top, there are also three Chinese automakers among our top 50, along with a consumer-oriented Indian bank and a Thai convenience-store operator.

With high potential, high risk

The very attributes that make high growth possible often also increase risk. One cautionary example: While last year’s Future 50 companies are outperforming in the aggregate, three—LendingClub, Gogo, and Macom Technology Solutions—have lost half their market value since publication.

Many high-growth companies are led by founder-CEOs, who face the challenge of preserving culture and momentum through leadership transitions. Tech giants face trust issues, as users become increasingly sensitive to the social and political implications of digital products and growing market power. And macro concerns, such as trade disputes, fears of a slowdown, and the impact of government influence on the economy, are more salient than ever. (Such concerns have weighed on the share prices of many companies on this year’s list, especially in China.)

To calibrate these risks, we have stratified our ranking. We classified four companies—including Samsung Biologics, Tesla, and Facebook—in a “higher uncertainty zone”: Though they score well in our vitality analysis, each faces circumstances that elevate the risk that its growth could derail.

The Future 50 can’t predict success with certainty, of course. Evolving markets, new competitors, and external forces always have the potential to disrupt trajectories. But we believe this index provides a useful set of binoculars through which to recognize growth potential in volatile times.


To identify the Future 50, BCG examined 1,100 publicly traded companies with at least $20 billion in market value or $10 billion in revenue in the 12 months through the end of 2017. A company’s final score represents its outperformance across the following metrics when compared with peers of a similar size.

50% of a company’s score is based on market potential—defined as its expected future growth as determined by financial markets. This is assessed by calculating the present value of its growth opportunities, which represents the proportion of its market value that is not attributable to the earnings stream from its existing business model.
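
The present value of growth opportunities can be illustrated with a simple back-of-the-envelope calculation: value the existing business as a perpetuity of current earnings, and treat whatever market value remains as the growth component. This is a minimal sketch with hypothetical figures and a simplified no-growth perpetuity, not BCG’s actual valuation model.

```python
# Illustrative PVGO (present value of growth opportunities) calculation.
# Assumes a simple no-growth perpetuity for the existing business;
# the discount rate and company figures below are hypothetical.

def pvgo_share(market_value: float, earnings: float, discount_rate: float) -> float:
    """Fraction of market value not explained by the current earnings stream."""
    steady_state_value = earnings / discount_rate  # existing business as a perpetuity
    return (market_value - steady_state_value) / market_value

# Example: $100B market cap, $4B earnings, 8% discount rate
share = pvgo_share(100e9, 4e9, 0.08)
print(f"{share:.0%} of market value reflects expected future growth")  # → 50%
```

On these assumptions, half the company’s market value is attributable to growth the market expects but the current business does not yet deliver.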

The other 50% is based on a company’s capacity to deliver against this potential. This score comprises 17 factors, selected for their ability to predict growth over the following five years. These factors fall into four categories:

Strategy: Our A.I. algorithm uses a Long Short-Term Memory neural network (a natural language processing model that incorporates word order and context) to detect strategic orientation from SEC filings and annual reports. It assesses a company’s long-term focus, commitment to a purpose beyond financial returns, and “biological thinking” (emphasizing, for example, adaptation, collaboration, and ecosystems). We also assess the clarity of a company’s strategy from earnings calls. Finally, we assess the company’s commitment to sustainability using its governance rating from Arabesque, a firm that specializes in ESG data and analytics.

Technology and Investments: A company’s capital expenditures and R&D (as a percentage of sales) measure its investment in the future. Technology advantage is assessed through the growth in a company’s patent portfolio and that portfolio’s digital intensity (share in computing and electronic communication). To account for external innovation, a company’s portfolio of startup investments is compared with the best-performing global venture capital funds.

People: We assess the age of a company’s executives and directors, as well as the share of managers and employees who are female. The value of consistent, focused management is assessed via leadership stability and smaller board size.

Structure: A company’s age and (revenue-based) size are correlated with vitality loss. But strong three-year and six-month sales growth can be predictive of future growth, as signs of revitalization.

A company’s final score represents its outperformance across these factors compared with peers of a similar size (above or below $50 billion in market value), accounting for the fact that growth-focused analysis inherently favors smaller companies. Companies in the energy, metals, and commodity chemicals sectors were excluded because their growth is highly dependent on exogenous commodity prices. Finally, among companies with a high vitality score, “higher-uncertainty” companies were identified from reported events that commentators believe could materially affect their long-term growth outlook.

Company profiles:
HEAD WRITERS Eamon Barrett, Matt Heimer
CONTRIBUTORS Scott DeCarlo, Ryan Derousseau, Grace Donnelly, Erika Fry, Robert Hackett, Adam Lashinsky, Sy Mukherjee, and Jonathan Vanian

Martin Reeves is a senior partner at management consulting firm BCG and the director of the BCG Henderson Institute.

A version of this article appears in the November 1, 2018 issue of Fortune with the headline “A Global Hunt for the Next Decade’s Champions.”

These New Tricks Can Outsmart Deepfake Videos—for Now

For weeks, computer scientist Siwei Lyu had watched his team’s deepfake videos with a gnawing sense of unease. Created by a machine learning algorithm, these falsified films showed celebrities doing things they’d never done. They felt eerie to him, and not just because he knew they’d been ginned up. “They don’t look right,” he recalls thinking, “but it’s very hard to pinpoint where that feeling comes from.”

Finally, one day, a childhood memory bubbled up into his brain. He, like many kids, had held staring contests with his open-eyed peers. “I always lost those games,” he says, “because when I watch their faces and they don’t blink, it makes me very uncomfortable.”

These lab-spun deepfakes, he realized, were needling him with the same discomfort: He was losing the staring contest with these film stars, who didn’t open and close their eyes at the rates typical of actual humans.

To find out why, Lyu, a professor at the University at Albany, and his team dug into every step in the software, called DeepFake, that had created them.

Deepfake programs pull in lots of images of a particular person—you, your ex-girlfriend, Kim Jong-un—to catch them at different angles, with different expressions, saying different words. The algorithms learn what this character looks like, and then synthesize that knowledge into a video showing that person doing something he or she never did. Make porn. Make Stephen Colbert spout words actually uttered by John Oliver. Provide a presidential meta-warning about fake videos.

These fakes, while convincing if you watch a few seconds on a phone screen, aren’t perfect (yet). They contain tells, like creepily ever-open eyes, from flaws in their creation process. In looking into DeepFake’s guts, Lyu realized that the images that the program learned from didn’t include many with closed eyes (after all, you wouldn’t keep a selfie where you were blinking, would you?). “This becomes a bias,” he says. The neural network doesn’t get blinking. Programs also might miss other “physiological signals intrinsic to human beings,” says Lyu’s paper on the phenomenon, such as breathing at a normal rate, or having a pulse. (Autonomic signs of constant existential distress are not listed.) While this research focused specifically on videos created with this particular software, it is a truth universally acknowledged that even a large set of snapshots might not adequately capture the physical human experience, and so any software trained on those images may be found lacking.

Lyu’s blinking revelation revealed a lot of fakes. But a few weeks after his team put a draft of their paper online, they got anonymous emails with links to deeply faked YouTube videos whose stars opened and closed their eyes more normally. The fake content creators had evolved.

Of course they had. As Lyu noted in a piece for The Conversation, “blinking can be added to deepfake videos by including face images with closed eyes or using video sequences for training.” Once you know what your tell is, avoiding it is “just” a technological problem. Which means deepfakes will likely become (or stay) an arms race between the creators and the detectors. But research like Lyu’s can at least make life harder for the fake-makers. “We are trying to raise the bar,” he says. “We want to make the process more difficult, more time-consuming.”

Because right now? It’s pretty easy. You download the software. You Google “Hillary Clinton.” You get tens of thousands of images. You funnel them into the deepfake pipeline. It metabolizes them, learns from them. And while it’s not totally self-sufficient, with a little help, it gestates and gives birth to something new, something sufficiently real.

“It is really blurry,” says Lyu. He doesn’t mean the images. “The line between what is true and what is false,” he clarifies.

That’s as concerning as it is unsurprising to anyone who’s been alive and on the internet lately. But it’s of particular concern to the military and intelligence communities. And that’s part of why Lyu’s research is funded, along with others’ work, by a Darpa program called MediFor—Media Forensics.

MediFor started in 2016 when the agency saw the fakery game leveling up. The project aims to create an automated system that looks at three levels of tells, fuses them, and comes up with an “integrity score” for an image or video. The first level involves searching for dirty digital fingerprints, like noise that’s characteristic of a particular camera model, or compression artifacts. The second level is physical: Maybe the lighting on someone’s face is wrong, or a reflection isn’t the way it should be given where the lamp is. Lastly, they get down to the “semantic level”: comparing the media to things they know are true. So if, say, a video of a soccer game claims to come from Central Park at 2 pm on Tuesday, October 9, 2018, does the state of the sky match the archival weather report? Stack all those levels, and voila: integrity score. By the end of MediFor, Darpa hopes to have prototype systems it can test at scale.
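
The stacking step can be pictured as a weighted fusion of the three per-level scores. The detectors and weights below are placeholders, not Darpa’s actual design, but they show how separate digital, physical, and semantic checks could combine into a single integrity score.

```python
# Hedged sketch of fusing three levels of tells into one integrity score.
# The weights and example scores are illustrative assumptions.

def integrity_score(digital: float, physical: float, semantic: float,
                    weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Fuse per-level scores in [0, 1] into one overall score; higher = more trustworthy."""
    return sum(w * s for w, s in zip(weights, (digital, physical, semantic)))

# A clip with clean digital fingerprints but inconsistent lighting and a
# sky that contradicts the archival weather report scores low overall:
print(round(integrity_score(digital=0.9, physical=0.3, semantic=0.2), 2))  # → 0.51
```

In a real system, each input would itself be produced by a battery of detectors rather than supplied by hand.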

But the clock is ticking (or is that just a repetitive sound generated by an AI trained on timekeeping data?). “What you might see in a few years’ time is things like fabrication of events,” says Darpa program manager Matt Turek. “Not just a single image or video that’s manipulated but a set of images or videos that are trying to convey a consistent message.”

Over at Los Alamos National Lab, cyber scientist Juston Moore’s visions of potential futures are a little more vivid. Like this one: Tell an algorithm you want a picture of Moore robbing a drugstore; implant it in that establishment’s security footage; send him to jail. In other words, he’s worried that if evidentiary standards don’t (or can’t) evolve with the fabricated times, people could easily be framed. And if courts don’t think they can rely on visual data, they might also throw out legitimate evidence.

Taken to its logical conclusion, that could mean our pictures end up worth zero words. “It could be that you don’t trust any photographic evidence anymore,” he says, “which is not a world I want to live in.”

That world isn’t totally implausible. And the problem, says Moore, goes far beyond swapping one visage for another. “The algorithms can create images of faces that don’t belong to real people, and they can translate images in strange ways, such as turning a horse into a zebra,” says Moore. They can “imagine away” parts of pictures, and delete foreground objects from videos.

Maybe we can’t combat fakes as fast as people can make better ones. But maybe we can, and that possibility motivates Moore’s team’s digital forensics research. Los Alamos’s program—which combines expertise from its cyber systems, information systems, and theoretical biology and biophysics departments—is younger than Darpa’s, just about a year old. One approach focuses on “compressibility,” or times when there’s not as much information in an image as there seems to be. “Basically we start with the idea that all of these AI generators of images have a limited set of things they can generate,” Moore says. “So even if an image looks really complex to you or me just looking at it, there’s some pretty repeatable structure.” When pixels are recycled, it means there’s not as much there there.
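
A crude way to probe that idea: repeatable structure compresses far better than natural-image noise, so the ratio of compressed to raw size hints at how much information is really there. The byte patterns below are synthetic stand-ins, and this zlib probe only illustrates the spirit of the approach, not Los Alamos’s actual method.

```python
# Minimal compressibility probe: lower ratio = more repeatable structure.
# Synthetic byte strings stand in for real pixel data.
import os
import zlib

def compression_ratio(pixel_bytes: bytes) -> float:
    """Compressed size relative to raw size."""
    return len(zlib.compress(pixel_bytes, level=9)) / len(pixel_bytes)

repetitive = bytes([10, 20, 30]) * 10_000  # stand-in for "recycled" pixels
noisy = os.urandom(30_000)                 # stand-in for natural-image noise

print(compression_ratio(repetitive) < compression_ratio(noisy))  # → True
```

An image that compresses suspiciously well, relative to how complex it looks, carries less information than it seems to.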

They’re also using sparse coding algorithms to play a kind of matching game. Say you have two collections: a bunch of real pictures, and a bunch of made-up representations from a particular AI. The algorithm pores over them, building up what Moore calls “a dictionary of visual elements,” namely what the fictional pics have in common with each other and what the nonfictional shots uniquely share. If Moore’s friend retweets a picture of Obama, and Moore thinks maybe it’s from that AI, he can run it through the program to see which of the two dictionaries—the real or the fake—best defines it.
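
The matching game can be sketched as reconstructing a query image from each dictionary’s elements and seeing which fits better. Real sparse coding adds a sparsity penalty; this least-squares stand-in, with entirely synthetic data, only illustrates the comparison.

```python
# Toy dictionary-matching game: reconstruct a query vector from each
# dictionary's atoms; the dictionary with lower reconstruction error "claims"
# the image. All dictionaries and data here are synthetic assumptions.
import numpy as np

def reconstruction_error(dictionary: np.ndarray, query: np.ndarray) -> float:
    """How poorly the dictionary's columns explain the query vector."""
    coeffs, *_ = np.linalg.lstsq(dictionary, query, rcond=None)
    return float(np.linalg.norm(dictionary @ coeffs - query))

rng = np.random.default_rng(0)
real_dict = rng.normal(size=(64, 8))    # visual elements real photos share
fake_dict = rng.normal(size=(64, 8))    # visual elements one AI's fakes share

query = fake_dict @ rng.normal(size=8)  # a "picture" built from fake atoms

verdict = ("fake" if reconstruction_error(fake_dict, query)
           < reconstruction_error(real_dict, query) else "real")
print(verdict)  # → fake
```

A suspect retweeted picture would be run through both dictionaries the same way, and the better-fitting one delivers the verdict.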

Los Alamos, which has one of the world’s most powerful supercomputers, isn’t pouring resources into this program just because someone might want to frame Moore for a robbery. The lab’s mission is “to solve national security challenges through scientific excellence.” And its core focus is nuclear security—making sure bombs don’t explode when they’re not supposed to, and do when they are (please no), and aiding in nonproliferation. That all requires general expertise in machine learning, because it helps with, as Moore says, “making powerful inferences from small datasets.”

But beyond that, places like Los Alamos need to be able to believe—or, to be more realistic, to know when not to believe—their eyes. Because what if you see satellite images of a country mobilizing or testing nuclear weapons? What if someone synthesized sensor measurements?

That’s a scary future, one that work like Moore’s and Lyu’s will ideally circumvent. But in that lost-cause world, seeing is not believing, and seemingly concrete measurements are mere creations. Anything digital is in doubt.

But maybe “in doubt” is the wrong phrase. Many people will take fakes at face value (remember that picture of a shark in Houston?), especially if their content meshes with what they already think. “People will believe whatever they’re inclined to believe,” says Moore.

That’s likely more true in the casual news-consuming public than in the national security sphere. And to help halt the spread of misinformation among us dopes, Darpa is open to future partnerships with social media platforms, to help users determine that that video of Kim Jong-un doing the macarena has low integrity. Social media can also, Turek points out, spread a story debunking a given video as quickly as it spreads the video itself.

Will it, though? Debunking is complicated (though not as ineffective as the lore suggests). And people have to actually engage with the facts before they can change their minds about the fictions.

But even if no one could change the masses’ minds about a video’s veracity, it’s important that the people making political and legal decisions—about who’s moving missiles or murdering someone—try to machine a way to tell the difference between waking reality and an AI dream.
