Decision Making in the Age of Artificial Intelligence

*spoilers for "The Dark Forest" below*

The aliens know everything. And they're not friendly. 

    This is the dilemma facing humanity at the start of Cixin Liu's "The Dark Forest," the sequel to the Hugo Award-winning "The Three-Body Problem." Omniscient aliens are en route to Earth, due to arrive in several hundred years. In the meantime, they have forestalled any further scientific progress and can monitor all written and spoken communication, by anyone, anywhere, at any time. The only place safe from their surveillance is one's own thoughts. Thus, humanity conceives of a strategic gambit. Four "Wallfacers" are selected and given access to immense resources. Each, separately, is charged with conceiving of and putting into operation a plan to defeat the aliens. That plan, however, must exist only inside the Wallfacer's head; any external aspect of it must be wrapped in guile, because if exposed, the true plan can be defeated. Therefore any command a Wallfacer gives is obeyed without question, even if it seems entirely nonsensical or outright wrong; it is assumed that any irregularities are "part of the plan," meant to conceal the Wallfacer's true purpose.

    While I would love to rhapsodize about this excellent book, that is not what this blog post is about. Instead, the Wallfacers are a stand-in for another entity that may soon be doing strategic decision-making: artificial intelligence algorithms. Like the Wallfacers' plans, the reasoning behind these systems' recommendations may be inscrutable. However, if they are, on average, the best decision-makers, then the logical choice may be to follow their direction, despite the lack of auditability.

    In recent years, Artificial Intelligence (AI) has become increasingly important in the realm of strategic decision-making. AI-based solutions are already being used to optimize operational processes and develop actionable insights, and now AI is beginning to drive strategic direction.

    DeepMind, Google's AI division, has been at the forefront of this trend, using reinforcement learning (RL) algorithms to create agents that can make strategic decisions. RL algorithms learn through trial and error: the agent receives feedback from its environment and uses it to determine the best course of action. DeepMind has employed this approach to build agents that play classic board games such as Go, chess, and shogi, as well as complex video games such as StarCraft II (OpenAI used similar techniques to master Dota 2).
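To make the trial-and-error loop concrete, here is a minimal tabular Q-learning sketch on a toy five-state corridor. This is an illustrative example of the RL idea described above, not code from any DeepMind system; all names and hyperparameters are my own assumptions.

```python
import random

# Toy environment: states 0..4 in a corridor; state 4 is the goal.
# The agent is rewarded (+1) only upon reaching the goal.
N_STATES = 5
ACTIONS = [-1, +1]               # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Value estimates for every (state, action) pair, initially zero.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move, clip to the corridor, reward at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Trial and error: explore occasionally, otherwise exploit estimates.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Feedback from the environment updates the value estimate.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The same update rule, scaled up with neural networks in place of the Q-table, is the core of the deep RL systems discussed here.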

    In the AlphaGo documentary, the best Go players in the world were stunned when AlphaGo began playing. Its moves were so unconventional and creative that the human players struggled to understand them. AlphaGo had learned the game from a large corpus of human matches and then refined its play through self-play reinforcement learning. This was the first time an AI outperformed the best human players at Go, and because it learned from its own experience rather than from human convention, it produced moves that were not only effective but creative in ways no human player had seen before.

    DeepMind's RL work has also been used to develop AI agents for a variety of business applications, such as energy optimization and supply chain management. Through RL, these agents can weigh a wide range of factors, including customer preferences, market trends, and economic changes, to make more informed decisions: which products to launch, where to invest resources, and when to enter new markets.

    When it is deemed profitable to do so, these decisions, which affect millions of lives and billions of dollars, will be turned over to machines, whether or not the models are interpretable. After all, this is capitalism: if a "black box" AI can run your business better than a CEO, you'd better hand over the reins, or someone else will, and then they will outcompete you.

    This idea becomes even more concerning when you consider other areas that require strategic decision-making, such as military operations. Like business, militaries are driven by competition. Often, the only thing preventing an enemy from using a capability is the fact that you would use it right back. Nuclear deterrence is the prime example of this. But what if one military can establish primacy by turning over decision-making to Artificial Intelligence? Would others be forced to follow suit? And if this "game" has a first-mover advantage, is there a "race" to hand over decision-making to algorithms that we may not fully understand?

    Another interesting question is whether, for the average soldier, any of this makes a difference. At the end of the day, they aren't privy to the private counsels and consultations of generals or politicians; they must simply have a degree of faith that life-and-death decisions are being treated with the care they deserve, along with the confidence and will to obey orders. In fact, the average soldier should hope that the best decision-maker is in charge, period, whether that decision-maker is a human, a machine, or some combination thereof.
    
    At the end of the day, everyone is a bit of a "black box." We can't know the entire confluence of values, experiences, and conjecture that goes into any given decision, just as we can't realistically derive meaning from a map of all the weights in a large neural network. We can only subject decisions, and the agents that make them, to a degree of scrutiny, and if they meet that bar, have a little faith.
