Attention Conservation Notice: Over 2000 words, written for an intended audience of non-economists, describing my thesis, which is supposed to be about Inequality, which you probably care about, but in fact is mostly about algorithms, which maybe only some of you care about. Most of the length is a discussion of rational expectations models which will be old news to economists and slightly bizarre to those who aren’t.
Given that I am currently in the midst of completing graduate school and making the transition to other (potentially better) things, I thought it would be a good time to talk about what I’ve been doing with the past, say, 5 to 7 years of my life. Generally speaking, the process of writing a PhD thesis, especially in Economics, involves finding a Big Important Topic, then focusing on some specific aspect of that topic and reducing it to a manageable, well-defined, technical question that can be answered, with some ingenuity and hard work, by a single student over a reasonable time frame, with the hope of adding some small bit to human understanding of the Big Important Topic.
For me, the ‘Big Important Topic’ was Inequality. Some people are poor and some people rich. Why? How much? Who are the rich and the poor and where do they live? How has it changed over time, and when, and how will it change in the future? What can we do about it? What should we do about it? What does it mean for society, politics, the world? These are issues people care about, passionately, as evidenced by recent political movements and the astounding popular impact of work by Piketty and others that attempts to make some progress in documenting and understanding recent and historical changes in inequality. In my thesis, I provide the answer to, approximately, none of these questions.
So, hold up, why not? What’s the problem? Well, the main issue is that figuring out what’s going on is hard. Although the profession is making rapid progress in documenting inequality, and there is some admirable work beginning to untangle various aspects of the what, how, and why, this latter area is still far from consensus. I was pleased to see this morning a Paul Krugman post acknowledging how little is still known about modeling the distribution of income and how and why it is changing, since this is precisely the shortcoming I’m trying to contribute, in some small way, to solving. How do we look at the historical relationships between all the things that go on in a society and economy (Krugman gives as an example Piketty’s theory of ‘r-g and all that’) and the income distribution, figure out what causes what, and predict the relationship in the future?
There are various ways to go about looking at this question. One traditional way is to gather data on the income and wealth over time of (some representative sample of) individuals in the economy and come up with some descriptive model of how these are evolving and how people behave, then aggregate up to see what this implies for the distributions, maybe tweaking the model until the shape and behavior of the distribution and the behavior at the individual level match what we observe in reality. This approach is quite old, and arguably has taught us a lot about what kinds of patterns of behavior are needed to explain the shape of inequality, as documented going back at least to Pareto. In particular, the contributions of, say, Mandelbrot (1961), describing the dynamic origins of thick tails in the distribution of income, hold up reasonably well as an approximate descriptive theory of the origins of upper tail inequality (i.e. the prevalence of the super-rich), though more precise modern measurements yield a slightly more nuanced picture, with less stylized and homogeneous descriptions of individual behavior. Will this approach, suitably augmented with more precise modern data, yield the secrets of how we got here, where we’re going, and what we can do about it? I’ll offer a resolute ‘maybe.’
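For the curious, here is a minimal sketch of the kind of mechanism involved (mine, not Mandelbrot’s, with made-up parameter values): income grows by random multiplicative shocks, with a floor below which it cannot fall, and the result is a Pareto-like thick upper tail.

```python
import numpy as np

# Illustrative sketch only: random multiplicative income growth with a
# reflecting lower barrier, a textbook mechanism for Pareto (power-law)
# upper tails. Parameter values are made up for illustration.
rng = np.random.default_rng(0)

n_people, n_years = 100_000, 500
income = np.ones(n_people)      # everyone starts at the floor
floor = 1.0                     # minimum income (reflecting barrier)

for _ in range(n_years):
    growth = rng.lognormal(mean=-0.02, sigma=0.15, size=n_people)
    income = np.maximum(income * growth, floor)

# Rough Hill estimate of the upper-tail exponent from the top 1,000 incomes;
# a straight line on a log-log survival plot would tell the same story.
top = np.sort(income)[-1000:]
alpha_hat = 1.0 / np.mean(np.log(top / top[0]))
print(f"estimated Pareto tail exponent: {alpha_hat:.2f}")
```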
To see what might be missing, I need to take a detour through, essentially, science fiction. A serious issue in the history of economic thought is what I like to call the ‘Foundation’ problem, after the work of science fiction writer Isaac Asimov, in which he envisioned future social scientists so advanced that they can accurately predict all the major events that will happen in a society decades or centuries into the future (clearly these were farfetched works of speculative fiction). A major problem he envisioned in this work was that if you have such a theory, and people know this, it will affect their behavior, causing them to change how they act and invalidating the theory. In the books, he solves the issue by having social scientists keep their work secret, but this isn’t the only approach.
Around the 1960s or so, some economists, worried about writing down self-invalidating theories, began to try another approach: write down a theory which contains itself, which can predict its own impact on society. This is not actually as hard as it sounds, or at least not usually. If you understand the computer science concept of recursion, or the mathematical concept of an eigenvector[1] (or, more broadly, a fixed point), you already have the key idea: these theories are a fairly direct application of those techniques. Essentially, what models of this type, which have acquired the slightly misleading name of “rational expectations,” do is this: instead of including a direct description of the behavior of each person, determined from data, they describe what the person wants, what the person is capable of doing, and what they know, including a complete description of the model itself, and then work out what they will decide to do, given the environment around them, which is determined in part by what everyone else is deciding to do. People’s desires can then be inferred (in part) by looking at what they decide to do in different situations, and used to predict what they might do when faced with situations they haven’t yet experienced.
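To make that concrete, here is a toy illustration of the fixed-point logic in code (my own, entirely separate from the thesis): each person’s choice depends on what they expect the aggregate to be, and a rational expectations outcome is a belief about the aggregate that reproduces itself when everyone acts on it.

```python
import numpy as np

# Toy rational expectations fixed point (illustrative only, not the thesis).
# Each agent's action depends on the aggregate they expect; an equilibrium
# is a belief about the aggregate that is confirmed when everyone acts on it.
rng = np.random.default_rng(1)
theta = rng.normal(1.0, 0.3, size=1_000)  # heterogeneous individual traits
beta = 0.6                                # weight placed on the expected aggregate

def best_response(expected_aggregate):
    """Each agent's choice given a shared belief about the aggregate."""
    return (1 - beta) * theta + beta * expected_aggregate

belief = 0.0
for _ in range(100):
    implied = best_response(belief).mean()  # aggregate implied by the belief
    if abs(implied - belief) < 1e-10:       # belief reproduces itself: done
        break
    belief = implied

print(f"self-consistent aggregate: {belief:.4f}")  # converges to theta.mean()
```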
In this way, economists hope to predict what happens when new policies are implemented or unprecedented situations arise, with predictions that don’t necessarily rely on people systematically ignoring models which tell them what might happen and so how to avoid unfavorable outcomes. For example, say you have a person driving a car in a straight line along a highway. A crude behavioral model might observe this and presume that the car will always keep moving forward. But if there is a cliff straight ahead, a reasonable guess is that the driver will turn or stop before the cliff rather than continue to drive straight through and fall off. In practice, the earliest models of this type were quite crude, predicting that people would always avoid any preventable bad outcomes and so policy could not systematically make people better off. If you see a car going off a cliff, maybe the driver was Evel Knievel and just loves driving over cliffs, or maybe they were suicidal and really wanted to fall down a ravine, but either way there’s no real point in intervening because the driver is getting just what they wanted. But of course sometimes an economy does fall off a proverbial cliff, and based on the screams on the way down it certainly doesn’t seem like the driver is getting what they want. More sophisticated models incorporated a variety of possible impediments to people achieving the outcomes they want, such as ignorance, inability to cooperate, and so on. Sometimes the driver’s windshield is fogged up and they don’t see the upcoming obstacle, or the brakes are out, or they don’t have the reaction time to swerve, and even if they’re driving carefully, accidents can happen; these ‘frictions’ are all more plausible guesses than that the driver was just going to keep going straight no matter what.
This shift in focus, from crude behavioral rules to an investigation of the decision-making process and the associated ‘frictions’ as sources of unfavorable outcomes, led to widespread popularity of the rational expectations paradigm among academics across the ideological spectrum, and over time it became standard to incorporate in essentially any economic model. This was not without some dissent, since a case can be made that sometimes people really do just have no clue what they’re doing and behave according to simple rules of thumb. My personal read of this literature is that the criticism is entirely fair as a description of a wide range of human behavior, but that some people, sometimes, make incredibly sophisticated decisions, and the tools of rational expectations modeling can help describe, at least approximately, these behaviors, which may be extremely hard to extrapolate from simple rules observed in lab experiments. Maybe more pertinently, it can also highlight the cases when extrapolating the simple rules leads to really terrible decisions, in which case one might at least suspect that some subset of people will figure that out and decide to do something different. In any case, whether for good or bad reasons, rational expectations modeling appears to be de rigueur in many parts of the economics profession, and it does appear to be the main tool used to ensure that our models not only match observed behavior, but also yield reasonable guesses about what might happen in new situations and under new policies.
So, what does this have to do with inequality? Well, the tools used to figure out how an economy behaves under rational expectations are often rather computationally intensive. In fact, this is so much the case that it is often difficult or infeasible to solve such a model unless everybody behaves exactly the same way, or maybe in no more than 2 or 3 different ways. Many models are in fact solved under the assumption of a single ‘representative agent’ to make the problem feasible. This had the side effect, beyond a lack of realism, of making inequality completely disappear from macroeconomic models. If you wanted to study inequality and the macroeconomy, you had to abandon the rational expectations paradigm, simplify the model drastically (possibly by requiring inequality not to change over time), find some impossibly clever special case in which the computational difficulties don’t arise, or settle on some combination of these three. Much high quality work has been done in each of these cases, and economists interested in inequality certainly have put in substantial effort to adapt and learn within the limitations of each approach.
To give an idea of the difficulty of studying inequality in a rational expectations setting, the problem has recently attracted the attention of Fields Medalist Pierre-Louis Lions, who has built a framework for describing the evolution of inequality under rational expectations using coupled systems of forward and backward partial differential equations. While these results provide a general framework for describing the problem, so far they have been confined to the deterministic case, in which the unpredictable random fluctuations that match the rough patterns observed over time in actual economic data are completely absent. As a result, the framework can only yield qualitative rather than quantitative predictions about how inequality will change, and so is unsuitable for modeling the detailed statistical data on inequality now being gathered by empirical economists.
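For readers who like to see the equations, the generic flavor of these systems (this is the standard Lasry-Lions mean field game setup written from memory, not the specific equations in my thesis) couples a backward Hamilton-Jacobi-Bellman equation for an individual’s value function v with a forward Kolmogorov equation for the distribution m:

```latex
% Sketch of a mean field game system: v is an individual's value function,
% m the cross-sectional distribution, H a Hamiltonian from the individual
% decision problem, F the way the distribution feeds back into payoffs,
% and \nu the amount of idiosyncratic noise.
\begin{aligned}
-\partial_t v - \nu\,\Delta v + H(x,\nabla v) &= F(x, m),
  & v(T,x) &= v_T(x), \\
\partial_t m - \nu\,\Delta m
  - \operatorname{div}\!\left(m\,\nabla_p H(x,\nabla v)\right) &= 0,
  & m(0,x) &= m_0(x).
\end{aligned}
```

The value function is pinned down at the end of time and solved backward; the distribution is pinned down at the start and solved forward; the two equations feed into each other, which is exactly what makes the system hard.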
Finally, we get to my contribution. More or less, I came up with a way to solve models with inequality, rational expectations, and random shocks in a computationally feasible way. Well, I came up with an algorithm to do so approximately, though with a provably accurate and precise approximation. In practice, this involves taking the description of the system and solving a special sort of eigenproblem, like an eigenvector problem, but over the functions representing the income distribution rather than finite vectors.[2] If this sounds familiar to you, you probably took a course in PDEs in college, or work in some field that uses applied PDEs, and that is essentially where I found the tools to construct a solution. The hope is that this method can then be used to incorporate inequality into all sorts of economic models, with variation in all sorts of economic variables, to compare to past variation and potential future outcomes, and to evaluate different sources and effects of inequality quantitatively, in a framework compatible with economists’ standard tools for policy analysis. The dissertation also includes some very preliminary attempts to get started on this agenda, but the main contribution is the algorithm, which I hope will be of use to other economists who want to develop and test out their own theories of inequality, and make their own contributions to understanding this Big Important Topic.
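To give a flavor of what ‘an eigenproblem over functions’ means in practice (a generic illustration, not the actual algorithm from the dissertation): once you approximate the distribution on a grid, an operator on functions becomes an ordinary matrix, and the eigenfunction you are after becomes an ordinary eigenvector. For example, the long-run income distribution implied by some hypothetical income dynamics is the eigenvector, with eigenvalue one, of the transpose of the discretized transition operator.

```python
import numpy as np

# Generic illustration of a functional eigenproblem (not the thesis algorithm):
# approximate an operator on income distributions by a matrix on a grid, so
# the eigenfunction becomes an ordinary eigenvector.
grid = np.linspace(0.0, 10.0, 200)  # income grid (hypothetical units)

def transition_density(y_next, y):
    """Hypothetical income dynamics: mean reversion toward 3, Gaussian noise."""
    mean = y + 0.2 * (3.0 - y)
    return np.exp(-0.5 * ((y_next - mean) / 0.5) ** 2)

# Discretized transition operator: P[i, j] is proportional to the chance of
# moving from income grid[i] to income grid[j].
P = np.array([[transition_density(yj, yi) for yj in grid] for yi in grid])
P /= P.sum(axis=1, keepdims=True)  # normalize each row

# The stationary distribution is the eigenvector of P-transpose with
# eigenvalue one (the dominant eigenvalue of a stochastic matrix).
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmax(eigvals.real)])
stationary = np.abs(stationary) / np.abs(stationary).sum()

print("mean income under the stationary distribution:", float(stationary @ grid))
```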
[1] Here is a nice intuitive discussion explaining recursion, circular logic, and eigenvectors, with some silly applications to the completely unserious topic of moral philosophy.
[2] In fact, the results are significantly more general, and apply to any rational expectations model with some distribution changing over time, whether that’s income or wealth or something completely different like individual decisions or beliefs or locations or what have you. More generally, any functions can be used, so differences across firms or assets or cities or social network nodes can all be examined with the algorithm.