AI-Driven Job Disruption in a Constrained Economy

In an oversimplified model of economic growth, productivity improvements allow more tasks to be done better with fewer inputs. Some of the inputs that get replaced are people, who need to find a job doing something else. Many of these workers never find a job that pays as highly as their original work, but society has always adjusted. Today, through applications of deep learning and other algorithms, computers are looking like they will be proficient at many more tasks. To some people, the question of what people will do when an AI takes their job is alarming. But other than the typical issues of displaced older workers and some increased difficulty in moving to higher productivity areas due to high housing costs, there wouldn't be much to worry about if the sectors with increasing demand weren't protected from new entrants.

A significant number of people are worried that computers and robots are going to take most of the jobs, and that there will be nothing for normal people to do. Some go so far as to believe that putting these workers on a universal basic income plan is the only rational thing to do.

There are a few reasons given for why humans might not find a job:

1. A combination of AI and robots will be doing what humans used to do.

2. There are not enough activities that these workers are willing to do which might promote the welfare of customers who can afford to pay.

3. The workers who lose one job will be zero-marginal productivity workers at other jobs.

Most UBI proponents accept some combination of all three of these explanations. Each contains a grain of truth, but each also faces strong counterarguments.

1. Robots are expensive and generally constrained to industrial and repeatable tasks. We are far from removing humans from the loop. It is still difficult to get video conferencing and printers to work consistently without IT assistance.

2. Many who make over $200k run their household at a deficit, which suggests that $60k GDP per capita economies are nowhere near conquering economic scarcity. (Given the existence of status games that require money, it might be that economic scarcity will never be truly conquerable)

3. Robots are expensive, and precision robots that can multitask are even more so. A moderately skilled person who can be guided by a computer will be the most viable choice in many situations for decades to come.

Amusingly, Bill Gates is among the AI worriers. Perhaps he is thinking back to his role in bringing spreadsheet software to market in the '80s and '90s. The proliferation of this software really did destroy jobs: the number of bookkeepers and accounting and auditing clerks in the US economy fell drastically. However, they were replaced quite quickly by more accountants, auditors, management analysts and financial managers. Making one part of the economy more efficient creates new possibilities that were cost prohibitive before new technology drove down costs.

It is tempting to say that AI worriers have it all wrong. That automated trucks will lead to many more jobs for people supervising truck convoys and working at places that become economically feasible once the movement of goods becomes much cheaper. And maybe drastically cheaper transportation will create even more specialized retail experiences. It is more fun to point out to the biggest worriers that they have witnessed the rise of a whole category of jobs under their very noses: that of the influencer and their various enablers.

But the Pollyannish approach downplays some realities. Productivity improvements leave some people behind. The historical Luddites didn't find better jobs. And when society has gotten richer, the types of new services demanded have changed. Richer societies aren't buying that many more clothes: personal consumption expenditures on clothing were near 10% of US GDP in 1929, while today they are closer to 2%. In place of sectors that have gotten more productive, we see consumers wanting to spend more on things like education, healthcare and housing.

It will not be simple to increase economic output in these areas. Each of these sectors has numerous artificial barriers to entry, especially relative to the high quality of services that advancing technology could enable in a competitive market.

The main barrier to entry in most non-housing sectors is occupational licensing, which has essentially replaced union job protection in the private sector. In the 1950s, about 35% of private sector workers were in unions. Today, that number is closer to 7%. In their place, occupational licensing protections now cover over 30% of the workforce, up from 5% in the 1950s. And their impact will retard productivity much more than private sector unions in the US ever did. Union workers may have destroyed Detroit's auto dominance, as the unions' strict rules slowed process improvements and increased labor costs at the Big Three. But occupational licensing rules are not just making a few companies uncompetitive, they are making entire sectors less competitive.

Occupational licensing sometimes requires periods of apprenticeship under existing professionals. Often, a prerequisite to getting the license is participating in our inexplicably expensive and slow education system. For some reason, Michigan requires security guards to have three years of education and training, while most other states tend to require 11 days or less. Healthcare is one of the more extreme examples. The typical physician has at least eight years of post-secondary education. The students with the best test scores will often go into practices such as dermatology and radiology, where the hazing of residency is minimal and the post-graduation lifestyle and pay are highest. Doing well on tests doesn't prepare these students for thinking about how soon basic machine learning models will do their jobs better than they can.

When thinking about how to meet healthcare needs, the question of increasing supply is usually ignored in favor of increasing spending across the board. In the rare cases when supply is addressed, the focus is usually on increasing enrollment in medical schools and residency programs (which is good!) or allowing underutilized doctors to help people outside their market via telemedicine (also good!), but not increasing the scope of what a nurse or technician guided by today’s more advanced computers (and soon, augmented reality) is allowed to accomplish without additional oversight from doctors.

We are entering a world where computers will be able to help a random conscientious person do 80% of the job of various experts. In this type of environment, the pressure to retain occupational licensing requirements will become strongest just as the need for restrictions is disappearing. This misguided response will be the one thing that keeps society poor, as healthcare and education will continue to take a larger share of the household budget in the same way that housing does for people who live around centers like San Francisco and NYC, where the supply of housing is artificially constrained.

When I get Lunch, Part II: The Tasty-Unhealthy Scale

This is the follow-up to When I Get Lunch.

In my more formative years, the show How I Met Your Mother was an amazing sitcom. When the show was still in the phase where it was, more often than not, ridiculously funny, Barney introduced us to the Crazy-Hot scale in season 3, episode 5 (I had to look that up).

The basic idea is that there are sometimes direct tradeoffs between two variables, in this case mental stability and attractiveness in the dating market. If someone is above the line, it means that along these two dimensions, their relevant positive trait outweighs their negative trait. The higher one variable is, the higher the other must be to make someone a viable option.

Having been happily absent from the dating market for many years, I have taken to applying this tradeoff when considering food. The first relevant variable is how the food tastes. Like attractiveness, the tastiness of any given food item is subjective, though with considerable consensus around different food types. The directly relevant tradeoff for tastiness is healthiness. There is considerable debate and disagreement about what people consider healthy. Judging by its popularity in supermarkets, many consumers still look for “fat free”, while that is a label that tells me to look elsewhere. Either way, it’s far too often the case that things which taste good are unhealthy for us, especially when we consider the quantity of unhealthy food that we are tempted to consume.
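The line itself can be sketched as a simple threshold rule. Here is a toy version, where the 0-10 scales, the example scores, and the personal `slope` parameter are all invented for illustration:

```python
def above_line(tastiness, unhealthiness, slope=1.0):
    """True if a food's tastiness justifies its unhealthiness.

    Both scores are on a subjective 0-10 scale; `slope` encodes how much
    extra tastiness you demand per unit of unhealthiness.
    """
    return tastiness > slope * unhealthiness

# Salmon sashimi: very tasty and quite healthy, comfortably above the line.
print(above_line(8, 2))             # True
# Cake (for me): tasty, but too unhealthy to justify.
print(above_line(6, 8))             # False
# Going on a low carb diet raises the slope, pushing pizza below the line.
print(above_line(7, 5, slope=2.0))  # False
```

The `slope` is what makes the framework personal: raising it is how a diet, a pregnancy, or a new health concern shifts the line without changing the underlying scores.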

The basic chart looks as follows:

Here, salmon sashimi is comfortably above the line while, in my case, cake is comfortably below it. While it would be nice to standardize this metric, it’s going to vary unavoidably from person to person. My taste bias is generally towards spicy food and East Asian cuisine. My health concerns are generally centered around avoiding excessive carbs and sugar.

The line is not always stable. If a woman becomes pregnant, the salmon sashimi that seemed like a great way to get protein and omega-3 now looks like a high-risk food item, where even a small chance of listeria or parasites is too much of a risk regardless of how tasty it is. Someone who loves pizza but goes on a low carb diet will start seeing pizza as lying on the opposite side of the line compared to when they were not on a diet.

The fictional Barney Stinson named his trade-off line the “Vicky Mendoza line” after his fictional ex-girlfriend, whose cosmetic and behavioral changes caused her to fluctuate between the zones.

The closest food analogy might be the egg yolk.

When I was young, scrambled or hardboiled eggs were served in the morning. These eggs were not particularly tasty, but they were seen as relatively healthy. Then doctors became more concerned with the cholesterol in egg yolk. Eventually, I discovered the wonder that is soft boiled eggs, particularly in ramen noodles (ramen is probably somewhere in the upper right of my chart; if I don’t eat most of the noodles I can pretend it is on the upper side of the line). And scientists have started to come to the conclusion that identifying bad things in the body and tying them directly to things that are eaten, like cholesterol, might be oversimplifying our body’s very complicated mechanisms. Dietary cholesterol from eggs is generally not thought to be as bad for the general population as it was twenty years ago.

With Thanksgiving coming up, it is worth thinking about why America’s feast meal traditionally involves some of America’s most mediocre food. My relatives are generally great cooks, but Thanksgiving is a category where almost every single dish is either on the egg-yolk line or below it. It’s inaccurate to say that rituals don’t evolve over time, but they do evolve more slowly than the rest of culture. While some families have modernized parts of their Thanksgiving food, it is still a window into what food was like a generation ago.

Turkey – The main dish consists of a meat that is seemingly only popular now and during Christmas; its defining characteristic is its ability to put you to sleep. The only real turkey enthusiast seems to be Michael Dukakis.

Mashed Potatoes – Basic starches that are sometimes made edible with enough salt, butter and gravy.

Mashed Sweet Potatoes – There is some potential here, if not for the marshmallows that drive this dish firmly to the wrong side of the egg-yolk line.

Stuffing – Made properly, these are better tasting carbs, made even better with gravy.

Gravy – The only thing that makes it worthwhile to think about eating the above dishes.

Cranberry Sauce – This tangy collection of sugar is adored by many people in the thrall of nostalgia.

In fact, outside of pie enthusiasts, most of the appeal of the Thanksgiving meal appears to come from nostalgia. Before condemning me as un-American, look into your heart, you know this is true. Or better yet, look at popular restaurants across America. There are very few chain restaurants whose thesis is that they should serve Thanksgiving food year-round. And those that do, like Pluto’s, survive mostly due to the popularity of their salads and simple meat options. It takes a lot of self-brainwashing to convince yourself that salads are anywhere other than the lower left side of the tasty-unhealthy chart – except for the ones at risk of being on the wrong side of the line in the lower-right due to a heavy reliance on tortilla chips, fried meat or an unhealthy dressing.

A two by two matrix often oversimplifies things. Maybe the most important trade-off isn’t between hotness and craziness, but between social status and wealth. The nice thing about the tasty-unhealthy framework is that the axes can absorb most of the complicating variables. The hungrier you get, the better certain foods taste. Or maybe almost all food is seen as unhealthy at meal time if you have decided that intermittent fasting is important. Someone who discovers they suffer from Celiac disease should relocate everything that has gluten to the far right of the chart. Maybe you don’t usually eat cake but make exceptions for friends’ weddings – you can incorporate the desire to be properly sociable into the tasty side of the scale. Or just throw the framework out during cheat days or cheat events.

The important thing is having a framework that is generalizable enough to handle new information and new constraints properly. The best approach would be to brainwash ourselves into only liking healthy food and constrain our diets to the upper left of the chart, but that is easier said than done. The framework may be most useful for avoiding borderline foods that we know we don’t enjoy enough to justify their unhealthiness. So when someone asks you why you don’t like pizza, you can tell them, “Yes, I like pizza. But I don’t like it enough considering how unhealthy it is for me.” You can even draw it out, using slightly better handwriting than my own. But this is advice on how to think about the trade-offs between enjoyable and healthy food, not on how to lose friends and bore people.

Ignoring the Lives Technological Progress Doesn't Save

Everyone knows the story of thalidomide and the deformed babies that were born as a result of its approval in European countries. Approval of the drug was delayed in the United States: the FDA reviewer in charge was worried about nerve damage that wasn't proven to be a real problem, but she became a hero for keeping the drug off the US market once the birth defects were linked to it. Fewer people know about the deaths caused by the FDA's delay in approving beta-blockers for preventing a second heart attack. Despite trials showing their effectiveness in the mid-1970s and approval in Europe, it wasn't until 1981 that timolol was approved for use in preventing a second heart attack. In the press release, the FDA announced that it could save 17,000 lives a year. So by their own numbers, around 100,000 people died as the FDA dragged its feet.
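The arithmetic behind that figure is simple (the six-year window is my own rough assumption, spanning the mid-1970s trial evidence to the 1981 approval):

```python
lives_saved_per_year = 17_000  # the FDA's own estimate from its press release
delay_years = 1981 - 1975      # rough gap between trial evidence and US approval
excess_deaths = lives_saved_per_year * delay_years
print(excess_deaths)           # 102000, i.e. roughly 100,000
```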

Now, medicine is complicated. There are many examples of delays of useful treatments, many more ineffective treatments that were properly blocked, and some treatments that should never have been approved. Recently, a study found that guidelines suggesting the use of beta-blockers in non-cardiac surgery are resulting in significant excess strokes and deaths. The problem is that deaths caused by action are taken far more seriously than deaths caused by inaction and delay. Our moral calculus does not seem to recognize that obvious missed opportunities to save a life are almost as bad as other causes of death, even in cases like the example above, where many people who had already had one heart attack died of a second because the FDA had yet to approve beta-blockers as a treatment.

That brings us to self-driving cars. Every year in the past decade, over 30,000 people in the United States have died in car crashes on public roads. To the extent that many of these accidents are caused by driver error, replacing fallible human drivers with an electronic system has the potential to prevent many of these deaths. Recently a pedestrian was killed in Arizona, and the company suspended tests in all cities. To the extent that Uber is suspending testing to fix an obvious bug that caused this mishap, this is what we should expect. But to the extent that Uber or any other company has to suspend operations after an accident only for political optics, we should recognize what the knee-jerk political response costs: it pushes back the time when humans will be free to devote hours of commuting time to other activities, and tens of thousands of lives are lost each year because human drivers still dominate the roads. And that's just counting the deaths that occur within the United States.

We must not expect new technology to start out an order of magnitude better than the old technology. When human lives are on the line we should have higher standards, but those standards should not be unrealistically higher than the status quo. If self-driving cars are even as safe as current human-driven systems, then ensuring a legal and regulatory framework under which they can operate will allow for real-time improvement and iteration. When we get to the point that self-driving cars are ubiquitous, they will save tens of thousands of lives and give billions of hours of free time back to commuters.

Be careful about demanding too much perfection too soon. Demanding perfection now causes delays, and our moral calculus is too ready to ignore the tens of thousands of lives lost every year that effective automated driving is delayed.

When I Get Lunch: Part 1

I think a lot about food. I haven't yet put my food related thoughts down on paper and sometimes the only way to progress in understanding is to lay everything out on paper. This technically isn't on paper either, unless someone decides to print out this page, but it's close enough. I'll start by covering a book that has helped me develop a large part of my framework for how I think about picking out and eating at restaurants. One more caveat before I start. This is not a competent book review. Competent book reviews engage with a book within five years of having actually read the book. This is more of a reflection on a book that I did at one point read and whose concepts have continued to shape my thinking about food over the years.

That book is Tyler Cowen's An Economist Gets Lunch: New Rules for Everyday Foodies. It's one of the few books that I've bought multiple times, as I always end up giving it to a friend or acquaintance (I prefer not to engage in the myth that I'm letting someone borrow a book; that's just a setup for minor friction and disappointment if they forget, damage or lose it). It is a combination of an overview of the economic and social forces that made good food in the United States almost disappear and a framework and advice for finding the best tasting food in general.

The economic history portion is interesting: it delves into the dynamics behind how Prohibition killed off the best restaurants and provides an interesting hypothesis as to how the child-centric culture of the Baby Boomer era oriented our food culture towards fatty, sweet foods and away from more interesting, flavorful options.

The history is fun, but more relevant is Tyler's advice for finding good food (Those in the DC area have the option to skip much of the work of applying his framework and access his recommendations directly). He uses a combination of economic and cultural understanding to determine when he is paying for good food and when he might be paying more for the other goods or services bundled with food. Some tips in his own words can be found here.

I will paraphrase the six points from that article:

1. Order what sounds the least appetizing; it is on the menu for a reason. Avoid what sounds safe if it is not the restaurant's specialty; that's just there to mollify people who prefer the familiar.

2. Be suspicious of restaurants that are social scenes, especially if it is not a new restaurant that has an extra incentive to create a reputation for its food quality.

3. Low rent places in suburban strip malls will have more interesting varieties of food. Places with higher overhead will be less able to take risks, and when they do provide good food it will generally have a matching high price. Food trucks share relevant qualities with suburban strip malls. 

4. Ask other people for advice, particularly people between the age of 35 and 55 who become excited by the topic of giving advice on where to eat. Searching online for the best particular dish will give better outcomes than searching for general cuisine.

5. Cheap labor leads to good food at low prices; expensive labor and superfluous employees mean that the customer is going to be paying for the experience.

6. Cuisines that Americans are comfortable with will often be biased towards American taste-buds. A Pakistani restaurant will generally be more authentic than your average Indian restaurant. Americans feel more comfortable wandering into an Indian restaurant, and the restaurateurs have learned that Boomer-type customers are happier with dishes that are blander and slightly sweeter than is traditional. Thai food is another category that is often targeted at Americans, while Vietnamese food has yet to catch on and is less likely to be overly fried and sweet in an attempt to lure Americans who do not have a preference for interesting food.

Besides these tips, the book also explains many of the forces driving where good food is made. It is the demand for high quality food that creates a good supply. The only Chinese restaurant in a very white town might have a highly capable chef, but if his customers prefer bland, sweet food he will learn to give them what they want. Place this same chef in the middle of a metropolitan Chinatown and it could be well worth a visit to his restaurant. Still, if you talk to the chef and express interest in non-Americanized fare, you might be able to convince him to make you a good meal at the mediocre place. And when we hear the common rule of thumb to make sure that people of the same ethnicity as the food are eating in a restaurant, we are trying to avoid a dynamic where food is made blander and sweeter in an attempt to attract a Boomer-ish type of clientele.

Consumer demand driving restaurant behavior is also the reason why areas with significant tourist traffic have more questionable restaurants, as the main skill required of restaurants in those areas is convincing people to drop by just once. This is why it is easier to accidentally find a good restaurant in San Francisco's Sunset and Richmond districts than in its more tourist-filled Chinatown.

Most of Tyler's advice on food and the parts of his framework that I have internalized, probably including insights which I might not directly remember as coming from said framework, have served me well. But most rules do not work all of the time, and his suggestions are suggestions and not laws for a reason. Right after reading the particular piece of advice about trying things on the menu that sound disgusting, I tried pig blood curd congee in a restaurant attached to a 99 Ranch (essentially a suburban strip mall with a large flow of Asian customers). I was rather disappointed. Although now that I think of it, even that dish was generally pretty good once I took out the pig blood curd. But every so often, things that sound disgusting actually are.

One area that was touched on but left incomplete is how to find good food using online sources. He recommended avoiding general searches and using specific ones, even if the specific dish isn't exactly what we are interested in. His approach does work: searching for "pig blood" in San Francisco on Yelp highlights a few restaurants I know to be quite decent. But when we look at Yelp or Google reviews and scores, the process of interpreting them is not very direct. Most US review systems ask users to give only one all-encompassing score, so many factors outside of food quality are being measured.

For people who are trying to find good food at reasonable prices using these services, there are some basic heuristics, beyond looking at the number of stars, that will help interpret the results.

Sometimes an almost-cheap ethnic restaurant gets dinged for its relative price, not the quality of its food. If the biggest problem is that it is slightly more expensive than its category suggests it should be, remember your opportunity cost. The $16 ramen, where half the customers are raving about it and the other half are complaining that it costs too much for ramen, will still be much cheaper than other dinner options. (If this price seems particularly high, it should be noted that I live in the San Francisco Bay Area.)

By that same token, the places with glowing reviews from people who are amazed at large portion sizes should be treated with a grain of salt.

When a restaurant has a bad score due to people complaining about service, and the goal is to find good food rather than to impress someone with a flawless evening out, then that restaurant deserves significantly more consideration.

Relatively expensive places, those which fall under the $$$ and $$$$ categories, often have ratings that are biased upwards. Many of the people reviewing them did not have to pay for the meal, and even those who did often forget that the baseline experience for these restaurants should be good food and great service; they do not adjust their expectations higher. Many of these places actually have great food worth trying out, but don't trust the average score. Some of my most mediocre meals out have occurred because I was being hosted by someone who fully relied on the heuristic of "If I spend enough money and go to a place that is popular, I won't get a bad result."

Ratings are also biased upwards when the owner replies to every or almost every review, positive or negative. Real negative reviewers are scared away by the prospect of having a real person react to their public criticism. Be wary of these places: not only are they skewing what might otherwise be a useful indicator, they may be more concerned with perception than reality.

And just to make sure the absolute basics are covered: learning the specialty of the place is important. There are many places with large menus where it is a mistake to pick anything but a couple of items. Also in the obvious column is looking at the most recent reviews for signs of deterioration in food quality or ownership changes. And if you know the cuisine well, the pictures might be the most helpful content.
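To make the heuristics above concrete, here is a toy score-adjustment sketch. The flag names and every weight are invented for illustration; nothing here is calibrated against real review data:

```python
def adjusted_food_score(stars, price_tier, flags):
    """Nudge a raw star rating toward a food-quality estimate.

    `price_tier` is 1-4 (the $ to $$$$ scale); `flags` is a set drawn from:
    "price_complaints", "service_complaints", "portion_praise",
    "owner_replies_to_everything".
    """
    score = stars
    if "price_complaints" in flags:
        score += 0.5  # gripes about a cheap place's price say little about the food
    if "service_complaints" in flags:
        score += 0.5  # bad service is tolerable when good food is the goal
    if "portion_praise" in flags:
        score -= 0.5  # big portions attract ratings unrelated to quality
    if "owner_replies_to_everything" in flags:
        score -= 0.5  # suppressed negative reviews inflate the average
    if price_tier >= 3:
        score -= 0.5  # $$$/$$$$ ratings skew upward
    return score

# A cheap ramen shop dinged for price and slow service looks better than its raw stars.
print(adjusted_food_score(3.5, 1, {"price_complaints", "service_complaints"}))  # 4.5
```

The point is not the specific numbers but the direction of each adjustment: complaints orthogonal to food quality should raise your estimate, and praise orthogonal to food quality should lower it.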

There are other concepts beyond analyzing reviews that are quite helpful.

Many times the health inspection score will be easily available when looking up options online. For those who like to travel, remember that a mediocre health inspection score in the United States would be a great health inspection score in a developing country. Unless you know of people getting sick at the restaurant don't be as put off by a mediocre health inspection score as your instincts tell you to be.

While traveling, it is useful to understand which countries do another's cuisine well and which don't. Getting kebab in Germany is a no-brainer, but in Korea it may be mediocre. Thai food in Vietnam is much better than the sushi or Korean food there. (My perception of bad Korean food might be due to trauma; one restaurant put ketchup in a bottle that was supposed to contain chili paste for the bibimbap.) US-style Mexican food doesn't seem to travel well past the states touching Mexico's border. In Japan, some cuisines can be reproduced even better than in their home country, while at other places there will inexplicably be mayonnaise on everything. But the general rule still applies: quality food requires a local population with high standards that will eat at whichever restaurant makes the highest quality version of the food they want. The more metropolitan an area, the more likely there will be a significant population of people with standards for the cuisine you want, so it's more likely you will be able to find a good meal of most cuisines in major metropolitan areas. But if you took a multi-hour bus or train ride to get to your location, then sticking with the local cuisine will be your best bet.

Another important concept is time arbitrage: doing things when others aren't. The basic time arbitrage related to eating out is going out for lunch rather than dinner. If you are able to visit a good place during lunch hours, the time arbitrage allows for obtaining good food at much cheaper prices. Some Korean places have such a significant price difference that, even though they slightly increase the portion size at dinner, it feels like a ripoff to visit them then. The exception to this rule of thumb would be places designed to make most of their revenue from lunch, like the nicer sit-down lunch places in a downtown area, which might not even be open for dinner.

Time arbitrage extends beyond just getting lunch; some places impose higher costs on customers by making them wait in lines. These lines act as advertisements for the restaurant, and if people are willing to wait for a restaurant that isn't a social scene of one sort or another, then that is generally a good sign (remember that urban brunch places are all social scenes). If you have the flexibility, going out for an early or late dinner on a weekday, as long as you are also avoiding traffic, is the best way to experience these restaurants.

By this same token, the place that is open 24 hours or until 4am when other places close earlier will often be serving worse food at a higher price. I have found this to be very true of most Korean places within San Francisco. But there are definite exceptions to this rule; there are a few 24-hour burrito places in San Diego that you should visit regardless of the time.

The only place where Tyler's take seems wrong, at least as I remember it, is on sushi. In Tyler's estimation, the only way to be certain of getting a better meal was to spend more money, as this would be more directly correlated with high quality fresh fish and perhaps the skill of the chef. Part of the reason this is wrong is that some expensive places have built up brand names as the place to go for business dinners and have dropped significantly in quality. And sometimes there are restaurants far less expensive than the ultra-expensive places that order similar quality fish from the same wholesalers. Figuring out which sushi places order from these wholesalers, and which order from the same suppliers as your local Japanese supermarket, can help a discerning eater find really good sushi at reasonable prices (note: the prices may still be unreasonable compared to other cuisines).

The restaurant scene is an inefficient marketplace. If you want good food there are plenty of ways to get better food at a better price than what most others are getting. But if what you want is close to a completely normal social dining experience then the inefficiencies get smaller. You will be choosing to also pay for the atmosphere, the service, the view, etc. The best you can do is use these techniques to make sure the food is also good. When it comes to finding good food at reasonable costs, it helps to be a little weird.

That's it for Part 1. Eventually there will be a Part 2, where I will introduce a framework focusing on addressing the tradeoffs that often occur between matters of taste and matters of health.

Schemes, Illustrated

A lot of people are calling bitcoin and other blockchain tokens* Ponzi schemes or pyramid schemes. This is inaccurate: what is happening in the token economy is a new structure with significantly different incentives for participants.

*For the sake of simplicity, I will use the term token and coin interchangeably. This is far from the most blatant generalization that I will make in this piece. 

Ponzi Scheme

One centralized entity attracts investors by pretending returns are higher than they are. The first few investors might get more than their money back as proof that the fund is healthy, but those funds were not earned by the firm; they were stolen from later investors.

In the end, only the perpetrator of the Ponzi scheme comes out ahead.

Incentives for participants to recruit others: Social upside from sharing how to make money. Little direct incentive.

In recent times: Madoff’s investment scandal was a classic Ponzi scheme. People believed he had a way to make consistent returns in up or down markets, but he really used new investor money to pay out fake returns to old investors.

This wasn't just a one-man shop; there were firms called funds of funds with an incentive to introduce their clients to Madoff's Ponzi scheme, as they would make a percentage of their clients' returns. These funds did not believe there was a Ponzi scheme, though many suspected that Madoff was making his money illegally.

Total size: At the fund's peak, investors believed they had $65 billion invested with Madoff, but the total amount they actually put in was closer to $20 billion.

Legitimate form: None?

Ongoing Ponzi Schemes: There is also a famous Russian Ponzi scheme, MMM, which, after collapsing in Russia, has inexplicably become popular in developing economies. It is primarily a Ponzi scheme, in which money paid into the system is paid out only when newcomers put in more, but the scheme has set up some recruiting incentives that cause some to label it a pyramid scheme.

What to look for: There may be many small ongoing frauds of this nature. Investors should always do due diligence when a scenario seems too good to be true. Understanding the strategy and making sure the fund has a respectable auditor can help avoid some of these schemes. More generally, funds trading very illiquid asset classes in which they are the main participants might act as accidental Ponzi schemes. If investments from one source are driving up the market value of the fund, and fees are being paid out on this increased value, then the scenario may act exactly like a Ponzi scheme.

Pyramid Scheme

In a pyramid scheme, the person at the top benefits from everyone below him in the scheme. The people below the top person are incentivized to recruit other people below them to increase their payout.

Incentives for participants to recruit others: Incentives are siloed. Someone recruited by the top guy doesn't care whether a person he did not recruit is successful; he shares in the scheme's success only to the extent that he is directly introducing new people into it.

In recent times: Bill Ackman’s fight against Herbalife has been one of the most publicized looks into whether or not a company is a pyramid scheme. His presentation can be found here. As of January 7, it still has a market capitalization of $6.1 billion.

Legitimate form: Multilevel marketing companies, in which companies recruit consumers to both sell products and recruit other people to sell products, have varying levels of legitimacy. To the extent participants make most of their money by selling goods that generate significant consumer surplus, the company can be said to be a legitimate MLM company. Some people suspect that Bill Ackman's mistake in going after Herbalife was in not understanding the benefits of the communities enabled by Herbalife's MLM system. Despite some questionable practices and relatively high prices, the MLM system is creating significant consumer surplus.

Total Size: The top ten multilevel marketing companies, from Amway to Tupperware, had a combined 2017 revenue of a bit over $40 billion dollars.

What to look for: The more money that is sourced from recruiting additional people and selling to those recruits, the more likely it is to be a pyramid scheme.

Token Scheme:

The blockchain tokens that we have seen recently have a significantly different dynamic from Ponzi and pyramid schemes. In a Ponzi scheme, it’s only the guy pulling off the fraud that really benefits from attracting more people. In a pyramid scheme, each silo is separate, and people below someone in a pyramid scheme do not typically get any advantage from that person becoming more successful.

Incentives for participants to recruit others: In a token scheme, everyone is paddling in the same boat. Early adopters take ownership of space on the ledger, represented by tokens, and lobby everyone to buy more. Wealthy latecomers, rather than being stuck at the bottom of the pyramid, can choose to take on a more significant stake if they invest the capital. And when that capital is invested and drives up the price, the existing holders all win. Each of the newcomer’s tokens has the exact same status as the tokens of the early adopters, and everyone is expected to work together to push up the value of the systems that they have bought into. 

When individuals start to think that the value of the network is going to fall and that internal collective action cannot prevent it, they are incentivized to be the first to jump ship, without letting the others know they have decided to defect until after they have exited.

The incentive structure enabled by token schemes is very powerful. It can be used to get people to all contribute towards a central project, as with Ethereum and other projects where many talented people are trying to develop useful decentralized systems. Token scheme incentives are also the driving force behind many obvious scams.

The environment created by token schemes fosters cult-like behavior. The early believers are told to "HODL" (buy and hold) and not sell any of their tokens into the market, helping drive up the price as new entrants are only able to purchase tokens that are being flipped. The general community reaction to critiques of any token scheme is to accuse the critic of spreading FUD (fear, uncertainty and doubt), the way a priest might accuse a scientist of heresy. And a market that has gone up over 1000x has only strengthened these dynamics.

The analogy to stocks: Many observers view blockchain assets as analogous to stocks. They might take a coin's market capitalization, the number of coins outstanding multiplied by the most recently traded price, and view it through the lens of a company's valuation. In the case of the healthiest blockchain projects, there are companies and individual engineers putting in lots of work to develop and update the codebase or build products on top of the existing infrastructure. The token itself retains its fundamental value so long as the projects that will eventually create value remain on top of the token's blockchain ecosystem.

While a decentralized blockchain generally does not convey ownership of real assets the way that owning Exxon’s stock conveys partial ownership of oil producing assets around the world, there is a closer analogy to technology companies. Technology companies derive most of their value from their intangible assets and not their real assets. Investors in technology companies worry that they will lose their best employees who are creating value within the company or that employees who leave might create or enable competitors, so they incentivize the employees with stock options and hope their employees believe that their compensation combined with helping the company achieve its mission is enough to keep working. In the case of blockchain, the main thing keeping technologists working on a token that is not directly tied to a company is their personal token holdings in place of options, and the goal of achieving their project's vision in place of their company's impact.

But unlike stock, tokens do not have the same type of endgame. A company like Whatsapp can be acquired for $19 billion; there is no equivalent liquidity event that can reward token holders. Liquidity events occur only when new money enters the system and token holders choose to leave instead of letting the new money push the price to even higher levels. While they do not have an endgame, tokens have a much quicker mid-game, as many blockchain programmers are able to cash out in the seven to eight figures thanks to the extreme price appreciation of the major tokens. Startups have historically been wary of letting employees cash out life-changing amounts too early in the development process, as employees may be tempted to work on their own projects once they have sufficient capital to work on anything they want. But even if some contributors drop out thanks to the options their new wealth gives them, this scale of wealth creation attracts new workers looking for similar upside. Some companies have started vesting tokens to employees over time, in a way similar to how companies grant employees stock options. The longer-term stability of these projects is only threatened once token price appreciation slows down significantly, and that's a situation that tech companies without traction also face. The bigger risk for the ecosystem is that if obviously bad tokens are rising with the good tokens, then the good tokens can later fall when the price of the bad tokens crashes.

Thinking about token market capitalization can be misleading, and not just because acquisitions aren't as feasible in token-space. Blockchain tokens can be almost permanently lost in ways that stock is not lost. The lost tokens are still counted towards the market capitalization, even when their permanent absence from the market is one of the factors driving the price of tokens higher. Despite being potentially misleading, there does not seem to be a better readily available metric or name. A company's market capitalization means something significantly different from a coin's market capitalization, and the two should not be confused. That Ripple the company is valued at a couple billion dollars while its coin holdings are supposedly worth well over $50 billion is illustrative of that difference.
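As a quick sketch of why the headline metric misleads, consider how lost coins inflate it. All figures below are hypothetical, chosen only for illustration:

```python
# A coin's "market capitalization" is just outstanding supply times the last
# traded price. Coins that are permanently lost are still counted, even though
# their absence from the market is part of what pushes the price up.

def market_cap(supply: float, last_price: float) -> float:
    """Headline market cap: total issued supply times most recent trade price."""
    return supply * last_price

total_supply = 17_000_000     # coins ever issued (hypothetical)
lost_coins = 3_000_000        # permanently inaccessible (hypothetical)
last_price = 10_000.0         # most recent traded price, in dollars

headline_cap = market_cap(total_supply, last_price)
liquid_cap = market_cap(total_supply - lost_coins, last_price)

print(f"Headline market cap:       ${headline_cap:,.0f}")
print(f"Cap excluding lost coins:  ${liquid_cap:,.0f}")
```

The gap between the two numbers is pure phantom value: the headline figure assumes every coin could trade at the marginal price, including coins that never can.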

The analogy to currency: Many people think the main use case for blockchain is decentralized currency, or a decentralized store of value. In the case of traditional fiat currencies, the demand for money is likely to stay within an order of magnitude of the expected range, as each currency is tied to the economic activity of a geographic region. The local state collects its taxes in its own currency, ensuring demand. Over the long term, the question is whether the supply of money will get out of control, as it did at times in the Roman Empire and the Weimar Republic, or more recently in Zimbabwe and Venezuela.

For blockchains like Bitcoin, the targeted supply at a given date can be forecast with relative accuracy. The bitcoin outstanding will increase by about 4% in 2018, and the total amount will never be more than 21,000,000. It is the demand that is the unknown factor. Morgan Stanley estimated that hedge funds put around $2 billion into the token economy in 2017; the amount from institutions and people not tracked by Morgan Stanley was likely much larger. 2018 may be even larger, but it is an open question whether the money flow can continue to increase through 2019 and 2020. If the miners sell every bitcoin they mine in 2018 and everyone else holds, around $10 billion in new investment at current prices is needed to keep prices stable. For reference, the same calculation done at the start of 2017 would have found that only about $700 million was needed to keep bitcoin at approximately the same price. Continuously growing demand depends primarily on continued high prices and benign neglect from government agencies who may be concerned about the law-breaking behavior enabled by various blockchain networks. The networks currently using the protocols could be as stable as Google, but many could just as easily be Friendster or Myspace.
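The back-of-the-envelope math here is simple enough to sketch, assuming miners sell everything they mine and using approximate figures for the block reward and price:

```python
# New investment needed per year just to absorb miner selling, assuming miners
# sell every coin they mine and all other holders hold. Figures approximate.

BLOCKS_PER_DAY = 144  # one block roughly every ten minutes

def miner_sell_pressure(block_reward_btc: float, price_usd: float) -> float:
    """Dollar value of newly mined coins over one year."""
    return BLOCKS_PER_DAY * 365 * block_reward_btc * price_usd

# 2018: block reward of 12.5 BTC, price in the neighborhood of $15,000
print(f"2018: ${miner_sell_pressure(12.5, 15_000):,.0f}")  # roughly $10 billion
# Start of 2017: same 12.5 BTC reward, price around $1,000
print(f"2017: ${miner_sell_pressure(12.5, 1_000):,.0f}")   # roughly $700 million
```

The order-of-magnitude jump between the two lines comes entirely from the price: the number of new coins hitting the market each year barely changed, but the dollar inflow required to absorb them grew roughly fifteen-fold.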

For fiat currencies, the demand is certain and future supply drives price uncertainty. For blockchain, the supply of a specific chain is known and the demand is unknown. There may still be uncertainty on the supply side: alternative blockchains and forks effectively add blockchain assets to the ecosystem. Bitcoin has forked quite a few times, and to the extent that investors deploy money to forks such as Bitcoin Cash or Bitcoin Gold, the supply of blockchain assets can also be said to be increasing. However, this is not how the market has reacted to forks. The price of Bitcoin on announced forks generally stayed level or increased, as if the forks merely increased demand and did nothing for supply. Many people even started equating forks with dividends. This dynamic, in which the increased supply from forks and additional blockchains is assumed to be accretive to the value of existing blockchains, does not seem sustainable.

Current use cases: Avoiding capital controls, black market exchanges, cybercrime enablement, grey market money storage

Additional use cases: Corporate gift cards for products under development, Security-ish tokens, automated contracts, online gambling, cryptokitties

Size: Currently over $800 billion across the publicly tracked tokens. (It would be very interesting to find estimates for total capital committed)

What to look for: First, there are the obvious scam signs. Significant plagiarized work in the whitepaper, promotional material making false or illegal claims, and people attached to the project who have been attached to other types of scams are all giant red flags.

Sometimes things aren't quite scams, just unlikely to be good investments going forward. The most obvious examples are the companies that change their name to something blockchain-related hoping to attract fast money. More subtle cases involve scenarios where the token itself is incidental to the process seen as valuable (as seems to be the case with Ripple), or companies selling a blockchain token they retain full control over to do a task that a distributed database would do better. In the case of Dogecoin, a token designed as a joke, the market capitalization hit $1 billion despite the project having been abandoned by its creator.

Just as legitimate MLM companies adopt aspects of a pyramid scheme to make something useful, there are some token schemes that have the potential to create real consumer surplus. Some of these projects might even be designed to facilitate legal activity in ways that are more efficient than what trusted institutions are currently doing. And some of those projects might even have a legitimate reason that their tokens should appreciate in value outside of a speculative mania.

When it comes to understanding the dynamics behind blockchain tokens, even the blatantly useless or fraudulent ones, it is important to realize that they cannot be fully understood by analyzing Ponzi or pyramid scheme dynamics. They are token schemes that have their own unique and much more powerful dynamic.

Some Brief Thoughts on Net Neutrality

Net neutrality is not something people who prefer a free market economy would support a priori. It goes against the idea that businesses should be able to compete how they see fit. If a business wanted to lay fiber and give away cheap internet on the condition that its social network, streaming service or Amazon referral links are used, then in theory that should be okay. But we don't have a competitive open market with many different players. The internet service providers have relationships with municipal governments that help them deploy fiber to residents. Sometimes the municipality keeps the ISP's competitors out due to relationships between local officials and ISP lobbyists; other times it is due to bureaucratic indifference or incompetence. It takes a concerted effort to avoid the accumulation of cumbersome rules and regulations whose high costs and slow deployment times keep new ISPs out of an area that already has a competing provider. Either way, ISPs often achieve monopoly power over their local markets. A 2013 report from the US Department of Commerce shows very oligopolistic competition at best among providers at speeds over 25 Mbps, and almost no competition above 100 Mbps.

Given time and an incentive to increase their revenue quarter after quarter, companies will attempt to extract additional rents from their position as the gatekeeper between consumers and online businesses. Net neutrality rules forcing ISPs to act as dumb pipes to the internet will prevent these local monopolies from using their government-backed power to pick winners and losers on the rest of the internet. In our status quo of regional internet service provider monopolies and oligopolies, some form of net neutrality is a desirable compromise.

The FCC's 2015 Title II Order, which implemented net neutrality in the United States, is seen by many as a regulatory overreach. Not because the general framework was that far off, but because it was quite a stretch to independently label ISPs as public utilities under authority used to regulate telephone monopolies in the 1930s. This type of significant rule change without congressional input is worrisome for those who want the responsibility of lawmaking to fall upon an elected legislative body. We shouldn't look to the FCC to implement these rules on its own; the rules should be spelled out by a congress concerned with protecting interstate commerce from companies empowered by barriers to entry put up by local governments.

One thing that might get lost in this process is room for internet service providers with unique business models. It would be best if room for experimentation were left open in areas where there is already significant competition between multiple providers at high speeds. Then we might see whether consumers are willing to forgo their dumb-pipe options in return for a lower-cost service. This is the last thing that major tech incumbents want. Such an approach might allow emerging tech companies to buy their way into customer scale, or it might force major tech companies to spend more to entrench their positions. Either way, the ISPs might be able to figure out ways to make a little more profit while their customers find additional consumer surplus. The key is that the ISP is trying out these innovative business plans in scenarios where the consumer has other reasonable options.

Home Invasion Incentives

Criminals are generally pretty stupid, especially the type who go after property theft directly. They face a high chance of getting caught and spending years in jail, and their take is generally pretty small. Invading Mariah Carey's house only netted the burglars assets valued at $50,000. Those assets were handbags and accessories, so they will be lucky to get anywhere close to half of that value on the black market. If they had found the jewelry they might have made quite a bit more, but most of the assets held by the wealthy cannot be easily appropriated.

There is a reason movie stars are being targeted instead of financiers and entrepreneurs. The exact reason depends on how stupid these criminals actually are. The stupidest reason is that movie stars are widely known to be rich. A slightly better reason may be that movie stars are more likely to have assets in tangible goods such as expensive jewelry in their homes. And perhaps the criminals prefer to only invade empty homes, so society's collective stalking of celebrities allows the criminals to identify periods when no one is home.

As mentioned above, targeting celebrities for tangible assets can make some sense. Old money, entrepreneurs and financiers have most of their assets in stock, registered bonds, property titles and the like, which cannot be easily transferred without the consent of the owner and the implicit consent of the rest of society. Even if a transfer is forced, these transactions can be reversed and would be easy to track. Some of the more eccentric wealthy might have precious metal or cash lying around, but it's difficult to know ahead of time which people are eccentric in that manner. If there is any valuable art which might be snatched, the criminal needs to consider that the art is valuable precisely because it is unique, and again it cannot be transferred for anywhere close to full value. It will always be known as a stolen piece; art stolen as far back as the Holocaust is still being recovered today. Other valuable assets like fine wine might appear to be a liquid asset*, but they too can be tracked. For the celebrities that some criminals are targeting, there is no way the thieves are ever getting access to the financial assets they bought or their residuals and royalties, but the celebrities are at least known to keep around expensive handbags and jewelry.

Fortunately for criminals, there are now many people holding immense amounts of wealth that is liquid and hard for governments to track. Currently, that is the main active use for blockchain technology. Some sophisticated criminals are already taking advantage of this, with the rise of ransomware. Ransomware is a computer virus that encrypts a user's data and does not grant access to it until a certain amount of cryptocurrency is sent to the attackers. Researchers have tracked most of this activity back to a Russian cryptocurrency exchange.

While crypto assets require some expertise to properly dispose of, they provide exactly what local criminals looking for high risk/high reward opportunities should want. If someone owns millions of dollars in crypto, those assets can usually be transferred relatively anonymously. This is a higher-risk proposition, in which the criminal needs to kidnap and threaten the holder of crypto assets directly. But it is striking that crypto has created for local criminals what was before only a pipe dream: effective access to significant amounts of their victim's wealth. It might be compared to initiating a wire transfer, but a wire transfer has significantly more complications with regard to who is contacted to initiate it and how to evade safeguards in the financial system. The promise of bitcoin is that the process is simpler, faster and easier to manage.

Criminals are stupid, but they learn eventually. Home invasion robberies are rare, but incentives matter. At least some people have decided to burglarize the homes of celebrities who already have security measures to protect themselves from stalkers for much less upside.

This is just a long winded way of telling my friends who talk a lot about their ownership of crypto assets online to be sure to periodically remind everyone that they keep most of it in cold storage locked up in their bank's safe-deposit box or other safe place away from their homes and family. Because incentives matter, even for stupid criminals.


Unintended Consequences of Regulations: Contracting Companies

A recent New York Times article tells the tale of two janitors. One, at Eastman Kodak in the early 80's, had her education paid for and worked her way up to CTO, later going on to be a senior executive at other companies. The other is at Apple today and doesn't even directly work for Apple. She has minimal benefits and no obvious path for career advancement. That's in large part because she works for a company that contracts out her labor, not for Apple directly*.

The narrative the Times advanced is that companies chose to focus on their core competency and have outsourced other work. Most modern tech companies have decided to focus on their most productive employees and are letting contractors deal with the supposedly interchangeable employees working on low skill problems.

What many people don't see is that this is not an economic relationship that has sprung out of thin air. A lot of it can be interpreted as a company's logical response to the regulatory environment. There are at least three different areas that incentivize companies to keep low-skilled workers at arm's length, only hiring them through secondary employers: laws around unionization, legal risk from the Equal Employment Opportunity Commission, and the cost incentives created by the Affordable Care Act.

At first glance, it's obvious why laws about unions would cause companies to choose to contract their labor. For companies that are innovating quickly, becoming unionized raises the spectre of what happened to the Detroit auto companies. The long-term accumulation of rules and the resistance to automation that would cost jobs helped cause the Big Three to fall far behind their more nimble competition from Japan. As Steve Babson puts it in "Working Detroit: The Making of a Union Town":

"The absence of union factory rules gave Japanese management the flexibility to change workloads and reassign jobs without opposition, but it also left workers with little protection against speed-up or favoritism."

Today's tech companies need to move fast and keep up with a quickly changing technology landscape, so their instinct is to avoid unionization. How do they do this? First, they spoil their workers more than any union ever did. Companies provide meals, social events, gym memberships, and some tech companies even do laundry for employees. This is a multi-purpose policy: not only do these perks discourage talk of unionization, they are also meant to encourage employees to spend more time at work and make them less likely to leave the company even when they are being paid significantly under their market price.

But keeping the higher-skilled employees is only part of the equation. The second part is keeping employees who are likely to unionize outside of the company. Low-skilled workers are the most likely to form unions, and unions might strike and upset the sympathetic high-skilled employees. The laws empowering the National Labor Relations Board grant union employees specific protections, but those protections stop when they target employers other than their own. From the NLRB FAQ:

"A union cannot strike or picket an employer to force it to stop doing business with another employer who is the primary target of a labor dispute."

After looking at this rule, it should be obvious why companies would want to avoid giving employment status to a class of workers that are unionized or might decide to form a union in the future. By hiring these workers through another employer, they are shielding themselves from the more inconvenient aspects of labor law.

Next we have the Equal Employment Opportunity Commission. The EEOC does not only look for employers who are discriminating in hiring based on age, race, sex, etc. It will also punish differential treatment of employees based on these factors. If a corporation wants to treat some employees as first-class employees with more benefits and another group as second-class, it takes on significant legal risk when the second group consists of employees who are more likely to belong to a protected class. (And the Times piece does mention that there are legal requirements that employees be offered the same health insurance and 401(k) benefits.) The EEOC will be collecting pay data by sex, ethnicity and race. Under this scenario, it will look a lot better for everyone involved to continue contracting out the low-paid jobs.

On top of all of this we have the Affordable Care Act's impact on full-time employment. The ACA puts the burden of providing healthcare fully on full-time employers while leaving part-time employers and contract workers almost completely off the hook. The question of who should be responsible for healthcare payments is a political question, but the answer should be applied equally to all types of employment relationships. Two part-time employees working 20 hours a week should cost an employer the same as a single full-time employee working 40 hours a week. That's not the case today: the 40-hour-workweek employee is more expensive. And rather than be the bad guys hiring part-time workers to avoid paying for health insurance, it is easier for corporations to pay another company to hire and manage workers in this manner.
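The cost asymmetry is easy to sketch. The dollar figures below are hypothetical (actual coverage costs vary widely), but the structure is what matters:

```python
# Sketch of the full-time vs. part-time cost asymmetry created by the ACA's
# employer mandate, which applies to full-time employees (30+ hours per week).
# The wage and coverage cost are hypothetical, for illustration only.

FULL_TIME_HOURS = 30            # ACA full-time threshold, hours per week
HEALTH_COST_PER_YEAR = 6_000    # hypothetical employer cost of coverage

def annual_cost(wage: float, hours_per_week: float) -> float:
    """Employer's yearly cost for one worker: wages plus any mandated coverage."""
    labor = wage * hours_per_week * 52
    mandate = HEALTH_COST_PER_YEAR if hours_per_week >= FULL_TIME_HOURS else 0
    return labor + mandate

wage = 15.0  # dollars per hour (hypothetical)

one_full_timer = annual_cost(wage, 40)       # 40 hours of labor, plus coverage
two_part_timers = 2 * annual_cost(wage, 20)  # same 40 hours, no coverage owed

print(f"One 40-hour employee:  ${one_full_timer:,.0f}")
print(f"Two 20-hour employees: ${two_part_timers:,.0f}")
```

The two staffing plans buy identical labor hours, yet the full-time plan costs the employer the entire coverage amount more, which is exactly the wedge that pushes work toward part-timers and contracting firms.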

When designing and implementing policy, it's hard to account for the long-term impact. In this case, the incentives created by multiple regulatory bodies have combined to raise the risk and cost of full-time low-skill employees. Companies are highly incentivized to hire another company to shield them from a direct relationship. Laws that were originally intended to protect people and keep society integrated might be serving to bifurcate it even further.


*On top of the low wage she pays inordinately high rent to live in the area. A significant part of the struggle of low wage workers in the Bay Area comes from the lack of affordable housing. Most people should realize by now that this is an artificial hardship imposed on the poor by voters who think they are preserving the character of their neighborhood, preventing sprawl or protecting the environment. In reality, the main concern of voters is boosting the value of their house after they have locked in the amount of taxes they have to pay thanks to California’s Proposition 13.

Bill Gates: Pulling up the Ladder

There is a social phenomenon called pulling up the ladder. The basic idea is that when a person or group has success in a certain way, they advance policies that prevent others from having the opportunity to succeed in a similar manner.

Pulling up the ladder is seen in NIMBYism. Established residents who built their houses in a neighborhood pull up the ladder when they decide that extra rules and permits are needed and that no one else should be allowed to build a home in their neighborhood as easily or as cheaply as they did. Pulling up the ladder is seen when practitioners promote occupational licensing that forces new entrants to go through thousands of hours of training and low paid apprenticeships, but grandfather in current practitioners who do not have to follow these burdensome rules. 

Bill Gates provides us with an egregious example of pulling up the ladder in a recent interview, when he advocates for the taxation of any robots that replace human jobs:

Certainly there will be taxes that relate to automation. Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things. If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level.


There are many ways to take that extra productivity and generate more taxes. Exactly how you’d do it, measure it, you know, it’s interesting for people to start talking about now. Some of it can come on the profits that are generated by the labor-saving efficiency there. Some of it can come directly in some type of robot tax. I don’t think the robot companies are going to be outraged that there might be a tax. It’s OK.

First, any way you cut it, this is just really bad policy. Companies utilizing robots should be taxed under the same rules as every other company. Tax theory is not something that can be covered in a short blog post, but there are two frameworks that make it easy to see why this is a bad idea.

One framework favored by some economists is that taxes should be designed to force people and organizations to internalize negative externalities. This is called a Pigouvian tax. If a certain behavior is not good for society, then raising its price will reduce the harms to society while generating revenue. This type of approach is exemplified by those pushing for a carbon tax. Carbon is identified as harmful, and raising the price of releasing carbon into the atmosphere will likely result in less carbon released into the atmosphere (improperly implemented, it also incentivizes more manufacturing in countries that do not have carbon taxes). Cigarette, alcohol and sugary drink taxes are often justified under a similar approach, as consumption of unhealthy goods can create significant costs for the healthcare system*. The flipside to this logic is obvious. Taxes should also encourage, not discourage, desirable behavior. A tax designed to explicitly raise the cost of research and investment relative to other activities would be a very bad idea.
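The Pigouvian logic can be shown with a toy calculation (all numbers are hypothetical):

```python
# Toy Pigouvian tax illustration (all numbers are hypothetical).
# An emitter earns $30 of private profit per ton of carbon, but
# each ton imposes $40 of harm on everyone else. A tax equal to
# the harm forces the emitter to internalize the externality.

PRIVATE_PROFIT_PER_TON = 30   # benefit to the emitter
SOCIAL_HARM_PER_TON = 40      # cost borne by the rest of society

def emits(tax_per_ton: float) -> bool:
    """The emitter pollutes only if profit after tax is positive."""
    return PRIVATE_PROFIT_PER_TON - tax_per_ton > 0

# Untaxed, emitting is privately profitable even though society
# loses $10 per ton on net.
assert emits(0)

# A tax set at the marginal harm stops the net-negative activity.
assert not emits(SOCIAL_HARM_PER_TON)
```

The flipside follows directly: set a "tax" on a net-positive activity like automation and the same mechanism suppresses behavior society actually wants.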

Another framework is that taxes should avoid unnecessarily distorting economic behavior. Even economists who promote Pigouvian taxes would agree that a tax that changes behavior without purposefully targeting a desired externality is a poorly designed tax. Taxes that can be avoided with additional work from lawyers and accountants are particularly inefficient. Resources devoted to getting around the tax may make sense for individuals and companies, but are a deadweight loss to society. The value-added tax, other than encouraging exports to countries without value-added taxes, is easy to enforce and does not significantly skew behavioral incentives. It is therefore widely used across the developed world outside of the United States.

So it’s pretty obvious why trying to apply an additional robot tax on companies would be a bad idea. Measuring whether a robot is taking a job is not a simple task, and companies will be incentivized to structure their use of automation so that it appears unrelated to any loss of jobs. The tax would cause significant amounts of new inefficiency as companies on the verge of automating significant tasks would spend lots of money trying to figure out how to minimize or avoid the robot tax.

But even worse, taxing the implementation of robots in existing companies would slow down the adoption of new technology. If implemented only in the United States, the robot tax would also cause U.S. companies to fall behind places in the world where the implementation of robots was not discouraged by taxation. If the United States wanted to permanently relinquish its status as the country at the forefront of the production-possibility frontier, this tax would be a great way to start.

What makes Bill Gates's suggestion particularly egregious is that his wealth came from the widespread distribution of labor-saving technology: software! The following is from the BLS summary of occupational changes during the 20th century.

The growing use of computers and other electronic devices, which simplified or eliminated many clerical activities, caused the post-1980 decline. Automated switching and voice messaging affected telephone operators; personal computers, word-processing software, optical scanners, electronic mail, and voice messaging, secretaries and typists; accounting and database software, bookkeepers; ATM’s and telephone and online banking, tellers; and computerized checkout terminals, cashiers.

Imagine what would have happened to Microsoft if every time it sold a spreadsheet tool it had to pay a tax for the number of bookkeepers it replaced. Or if all users of Microsoft Office were taxed for the secretaries and typists that were no longer needed. If the United States had applied the policies Bill Gates now suggests to software in the 1980s, the digital revolution would have been strangled in its infancy and Bill Gates would not be in the position he is in today. Specifically taxing companies that implement cutting-edge labor-saving technology is silly. It wouldn't have been a good idea then and it's not a good idea now.

Bill Gates has led a unique life over the past few decades, so he might not realize this, but our society could still be a whole lot richer. It is still far from wealthy enough to implement any reasonably sized universal basic income. Safety nets, retraining and regulatory reform have important roles to play in helping workers displaced by our fast-evolving economy. But the last thing we want to do is implement taxes that slow down innovation or encourage innovative companies to locate their business outside our borders.

So please ignore Bill Gates when he blithely suggests that anyone using robots to replace labor should have to pay extra taxes. Not only is he advocating pulling up the ladder behind him; listening to him would also impoverish our future. Our society cannot afford to treat labor-saving innovation like a negative externality to be taxed.

*Somewhat morbidly, unhealthy behaviors that shorten life expectancies actually seem to reduce total costs on the healthcare system.

Donald Trump Brings Regulatory Budgeting to the US

On January 30th, the White House issued a new executive order targeting regulatory reform. The order has been promoted, by both the White House and the media, as a rule forcing regulatory agencies to get rid of two rules for every new rule they issue.

Both Canada and the UK have used one-in, one-out rules, with the UK later switching to two-for-one. The statement below is from a policy paper by the Conservative and Liberal Democrat Coalition government in December 2012 as it switched from One-In, One-Out (OIOO) to One-In, Two-Out.

It is clear that the OIOO rule has delivered a profound culture change across government as demonstrated not only by the continuing increase in deregulatory measures but also in the high number of Departments in credit at the end of the OIOO period.

Building on this culture change, Government is now pressing Departments to deregulate further and faster to free up business from unnecessary red tape and deliver growth. From January 2013, our rule is doubled to One-in, Two-out (OITO). The Red Tape Challenge will continue to be an important vehicle for Departments to reduce regulatory burdens and identify OUTs.

Given the UK’s experience, it makes sense that Trump is starting with two for one. He is not a man of half measures.

But the number of rules is not the most important aspect of this policy. A single rule could have a miniscule impact, and repealing two of them might not matter if the new rule imposed gigantic costs. The executive order does not overlook this:

In furtherance of the requirement of subsection (a) of this section, any new incremental costs associated with new regulations shall, to the extent permitted by law, be offset by the elimination of existing costs associated with at least two prior regulations

Trump essentially told the agencies that he is capping the costs they are imposing on the economy. They can reduce costs by being more efficient, or by prioritizing their rule making around the most important problems. He is giving every agency a regulatory budget.

The idea of a regulatory budget is simply applying the concept of a budget to regulations. If applied correctly, government agencies are only allowed to impose a certain amount of cost on society regardless of their net benefit. As agencies are eager to remain relevant and fix the greatest problems of the day, they will be incentivized to remove or remake some of the costlier rules. The benefits of regulatory policy are not ignored; they are taken into account by the amount of cost each agency is allowed to impose on the economy. Trump’s executive order is designed so that agencies' regulatory budgets can be increased or decreased as needed.

During the Presidential budget process, the Director shall identify to agencies a total amount of incremental costs that will be allowed for each agency in issuing new regulations and repealing regulations for the next fiscal year.  No regulations exceeding the agency's total incremental cost allowance will be permitted in that fiscal year, unless required by law or approved in writing by the Director.  The total incremental cost allowance may allow an increase or require a reduction in total regulatory cost.
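The bookkeeping this implies can be sketched in a few lines (the class, numbers, and zero allowance are hypothetical, not taken from the order itself):

```python
# Minimal sketch of regulatory-budget bookkeeping. The Agency class
# and all dollar figures are hypothetical illustrations.

class Agency:
    def __init__(self, cost_allowance: float):
        # Net new annual cost (in $M) the agency may impose this year.
        self.cost_allowance = cost_allowance
        self.net_cost = 0.0

    def repeal(self, annual_cost: float) -> None:
        """Repealing a rule frees up budget equal to its annual cost."""
        self.net_cost -= annual_cost

    def can_issue(self, annual_cost: float) -> bool:
        """A new rule fits only if net cost stays within the allowance."""
        return self.net_cost + annual_cost <= self.cost_allowance

agency = Agency(cost_allowance=0.0)   # a zero net-cost budget
assert not agency.can_issue(5.0)      # a new $5M rule exceeds the cap
agency.repeal(3.0)
agency.repeal(4.0)                    # two repeals free up $7M
assert agency.can_issue(5.0)          # now the new rule fits
```

Under a zero allowance, the only way to issue a costly new rule is to repeal enough old ones to offset it, which is exactly the incentive the order is trying to create.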

While the order looks sound, implementation will be key. First, there is the question of how these costs are measured. The direct costs of paperwork and compliance are not the only costs imposed by regulations; there are also indirect costs that are difficult to take into account but are often larger than the direct costs.

Second, there is the question of prioritizing the right rules to cut. Right now, this order has implemented partial regulatory budgeting, probably the best that can be done without support from Congress. The costs that each agency can impose, along with the number of rules, have been capped. And while we know the total number of rules issued by each agency, we do not yet know the costs and benefits associated with each rule. Only some rules have been analyzed, and many of those analyses are out of date. Agencies enforcing many costly rules with few real benefits should be made to cut much more than two for one, while agencies providing benefits significantly in excess of their costs might be given an increase in their regulatory budgets. This is an approach favored by Cass R. Sunstein, former head of Obama’s Office of Information and Regulatory Affairs.

It is important to remember that regulatory budgeting isn’t some crazy new idea. Canada and the United Kingdom have applied regulatory budgeting. In British Columbia, the Deregulation Office used a regulatory budget approach to cut the number of regulatory requirements to 55% of their 2001 level. This is easier in Canada and the UK because the ruling party or coalition in a parliamentary system can change the law. In the US, when an agency gets rid of an outdated and costly rule, it may run into legal obstacles if that rule was legislatively mandated.

It would be best if the agencies were operating under the explicit orders of Congress and the Executive branch to create real change. This isn’t just about rolling back inefficient regulations; it’s also about making the government work far more efficiently. It’s a good start.