
Too big to fail?

Moral hazard occurs when the potential payoff of taking a risk is disconnected from who will bear the cost of that risk. Moral hazard is often caused by information asymmetry: the risk taker has more information than the person or organization that will bear the cost of the risk. Even though we often assume perfect information, or harp on the need for communication, information asymmetry is a common occurrence. Too big to fail is a form of moral hazard in which an organization takes larger risks, with the potential for larger returns, because it knows it will not be allowed to fail.

Another scenario that can lead to moral hazard is the principal-agent relationship, in which the agent generally has more information than the principal. The principal cannot spend all of his or her time monitoring the agent to keep their information in sync; there has to be a relationship of trust. In some scenarios, the agent has an incentive to act inappropriately by taking more risk than the principal would deem acceptable. For example, the project manager role can often be construed as a principal-agent relationship. I have seen project managers who were behind curtail testing so that they could catch up and deliver on time. That is a moral hazard.
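
A toy expected-value calculation shows how a guarantee against failure shifts incentives. This is only a sketch with hypothetical payoff numbers, not data from any real project; the point is that a backstop changes what looks rational to the risk taker without changing the underlying economics.

```python
def expected_value(p_success, payoff, loss, backstop=False):
    """Expected value of a bet from the risk taker's point of view.
    A backstop means someone else absorbs the loss, so the risk
    taker's downside is zero."""
    downside = 0 if backstop else -loss
    return p_success * payoff + (1 - p_success) * downside

# Hypothetical numbers: a safe option vs. a long-shot option.
safe  = expected_value(p_success=0.95, payoff=10, loss=5)    #  9.25
risky = expected_value(p_success=0.40, payoff=30, loss=25)   # -3.0

# With a bailout guarantee, the long shot suddenly looks best to
# the risk taker, even though someone still bears the -25 loss.
risky_backstopped = expected_value(0.40, 30, 25, backstop=True)  # 12.0

print(safe, risky, risky_backstopped)
```

Notice that the backstop does not change the true economics of the long shot; it only changes who sees the downside. That same gap between who decides and who pays is at work in the scenarios below.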

There are numerous moral hazards that can occur in software development and maintenance environments. Some of the scenarios that generate the potential for significant moral hazard are:

  • Too Important to Fail Projects: Information technology is littered with projects that are too important to fail. Participants on projects that are too big or too important to fail can assume that someone will step in if the project gets into trouble. For example, I have personally been involved with bank mergers with announced cutover dates multiple times in my career. If those dates had not been met, everything from the organization's stock price to the management team's tenure would have been at risk. I saw evidence of the impact of a missed date when a merger cutover had to be postponed due to an external event: the stock of both organizations plunged 50% in two days, and the CIO and his staff were gone in less than 30 days. On more than one occasion, when times got tough, decisions were made to beg for more resources or to cut corners to make the dates.
  • Software Teams Insulated From Business Risk: In many organizations, requirements are developed and provided by the business. The requirements or user stories are handed to a development team as a transaction, and once the team obtains them it begins development. Because requirements are viewed as a transaction, the team falls into a principal-agent trap in which information is asymmetric: the team cannot easily link its development knowledge to business risk.
  • Specialization: Separating some types of related work can generate the potential for moral hazard. In many organizations, development and maintenance functions are staffed separately. The software is developed and then tossed over the wall to the maintenance personnel. When developers are not responsible for fixing defects, they may take shortcuts that benefit development but not maintenance. A similar argument can be made for separating planning and estimating from development.

The financial crisis of the last decade can be traced at least partially to information asymmetry and moral hazard. Innovation and continuous change can either narrow or widen information asymmetry. Concepts such as Scrum can level the informational playing field by embedding the business in day-to-day project decision making. However, when a proxy product owner who is not from the business is substituted, we end up back in the same position of potential moral hazard.