Monthly Archives: July 2014

Border Crisis Highlights Lack of Effective Security Metrics

In the midst of the crisis on the southwest border involving the housing and free passage of hundreds of thousands of illegal immigrants, the White House continues to peddle the claim that President Obama has bolstered security of the U.S.-Mexico border, citing as evidence the increased number of apprehensions of illegal immigrants and the unprecedented number of border patrol agents employed by U.S. Customs and Border Protection (CBP).[i]

Even disregarding the current crisis, the argument is laughable: 88 percent of the increase in border patrol agents took place before Barack Obama even took office.[ii] This blatant propagandizing provides a useful exposé of the (double) standards of the Obama administration: An economic anemia persisting five years after the transition of presidential power can be forever blamed on George W. Bush, yet enhancements to border security for which Bush was largely responsible are counted as the accomplishments of his successor. As Pat Condell would say, “If not for double standards, they wouldn’t have any standards.”

Setting aside, for the moment, the proper allocation of credit for border security enhancements, we can focus on a more pertinent problem: The southwest border is actually deeply insecure.

This problem is obscured by the common floating of a few misleading statistics. In addition to the numbers on border patrol agents and border apprehensions, supporters of the President’s immigration policies often note the zero-to-negative growth rate in the population of illegal immigrants within the United States in 2011 and 2012 (though the growth rate has recently trended positive again).[iii]

Those who push these numbers commit a couple of basic logical errors. First, they assume that because net growth of the illegal immigrant population is practically zero, the border is secure. But the number of people who actually enter the country illegally has little bearing on border security. The border is secure only when we have the capability to keep people out; the fact that people are choosing not to immigrate does not mean that, should they change their minds, they would be unable to enter. Indeed, CBP attributes a considerable portion of its increased apprehension rate to the decline in immigration that accompanied the 2007-2009 recession.

What is CBP’s actual capacity to apprehend illegal immigrants? Before the Department of Homeland Security (DHS) issued changes to the methods for measuring border security in 2011, CBP used a gradient of security classifications to describe the security of the southwest border (see Table 1). The Border Patrol considered a particular stretch of the border to be under “operational control” if its security fell within the top two designations: “controlled” or “managed.” These designations were determined by the amount of resources and surveillance capabilities CBP had in a particular sector, and by how useful those resources were in deterring or stopping illegal entries.

Table 1: Border Patrol Levels of Border Security

Controlled: Continuous detection and interdiction resources at the immediate border with high probability of apprehension upon entry.
Managed: Multi-tiered detection and interdiction resources are in place to fully implement the border control strategy with high probability of apprehension after entry.
Monitored: Substantial detection resources in place, but accessibility and resources continue to affect ability to respond.
Low-Level Monitored: Some knowledge is available to develop a rudimentary border control strategy, but the area remains vulnerable because of inaccessibility or limited resource availability.
Remote/Low Activity: Information is lacking to develop a meaningful border control strategy because of inaccessibility or lack of resources.

Source: GAO analysis of U.S. Border Patrol ORBBP documents.

A 2011 study by the Government Accountability Office, examining the methods and results of the U.S. Border Patrol, found that of the 2,000-mile southern U.S. border, only 873 miles were under “operational control,” and of those 873 miles, only 129 (15 percent of the operationally controlled stretch, and just over 6 percent of the entire border) were “controlled,” meaning that the Border Patrol had the ability to detect and apprehend all illegal immigrants upon entry. For the remaining 744 miles under operational control, CBP was only able to apprehend illegal immigrants after entry (sometimes 100 miles or more from the border). For most of the southern border, apprehension upon entry ranges from difficult to impossible.[iv]

Since 2011, under the direction of DHS, CBP has abandoned the operational control metrics for assessing border security and is currently developing new metrics. In the interim, CBP has used the number of border apprehensions as its standard of measurement for border security. As alluded to earlier, however, this method does not take into account our actual ability to repel entrants, and it is too heavily influenced by other factors, such as the United States’ economic health, which partially determines how many potential immigrants attempt a crossing in the first place. In addition, CBP has not yet developed objective goals or targets that would indicate effective control of the border, and this lack of reliable measurements seriously “limits DHS and congressional oversight and accountability.”[v]

Aside from their relative obscurity and lack of accountability, however, the interim metrics’ main problem is that they are simply ineffectual: The GAO found that “studies commissioned by CBP have documented that the number of apprehensions bears little relationship to effectiveness because agency officials do not compare these numbers with the amount of cross-border illegal activity.”[vi] This is generally because, as apprehensions increase along one portion of the border, cross-border activities increase in other areas.[vii]

If the President and his DHS want to regain some semblance of credibility, they should reinstitute border security measurements for CBP based on well-defined goals, rather than sheer inputs or activities. Operational control was a good metric, but whatever metric they ultimately choose should enable oversight by, and accountability to, Congress and DHS. Finally, despite the obfuscation on this issue, we should take this case of bureaucratic mishandling as a renewed impetus to secure the border.

[i] The White House. (2014). Border Security. Retrieved from:

[ii] United States Border Patrol. (2013). Border Patrol Agent Staffing by Fiscal Year. Retrieved from:

[iii] Pew Research Center. (2013). Population Decline of Unauthorized Immigrants Stalls, May Have Reversed. Retrieved from:

[iv] Securing Our Borders – Operational Control and the Path Forward: Hearing before the Subcommittee on Border and Maritime Security of the Committee on Homeland Security, House of Representatives, 112th Congress. (2011). (testimony of Richard M. Stana). Border Security: Preliminary Observations on Border Control Measures for the Southwest Border. Retrieved from:

[v] What Does a Secure Border Look Like?: Hearing before the Subcommittee on Border and Maritime Security of the Committee on Homeland Security, House of Representatives, 113th Congress. (2013). (testimony of Rebecca Gambler). Goals and Measures Not Yet in Place to Inform Border Security Status and Resource Needs. Retrieved from:

[vi] Ibid.

[vii] Ordonez, K. (2008). Securing the United States Mexico Border: An On-Going Dilemma. Homeland Security Affairs. Retrieved from:


Net Neutrality Politics: Moving Us Away from a Free, Open Internet

Net neutrality is an issue of rather esoteric beginnings: few people beyond a handful of technocrats and policy experts knew anything about it. With the help of corporate sponsors, however, it has now garnered time in the national spotlight. Unfortunately, as with most ideological fads, net neutrality’s popularity has expanded far more rapidly than people’s understanding of it, and a few critical myths persist which require elucidation.

But first, a brief background on what net neutrality means:

The term “net neutrality” refers to a principle under which all types of content on the internet are sent and delivered at equal speeds: an e-mail from your grandmother downloads to your computer at the same rate as a Netflix video. A non-neutral internet, by contrast, could allow some content to be transferred at elevated speeds, necessarily slowing the rest. Content producers and end-users tend to favor net neutrality because they benefit from a vast diversity of content, and no one wants to run the risk of having their preferred content throttled. Opponents, on the other hand, tend to be internet service providers, such as DSL, cable, and satellite companies, who believe that tailoring their networks to fast-track certain types of content may lead to better end-user experiences and cost savings.
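The difference can be sketched in a few lines of code. This is a purely illustrative toy (the content types, priority tiers, and packet payloads are invented for the example, not drawn from any real ISP’s systems): a neutral network dispatches packets first-come, first-served, while a non-neutral one lets “fast-laned” traffic jump the queue.

```python
import heapq
from collections import deque

# Hypothetical priority tiers an ISP might assign on a non-neutral
# network: lower number = dispatched sooner.
PRIORITY = {"video": 0, "email": 1}

# Packets in order of arrival: (content type, payload).
packets = [("email", "grandma's note"), ("video", "stream chunk 1"),
           ("email", "newsletter"), ("video", "stream chunk 2")]

def neutral_order(pkts):
    """Neutral network: first-come, first-served, regardless of type."""
    q = deque(pkts)
    return [q.popleft()[1] for _ in range(len(q))]

def prioritized_order(pkts):
    """Non-neutral network: a priority queue jumps fast-laned traffic
    ahead; arrival index breaks ties within a tier."""
    heap = [(PRIORITY[kind], i, payload)
            for i, (kind, payload) in enumerate(pkts)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(neutral_order(packets))      # arrival order preserved
print(prioritized_order(packets))  # video chunks dispatched first
```

In the neutral case, grandma’s e-mail goes out first because it arrived first; in the prioritized case, both video chunks jump ahead of both e-mails. That queue-jumping, applied at scale, is precisely what net neutrality rules seek to forbid.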

Today, net neutrality has become more than a mere principle, having manifested in several legislative acts and proposed administrative regulations over the past eight years. Each of these would, to varying degrees, restrict by force of law the business practices of internet service providers.

Like the proponents of most government regulations, net neutrality supporters will often wrap their advocacy in the public interest, the protection of some disadvantaged group, and/or the promotion of economic efficiency. Touchy-feely catchphrases such as “keep the internet free and open” and “all bits are created equal” abound, along with the assertion that net neutrality will bolster marketplace competition by relieving the burden of startup costs on bandwidth-intensive tech companies, and by preventing ISPs from arbitrarily censoring (competitors’) content on their networks.

While the proponents of this proposed government regulation seem to concede the benefit of competition in a marketplace—a refreshing sign—they nonetheless fail to see the contradiction created by invoking it. Free-market adherents correctly recognize net neutrality as a hindrance to marketplace competition, rather than a facilitator. Innovation is a key part of competition in any marketplace, yet the net neutrality regulations imposed by the Federal Communications Commission, struck down in court this past January, would have stifled innovation among internet service providers. As Larry Downes noted in November:

In all, the FCC’s Open Internet order itself cataloged a dozen major non-neutral technologies, protocols, and business arrangements that have long been necessary parts of the Internet. Sensibly and of necessity, the agency granted exceptions from the rules for each and every one of them, recognizing that the “open” Internet, at least from an engineering standpoint, was anything but. For the Internet to continue functioning at all, the rhetoric had to give way to reality.

But there was no way for the rules to preemptively grant similar permission to any future network optimization technologies, other than to caveat all of the rules with exemptions for “reasonable network management.” That term couldn’t be defined, however, meaning that any future innovations will require FCC approval before large-scale implementation.[i]

In a world of rapidly growing internet traffic,[ii] a moratorium on innovation and experimentation on network management practices could spell higher costs and a far lesser quality of service for end-users and content providers alike.

This seems like a terrible tradeoff, since even an absence of government net neutrality regulations would not preclude the possibility of internet service providers adopting net-neutral business practices; if consumers demanded such practices, they could simply switch from one ISP to another. The same is true for content providers: not only giants like Facebook, Netflix, and Amazon, but also smaller companies and (yet-to-exist) startups could switch between ISPs if they believed their content was being discriminated against. This would be a system of true market competition.

In response to this fact, net neutrality advocates are quick to point out the abject lack of competition in the broadband internet market, which is a valid concern. The FCC has reported that of the 132 million households in the United States, only 47 million (roughly 35 percent) have access to four or more video programming distributors (i.e., cable, satellite, and telephone companies); cable companies alone have a market share of 56 percent among these distributors, and of the roughly 1,100 cable companies in the United States, the top five of them (in market share) account for nearly 82 percent of all video programming subscribers.[iii] Given that all of these companies also provide broadband internet services to many of their customers, the competitive outlook for the broadband internet market looks incredibly weak.

This seemingly intractable lack of competition in the internet service market would appear to undermine the free-market competition argument against net neutrality. Since Comcast and similar companies so effectively control their respective markets, a dissatisfied customer has virtually no recourse. This cornering of the market removes the normal incentives that spur companies to improve services and cut costs.

For most people, unfortunately, this is where the debate ends. While many will concede the benefits of market competition for internet service, they now dismiss those benefits as immaterial, since an effective monopoly exists in the largest internet markets. Now the only available option they see for ensuring fair or neutral business practices is through government-imposed net neutrality regulations. But this requires the erroneous assumption that the monopolistic structure of internet markets is a natural, otherwise unending state which only government power can mitigate. Because this completely disregards the question of how the internet market reached its current structure, it precludes the possibility of treating the underlying disease, rather than a mere symptom.

What we should instead do is ask, “Why is there an effective monopoly in internet service markets?”

Basic economic theory informs us that monopolies can only endure as long as no smaller, competing companies enter the market to provide the same (or better) service at a true market-equilibrium price. So why have so few companies entered the markets and upended the entrenched giants?

There are a number of up-front costs associated with starting a cable company and/or entering a cable market. Investment in building the initial cable infrastructure is one of these costs, but another significant, yet often unmentioned, cost is that of acquiring cable franchises. In most states, cable companies must obtain a cable franchise from each and every municipality in which they want to do business. Large companies can easily expand into new markets because they have copious amounts of cash with which to pay the licensing fees, but for smaller and startup companies, the licensing requirements present an insurmountable barrier to market entry. Encouragingly, 21 states have passed cable franchise reform bills, meaning that cable companies need only obtain one license to operate within the entire state. In the 29 remaining states, however, cable companies must still work through the old, inefficient system. As usual, we see a monopoly that persists only with the help of government support.

Evidence indicates that the entry of competitors into previously uncompetitive cable markets does reduce cable prices and provoke efforts from the incumbent cable companies to improve services. In response to entry by AT&T, which offers video service over internet protocol through telephone lines (and thus is not subject to cable franchise requirements), Comcast of Santa Rosa, CA, rushed to deliver “new features [video-on-demand, more channels] in Santa Rosa because rival AT&T has started offering its own digital TV service.” In Houston, similarly, Comcast pledged to offer more “linear and high-definition channels, video-on-demand titles and digital phone features” over the following two months, again in response to AT&T’s encroachment.[iv] A Bank of America study also observed basic cable price reductions of between 28 and 42 percent in areas of Virginia, Texas, and Florida where Verizon rolled out its FiOS video service.[v]

The solution is clear: If we can reintroduce competition into the internet service industry, we can entirely abrogate the need for government neutrality regulations while simultaneously improving consumer choice and quality of service, and lowering prices. Instead of wasting time on net neutrality, which will only stifle competition and innovation, governments at all levels should work to reform (with the goal of eventual repeal) the onerous cable franchise requirements that bar new competitors from entering. That will be a crucial step toward a truly free and open internet.


[i] Downes, L. (2013). What Verizon’s Net Neutrality Challenge Is Really About. Forbes. Retrieved from:

[ii] Cisco. (2014). Cisco Visual Networking Index Predicts Annual Internet Traffic to Grow More Than 20 Percent (reaching 1.6 Zettabytes) by 2018. Retrieved from:

[iii] Federal Communications Commission. (2013). Fifteenth Report. Retrieved from:

[iv] Singer, H.J. (2007). The Consumer Benefits of Telco Entry in Video Markets. Retrieved from:

[v] Bank of America Equity Research. (2006). Battle for the Bundle: Consumer Wireline Services Pricing.