Predictive analytics and “big data” are emerging as important new tools for diagnosing and treating patients. But as data collection becomes more pervasive, and as machine learning and analytical methods become more sophisticated, the companies that traffic in health-related big data will face competitive pressures to make more aggressive claims regarding what their programs can predict. Already, patients, practitioners, and payors are inundated with claims that software programs, “apps,” and other forms of predictive analytics can help solve some of the health care system’s most pressing problems. This article considers the evidence and substantiation that we should require of these claims, focusing on “health” claims, or claims to diagnose, treat, or manage diseases or other medical conditions. The problem is that three very different paradigms might apply, depending on whether we cast predictive analytics as akin to medical products, medical practice, or merely as medical information. Because big data methods are so opaque, their claims may be uniquely difficult to substantiate, requiring a new paradigm. This article offers a new framework that considers intended users and appropriate evidentiary baselines.
Criticisms of market outcomes often rest upon a notion of ‘market failure,’ meaning that the market has failed to align incentives and knowledge to produce an optimal outcome. Rejoinders to classic market failure arguments have taken several forms: that there are institutional or contracting solutions to various forms of market failure, that optimality is not a reasonable goal for real-world economic activity, or that government may fail as well. Similarly, Wittman (1995) and others have argued that concepts of government failure are equally problematic, since the ordinary forces of political competition may render politicians sufficiently accountable to achieve realistically defined standards of efficiency. Even thinkers like Buchanan imagine that constitutional design may allow politics to fend off its tendency to become a zero-sum game. Both concepts are problematic in a world of entangled political economy in which market and government activity are interconnected. We argue that it is time to abandon both ‘market failure’ and ‘government failure,’ and instead focus on problems of institutional mismatch, in which the rules governing interaction are ill-suited to the problems that agents confront.
Centralized oversight of agency policy making and spending by the President’s Office of Management and Budget is a hallmark of the modern administrative state. But tax regulations have almost never been subject to centralized review. Scholars and policymakers have provided various incomplete justifications for exempting tax policy from centralized review, including concerns about politicizing tax administration, resource constraints within OMB, and a perception that tax is somehow different from other types of regulatory policy in ways that matter for the desirability of centralized review.
This Article undertakes a holistic analysis of the advantages and disadvantages of centralized review of tax regulations, as well as the challenges arising from such review. I conclude that none of the reasons offered in the past for a default rule of no review is sufficient in light of the normative benefits of centralized review. The analysis here brings to the fore multiple functions of tax regulations: some rules are focused on shaping private behavior, whereas others focus on raising revenue or redistribution. I make the case that, as in other (non-tax) contexts, centralized review is a good fit for analyzing the portions of regulations that shape private behavior. For these tax rules, centralized review can facilitate productive coordination across agencies, increase political accountability, introduce analytical rigor through cost-benefit analysis, and potentially frustrate capture of the regulatory process by interest groups. As for rules that raise revenue and redistribute: the devil is in the details. This Article outlines the limitations of current centralized review conventions, and sketches some possible modifications that would make centralized review more beneficial for such rules.
The procedures and practices that shape tax regulations are particularly relevant at this moment. The major tax legislation Congress enacted at the end of 2017 included numerous broad delegations. Thus, Congress is relying on the executive branch to develop tax regulations that will reshape the tax system significantly. Recognizing the strengths and weaknesses of centralized review as applied to tax policy will help to establish consistent and productive oversight of the tax regulatory process.
There is wide agreement that existing approaches to valuing noneconomic losses from personal injury lack coherence. “Health‐utility” measurement—an approach developed in health economics for valuing health outcomes in public health and medicine—holds considerable promise for bringing greater rationality and consistency to assessments of injury‐related noneconomic loss. However, the feasibility of creating utility measures that are suitable for use in personal injury compensation has not been demonstrated. This study takes that step. We surveyed more than 4,100 members of the general public in Australia to assess people’s preferences for a variety of nonfatal “health states.” The health states were selected to reflect harms commonly seen in claims to compensation schemes for transport and workplace accidents. We then followed established methods for transforming the survey responses into a “severity weight” for each health state. We show how these severity weights can be used to define tiers in a schedule for guiding noneconomic damages determinations. We also discuss the strengths and limitations of the approach, and consider implementation challenges.
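To make the schedule idea concrete, here is a minimal sketch of how severity weights might define damages tiers. The weights and tier boundaries below are invented for illustration; the study derives its actual weights from survey responses using established health-utility valuation methods.

```python
# Illustrative only: hypothetical severity weights (0 = no loss,
# 1 = worst nonfatal state) mapped to noneconomic-damages tiers.
# Tier boundaries here are assumptions, not the study's results.
TIERS = [  # (upper bound on severity weight, tier label)
    (0.15, "Tier 1: minor"),
    (0.40, "Tier 2: moderate"),
    (0.70, "Tier 3: serious"),
    (1.00, "Tier 4: severe"),
]

def tier_for(severity_weight):
    """Return the damages tier for a given severity weight."""
    for upper, label in TIERS:
        if severity_weight <= upper:
            return label
    raise ValueError("severity weight must be in [0, 1]")

print(tier_for(0.12))  # Tier 1: minor
print(tier_for(0.55))  # Tier 3: serious
```

A compensation scheme could then attach a damages range to each tier, so that two claimants with similar health states receive similar noneconomic awards.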
Regulation and Poverty: An Empirical Examination of the Relationship between the Incidence of Federal Regulation and the Occurrence of Poverty Across the States
May 9, 2018
We estimate the impact of federal regulations on poverty rates in the 50 US states using the recently created Federal Regulation and State Enterprise (FRASE) index, which is an industry-weighted measure of the burden of federal regulations at the state level. Controlling for many other factors known to influence poverty rates, we find a robust, positive, and statistically significant relationship between the FRASE index and poverty rates across states. Specifically, we find that a 10 percent increase in the effective federal regulatory burden on a state is associated with an approximate 2.5 percent increase in the poverty rate. This paper fills an important gap in both the poverty and the regulation literature because it is the first paper to estimate the relationship between these variables. Moreover, our results have practical implications for federal policymakers and regulators, because the increased poverty that results from additional regulations should be considered when weighing the costs and benefits of additional regulations.
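The reported relationship can be read as a relative (percentage-of-percentage) change. As a rough illustration, using a hypothetical baseline poverty rate that is not from the paper:

```python
def poverty_rate_after_regulation(baseline_poverty_pct, frase_pct_change,
                                  elasticity=0.25):
    """Apply the estimated relationship: a 10 percent increase in the
    FRASE index is associated with an approximate 2.5 percent (relative)
    increase in the poverty rate, i.e., an elasticity of roughly 0.25."""
    relative_change = elasticity * (frase_pct_change / 100.0)
    return baseline_poverty_pct * (1.0 + relative_change)

# Hypothetical state with a 12.0% poverty rate facing a 10% increase
# in its effective federal regulatory burden:
print(round(poverty_rate_after_regulation(12.0, 10.0), 2))  # 12.3
```

Note the change is relative, not in percentage points: a 12.0% rate rises to about 12.3%, not to 14.5%.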
via Regulation and Poverty: An Empirical Examination of the Relationship between the Incidence of Federal Regulation and the Occurrence of Poverty Across the States by Dustin Chambers, Patrick A. McLaughlin, Laura Stanley :: SSRN
Welfarism is the principle that the goodness of a social state is an increasing function of individual welfare and does not depend on anything else. As Gregory Keating convincingly argues in the lead article for this symposium, welfarism cannot account for important normative differences among different types of welfare losses or costs. Welfarism entails that all welfare losses and gains — regardless of their source — are to be rendered fungible and then compared within a cost-benefit analysis (CBA) of the welfare changes. According to Keating, liberal egalitarian principles such as equal freedom or self-determination normatively distinguish bodily injuries from harms to liberty and economic interests. Bodily integrity and related forms of security are necessary conditions for the meaningful exercise of liberty, and that normative difference must be fairly accounted for by legal standards that govern significant risks threatening human health and safety. Hence Keating concludes that liberal egalitarian principles rule out CBA for setting such safety standards.
This paper updates the cost-per-life-saved cutoff, which is a cost-effectiveness threshold for life-saving regulations, whereby regulations costing more per life saved than this threshold level are expected to increase mortality risk on net. Two competing methods of deriving the cutoff exist: a direct approach based on empirical observation and an indirect approach grounded in economic theory. Both methods build from the assumption that changes in income lead to changes in mortality risk. The likely mechanisms driving this relationship are discussed, with support from recent empirical studies. The indirect approach is preferable in that it avoids the problems of endogeneity of health status and income found with the direct approach. The cost-per-life-saved cutoff value at which regulations increase mortality risk is estimated to have a lower bound value of $75.4 million and an upper bound value of $123.2 million, with a midpoint value of $99.3 million. This cutoff value range is compared with cost-effectiveness estimates for a series of recent policies, including several state expansions of the Medicaid public insurance program in the first few years of the 21st century, an early version of the “travel ban” executive order that restricted refugee admissions into the United States, and nine recent air pollution regulations from the Environmental Protection Agency. The paper concludes that the mortality risk test is an important and underutilized tool in the policy analyst’s toolkit, both as an overall test of regulatory efficacy and as an integral component of calculations of net risk effects of policies.
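The mortality risk test itself is a simple comparison. A minimal sketch using the paper's cutoff range (the example regulation costs are hypothetical):

```python
# Mortality risk test: a regulation whose cost per life saved exceeds
# the cutoff is expected to increase mortality risk on net.
CUTOFF_LOWER_M = 75.4   # $ millions, lower bound from the paper
CUTOFF_UPPER_M = 123.2  # $ millions, upper bound from the paper
CUTOFF_MID_M = (CUTOFF_LOWER_M + CUTOFF_UPPER_M) / 2  # midpoint: 99.3

def passes_mortality_risk_test(cost_per_life_saved_m, cutoff_m=CUTOFF_MID_M):
    """Return True if the regulation is expected to reduce mortality
    risk on net, i.e., its cost per life saved falls below the cutoff."""
    return cost_per_life_saved_m < cutoff_m

print(CUTOFF_MID_M)                       # 99.3
print(passes_mortality_risk_test(50.0))   # True
print(passes_mortality_risk_test(200.0))  # False
```

A regulation costing $50 million per life saved passes at any value in the cutoff range, while one costing $200 million per life saved fails even against the upper bound and is expected to raise mortality risk on net.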