Digital health is exciting, innovative and forward-looking. Investors have poured billions of dollars into the field, anticipating dividends from the next phase of the Big Data revolution: collecting and synthesizing millions of data points that, in turn, could feed artificial-intelligence-enhanced diagnostics, improve monitoring during pharmaceutical clinical trials, promote everyday fitness tracking, and much more.
But companies entering this space must think of themselves the way traditional healthcare companies do—and minimize the enforcement risks that arise under the False Claims Act (FCA). The FCA is a Civil War-era statute originally aimed at defense contractors that delivered deficient goods or services to the government. The FCA creates liability on several grounds, most commonly for submitting false claims for payment to the federal government.
Congress substantially revised the FCA in 1986 and again in 2009 in ways that have given government purchasers—such as the Medicare program—significant enforcement leverage. In 2020, the US Department of Justice (DOJ) recovered US$2.2 billion in FCA judgments and settlements, of which US$1.8 billion (81.8 percent) came from the healthcare industry.
Both the structure of the FCA and a recent criminal/civil settlement—as detailed below—demonstrate some of the risks against which a digital-health company needs to protect itself.
Traps for the unwary
Companies entering digital-health markets should be aware of at least four traps that the FCA creates for the unwary.
First, some targets of FCA enforcement never submit any claims at all (much less "false" claims) to federally funded healthcare programs. Their liability arises from FCA provisions extending to anyone who causes a third party to submit a false claim to a government payor. Digital-health enterprises might not bill the government directly, but many of their immediate or downstream users and customers do.
Second, the FCA differs from common-law fraud in at least two significant ways.
One way is that claims can be legally as well as factually false. Factual falsity is straightforward: if your digital-health service or product does not actually do what you promise it will, that could make a claim for payment for that service or product factually false. A claim can also be legally false—that is, false because the submitting party certifies that it has complied with statutes or regulations (including an implicit requirement of compliance with the federal Anti-Kickback Statute) when it has not actually done so. In other words, there is a very broad range of reasons a claim could be considered "false" under the FCA.
Another distinction from common-law fraud is that FCA liability turns on the defendant's knowledge of falsity, but "knowledge" is not limited to actual knowledge. The government also can prove liability by showing that a defendant acted "in reckless disregard of the truth or falsity of the information." For example, if a defendant lacks a compliance program adequate to identify potential FCA violations, the government may argue that the defendant has shown "reckless disregard."
Third, damages under the FCA differ significantly from standard fraud or breach-of-contract cases. In successful FCA cases, the government may recover up to three times the difference in value between what the government actually received and what it expected to receive. In the case of a claim tainted by a kickback, this may be the entire value of the product. The government additionally can recover per-claim inflation-adjusted penalties that currently range between US$11,665 and US$23,331.
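To make the arithmetic concrete, the sketch below computes the range of potential exposure for a purely hypothetical set of claims. The claim count and dollar values are illustrative assumptions, not drawn from any actual case, and the per-claim penalty range is the 2020–2021 inflation-adjusted figure cited above.

```python
# Hypothetical FCA exposure calculation (illustration only, not legal advice).
# Assumption: 100 kickback-tainted claims of US$1,000 each, where the
# government's loss is treated as the full value of each claim.
TREBLE = 3                                   # FCA permits up to treble damages
PENALTY_MIN, PENALTY_MAX = 11_665, 23_331    # per-claim statutory penalties

num_claims = 100
loss_per_claim = 1_000

treble_damages = TREBLE * num_claims * loss_per_claim
exposure_min = treble_damages + num_claims * PENALTY_MIN
exposure_max = treble_damages + num_claims * PENALTY_MAX

print(exposure_min, exposure_max)  # 1466500 2633100
```

Even for this modest hypothetical—US$100,000 in claims—the per-claim penalties dwarf the trebled damages, which is why the number of claims, not just their dollar value, drives FCA exposure.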
Finally, the FCA deputizes any individual with knowledge of alleged false claims to sue defendants on the government's behalf and "under seal." This means, at least initially, that the defendants do not know the claim exists and the whistleblower's identity is secret. Most typically, individual whistleblowers are insiders of the company alleged to have committed the violation, and the FCA's promise to pay them 15 percent to 30 percent of any judgment or settlement provides a powerful incentive to report alleged wrongdoing.
An object lesson for digital-health companies: Practice Fusion
In the forthcoming Part II of this client update, we will identify key areas that digital-health companies should consider when conducting a risk assessment. For now, we illustrate why the additional compliance effort is worthwhile: the case of Practice Fusion, a cloud-based, electronic health record (EHR) software company.
Databases that create, manage and analyze the underlying data in EHRs may be the most obvious application of digital-health technology. Yet the DOJ's scrutiny of Practice Fusion—the first-ever criminal action against an EHR vendor—provides a cautionary tale.
In January 2020, Practice Fusion entered into a deferred prosecution agreement (concerning kickbacks) and a corresponding civil settlement (concerning FCA allegations) with the DOJ. In entering into these agreements and resolving the government's claims, the company agreed to pay US$145 million.
Practice Fusion's platform had several purposes. On the one hand, the platform promised great benefits to providers and patients by allowing users to mine EHR data about treatment outcomes. Practice Fusion, however, also solicited input—and payments—from a manufacturer of extended-release opioids. (Although the government's filings referred to this manufacturer as "Pharma Co. X," a separate October 2020 plea agreement reveals that the manufacturer was, in fact, Purdue Pharma L.P.) The purpose was to design EHR alerts that would encourage physicians to prescribe, and switch patients to, the manufacturer's products—conduct that the federal Anti-Kickback Statute prohibits. The government was able to enforce the FCA against Practice Fusion on the basis of legally false claims that healthcare providers (not the EHR company itself) presented to the government for reimbursement of certain opioids.
The Practice Fusion case suggests several concrete questions that every digital-health company needs to consider about the clinical niches it hopes to support (and profit from). Our healthcare system promotes for-profit enterprise. Our healthcare laws, however, also seek to ensure that only clinical considerations inform the delivery of healthcare. Even if only one purpose of a given transaction is to induce referrals for a product or service (even one that is medically necessary), the parties to that transaction could still incur liability.
When entering into transactions with healthcare providers or device or drug manufacturers, companies need to ask:
- What is the purpose of the transaction?
- Will the government be asked to pay for a good or service informed by the use of our product?
These are among the first questions that digital-health companies should ask themselves. In Part II of this alert, we will present additional questions every digital-health company should ask to best promote compliance and minimize the risk of government scrutiny.
This publication is provided for your convenience and does not constitute legal advice. This publication is protected by copyright.
© 2021 White & Case LLP