The launch of the federal Health Insurance Marketplace three years ago came with its share of issues and controversy – but today, the ship seems to have been righted, according to some providers and the head of the Marketplace itself.

Health Insurance Marketplace CEO Kevin Counihan lauded a host of success stories leading up to the June 9 “Marketplace Year 3: Issuer Insights & Innovation,” a live-streamed forum organized so providers nationwide could share anecdotes and best practices.

“Three years in, the Health Insurance Marketplace is a competitive, growing and dynamic platform – a transparent market where issuers compete on price and quality, and people across the country are finding health plans that meet their needs, and their budgets,” Counihan wrote in a recent post on the Centers for Medicare & Medicaid Services (CMS) Blog.

Presenters for the forum included issuers from across the country, ranging from major commercial insurers to integrated health systems to regional carriers and others. They were invited to describe innovations regarding paying for high-quality care, working with doctors and clinicians to encourage coordinated care, and using data analytics to find patients, engage them in improving their health, and provide the services that meet their needs.

“Increasingly, the Marketplace is also serving as a laboratory for innovations and strategies that are helping us build a better health care system. Before the (Patient Protection and) Affordable Care Act (PPACA), individual market insurers competed in large part by finding and only covering the healthiest, cheapest consumers,” Counihan wrote. “Today, everyone can buy coverage, regardless of health status, and issuer competition centers on quality and cost-effectiveness. As a result, issuers in states across the country are finding innovative new ways to provide quality, cost-effective health care.”

Counihan outlined three areas in particular where progress has been made:

Value-Based Payment Design

Aetna set a goal to have 75 percent of its spending go through value-based contracts by 2020, Counihan noted in his blog post. Today, the company has more than 800 value-based contracts in 36 states.

“Under Aetna’s national value-based care network, providers are transforming their practices and improving the patients’ experience. For example, they are identifying at-risk patients earlier, engaging patients in care decisions, coordinating care more effectively, and providing new hospital case managers to explain discharge instructions and new medications to patients,” he added. “Not only are the value-based contracts improving quality, they’re paying off in reduced costs. Aetna is seeing medical costs come in 8 percent below what would otherwise be expected in areas with these contracts.”

Additionally, Counihan noted, Blue Cross Blue Shield of Massachusetts has a payment model called an Alternative Quality Contract: it pays doctors and clinicians based on the quality, efficiency, and effectiveness of their care. A study recently published in the New England Journal of Medicine found that the program saved money while also resulting in patients receiving better care than similar patients in other states.

Coordinated Care

The University of Pittsburgh Medical Center (UPMC) Health Plan in Pennsylvania has leveraged early collaboration between providers and care coordination teams, leading to measurable success, Counihan said. These coordination teams are made up of nurses, social workers, and community health workers who can visit while the patient is in the hospital, coordinate their care as they leave the hospital, and, depending on the individual’s needs, check up on them at home.

“(Additionally), Intermountain Healthcare in Utah has placed behavioral health specialists within primary care offices. While it costs more up front, they’re finding that it reduces inpatient behavioral health admissions enough to lower overall costs in the long run while improving patients’ lives,” Counihan wrote. “They’re calling this effort a ‘Total Accountable Care Organization,’ or ‘TACO.’ It’s a healthcare system that cares for the physical health and behavioral health of its members, while tailoring its long-term supports and social service offerings for people with significant health needs.”

Using Data Analytics to Improve Patient Care

Blue Cross Blue Shield in Florida recently took a close look at its prospective Marketplace customers and learned that its new market wouldn’t look the same as its pre-PPACA individual market – and that there would be more variety in health issues across communities. Based on the research, the company created plans for the different needs of unique communities, using “place of delivery” care models to bring together nurses, analysts, pharmacists, social workers, and other experts into interdisciplinary teams focused on improving care for high-risk populations in particular communities.

“Horizon Blue Cross Blue Shield in New Jersey used its consumer analytics to identify the uninsured markets in their area, and launch a targeted marketing strategy to reach those uninsured residents,” Counihan added. “With ad placements outdoors, on public transit, and through social media, as well as mail, digital and email outreach, it reached communities that other insurers hadn’t. For example, it saw opportunity in the large number of Latino residents who were uninsured.”

With a Spanish language marketing campaign, the company helped grow its Hispanic membership from 8,000 to 30,000 members.

“These are just a few of the new ideas and innovative strategies that are being used – they’re what make me so confident in the future of the Marketplace,” Counihan wrote. “And as this market continues to grow and mature, we’ll see even more stories of success as issuers in every state find new ways to provide reliable, quality, person-centered coverage for Americans and their families for years and decades to come.”

For more information about the Health Insurance Marketplace, go online to https://marketplace.cms.gov/.

 

About the Author
Mark Spivey is a national correspondent for VBPmonitor.com.


The Physician Compare website has been in existence since 2010, listing basic information such as names, addresses and gender about physicians and other health professionals (e.g. nurse practitioners and physician assistants). It’s no coincidence that, also in 2010, the Centers for Medicare & Medicaid Services (CMS) started providing financial incentives for physicians to provide data about their clinical activities, electronic record usage, and patient satisfaction. These two initiatives are now coming together in the form of public reporting of self-reported quality and other data about medical groups and physicians.

On June 16, 2016, MLN Connects held the first-ever national provider call on the topic. The call was just under 90 minutes in length, with over an hour devoted to questions and answers from medical practices across the country. At the end of the call there were as many outstanding questions as answers, and the audience was repeatedly encouraged to provide comments on the issues still in rulemaking. The deadline for public comments is June 27, 2016 (go online to https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-Feedback.html to read the proposed rules and comment).

Here are the top three things I believe medical practices should know about Physician Compare at this point:

    1.   If you don’t tell your own story about the quality of your care, CMS is going to tell it for you. Medical practices have a brief window of time each year in which they can review their reported quality data and make corrections before it is published on Physician Compare (in 2015, that window was the month of October, during which we encouraged practices to review their data). This annual review process is established per regulations and is set for the next two years. During the question-and-answer period of the national call, a practice administrator asked whether CMS could provide the PQRS Feedback Report on a quarterly basis instead of annually.

    “How do we know where the heck we are?” she asked. The caller was advised to comment on the proposed rule in hopes of influencing this issue for future years. There is no doubt that more frequent feedback would be helpful for medical practices, but how will they know whether the feedback data is correct without tracking their own data internally? It’s important for medical practices to be able to tell their own story about their clinical quality and to back that story up with their own records. Practices that are not prepared to do this will relinquish control of their quality reputation to CMS (and third-party payors) as public reporting becomes more prevalent.

    2.   You have six months to prepare for “competitive ranking.” The current data on the Physician Compare website is based on metrics reported by medical practices in 2014. Only 20 measures were selected for public reporting that year, so many practices that reported quality data have no specific results showing on the website this year (although the website is supposed to give them credit for reporting, one caller said that her practice reported, but it was not indicated on the website). For those practices with publicly reported measure data, their success rate in completing the measure is represented on a five-star scale, with each star representing 20 percent. The presenters said that the 2015 data will be publicly reported in “late 2016,” and it’s not clear which measures will be included, but all of the PQRS measures are eligible. By “late 2017,” the plan is that 2016 data will be reported (the data that practices are reporting this year), and a benchmark will have been established for each quality measure reported. This will be the basis for competitive ranking. The “ABC methodology” has been chosen to establish the benchmark and drive the ranking system. It was described on the national provider call as follows:

            •    Features a well-tested, data-driven methodology;

            •    Establishes top performers; and

            •    Provides a point of comparison.

    The problem is that the presenters on the national provider call could not clearly explain the formula. When one practice asked for an explanation, the presenter finally admitted that it was hard to explain the formula “without a white board.” The bottom line here is that top-performing medical practices will be identified by a complex statistical formula, and the government intends for that subset to get the five-star rating. The presenters stated that the intent is for “the difference between a five-star rating and four-star rating to (be) statistically significant.” It’s also reasonable to expect that these ratings will be tied to payment differentials under the Merit-Based Incentive Payment System (MIPS) program.
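
    For readers who want to picture how the current five-star display described above might translate a reported success rate into stars (each star representing 20 percent), here is one plausible, simplified reading. To be clear, this is a hypothetical sketch of my own, not CMS’s published formula and not the forthcoming ABC benchmark methodology:

```python
# Hypothetical sketch: map a PQRS measure success rate (0-100%) to a
# five-star display, assuming each star corresponds to a 20-point band.
# This is one plausible reading of the description above, not CMS's formula.
import math

def stars_for_success_rate(rate_percent: float) -> int:
    """Return a 1-5 star rating for a reported success rate."""
    if not 0 <= rate_percent <= 100:
        raise ValueError("success rate must be between 0 and 100")
    # 0-20% -> 1 star, >20-40% -> 2 stars, ..., >80% -> 5 stars
    return max(1, min(5, math.ceil(rate_percent / 20)))

if __name__ == "__main__":
    for rate in (15, 47, 64, 96):
        print(f"{rate}% success rate -> {stars_for_success_rate(rate)} stars")
```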

    What is a practice to do? First, understand that the quality ratings are not an end in and of themselves; they are a means to an end that is expected to create value for consumers. Whether the measures actually result in value remains to be seen, but clearly, medical practices that want to protect their reputations and garner the highest fee-for-service reimbursement must focus on performing as well as they possibly can on their quality measures. See point No. 1: track your data internally and work to improve your performance as measured by the quality metrics your practice has chosen. In essence, medical practices started competing against this to-be-determined benchmark in 2016 and have the remaining six months of the year to work on improving performance and the related data.

    3.   Basic information about your practice may be incorrect on the Physician Compare website. One practice called in with a concern that, while they had 13 practice locations over a wide geographic area, only their main office address was showing on the Physician Compare website. They were rightfully concerned about the confusion this would cause for existing and potential patients. The information on the Physician Compare website is based on the Provider Enrollment, Chain, and Ownership System (PECOS) framework, and only those health professionals approved in the PECOS system will be listed on the website. There were follow-up remarks about the difficulties that the medical practice community has had with PECOS and the frustration with the PECOS process overall. However difficult that may be, though, it will be more and more important for medical practices to grind through the process of ensuring that their information is correct.

Many large, sophisticated health systems and medical groups have been working on their quality data for years in preparation for these developments. Small and medium-sized medical groups are now struggling to catch up and compete. These organizations typically own basic electronic health records and practice management systems, and the basic PQRS and diagnosis coding they already provide to CMS is the data they must begin to use in order to thrive – and perhaps simply to survive.

There was some discussion by one caller about avoidance strategies, e.g. solo physicians planning to change their tax identification numbers frequently to avoid reporting (the government never reports first-year data). To this, the presenters said simply that the consumers will draw their own conclusions about the absence of data. With organizations like Consumer Reports on the job, these strategies seem likely to backfire.

About the Author

Jennie L. Hitchcock is an advisor to healthcare organizations in the areas of regulatory compliance, clinical documentation and coding, risk management, mergers and acquisitions, and operations. With extensive administrative and advisory experience, Jennie possesses a broad understanding of the healthcare industry as well as real-world experience. She holds a bachelor’s degree in organizational behavior and is currently pursuing her fellowship with the American College of Medical Practice Executives. Jennie is a Certified Medical Practice Executive (CMPE) and a Certified Coding Professional – Physician (CCS-P). She serves as president of Compass International Resources, Inc., which provides staffing services in the government sector and consulting services in the healthcare sector.


PQRS – Days of Old

The Centers for Medicare & Medicaid Services (CMS) developed various quality initiatives with a strategic vision of improving the quality of patient care and outcomes while reducing healthcare expenditures. The Physician Quality Reporting System (PQRS) program was intended to play a role in these improvement efforts through the collection of meaningful quality performance data. What began as a voluntary program in 2007, through which physicians could earn incentives of up to 2 percent at the program’s peak for reporting quality data, became mandatory in 2013, meaning that physicians who did not successfully participate would be penalized. Today, in 2016, there are no incentives available; rather, physicians who fail PQRS reporting are subject to a 2-percent penalty.

The PQRS program has matured since its inception, and the requirements for eligibility have significantly increased in complexity over time. Additionally, PQRS is now tied to the Value-based Modifier (VBM) program, whereby physicians’ PQRS reporting has a direct impact on their VBM outcomes. Successfully navigating these programs is not an easy feat, and given how little attention has been directed at radiologists, we feel this article offers a timely overview of how radiology practices can successfully meet quality performance reporting requirements.

PQRS for Radiologists in 2016

Reporting Mechanisms

Claims

Historically, radiologists have been among the most active users of claims-based reporting, because measures applicable to radiologists are still accepted via claims. The success rate for claims reporting (essentially, the share of reporters that have met the minimum criteria to be eligible to earn an incentive) has been declining over time. Claims reporting requires providers to report on nine measures across three quality domains, and if patients receive an evaluation and management service, providers also must include one cross-cutting measure, for at least 50 percent of the Medicare population that meets the denominator criteria for the nine selected measures. CMS has been working toward phasing out the claims-based PQRS reporting option; in fact, in 2016 there are only 77 measures available for claims reporting across all specialties. One disadvantage of claims reporting is that it does not give providers the opportunity to review their overall status before committing measures to CMS. One positive aspect is the convenience of receiving interim PQRS progress reports through the CMS portal. Lastly, of note, claims-based reporting is not available to practices that choose to report as a group practice rather than as individual providers.

PQRS Data Registry

Utilizing a PQRS data registry is a more effective way for radiologists to report their PQRS measures. Although the success rates are higher in comparison to claims-based reporting, they also have been slowly declining over time. Again, the data registry option requires providers to report on nine measures across three quality domains, and if patients receive an evaluation and management service, providers also must include one cross-cutting measure for at least 50 percent of the Medicare population that meets the denominator criteria for the nine selected measures. A second option available for individual reporting allows providers to report a measures group. A measures group is a collection of related measures for which providers only have to report on 20 patients in total, 11 of whom must be Medicare fee-for-service patients (the remaining nine may be covered by any other payor). The measures group specific to the radiology practice is titled Optimizing Patient Exposure to Ionizing Radiation (OPEIR). There are six measures within this group, but as a word of caution, providers sometimes have difficulty meeting Measure No. 364, which requires the ability to search for prior computed tomography (CT) studies using a secure, authorized, media-free, shared archive.

One advantage of using a PQRS data registry is the ability to review overall performance prior to final submission to CMS. This is particularly important to successful participation in the VBM program, as elaborated upon in the VBM implications section of this article. Unfortunately, not all PQRS data registries are created equal. Providers should take adequate time and care to understand what features and functions are available from each registry under consideration. CMS has published a list of approved registries for 2016 on the CMS PQRS registry page, where registries must identify which measures they support, as well as whether they accommodate individual and/or GPRO reporting options.

Qualified Clinical Data Registry

A third option that can be very effective for radiologists’ PQRS reporting is a relatively new type of registry as of 2014, known as a qualified clinical data registry (QCDR). As with the others, the QCDR option requires providers to report on nine measures across three quality domains, with one cross-cutting measure, if patients receive an evaluation and management service. For this option, at least two of those measures must be outcomes measures for at least 50 percent of the entire patient population that meets the denominator criteria for the nine selected measures.
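
To make these thresholds concrete, the short sketch below checks a hypothetical measure selection against the requirements described above: at least nine measures spanning three quality domains, with at least two outcomes measures for the QCDR option. The measure names, domains, and flags are illustrative assumptions, not an actual CMS specification:

```python
# Hypothetical sketch of the reporting-requirement checks described above.
# Measure IDs, domains, and the "is_outcome" flag are illustrative only.
from dataclasses import dataclass

@dataclass
class Measure:
    measure_id: str
    domain: str        # quality domain the measure falls under
    is_outcome: bool   # True if the measure is an outcomes measure

def meets_qcdr_requirements(selected: list) -> bool:
    """Check the thresholds described above: at least nine measures,
    spanning at least three quality domains, with two or more outcomes measures."""
    enough_measures = len(selected) >= 9
    enough_domains = len({m.domain for m in selected}) >= 3
    enough_outcomes = sum(m.is_outcome for m in selected) >= 2
    return enough_measures and enough_domains and enough_outcomes

if __name__ == "__main__":
    # Toy selection: nine illustrative measures across three domains, two outcomes.
    demo = [
        Measure(f"M{i}", domain, outcome)
        for i, (domain, outcome) in enumerate([
            ("Patient Safety", False), ("Patient Safety", False),
            ("Patient Safety", True), ("Effective Clinical Care", False),
            ("Effective Clinical Care", False), ("Effective Clinical Care", True),
            ("Communication and Care Coordination", False),
            ("Communication and Care Coordination", False),
            ("Communication and Care Coordination", False),
        ])
    ]
    print("Meets QCDR thresholds:", meets_qcdr_requirements(demo))
```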

The QCDR option holds a slight advantage over the standard PQRS data registries in that it supports a complement of radiology-specific, non-PQRS clinical measures in addition to a subset of the PQRS measures, and these may be mixed and matched between measure types. For example, providers may report on all clinical measures, a mix of both, or PQRS measures alone if there is sufficient volume. Radiologists generally find these clinical measures more appropriate and pertinent to their practice, often making it much easier to find the proper measures to meet the reporting requirements. Another distinct advantage under the VBM program is that QCDRs allow for peer-to-peer comparison throughout the performance year to date, which can offer a meaningful sense of where performance lies and how payments may be impacted. There are two radiology QCDRs offering their services in 2016: the American College of Radiology (ACR) and SaferMD. The ACR National Radiology Data Registry (NRDR) is expanded upon in the next section of this article.

EHR

An option also exists to report measures through providers’ certified electronic health record (EHR) for specific measures reported to CMS. The certified EHR option also requires providers to report on nine measures across three quality domains, and one cross-cutting measure, if patients receive an evaluation and management service, for at least 50 percent of the entire patient population that meets the denominator criteria for the nine selected measures.  However, each measure must have at least one Medicare fee-for-service patient included. This option is not highly utilized by radiologists; in fact, only 79 radiologists reported their PQRS measures through their certified EHR. It’s also important to consider that the capability to manage individual and group reporting options will vary by vendor.

GPRO Web Interface

Lastly, providers that choose to report as a group through the Group Practice Reporting Option (GPRO) Web interface utilize a set of predefined measures that are specific to this particular reporting option. In other words, providers do not have the ability to select their own measures. CMS identifies a sequential range of patients for whom providers must answer each quality measure that may apply to the particular patient. Unfortunately, the current complement of measures for 2016 does not include any radiology-specific measures. Group practices should be reminded that electing to use this option requires formal registration through the CMS website under the GPRO option by June 30. Completing this process requires the group to identify the means through which it intends to report its PQRS measures, for instance through a PQRS data registry, a QCDR, or the GPRO Web interface. Group practices should also keep in mind that this option is used primarily by very large academic medical centers where providers are collected under a single or small subset of TINs (Tax Identification Numbers).

Selecting Measures

Applicable Measures

There are a number of PQRS measures available to radiologists in 2016. Figure 1, although not an all-encompassing list, displays the most commonly selected measures and can serve as a starting point when selecting the measures most applicable to each practice. Each clinical setting, however, may have different measures that apply outside of those listed here. Beyond the individual measures related to radiology, the OPEIR measures group is also available, as discussed above. When reporting on individual measures, providers should refer to the individual reporting method requirements in terms of the number of measures they are required to report. Additionally, if a practice provides evaluation and management services, at least 100 additional measures may apply.

[Figure 1]

Planning a Reporting Strategy

In planning a reporting strategy, the first step requires determining whether to report as individual providers or as a group practice. Practices must ensure that each of their providers is capable of meeting the reporting requirements. Individual providers’ reporting outcomes stand on their own performance, whereas group practice reporting outcomes are determined by collective performance.

A practice next must decide on the reporting mechanism that best suits its workflows. Before moving on to the measure selection process, the first thing a practice should understand is which measures apply. A good starting point would be to run a CPT count by physician according to each measure’s denominator definition. Depending on the mechanism of reporting, a practice should refer to the appropriate measures document to understand the narrative of each measure, including reporting frequencies, inclusions, exclusions, and restrictions. Another point of comparison is the “single source” file published by CMS, which provides a list of CPT and ICD codes applicable to each measure and can assist in identifying and isolating measures relevant to the practice. When opting to use a QCDR, providers should visit the QCDR’s website to review their available options and to check which non-PQRS clinical measures and PQRS measures are supported.
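
As a starting point for that CPT count, a minimal sketch along the following lines could be run against a practice’s claims extract. The file layout, column names, and denominator codes below are assumptions for illustration only; the authoritative denominator definitions come from the CMS measure specifications and the “single source” file:

```python
# Hypothetical sketch: count CPT volume per physician against a measure's
# denominator code list, as a first pass at identifying applicable measures.
# The column names and the code list below are illustrative assumptions.
import csv
from collections import Counter

# Example denominator CPT codes for a hypothetical measure; in practice these
# come from the measure specification / CMS "single source" file.
DENOMINATOR_CPTS = {"71045", "71046", "74176"}

def cpt_counts_by_physician(claims_csv_path: str) -> Counter:
    """Count claims per rendering physician NPI that fall in the denominator."""
    counts = Counter()
    with open(claims_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["cpt_code"] in DENOMINATOR_CPTS:
                counts[row["rendering_npi"]] += 1
    return counts

if __name__ == "__main__":
    for npi, n in cpt_counts_by_physician("claims_extract.csv").most_common():
        print(npi, n)
```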

The last point, which is sure to impact radiologists with a more limited scope of service, is the Measures Applicability Validation (MAV) process. This applies to providers that report fewer than nine measures across three domains utilizing the claims or registry reporting options. CMS will flag providers or practices that fail to meet the minimum requirements and evaluate whether any other measures or domains apply. In these cases, CMS would apply a MAV cluster, or a collection of related measures, whereby if one measure is reported, it is anticipated that all measures within that cluster will be reported. MAV applies only to the claims and registry reporting mechanisms, and importantly, the rules differ between the two. Where applicable, Figure 2 shows the MAV cluster that each individual radiology measure belongs to across the claims and registry reporting mechanisms. We advocate that providers select more than nine measures in order to have the opportunity to put forward their best-performing measures at the end of the performance year, as long as the minimum requirements are met.

[Figure 2]


American College of Radiology National Radiology Data Registry (NRDR)

The NRDR is a collection of clinical data registries; providers can find multiple registries contained within this single offering. The registries serve various purposes: in addition to helping practices meet PQRS requirements, they can assist with providers’ meaningful use, licensure, and clinical quality performance.

  • The Dose Index Registry (DIR) consists of six measures that are approved for PQRS and allow for performance information to be directly transmitted from the imaging equipment to the data registry, which makes it a very convenient way for providers to gather a significant amount of clinical data for reporting.
  • The General Radiology Improvement Database (GRID) consists of six PQRS-approved measures pertaining to process and outcome indices. This registry focuses on capturing a variety of information, including comparable turnaround times, patient wait times, and incident rates, to name a few.
  • The Lung Cancer Screening Registry (LCSR) consists of three PQRS-approved measures and allows a practice to track and report on all of their lung cancer screening patients; it also qualifies them to receive the Medicare CT lung cancer screening payment. Practices should note that within this registry, there is a built-in 12-month lookback period, meaning that reporting through the registry in 2016 is really reporting on cases presenting in 2015. This is due to the fact that it requires taking incremental steps to fully capture the clinical picture of this screening event.
  • The National Mammography Database (NMD), similar to the DIR, consists of six PQRS-approved measures and offers an opportunity to collect performance data in an automated way. NMD specifically offers practices an opportunity to track mandated mammography data and compare performance to other facilities and providers nationally. Providers should take caution, however, as these measures may be condition-specific and increase the difficulty of meeting the measures’ clinical denominator criteria.
  • The Colonography Registry (CTC) consists of three PQRS-approved measures and provides benchmark capabilities that allow for comparison with peer participants. Finally, the ACR anticipates having the Interventional Radiology Registry (IR) available in 2017. This particular registry will support improvement through structured reporting templates and at least one approved PQRS measure, with the expectation of several more.

[Figure 3]

Figure 3 provides an example of the level of detailed data made available when using the ACR registries. This particular example is from the DIR, for one exam type and one specific measure reported for PQRS purposes. The display shows where the client’s performance, represented by the red line, falls within the total population for the segment being reviewed, represented by the box. Looking at the entire DIR, the 25th percentile is roughly 600 percent and the 75th percentile is around 800 percent, with the mean and mode indicated by the blue dot. Compared with its peers, the client is performing better, but not at a level that would impact its ability to access funds under the VBM program. Access to such powerful information can allow practices to understand their shortcomings early enough to take action where necessary.

VBM Implications for Radiology Practices

As of 2016, the VBM program has been fully implemented, meaning that practices of all sizes are subject to VBM quality tiering and the associated payment impacts. PQRS reporting now has a direct impact on practices’ VBM outcomes, whereby failure to participate successfully will automatically trigger a two-pronged reduction in Medicare reimbursement. By focusing on the CMS-calculated measures of the quality component, which are derived from the patient attribution process, radiologists can gain a better understanding of how patients may end up within a practice’s cost calculations. Generally, radiologists will have patients attributed for one reason but not necessarily another.

Radiologists will rarely have patients attributed to their practice for providing the plurality of primary care services used to calculate per capita costs; the slight possibility of this occurring applies mostly to interventional radiologists who bill for evaluation and management services (and to patients who have no other services performed by any other provider that particular year). However, radiologists at large might expect to see patients attributed to them for providing the plurality of Part B services for triggered hospital episodes used to calculate Medicare spending per beneficiary costs. This occurs when certain admissions to an inpatient setting are triggered for inclusion and radiologists have submitted the highest dollar amounts in Part B medical allowable fees for that inpatient episode. There is a natural hierarchy in how these patients are attributed, and radiologists fall third in line if they are performing a large number of expensive exams. These are important features for radiologists to both consider and review on the Quality and Resource Use Report (QRUR). Figure 4 breaks down where the patient attribution process funnels into a radiologist’s cost attribution.

As a result, and as a key point to keep in mind with respect to these calculations, radiologists may often not have certain costs included in their VBM calculation because patient data was unavailable. If there is insufficient volume – fewer than 20 cases, or even no patients – then these costs are not included in the calculation of the VBM score. Radiologists will be assigned an “average” cost score where no data is available for calculations. This small component is just one of many factors that contribute toward fully understanding the VBM program and the impact it may have on your practice. For a detailed overview of the program in its entirety, the VBPmonitor white paper Optimizing VBM Quality Tiering for Physicians is also a very useful resource.
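
The 20-case minimum described above can be illustrated with a short sketch. The function below simply applies the rule as stated (fewer than 20 attributed cases means the cost measure is excluded and the practice is treated as average); the names and values are illustrative assumptions, not CMS’s actual VBM computation:

```python
# Hypothetical sketch of the minimum-case rule described above: cost measures
# with fewer than 20 attributed cases are excluded and treated as "average."
# Names and values are illustrative, not CMS's actual VBM computation.
from typing import Optional

MIN_CASES = 20

def cost_score(attributed_cases: int, observed_ratio: Optional[float]) -> str:
    """Describe how a cost measure would be handled under the rule above.

    observed_ratio is a hypothetical standardized cost ratio (1.0 = average);
    it is ignored when the case count falls below the 20-case minimum.
    """
    if attributed_cases < MIN_CASES or observed_ratio is None:
        return "insufficient volume: measure excluded, practice scored as average"
    return f"measure included with standardized cost ratio {observed_ratio:.2f}"

if __name__ == "__main__":
    print(cost_score(0, None))     # no attributed patients
    print(cost_score(12, 1.35))    # below the 20-case minimum
    print(cost_score(57, 0.92))    # enough volume to be scored
```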

[Figure 4]

The Sunset of PQRS, VBM and MU

It is important to recognize that 2016 is the final year for the current quality reporting programs – PQRS, VBM, and meaningful use (MU) – so a new chapter is dawning for quality reporting. Though these programs will not be completely eradicated, they will be brought together under one roof. The introduction of the Merit-based Incentive Payment System (MIPS) and Alternative Payment Models (APMs) creates an encompassing formula for scoring performance and a framework for rewarding providers for furnishing better care, smarter spending, and ultimately more positive patient outcomes.

About the Authors

Gloria Johnston is an accomplished healthcare executive with extensive experience leading people, improving and expanding healthcare operations, and providing consultative services to healthcare systems. Gloria has held leadership positions at several prestigious academic medical centers. She holds MBA, BSN, and AS degrees and is a credentialed health information professional.

Stephanie Doran is a health information management (HIM) consultant and project manager for HealthAdvanta, a health information and technology company. She is a graduate of Temple University, where she earned a bachelor’s degree in health information management and was honored with the Health Information Management Professional Excellence Award for her distinguished development and achievement.


Executive Summary

As the healthcare delivery network continues to evolve from volume to value, healthcare providers are paying ever more attention to the health and cost of a population close to home: their own employees.

Value-based care is predicated upon aligning incentives among all stakeholders: providers, payors, patients, and employers. In the case of employee health plans, hospitals, health systems, and other healthcare delivery organizations actually fill three of those four roles: they are simultaneously the payor, the employer, and the primary provider of healthcare services. This reality creates a set of unique challenges and opportunities. In fact, many provider organizations are using their self-funded plans and their employees as learning laboratories for value-based care, exploring issues such as network structure, benefit design, patient responses to wellness and care management programs, provider responses to clinical coordination across the care continuum, and the critical search for cost reduction.

To explore these and other questions, as well as to establish a baseline understanding of how provider organizations manage their employee health plans today, Valence Health, in conjunction with the American Society for Healthcare Human Resources Administration (ASHHRA), recently conducted a national survey of more than 150 healthcare organizations.

Among the survey’s key findings were:

Human resources and finance executives continue to turn their attention to bringing employee health plans in line with other value-based endeavors focused on patient populations. Respondents listed narrow networks, domestic utilization, and medical management as key areas of focus for aligning costs and outcomes.

As expected, over 75% of respondents self-insure their employees’ healthcare. Of those that do not currently self-insure their employees’ health, 25% indicate it is somewhat or very likely that they will switch to a self-insured approach next year.

Of those who self-insure their employees, 54% look to a traditional payor to administer their self-insured plan, while 36% look to an independent third-party administrator (TPA). Interestingly, those working with independent TPAs were significantly more satisfied, with 50% of those respondents being very satisfied, compared to just 34% of those using a payor for administration.

Cost remains the number one driver for healthcare providers when selecting an administrative partner for their self-funded plan.

When evaluating their administrative partners, provider organizations are least satisfied with medical management services.

KEY FINDING 1:

Healthcare providers are using their employees to explore multiple aspects of value-based care

When asked an open-ended question about what challenges or opportunities they would like to learn more about in relation to their self-funded employee health plan, respondents hit on all four aspects of value-based care: wellness and patient engagement (20.4%), utilization and site selection (18.5%), cost (14.8%), and plan design (13.0%). It is clear that providers are thinking not only about optimizing their employee plan, but also about how to leverage the learning and infrastructure from the employee plan to serve their broader patient populations under value-based models.

KEY FINDING 2:

The vast majority of provider organizations provide health insurance via a self-funded plan

Like most organizations with more than 500 employees, the majority of the provider organizations surveyed self-insure their employee base. Fully 77% of all respondents indicated that they bear the financial responsibility for the healthcare provided to their employees. Providers employ a variety of risk management strategies to minimize their financial exposure, with the major levers being domestic utilization and well-designed stop-loss insurance. The former ensures that providers really only bear the marginal cost of incremental care delivered within their network, versus the full charged costs of outside network care. The latter can be designed to alleviate the actuarial risk that providers feel is beyond their balance sheet to support.

Reflecting a larger trend across all employers, it appears that a significant portion of healthcare provider organizations who are not currently self-insured for their employees’ health are moving to do so. A quarter of all respondents not yet self-insured think it is somewhat or very likely that they will switch to that approach next year.


KEY FINDING 3:

Not as many providers look to traditional payors for administrative services as anticipated

In a surprising finding, the survey revealed that the number of providers who look to traditional payors (e.g. Blue Cross/Blue Shield plans, Aetna, Cigna, UnitedHealthcare) for administrative assistance to support their plans was much lower than expected: only 54% of respondents partner with such organizations. Historically, traditional payors have used adjustments to their commercial reimbursement rates as incentives for a hospital to select them as the administrator of the self-insured plan. When combined with satisfaction ratings indicating that providers working with independent TPAs were significantly more satisfied than those working with payors, the possibility arises that providers may be recognizing that controlling their own destiny is more important, and ultimately more valuable, than any incentives traditional payors might provide.


KEY FINDING 4:

The vast majority of respondents indicated that they review their plan administration partners every two or three years, and that cost remains a key driver

Underscoring the rapidly evolving nature of value-based care, provider organizations indicated that they regularly review their administrative partners and approach to employee health insurance. Five percent of respondents review their approach annually, 43% every two years, and 28% every three years. When they do look to choose a new approach or partner, cost is still “king”—as 59% of respondents said administrative cost was the most important decision factor. 


KEY FINDING 5:

When considering the performance of various administrative functions, medical management received the lowest rating, while the ability to drive in-network utilization received the highest rating

Just as independent TPA partners fared better on overall satisfaction, a breakdown of individual functions saw independent TPAs rated more highly on five of seven key functions. Interestingly, though, medical management, which is core to value-based care efforts, was clearly identified as the area with the least satisfaction. Perhaps this indicates that providers, as those directly responsible for delivering and coordinating care, set a high bar for partners in this area. By contrast, the ability to help drive in-network utilization was well-rated, regardless of how the plan was administered.


Survey Methodology

This online survey was sponsored by Valence Health and ASHHRA, and took place in Q1 2016. In total there were 185 partial and complete responses. Respondents were primarily HR executives, together with some financial executives. All respondents were from US-based provider organizations. 

About ASHHRA

Founded in 1964, the American Society for Healthcare Human Resources Administration (ASHHRA) is a personal membership group of the American Hospital Association (AHA) and has more than 3100 members nationwide. ASHHRA leads the way for members to become more effective, valued, and credible leaders in health care human resources. As the foremost authority in health care human resources, ASHHRA provides timely and critical support through research, learning and knowledge sharing, professional development, products and resources, and provides opportunities for networking and collaboration. ASHHRA offers the only certification distinguishing health care human resource professionals, the Certified in Healthcare Human Resources (CHHR). Visit www.ASHHRA.org for more information.

About Valence Health

Valence Health provides value-based solutions for hospitals, health systems and physicians to help them achieve clinical and financial rewards for more effectively managing patient populations. Leveraging 20 years of experience, Valence Health works with clients to design, build and manage customized value-based models including clinically integrated networks, bundled payments, risk-based contracts, accountable care organizations and provider-sponsored health plans. Providers turn to Valence Health’s integrated set of advisory services, population health technology and value-based services to make the volume-to-value transition with a single partner in a practical and flexible way. Valence Health’s more than 900 employees empower 85,000 physicians and 135 hospitals to advance the health of 20 million patients. For more information, visit www.ValenceHealth.com 


In a perfect world, you would show up to work, see a patient, submit a claim, and be paid. Patients would always arrive on time for their scheduled appointments, bringing with them valid insurance, a completed health questionnaire, and family history information, ready to go when they reach the front desk. Your charges for the services rendered would reasonably reflect not only the cost, but the true value of what you did, and you would actually be paid the amount you billed. It goes without saying that, unfortunately, this is not even close to the world in which we live.

In the real world, a huge amount of competition exists for the limited dollars available to pay for healthcare services, and with recent quality and outcome measures added to the mix alongside dwindling reimbursement, it’s harder than ever for providers to simply stay in the black.

When delivering presentations on this topic, we often begin by asking attendees: “What would you say is the most important responsibility of a medical practice?” The inevitable, nearly unanimous response (which may have even crossed your mind while reading this) is: “provide quality care to our patients.”

This is a noble objective and should definitely be high on the list, but in our opinion, it’s not achievable unless the actual most important obligation is met first: the obligation to maintain profitability. Unless you are part of a military or government-mandated practice, or some other form of deficit-funded facility, you’ll reliably go out of business if you aren’t profitable. If that happens, no one wins; your patients will no longer have access to the quality care you provide, and you’ll no longer be practicing medicine the way you really want. In short, to take good care of your patients, you must first take good care of your practice.

Profitability: A Simple Equation

Profitability actually comes down to a pretty simple algebraic relationship: revenues divided by expenses. To increase profitability, you need to increase the numerator (revenues), decrease the denominator (expenses), or do some combination of both. Let’s examine each of these options in a little more detail.
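
A minimal worked example of that ratio, using entirely hypothetical numbers, shows how movement in either the numerator or the denominator changes the result:

```python
# Hypothetical illustration of the profitability ratio described above
# (revenues divided by expenses); all dollar figures are made up.
def profitability(revenues: float, expenses: float) -> float:
    return revenues / expenses

baseline = profitability(1_200_000, 1_000_000)        # starting point
more_revenue = profitability(1_260_000, 1_000_000)    # grow revenue 5%
lower_expenses = profitability(1_200_000, 950_000)    # cut expenses 5%
both = profitability(1_260_000, 950_000)              # do some of each

print(f"baseline:      {baseline:.2f}")
print(f"+5% revenue:   {more_revenue:.2f}")
print(f"-5% expenses:  {lower_expenses:.2f}")
print(f"both:          {both:.2f}")
```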

First, let’s look at expenses. You always have the option to reduce staff pay and benefits, but what happens when you lower these expenses to below market value? You end up with a high attrition rate (i.e. people quitting) or, at the very least, a disgruntled staff. In either scenario, quality, continuity of care, and office morale usually decline, resulting in a less-than-ideal situation for the practice.

This inadvisable option is further complicated by the recent revisions to the U.S. Department of Labor’s overtime rule enacted on May 18. This rule effectively increases the annual salary threshold for exempt employees (employees not eligible for overtime pay) from $23,660 to $47,476, which means practices will be forced to increase their salaried employees’ pay to the new minimum threshold to maintain exempt status or start paying more overtime.

Some practices turn to layoffs or reducing work hours in an attempt to control payroll expenses, but as many have discovered, changing staffing levels without changing the way they do business ends up being more expensive in the long run because of mistakes, missed opportunities, and rework. Though it’s important to monitor and control your payroll expenses, it may be even more important to examine other factors discussed in this article that are likely impacting your state of profitability.

How about reducing volume? Well, because we have not yet figured out an efficient way to get paid based on outcomes, volume continues to rule with regard to revenue, so when you reduce volume, you reduce revenue. The bottom line? Quality is expensive, and volume is necessary.

If you can’t do much to make a significant dent in your expenses, then shouldn’t you just focus on increasing revenue via increased patient volume or charge amounts? Well, in theory, yes, but as we mentioned earlier, that may be easier said than done. Healthcare is one of the few business models in the U.S. in which, with the exception of concierge providers, the amount you charge bears little resemblance to the amount you’re eventually paid. Even if you file nothing but clean claims, you’ll inevitably be paid less than you should.

Here’s why: say you’re scheduled to see a patient, but even before the office visit occurs, you have to verify the patient’s insurance coverage, and that it will pay for the visit or procedure in question. After all, patients rarely pay for their own care. Finally you see the patient, and nearly 1,600 decision points later, one or more procedure codes are selected from a possible list of 150 or so evaluation and management codes and possibly 15,000 other HCPCS codes. Add to this the 300 or so possible code modifiers and you end up with around 976 billion possible combinations from which to choose.

Your job, should you accept it, is to pick the right one. And to get paid, you need to correctly match these procedure codes with one or more of the 69,823 diagnostic ICD-10 codes. Once you make it through these murky waters, the claim goes to a payor with a charge that often seems irrelevant, because the payor has a predetermined payment amount. It derived this amount from one fee schedule out of a thousand that it has distributed between different markets and products. When the claim either goes unpaid or underpaid, there is a set of codes that comes from a list of five group codes, 250 reason codes, and 650 remark codes. This means there are 976 billion possible reasons for your claim to be unpaid or underpaid, and you are responsible for knowing which one is the right one. And if you didn’t do this in the beginning of the process, you likely will have some portion to collect from the patient, who may have to make a choice between paying you and taking the kids to Disney World. In the most recent National Health Insurer Report Card (created by the American Medical Association, or AMA), it was reported that the payor ends up paying nothing for nearly one out of every five claims filed. Of the remaining four claims that are paid, approximately 80 percent are paid incorrectly.

Negotiating better contracts would certainly be beneficial, but many practices suffering from “Eeyore syndrome” (named for the gloomy, anhedonic donkey in the Winnie-the-Pooh books) are unlikely to pursue this process because they haven’t assembled or reviewed the data to support their position and don’t want to spend time arguing with carriers for scraps. For example, conducting a cost analysis would allow a practice to determine which procedures have a higher cost-to-collection ratio, enabling it to potentially negotiate carve-outs for those procedures. The bottom line is that increasing revenues using traditional methods often costs too much time and resources to make any net gain.

Key Points

  • Efficiency suggests the ability to do something well or reach a specified goal without wasting resources.
  • Lean Six Sigma has emerged as the “horse and carriage” of process improvement for medical practices.

Efficiency is Key in Healthcare

The idea of profitability being reducible to simple algebra is actually quite archaic, and not very applicable in a complex system, which healthcare certainly is. See, in a linear system, a one-to-one relationship exists between the components, and it’s usually pretty easy to manage. In a complex system, we have many-to-many relationships, and the idea of complexity itself is tied to these interrelationships between players.

Yet even with all of the complexities on the nonclinical side of practicing medicine, there remains a beacon of light. And the word of the day (or decade) that describes this beacon is efficiency. It’s a word that all of us know and probably use routinely, but far fewer of us actually recognize the power of this weapon in the battle against declining profitability (and even fewer know how to wield it effectively). For our purposes, efficiency is the ability to do something well or reach a specified goal using the least amount of resources possible. It’s also the ability to achieve the same results with fewer resources or achieve more or better results with the same amount of resources.

Here’s a real-life example. A subclinical staff member escorts a patient to the exam room where, inter alia, she spends three minutes verifying answers to questions the patient completed on your questionnaire while in the waiting room. Once this is completed, the physician enters and repeats the same process, spending another three minutes verifying answers to the same questions already reviewed by the subclinical staff.

We recently asked a physician about the redundancy of the process, and he said that a speaker at a conference told him that this process would result in fewer errors with regard to the answers provided by the patient – a noble rationale that speaks to the heart of quality of care, and, on the surface, seems like sound advice. In subsequent questioning, however, we discovered that the practice never measured the error rate before engaging in this redundant step, nor did it measure the error rate during the time it engaged in this step. So while the goal (reduce errors on intake forms) was commendable, the practice didn’t know whether a problem existed in the first place and had no way to know whether this additional step improved either the safety or quality of patient care.

We conducted a test on this process and couldn’t find a single occurrence in which a patient answered the questions differently for the physician than he or she did for the subclinical staff member. We did, however, find instances in which the patient responded to the subclinical staff member differently than they had on the questionnaire they completed in the waiting room (a finding we attributed primarily to patients misinterpreting questions on the questionnaire that were later clarified by the subclinical staff). As a result, we advised the physician that, by eliminating the step in which he repeated the questioning, the practice could save three minutes per visit without negatively affecting the quality of care.

The physician replied, “my problems here go way beyond three minutes a visit.” This statement is true if you only see a few patients a day, but this practice saw 80 patients a day, which translates to 240 minutes (or four hours) of wasted time per day. In an ideal scenario, we could easily convert those four hours into value, but in our real world, because a base set of constraints exists in any process, we would likely only be able to convert around 25 percent, or one hour, of that time. So if this practice sees about four patients per hour, with an average revenue of $116 per visit, this results in an additional $92,800 in revenue per year. And that’s just from one modification to one process. Think about the revenue opportunities you might be missing out on simply because of a few obscure steps in your workflow.
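
The arithmetic behind that estimate can be reproduced in a few lines. The figures below come straight from the example above, with one added assumption: that the practice operates roughly 200 clinic days per year, the number implied by the $92,800 total:

```python
# Reproducing the worked example above; the 200 clinic days per year is an
# assumption implied by the article's $92,800 figure, not stated explicitly.
minutes_saved_per_visit = 3
visits_per_day = 80
convertible_share = 0.25          # share of recovered time usable for new visits
patients_per_hour = 4
revenue_per_visit = 116
clinic_days_per_year = 200        # assumed

minutes_saved_per_day = minutes_saved_per_visit * visits_per_day              # 240
hours_recovered_per_day = (minutes_saved_per_day / 60) * convertible_share    # 1.0
added_revenue_per_day = hours_recovered_per_day * patients_per_hour * revenue_per_visit
added_revenue_per_year = added_revenue_per_day * clinic_days_per_year

print(f"Added revenue per day:  ${added_revenue_per_day:,.0f}")   # $464
print(f"Added revenue per year: ${added_revenue_per_year:,.0f}")  # $92,800
```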

Differential Diagnoses

When we talk about process improvement, we’re starting with the premise that efficiency is key to our model, and that increasing efficiency means reviewing your processes at the level of granularity discussed above. An interesting tension exists between anecdote and the macro view on one hand, and analytics and the micro view on the other. As demonstrated in our duplicate verification example, it’s like the sayings go: “you can’t judge a book by its cover” and “the devil is in the details.”

To us, the most fascinating thing about process improvement within a medical practice is how it has a clear clinical counterpart: differential diagnoses. In a typical scenario, a patient presents with a chief complaint (“I don’t feel well”), and it’s the provider’s job to figure out just what is wrong and what to do to make the patient better. The problem with “I don’t feel well” is that it encompasses hundreds of possible conditions, and it often takes a bit more than a quick physical exam to get to the root cause. Thankfully, diagnostic testing and evidence-based medicine databases provide the physician with the data (or evidence) he or she needs to separate the wheat from the chaff.

In process improvement, we engage in this same process, except instead of the clinical aspect of medicine, process improvement focuses on the business side. Why would we even want to have a process focus? Well, many good reasons exist, but the three we tend to stick to the most are:

  1. We want (or need) to have a real understanding of how the process actually works. For example, the check-in process alone, when properly mapped out, is much more complicated and has many more steps than most folks see.
  2. We need to have a better understanding of the relationships between steps and how and where handoffs of responsibility occur between staff members.
  3. Our ultimate objective must be to make better decisions that result in solving problems.

Three Foundational Concepts

At the heart of the diagnostic process is a trilogy of concepts that are part and parcel of nearly every scientific experiment: classification, correlation, and cause and effect.

Classification involves discovery, or current-state analysis. Where are we? What do we do? What do we look like to outsiders? What are our strengths and weaknesses? With regard to specific processes, we might ask about our capacity, how specifically something works, or the flow of traffic through the practice.

Correlation is a mathematical concept used to measure the strength of a relationship. True correlation measures the direction, the magnitude, and the characteristic of that relationship, and it is a critical tool when it comes to prioritizing projects.

For example, you may find that an increase in your practice’s Medicare mix correlates to a reduction in revenue. But how strong is that correlation? Stronger than, say, the correlation between revenue and denials? In an organization that does not have the benefit of unlimited resources, prioritizing issues based on the strength of the relationship is very important.
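To make that comparison concrete, here is a minimal sketch of how you might measure and compare the two relationships just described. The monthly figures are entirely hypothetical, and the built-in correlation function assumes Python 3.10 or later.

    # Sketch: comparing the strength of two relationships to help prioritize projects.
    # The monthly figures below are hypothetical.
    from statistics import correlation  # requires Python 3.10+

    monthly_revenue  = [412, 398, 405, 371, 388, 362, 377, 351]   # $ thousands
    medicare_mix_pct = [22, 24, 23, 29, 26, 31, 28, 33]           # % of visits
    denial_rate_pct  = [6.1, 5.8, 6.3, 6.0, 6.4, 6.2, 6.5, 6.1]   # % of claims denied

    r_medicare = correlation(medicare_mix_pct, monthly_revenue)
    r_denials = correlation(denial_rate_pct, monthly_revenue)

    print(f"Medicare mix vs. revenue: r = {r_medicare:+.2f}")
    print(f"Denial rate vs. revenue:  r = {r_denials:+.2f}")

    # The relationship with the larger absolute r is the stronger candidate for
    # attention - but correlation alone says nothing about cause, which is the
    # subject of the next concept.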

Finally, causal analysis is a concept we use to understand how to solve a problem. Without understanding cause and effect, without being able to get to the root cause of a problem, it’s all but impossible to solve it, or at least to solve it intentionally.

Using the above example, even if you see a strong correlation between Medicare mix and revenue, before you take any action, such as limiting the number of Medicare patients you see, it’s critical to know whether the increase in Medicare patients caused the reduction in revenue or whether the reduction in revenue was due to some other variable that you didn’t consider. Without this final step, you end up only treating the symptoms, and doing so ultimately will result in similar problems later.

Although we don’t have the space here to expand these concepts into their associated process improvement tools, suffice it to say that your practice would benefit from time spent on process mapping, value stream mapping, and the fishbone diagram, each supported respectively by the three concepts discussed previously. These three concepts form the foundational infrastructure of all process improvement projects.

Six Sigma, Lean: Two Major Models

From a high-level perspective, two major process improvement models, Six Sigma and Lean, are the most often recognized within this paradigm. The former is an outgrowth of the old quality improvement models from the 1950s and 1960s, contributed by the automotive industry.

If you study the literature, you’ll find that many articles refer to Six Sigma as a business management system rather than just a quality control model. Six Sigma, as the name implies, was founded on the idea that, in any process, one can experience a variation of six standard deviations from the mean (hence the term sigma) and still be in control, or still meet specifications. Obviously, this was designed as a manufacturing model, and even today, at least in our experience, Six Sigma is very difficult to implement in transactional and service-based businesses.

The crux of Six Sigma is that, in any process you perform, you should experience fewer than 3.4 undesirable outcomes per million trials (or opportunities). We don’t know of many practices that see a million patients a year, and Six Sigma is built to detect differences at volumes orders of magnitude beyond what a typical practice generates. As a standalone, pure discipline, then, it is not very applicable to the medical practice.
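To illustrate the scale problem, here is a rough sketch that converts a sigma level into expected defects per million opportunities (DPMO), using the conventional 1.5-sigma long-term shift from the classic Six Sigma tables; the patient volume in the closing comment is hypothetical.

    # Sketch: defects per million opportunities (DPMO) at a given sigma level,
    # using the conventional 1.5-sigma long-term shift.
    from statistics import NormalDist

    def dpmo(sigma_level: float, shift: float = 1.5) -> float:
        """Expected defects per million opportunities for a one-sided specification limit."""
        z = sigma_level - shift
        return (1 - NormalDist().cdf(z)) * 1_000_000

    print(f"6 sigma: {dpmo(6):.1f} DPMO")   # ~3.4 per million
    print(f"3 sigma: {dpmo(3):,.0f} DPMO")  # ~66,800 per million

    # A practice seeing 20,000 visits a year simply does not generate enough
    # opportunities for differences at the six-sigma scale to be observable.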

The second model, referred to as Lean, focuses on reducing waste: wasted movement, wasted resources, wasted energy, wasted space, etc. As a pure model, it not only is more applicable to healthcare, but is now becoming widely accepted as a business management model.

Lean Six Sigma Emerges

Lean has its weakness in the area of statistical analytics, but Six Sigma has strengths in that area. So, just like “hey, you got peanut butter in my chocolate,” Lean Six Sigma (LSS) has emerged as the “horse and carriage” of process improvement for medical practices. This model actually represents a continuum of sorts: a focus on analytical work incorporates more Six Sigma, whereas more focus on process modeling incorporates more Lean.

As with any management model, LSS has its own toolbox. If you were to perform an online search for “Six Sigma and Lean tools,” you probably would find more than 100 of them, but remember, these tools were not designed for service-based businesses, and as a result, many simply are not applicable to the practice of medicine.

We use 24 tools, chosen based on our experience, in our process improvement work with healthcare facilities. Obviously, we don’t use all of them on every project. In fact, the tools can be mixed and matched pro re nata. The five that we tend to use in almost every engagement, however, are:

  1. Flow charting (or process mapping);
  2. Value stream mapping;
  3. Causal analysis (often referred to as an Ishikawa analysis);
  4. Voice of the customer (sometimes referred to as the Kano model); and
  5. Critical to quality (CTQ) trees.

In process mapping, we create a flow chart of all the steps within a given process. For example, for a typical patient visit we would map out from appointment to check-out, and we would include all the steps in between, such as check-in, wait time, provider encounter, coding, etc. Remember, the purpose of the flow chart is to be able to visualize the process steps, and its creation requires the involvement of the process owners.

When we move to value stream mapping, we begin to apply data and other information to the process map. For example, for each step in the process map, we want to know things such as: how many staff members does the step require? How long does it take? How much time passes between this step and the previous one? What are the step details? What are the biggest mistakes that are made, and how often are they made? We also want to develop a list of things that could go wrong at each step, along with their risks. This is a great way to create contingency plans ahead of time, allowing errors to be caught before they create any collateral damage.
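As a minimal sketch of what this looks like in practice, the structure below attaches value-stream data to a simple process map for a patient visit. The step names, staffing, times, and failure modes are all hypothetical; a real map would be considerably more detailed.

    # Sketch: a process map for a patient visit with value-stream data attached
    # to each step. All step names, times, and failure modes are hypothetical.
    visit_process = [
        # (step,                staff, avg. minutes, common failure mode)
        ("Check-in",            1,     4,   "Missing insurance information"),
        ("Waiting room",        0,     18,  "Queue backs up mid-morning"),
        ("Rooming / vitals",    1,     6,   "Questionnaire misread"),
        ("Provider encounter",  1,     15,  "Chart notes incomplete"),
        ("Coding",              1,     5,   "Wrong E/M level selected"),
        ("Check-out",           1,     3,   "Follow-up visit not scheduled"),
    ]

    total_minutes = sum(minutes for _, _, minutes, _ in visit_process)
    wait_minutes = sum(minutes for step, _, minutes, _ in visit_process
                       if step == "Waiting room")

    print(f"Total cycle time: {total_minutes} minutes "
          f"({wait_minutes} of them non-value-added waiting)")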

We then use this data to get to the root cause or causes for each issue identified. Remember, we want to be etiological in our approach; we want to cure the problem, not just treat the symptom. We use the latter two tools, voice of the customer and CTQ trees, to identify process improvement projects based on customer needs.

The Kano model involves creating a matrix of issues that identifies the differences among what the customer expects, wants, and needs, and how variation in each of these creates both business opportunities and risks. For example, the most basic need for a patient is proper diagnosis and treatment. It’s tough to improve much on this process, but if you don’t meet this basic level, you’re in the wrong business and likely won’t last long.

A patient’s expectations, referred to as “performance attributes,” may center on variables such as being seen within a reasonable period of time or being treated with courtesy and respect. Improvement in this area boosts satisfaction scores and can drive more business to the practice, especially in a more competitive market.

When we deal with “excitement attributes,” we see activities such as a call from the office on the day of a visit to check on the patient and see how he or she is doing. Patients appreciate this and are likely to share word of it with their friends.
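Pulling those three categories together, here is a minimal sketch of a Kano-style matrix for a practice; the attributes listed are hypothetical examples, not a prescription.

    # Sketch: sorting patient-facing attributes into the three Kano categories
    # discussed above. The attribute lists are hypothetical.
    kano_matrix = {
        "basic needs": [
            "Accurate diagnosis and appropriate treatment",
        ],
        "performance attributes": [
            "Seen within a reasonable time of the appointment",
            "Treated with courtesy and respect",
        ],
        "excitement attributes": [
            "Same-day follow-up call after the visit",
        ],
    }

    for category, attributes in kano_matrix.items():
        print(f"{category.title()}:")
        for attribute in attributes:
            print(f"  - {attribute}")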

To be sure, there are lots of other tools, such as spaghetti diagrams, brainstorming, and SIPOC (which stands for suppliers, inputs, process, outputs, and customers), and again, it would behoove you to seek information about these tools to get a better understanding of their applicability to the practice environment.

Know How to Use Your Tools

Also consider the manner in which these tools will be implemented into the process model. Tools are great, but if you don’t know how to use them, they can be as dangerous as they are helpful (think a scalpel in the hands of a 2-year-old). For this exercise, we consider an assortment of what we refer to as deployment platforms, which are structured processes that are used to deploy, manage, and validate process improvement projects. Before we look at their differences, however, let’s discuss the five basic steps they all have in common:

  1. Defining the issue;
  2. Creating the benchmarks;
  3. Finding the root cause(s);
  4. Identifying and testing possible solutions; and
  5. Validating the results.

It sounds simple enough, until you realize that many organizations get stuck on number one, “defining the issue.” It’s not that we don’t recognize an issue when we see one, but rather that we see so many issues popping out all at once that it becomes overwhelming, making it difficult to choose which to address first.

This effectively is the initial triage stage of the process. Creating the benchmarks often becomes the starting block for any process improvement project, and analytics define the difference between anecdote and antidote. Maybe here more than anywhere else, the phrase “if you can’t measure it, you can’t manage it” comes into play.

Albert Einstein once said that “man cannot use the same thinking to solve problems that he used to create them.” The use of analytics helps us gain a better view of the playing field, allowing us to engage in thinking that is different and more oriented toward problem-solving. Finding the root cause, as discussed previously, is critical to effective problem-solving, and identifying and testing possible solutions comes from the ability to prioritize the root causes.

If you contemplate the first four aforementioned steps, it’s easy to see the straightforwardness with which they stitch themselves together. But the final step, “validating the results,” might be the most critical in the entire process, and unfortunately, it’s the step most often neglected before wrapping up a project.

Here’s an aphorism we coined: whenever you change something, you always end up with something different. What’s important, however, is to recognize whether the “something different” is better or worse than (or the same as) what you started with. In other words, you might have completely revamped one or more of the critical processes within your practice and felt really good about making those dramatic changes, but if you haven’t validated whether any of those changes actually improved your organization in any quantifiable way, what’s the point?

Deployment Platforms Include DMAIC, PDSA

Of all the different deployment platforms, two tend to get most of the attention, and for good reason. The first is DMAIC, which stands for define, measure, analyze, improve, and control. As you can see, these steps match quite nicely with the five steps outlined previously. The other main deployment platform is referred to as PDSA (or PDCA), which stands for plan, do, study (or check), and act. Although it’s similar to DMAIC in its conceptual approach, it’s much leaner, and as it sounds, much more applicable to Lean projects.

PDSA is about small, nondestructive tests. For example, if you want to improve wait time, then DMAIC would demand a long-term, statistically valid, well-designed experiment that would take three months and much staff time to complete. Using PDSA, on the other hand, you could complete your project within a week with limited resources, and at just about no cost.
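As a minimal sketch of what such a small PDSA test might look like, the comparison below uses one hypothetical week of baseline wait times and one week after a scheduling change; a simple comparison of means stands in for the formal experimental design a DMAIC project would require.

    # Sketch: a small PDSA-style check on average daily wait times (minutes).
    # Both weeks of figures are hypothetical.
    from statistics import mean

    wait_before = [22, 25, 19, 31, 27]   # baseline week ("plan")
    wait_after = [17, 20, 16, 22, 18]    # week after the scheduling change ("do")

    change = mean(wait_after) - mean(wait_before)
    print(f"Baseline: {mean(wait_before):.1f} min; after change: {mean(wait_after):.1f} min")
    print(f"Difference: {change:+.1f} min per visit")

    # "Study": decide whether the change held up; "act": adopt it, revise it, or rerun the test.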

We don’t want to give the impression that DMAIC is never a good idea; it’s just that PDSA is a less cumbersome and often equally effective platform to use for medical practices.

When Practice Improvement Fails

One question in particular bears answering: does process improvement always work? The answer is an unequivocal “no.” When a process improvement project fails, it may be because the target wasn’t really a candidate for process improvement. Most often, however, the failure is due to more tangible, human issues. And although projects fail for many reasons, three reasons seem to be the most common, in our experience:

  • Lack of support and/or buy-in from top management. Too often, support comes only in the form of passive approval from the owners of the practice, usually the physicians. In the current economic environment, everyone is on edge, and risk aversion rules the roost when it comes to investing in new opportunities. Without active support from the top, you can pretty much count on a failed effort.
  • Lack of a specific target or goal. More specifically, if you don’t know where you want to be, it’s highly unlikely that you’ll know when you arrive. Not having a specific goal will make it nearly impossible to stay on track, just about guaranteeing failure.
  • Unanticipated effects. The third reason that projects fail is more phenomenological, and it rests in the understanding of chaos. Not always, but much of the time, when you apply a change to one area, it creates changes in other areas – and often, it’s unexpected change. It’s important to recognize that in most cases, change is more global than we might imagine. Looking at the system as a whole is one way to assess the potential for collateral effects.

Is process improvement the silver bullet, the antidote for all that ails us? Again, the answer is no. If a process is not subject to some form of quantification, then it’s not a candidate for what we have discussed here. In a medical practice, quality of care is of the utmost importance, and although LSS can be applied to the understanding of outcomes, quality is a characteristic inherent within the practice and the practitioners. And as a characteristic, either you have it or you don’t; it’s not something that can be taught or learned.

However, the key takeaway here is that for most of the operational and workflow-related problems we face when addressing the profitability and sustainability of a medical practice, process improvement in fact is the answer. Continuous process improvement in general – and LSS specifically – is a scalable model that is as applicable to the solo provider as it is to the 1,000-physician healthcare system, and we strongly encourage everyone to consider how to implement these tools within their practice. Do keep in mind, though, before you decide to sail off headlong into the sea of organizational change, to make sure that your crew is on board – and don’t forget to keep a close eye on your compass and map.

About the Authors

Frank Cohen is the Director of Analytics and Business Intelligence at DoctorsManagement. Jason Stephens is Director of Consulting Business Partners & Vendors at DoctorsManagement.
