Upgrading Department PACS with Entry-Level Vendor Neutral Archive Components

Organizations that appreciate the benefits a Vendor Neutral Archive (VNA) can bring sometimes find it difficult to finance the complete VNA solution in a single budget cycle. It is most unfortunate if the decision comes down to deploying none of the solution for lack of funding for all of it. In my opinion, continued investment in a proprietary Picture Archiving and Communication System (PACS) archive, and the continued accumulation of proprietary data, is a flawed strategy.  In a paper sponsored by an unrestricted grant from IBM Systems and Technology, I propose an alternative to the full VNA deployment and discuss an affordable entry-level phase of a multi-phase VNA deployment strategy.

Any IT initiative focused on an upgrade to a department PACS storage solution, a refresh of the existing storage solution, or a replacement of the disaster recovery (DR) storage solution prompted by an end-of-life letter is an opportunity to carefully consider the value to the organization of continuing the proprietary PACS archive paradigm. Migrating the image data from the PACS to a properly configured entry-level VNA is the organization’s opportunity to normalize its image data once and for all, and to end the cycle of expensive and time-consuming data migrations required with every PACS replacement project. The hardware-agnostic, entry-level VNA gives the organization the opportunity to apply advanced storage technology to all of its data management applications. All of these benefits can be realized at a fraction of the cost of a fully configured VNA, and in many instances for a price very close to that being quoted for the proprietary PACS archive solution.

Multi-Phase VNA Deployment Strategy

I have been preaching the merits of a multi-phase deployment strategy for a Vendor Neutral Archive for some time now.  This approach is very reminiscent of the deployment strategy for Radiology PACS adopted in the mid-1990s.  A properly configured VNA is an even larger system than an organization’s largest PACS, so it too represents a large investment.

Nevertheless, a lack of funding to cover the deployment of the complete VNA should not deter the organization from making a start.  Every day going forward, the organization forwards a significant volume of new studies to its department PACS, which in the course of ingestion routinely makes proprietary modifications to the DICOM header.  Those PACS-specific idiosyncrasies in the headers are the primary reason behind the expensive and time-consuming data migration that is required when a PACS is replaced, or simply upgraded to a next generation.  In effect, an organization adds to its data migration liability every day that it continues to forward new study data to its department PACS.  At the very least, the organization should consider a proactive data migration project, whereby all of the image data being managed by its department PACS would be converted from the proprietary format to a neutral format and simply parked in a separate storage solution.

Most of the Vendor Neutral Archive vendors can configure entry-level versions of their products that will perform this proactive image data migration and simply store the result for future use.  This approach makes an excellent Disaster Recovery solution that can be shared among multiple PACS.

I recently spoke at length about this subject with David Yeager, a freelance writer and editor.  David spoke with several other individuals knowledgeable in this space and combined all of that information in a very comprehensive article on the subject, which appears in the October issue of Radiology Today.  David’s article is an easy read of a complex subject, so I encourage you to check it out.  The online article also contains some very simple graphics to reinforce the concepts.

There are several entry points to a multi-phase VNA deployment strategy.  Replacing a PACS vendor’s DR solution with a Vendor Neutral DR solution is one such example that makes both strategic and financial sense.

The Real Reason for Deploying a Vendor Neutral Archive

I came across a PowerPoint presentation that I created in July 2006 that referenced a DICOM Migration Server, a term at that time referring to an “open” DICOM Part 10 Storage Solution.  I vaguely remembered the subject, so I opened the file and reviewed the slides.  I felt as though I had traveled back in time to the very earliest days of the paradigm shift that would one day be referred to as the Vendor Neutral Archive.  That’s six years ago last month.  Slide after slide contained bulleted descriptions of the numerous problems facing an organization that had managed to accumulate no fewer than five different department PACS.  Five separate silos of data that could not be exchanged between the PACS. Five different image viewers that the referring physicians had to toggle between.  The last few slides in the deck laid out a rather optimistic (at the time) plan for a strategic solution to the mess.  A grin spread across my face.

I closed the slide deck, assigned a loud red label to the file so I could easily find it again, and fast-forwarded to the present thinking, “You’ve come a long way baby!”

I have been intensely focused on both the concept and the reality of the Vendor Neutral Archive for those last six years.  Perhaps that is why it seems so obvious to me why a healthcare organization should make the switch.  They should “take the A out of PACS”: move the data to a VNA, associate a universal viewer with the VNA, and use this combined system to manage the distribution of that data to every other system and caregiver throughout the organization.  These are things that even the best of today’s department PACS are simply incapable of doing effectively in a multi-vendor environment.

Based on the questions I continue to see quoted in the print and electronic publications, posted on-line in the focus groups, and raised at the end of many of my webinars, there still appears to be a large percentage of both PACS admins and IT systems analysts who don’t “get it”.  They seem hung up on the technical features of the VNA and all of the potential snags that they fear are bound to occur when two systems, and more importantly two vendors, are forced to work together.  The litany of both identified and suspected complications goes on and on.  No doubt the incumbent PACS vendors skillfully placed many of the items on these lists.

OK, it’s time to step back from the techy stuff for a minute.

It’s true.  Many currently installed department PACS are incapable of efficiently interoperating with a foreign archive without help, simply because they were not designed to work with one.  The installed base of VNA solutions would be a pitiful fraction of its real number if the VNA vendors had not developed some very clever workarounds for the inadequacies of many PACS.  Without help, most PACS could not be paired with a VNA. They lack the ability to store images to a foreign archive and then remember where they stored those images.  They are incapable of propagating ADT updates or merges and splits to an outside archive.  They have no concept of a deletion policy, and therefore have no mechanism for reacting to an externally executed purge strategy.  Some PACS have no concept of a relevant prior coming from another PACS, and if the VNA suddenly delivers such a study to their doorstep, the PACS thinks it is a new study and puts it on a reading list.  As I have said, the litany of both identified and suspected complications goes on and on.  The naysayers apparently have not taken the time to read up and learn how all of these problems have been resolved.  Thanks to those workarounds, the actual installed base of VNA solutions is a pretty impressive number.
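One of those workarounds deserves a concrete illustration. When a PACS cannot propagate an ADT patient merge to an outside archive, the VNA can consume the ADT feed itself and apply the merge to its own study index. Here is a minimal sketch of that idea; the class and method names are my own invention, not any vendor's API:

```python
# Sketch: a VNA-side study index that applies an ADT patient-merge
# event (HL7 A40) on its own, instead of waiting for the PACS to
# propagate it.  Illustrative only; not a vendor implementation.

class ArchiveIndex:
    def __init__(self):
        # patient ID -> list of study instance UIDs
        self.studies = {}

    def register_study(self, patient_id, study_uid):
        self.studies.setdefault(patient_id, []).append(study_uid)

    def apply_merge(self, surviving_id, retired_id):
        """Fold the retired patient record's studies into the survivor."""
        moved = self.studies.pop(retired_id, [])
        self.studies.setdefault(surviving_id, []).extend(moved)
        return moved

index = ArchiveIndex()
index.register_study("MRN1001", "1.2.840.1")
index.register_study("MRN2002", "1.2.840.2")   # duplicate record, same patient
moved = index.apply_merge("MRN1001", "MRN2002")
```

The point is that the archive, not the PACS, becomes the system of record for patient identity after the merge.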

My advice to those that still don’t get it: don’t get hung up on the technology.  The real argument for deploying the VNA is CONTROL.  It’s time for the organization to take control of its data.  Every day that goes by, another “x” gigabytes is forwarded to the department PACS, where it is converted to an effectively proprietary DICOM format that the organization will eventually have to pay, in time and money, to move to yet another PACS with its own proprietary format.  Regardless of how soon the organization can afford to replace the incumbent PACS, it’s time to start migrating the data to a VNA; in effect, it’s time to mitigate the cost of that future data migration.

What about future VNA migrations, when the first VNA has to be replaced with another VNA?  That’s a really good question.

The answer is actually quite simple.  The real objective in negotiating the contract for the VNA is to gain access to the VNA database, outside of any confidentiality stipulation in the contract, along with all of the tools required to allow the organization to move its data from VNA 1 to VNA 2, at no cost.  Without that arrangement, you’ve missed the point.

Bottom line: initiate a proactive DICOM migration of the PACS data to an entry-level VNA.  Take control of your data.  As soon as possible, replace the uncooperative PACS with a real PACS, one that fully interoperates with a VNA.

Failover Strategies in Mirrored Configurations of Medical Image Management Systems

The subject of failover and resynchronization is near and dear to me, as I’ve been configuring mirrored systems for years, and I have become quite familiar with how various vendors address this requirement.  The principal reason for building a mirrored Picture Archiving and Communication System (PACS) or a mirrored Vendor Neutral Archive (VNA) solution is Business Continuity.  Most healthcare organizations realize that they cannot afford to lose the functionality of a mission-critical system like a PACS or an Enterprise Archive, so they need more than a Disaster Recovery strategy; they need a functional Business Continuity strategy.   Unfortunately, it’s really tough to build a dual-sited, mirrored PACS that actually works.  The sync and re-sync process drives most PACS vendors nuts, and there are very few PACS that can support multiple Directory databases.  I think this shortcoming of most PACS is why we have been configuring mirrored VNA solutions from the beginning: if you can’t configure the PACS with a BC solution, then you should at least configure the enterprise archive with one.

In the dual-sited, mirrored image management system, there are two nearly identical subsystems, often referred to as a Primary and a Secondary.  Each subsystem comprises an instance of all of the application software components, the required servers, load balancers, and the storage solutions.  Ideally these two subsystems are deployed in geographically separate data centers.  While it is possible to make both subsystems Active, so that half of the organization directs its image data to the Primary subsystem and the other half directs its data to the Secondary, the more common configuration is Active/Passive.  In the Active/Passive configuration, the organization directs all of its data to the Primary subsystem, and the Primary backs that data up on the Passive Secondary subsystem.

When the Primary subsystem fails or is off-line for any reason, there should be a largely automated “failover” process that shifts all operations from the Primary subsystem to the Secondary subsystem, effectively making it the Active subsystem, until the primary subsystem is brought back on-line.  When the Primary subsystem comes back on-line, there should be a largely automated “resynchronization” process that copies all of the data transactions and operational events that occurred during the outage from the Secondary back to the Primary.
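To make the bookkeeping concrete, here is a toy sketch of an Active/Passive pair in which the Secondary journals every transaction it accepts during an outage, and the resynchronization step replays that journal back to the Primary. All of the names are illustrative; no vendor implements it exactly this way:

```python
# Toy model of Active/Passive mirroring with failover and resync.
class Subsystem:
    def __init__(self, name):
        self.name = name
        self.online = True
        self.store = {}          # study UID -> study data (stand-in)

    def accept(self, uid, data):
        self.store[uid] = data

class MirroredPair:
    def __init__(self):
        self.primary = Subsystem("primary")
        self.secondary = Subsystem("secondary")
        self.journal = []        # transactions accepted during an outage

    def ingest(self, uid, data):
        if self.primary.online:
            self.primary.accept(uid, data)
            self.secondary.accept(uid, data)   # normal mirroring
        else:
            self.secondary.accept(uid, data)   # failover path
            self.journal.append((uid, data))   # remember for resync

    def resynchronize(self):
        """Replay the outage journal once the Primary is back on-line."""
        self.primary.online = True
        for uid, data in self.journal:
            self.primary.accept(uid, data)
        self.journal.clear()

pair = MirroredPair()
pair.ingest("1.2.3", b"CT")          # normal operation, mirrored
pair.primary.online = False          # simulated outage
pair.ingest("1.2.4", b"MR")          # lands only on the Secondary
pair.resynchronize()                 # Primary catches up
```

The hard part in real systems is everything this toy omits: detecting the outage, avoiding split-brain, and replaying updates and deletes in the right order.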

Business Continuity operations can be even more complicated in an environment where there is a single instance of the PACS and a dual-sited, mirrored VNA configuration. In this environment, the failover and resynchronization processes can be somewhat complicated, giving rise to numerous questions that should be asked when evaluating either a PACS or a VNA.  I thought it would be beneficial to pose a few of those questions and my associated answers.

Q-1: If the hospital-based PACS and Primary VNA are down, how does the administrator access the offsite Secondary VNA and subsequently the data from the offsite VNA? Is the failover automated or manual?  If manual, what exactly does the admin do to initiate the failover?

A: The response depends very much on the VNA vendor and exactly how that VNA is configured and implemented.  Some VNA solutions have poor failover/resynchronization processes.  Some look good on paper but don’t work very well in practice.  For some VNA vendors, system failover and resynchronization in a mirrored environment is a real strong suit, as they support many options (VMware, load-balanced automatic, load-balanced manual, and clustering).  Other VNA vendors have limited options, which are costly and actually create downtime.  The better approach is a load-balanced configuration with automatic failover (which requires certain capabilities in the customer’s network: VLAN, subnet, and addressing), with manual failover being the second option (and the more common one).  VMware is becoming much more common among the true VNA vendors, but many of these vendors will still implement the VMware clients in a load-balanced configuration until customers are able to span VMware across data centers and use vMotion technology to handle the automatic failover.  There is also the option of using DNS tricks.  For example, IT publishes a hostname for the VNA which translates to an IP address in Data Center (DC) A, and the DNS record has a short Time to Live (TTL), such that if DC A fails, IT can flip the hostname in the DNS; the TTL expires in 1-5 seconds, and all sending devices automatically begin sending to and accessing DC B.
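That DNS trick can be sketched in a few lines. The toy resolver below caches a hostname-to-address answer for a short TTL; once IT flips the record and the TTL expires, every sender picks up the other data center's address automatically. This is a simplified model with an injected clock, not a real DNS client, and the hostname and addresses are made up:

```python
class ToyResolver:
    """Caches one answer per hostname, honoring a short TTL."""
    def __init__(self, zone, ttl_seconds, clock):
        self.zone = zone            # hostname -> IP, the "authoritative" record
        self.ttl = ttl_seconds
        self.clock = clock          # callable returning the current time
        self.cache = {}             # hostname -> (ip, expires_at)

    def resolve(self, host):
        cached = self.cache.get(host)
        if cached and self.clock() < cached[1]:
            return cached[0]                          # answer still fresh
        ip = self.zone[host]                          # re-query the authority
        self.cache[host] = (ip, self.clock() + self.ttl)
        return ip

now = [0.0]                                           # injected clock
zone = {"vna.hospital.example": "10.0.1.10"}          # points at Data Center A
resolver = ToyResolver(zone, ttl_seconds=5, clock=lambda: now[0])

first = resolver.resolve("vna.hospital.example")      # cached: DC A
zone["vna.hospital.example"] = "10.0.2.10"            # IT flips the record to DC B
during_ttl = resolver.resolve("vna.hospital.example") # still DC A (cache is fresh)
now[0] = 6.0                                          # TTL expires
after_ttl = resolver.resolve("vna.hospital.example")  # now DC B
```

The operational lesson is the same as in the text: the shorter the TTL, the faster every sending device converges on the surviving data center.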

There is also a somewhat unique model that implements the mirrored VNA configuration in an Active/Active mode across both data centers, whereby the VNA replication technology takes care of syncing both DCs.  The application is stateless, so it doesn’t matter where the data arrives, because the VNA makes sure both sides get synced.

The point in all of this is simply that the better and obviously preferred approach to failover is a near fully automated approach, ONCE THE SYSTEM IS SET UP.  Resynchronization of the data should be automated as well.  Only updates and changes to the user preferences might require manual synchronization after a recovery.

Q-2: What do the UniViewer (zero client, server-side rendering display application) users have to do to access the secondary instance of the UniViewer? Do the users have to know the separate URL to login to that second UniViewer?

A: If implemented correctly, the UniViewer should leverage the same technology as described above for the VNA.  The user’s URL call goes to a load balancer, which selects the Active UniViewer rendering server.   If the Primary UniViewer (Active) has a failure, another node, or another data center takes over transparent to the end user. The rendering server in turn points to a load balanced VNA such that the users need to do nothing differently if the UniViewer servers or the VNA servers switch.

Q-3: Where do modalities send new studies if the onsite PACS and/or the Primary VNA are down?

A: Once again, this is highly variable, and there are several options.  [1] If the designated workflow sends new data to the PACS first and that PACS goes down, then I’d argue that the new data should be sent to the onsite VNA; that means changing the destination IP addresses in the modalities.  [2] Vice versa, if the designated workflow sends the new data to the VNA first.  [3] Most of the better VNA solutions can configure a small instance of their VNA application in what I refer to as a Facility Image Cache (FIC): a small server with direct-attached storage.  One of these FIC units is placed in each of the major imaging departments/facilities to act as a buffer between the data center instance of the VNA and the PACS.  In this case, the FIC is the Business Continuity back-up to the PACS.

If both the PACS and the local instance of the VNA are down, the new study data should probably be held in the modality’s on-line storage, for as long as that is possible.  The modalities could also forward the data across the WAN to the Secondary VNA in the second data center, but the radiologists would probably find it easier to access and review the new study data from the modality workstations.
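The routing decision behind these options boils down to a preference-ordered fallback. A minimal sketch, with destination names that are mine rather than any product's:

```python
def choose_destination(reachable):
    """Pick the first reachable store destination, in preference order:
    the department PACS, then the onsite VNA / Facility Image Cache,
    then the Secondary VNA across the WAN, and finally the modality's
    own on-line storage as a last resort.  (Illustrative names only.)"""
    preference = ["PACS", "FIC", "SECONDARY_VNA", "MODALITY_LOCAL"]
    for destination in preference:
        if reachable.get(destination, False):
            return destination
    return "MODALITY_LOCAL"   # always possible, at least for a while

normal = choose_destination({"PACS": True, "FIC": True, "SECONDARY_VNA": True})
pacs_down = choose_destination({"PACS": False, "FIC": True, "SECONDARY_VNA": True})
site_down = choose_destination({"PACS": False, "FIC": False, "SECONDARY_VNA": True})
```

In practice this logic lives in the modalities' configured destination lists or in a DICOM router, not in code the organization writes itself.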

Of course all of these back-up scenarios are highly dependent on the UniViewer.  In the case of those PACS with thin client workstations, if the PACS system goes down, the workstations are useless.  In the case of fat client workstations, most are capable of only limited interactions with a foreign archive.  See the next question and answer for additional detail.

Q-4: Do the radiologists read new studies at the modalities and look at priors using the UniViewer whose rendering server is located in the offsite data center?

A: While that is possible, my recommendation would be to use the UniViewer for both new studies and relevant priors.  Some of the UniViewer technology is already pretty close to full diagnostic functionality, lacking only some of the very advanced 3D applications. There are already examples of this use of the UniViewer at a number of VNA sites, not only for teleradiology applications, but also for diagnostic review if the PACS goes down.  My prediction is that the better zero-client, server-side rendering UniViewer solutions are going to be full-function diagnostic within a year.   This is a critical tipping point in the VNA movement, a real game changer.  Once the UniViewer gets to that level of functionality, the only piece of the department PACS still missing will be the work list manager.   As soon as it’s possible to replace a department PACS with a solid [1] VNA, [2] UniViewer, and [3] Work List Manager, the PACS vendors will have a very difficult time arguing that their PACS (less the Archive and Enterprise Viewer) is still worth 90 cents on the dollar, as they are doing today.

Q-5: Does the EMR, if linked to the onsite UniViewer, have a failover process to be redirected to the offsite UniViewer so that clinicians using the EMR still have access to images through the EMR, or do the users need to have the EMR open in one browser and another browser open that points at the offsite UniViewer which they login to separately?

A: Failover from the Primary to the Secondary UniViewer should be, and can be, automated (see the answers to Q-1 and Q-2 above), if implemented correctly and supported by the UniViewer technology.

In conclusion, most healthcare organizations are highly vulnerable to the loss of their PACS, because most PACS cannot be configured with a Business Continuity solution.  That problem can be remedied with a dual-sited, mirrored Vendor Neutral Archive paired with a dual-sited UniViewer.  While most VNA vendors can talk about Business Continuity configurations, their failover and resynchronization processes leave something to be desired.  The reader is encouraged to build a set of real-world scenarios, such as those presented here, and use them to discover which VNA will meet their Business Continuity requirements.  The Request For Proposal (RFP) document that I have created for VNA evaluations has an entire section on Business Continuity and the underlying functionality.

Federal Stimulus Funding for Health IT will drive adoption of PACS-Neutral Archives

A post on AuntMinnie’s PACS Forum on April 21 posed the question “Will EHR impact PACS?”.  A number of the regulars on this forum, including myself, have posted their opinions.  I find it interesting to see a number of individuals suggesting that the Electronic Health Record (EHR) will morph into some kind of Super PACS supporting Radiology, Cardiology, etc., and featuring a universal viewer.

Last time I checked, the major PACS vendors were still having difficulty integrating their own Radiology and Cardiology PACS into the same platform, so I don’t hold out much hope for these PACS solutions suddenly becoming EHR systems, nor do I believe the EHR vendors are going to burden their development schedules with the effort it would take to add Radiology and Cardiology data management and display applications to their systems.

PACS will continue to focus on the individual imaging department’s workflow and diagnostic applications, and the EHR will continue to focus on aggregating all sorts of clinical information required to manage a patient’s course of treatment or general healthcare.  I realize that all this stimulus money targeted at EHR usage will “stimulate” the market, but I don’t think there is enough time for any of the vendors of any of these systems to reinvent their wheels.  The fastest route to market is to simply sell what already exists.

EHR systems have historically deferred to the PACS for the image management and the clinical review applications.  A relatively simple interface, currently based on a URL call, retrieves the image data referenced in the EHR listing and activates the corresponding PACS display application to display the images.  This model has been working just fine for some time now, with Radiology PACS being the principal data management system.
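That URL-based interface is simple enough to sketch. The parameter names below are purely illustrative (every vendor defines its own), but the pattern, an EHR link that encodes the patient and accession identifiers into a viewer launch URL, is the same:

```python
from urllib.parse import urlencode

def build_viewer_url(base, patient_id, accession_number, user="ehr"):
    """Compose the viewer launch URL an EHR embeds next to a report link.
    (Hostname and parameter names are hypothetical, not a vendor API.)"""
    query = urlencode({
        "patientID": patient_id,
        "accessionNumber": accession_number,
        "requestingUser": user,
    })
    return f"{base}?{query}"

url = build_viewer_url("https://viewer.hospital.example/launch",
                       "MRN1001", "ACC-2024-0042")
```

The EHR only has to know how to build this string; everything else, including which display application opens, is the image management system's problem.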

Unfortunately, as additional department PACS are deployed, each additional PACS would require its own URL interface to the EHR.  Multiple interfaces mean individual, separate viewing sessions based on individual separate display applications. The physicians would have to learn to use separate and different display applications. There would be no way to consolidate all of a patient’s images from separate departments into a single viewing session within a single viewing application.

My answer to the question posed by the AuntMinnie thread is that the stimulus package will most likely have an immediate impact on the PACS-Neutral Archive rather than the department PACS.

Assuming the EHR will continue to defer to another system for the image data management and display applications, it makes much more sense for that other system to be a consolidated PACS-Neutral Archive (PNA) than multiple department PACS.  The PNA is much further ahead of even the best departmental PACS in managing image data from disparate systems.  The PNA is much further ahead of the best departmental PACS in managing image data for the lifetime of the study.  The PNA is the better platform for managing non-DICOM image data and supporting a multi-modality universal viewing application.  Even before the promise of stimulus money, the PNA had a very positive ROI based on the cost of future data migrations avoided.

In conclusion, I don’t see PACS enveloping the EHR applications, and I don’t see the EHR enveloping the departmental PACS applications.  I see the EHR and the PACS remaining pretty much what they already are, separate entities.  Because of that focus, I do see them becoming more proficient at their respective tasks.  As a consequence, I see the PACS-Neutral Archive coming into its own as the central multi-modality image data repository and provider of the UniViewer display application.

Cost-effective Business Continuity Solutions – So much more than Data Back-up

Most Radiology PACS currently in use have some sort of data back-up in place. At the very least, the Directory database and the Data database are backed up daily to digital tape. In my opinion, digital tape is not reliable, and the problem is that you don’t know what data you have lost until you try to retrieve it. My low opinion of digital tape is supported by a number of reports from the field. I suspect the vendors that continue to insert digital tape back-up solutions in their early-round quotes do so in order to keep the price of the system down, but a much better solution is worth a few dollars more.

The “tape-less” back-up is a much better back-up solution. Instead of digital tape on a shelf or in a mechanical jukebox, a far more reliable and performance-oriented solution is to store the back-up copy of the Directory and the Data on spinning disk. Thanks to today’s pricing, a multi-processor, multi-core server coupled with a disk-based storage solution is only slightly more expensive than a digital tape library. I think the reliability is worth the additional investment.

Why stop there?

Instead of just writing a copy of the Directory on the back-up storage solution, why not install a second instance of the Directory application (Oracle, Sybase, DB2, SQL, etc.) on the back-up server? Now you have a reasonably cost-effective Disaster Recovery solution, depending on where you have physically placed that back-up system.

Why stop there?

Why not add a second instance of the PACS application to the back-up server? Now you have a reasonably cost-effective Business Continuity solution. Of course, this complicates the PACS application considerably. The optimal software configuration would have the two servers (Primary and Secondary) functioning in an “Active-Active” mode, which means that the Directories are automatically synchronized in near-real-time and the study data is copied from Primary to Secondary on a fairly regular basis.

Only the newest generation of PACS can support this configuration. Most of the PACS being sold today can support a “tape-less” back-up server, but they do not support a second instance of the Directory application on that back-up server. The few that do support a second Directory do not support a second instance of the PACS application. The fewer still that support a second instance of both the Directory and the PACS application typically have the back-up system operating in a standby mode: the back-up takes over only when the Primary is off-line for scheduled or unscheduled maintenance. While this version of back-up may not sound so bad, the fact is that the failover and eventual reconstitution processes are often manual and labor intensive.

The point in all of this is that, with today’s cost of hardware, it doesn’t make sense to settle for a back-up solution with questionable reliability when a much more reliable Business Continuity solution is affordable. The problem is that most PACS currently being sold are “old” generations of system architecture wrapped in a pretty GUI and flashy 3D applications. While the GUI and display applications are important, I believe that the system architecture that supports a solid Business Continuity solution is more important, and sooner or later those old-generation PACS are going to be upgraded. You can tell a lot about the longevity of a PACS by investigating the various back-up solutions that it can support. Why start a five-year contract with an old PACS? Do you have room for a forklift in your data center?

Is new Stark Exemption an Opportunity?

I came across an article in Imaging Economics titled “Surveys Show Paper Legacy Tough to Shake”.

What caught my eye was the second paragraph statement “A new Stark exception allows hospitals to donate health information technology in the form of an EMR to private physicians.”

I was wondering if the definition of “EMR” could be extended to a radiology web viewer. Is this possibly a mechanism for providing the necessary hardware (PC), software, and connectivity services to the referring physician’s office to get them to stop requesting paper and film?

The article is worth reading as it explains why “more than 50% (hospitals) continue to print and distribute paper lab and imaging reports.” This does not come as a surprise to me, but it occurs to me that if so many hospitals are still printing paper radiology reports, a similarly large number must also be distributing hardcopy images.

Clearly, the success of a Radiology PACS depends on turning off a large percentage of hardcopy and getting the referring physicians to access images and reports from their offices electronically. I have long argued that the cost of providing a suitable PC and basic connectivity services is more than paid for by the savings in hardcopy. Many clients were concerned about the Stark implications. Is this exception an opportunity?

The article goes on to explain that 62% of hospital executives surveyed in February said their organization had no plans to donate technology. “They’re waiting to see how the government changes the landscape. How will it affect their nonprofit standing, that kind of thing.” Once again, I think this is a shortsighted point of view. The continued printing of hardcopy films is certainly affecting their bottom line. Why not take advantage of this opportunity to legally equip their referring physicians with a much less expensive method to access images and reports?

GC’s Major Guidelines for Picking the Best PACS

Several interesting posts popped up on AuntMinnie’s PACS Forum today. Two were related to display software for referring physicians, and two others were related to the long-term archive. In my first response, I spoke about the importance of picking a PACS that features a single display package that allows system managers to create individual user profiles by assigning display features through user privileges. I also suggested that the Health System should consider providing the display hardware and IT support for their high-volume users, because it would be cheaper in the long run than producing and managing film. In a follow-up response, I flat out stated that as much or more attention should be paid to the display software that is going to be used by the referring physicians as is paid to the display software being used by the radiologists. Failure to win over the referring physicians, especially the surgeons, will surely doom a PACS project.

The first of my responses to the archive issue focused on using a spinning disk solution for the Disaster Recovery subsystem, and on the need for some sophisticated Information Lifecycle Management (ILM) software in the archive that would make it possible to migrate data from media to media and delete data based on information about the study contained in the DICOM header. In the same article, I couldn’t help but ask why anyone would create an exact duplicate of the original image data if the PACS utilized any proprietary formats. It seems to me that if you are going to invest good money in a DR solution, the second copy should be 100% DICOM and 100% interchangeable with another PACS. This would eliminate future data migration costs. In a second response, I suggested once again that the time has come to separate the Archive from the PACS. The PACS vendors’ insistence on using Private Tags and proprietary encoding is blatant vendor lock-in. It is expensive (data migrations), and it should be stopped.
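The cure for that vendor lock is tag morphing: copying the metadata a PACS has buried in Private Tags into standard Public Tags, so that the data survives a move to another vendor's system. A minimal sketch follows; a plain Python dict stands in for a DICOM dataset (production code would use a DICOM toolkit), and the specific private/public tag pairs are illustrative, not a published mapping:

```python
# Map vendor-private tags to the standard public tags they should feed.
# These pairings are made up for illustration.
MORPH_RULES = {
    (0x0029, 0x1010): (0x0008, 0x103E),   # private series label -> Series Description
    (0x0029, 0x1020): (0x0010, 0x4000),   # private note         -> Patient Comments
}

def morph_private_tags(dataset, rules=MORPH_RULES):
    """Return a copy with each mapped private tag copied into its
    public tag, without overwriting a public tag that is already set."""
    out = dict(dataset)
    for private_tag, public_tag in rules.items():
        if private_tag in out and public_tag not in out:
            out[public_tag] = out[private_tag]
    return out

original = {
    (0x0010, 0x0010): "DOE^JANE",          # Patient Name (already public)
    (0x0029, 0x1010): "CHEST PA/LAT",      # vendor-private series label
}
morphed = morph_private_tags(original)
```

The private tags can be left in place, as above, or stripped afterward; the essential step is that nothing the organization cares about remains only in a private tag.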

So here is my simple guideline for picking the best PACS:

1) Distributed server architecture. Each facility gets its own Directory and Data database servers and there is one shared long-term archive. Each facility is self-sufficient, yet there is one consolidated patient folder. The central shared server “aggregates” all of the information from the facility servers. The user doesn’t have to know where to look for any study on any patient in the system.

2) Single master copy of display software, one common GUI, fat client for performance, web-delivered for zero administration. Each user can be granted access to whatever display features and tools they think they need.

3) Software license fee is based on the number of studies under management, NOT the number of users, or the mix of features/tools being assigned.

4) PACS-neutral Archive: guaranteed universal connectivity, the ability to morph DICOM Header Tags in order to copy any metadata in Private Tags to Public Tags, and no future data migration necessary. If the PACS vendor that ranks the highest in every other category cannot provide this kind of archive, buy the archive from the vendor who can and configure the PACS with a small working cache.

5) Make sure the archive supports a sophisticated ILM strategy: one that migrates data from media to media or deletes data based on information in the DICOM Tags, and whose data transfers have zero impact on the PACS or Archive application server.

There are other important issues and features to be sure, but they pale in significance to these five.
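To make item 4 concrete, here is a minimal sketch of what tag morphing amounts to. It models a DICOM dataset as a plain Python dict keyed by (group, element) tag pairs; the private tag numbers and the mapping rules are hypothetical examples, and a real archive would do this with a DICOM toolkit and a vendor-specific rule set.

```python
# Hypothetical mapping: where a vendor stashed metadata in Private Tags,
# promote it to the equivalent standard Public Tags before archiving.
MORPH_RULES = {
    # (private tag)      -> (public tag)
    (0x0029, 0x1010): (0x0008, 0x103E),  # e.g. a series description hidden privately
    (0x0029, 0x1020): (0x0010, 0x1030),  # e.g. patient weight hidden privately
}

def morph_private_tags(dataset: dict) -> dict:
    """Return a copy with mapped private-tag values promoted to public tags."""
    out = dict(dataset)
    for private, public in MORPH_RULES.items():
        # copy, never clobber an existing public value
        if private in out and public not in out:
            out[public] = out[private]
    return out

ds = {(0x0029, 0x1010): "AXIAL T2", (0x0008, 0x0060): "MR"}
print(morph_private_tags(ds)[(0x0008, 0x103E)])
# prints "AXIAL T2" -- the value is now readable via a standard Public Tag
```

Once the clinically relevant metadata lives in Public Tags, any DICOM-conformant PACS can read the archived copy, which is exactly what makes the future migration unnecessary.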

What was the Department of Veterans Affairs Thinking?

I came across a news article today announcing the VA’s plan to establish a Disaster Recovery program for all of their Radiology Departments that had already installed the Philips iSite PACS.

“Royal Philips Electronics has announced an agreement with the Department of Veterans Affairs (VA) to provide disaster recovery services that are dedicated to VA users with Philips iSite. Managed and hosted by Philips, the VA disaster recovery services will provide automated backup of all Philips iSite Radiology image data.”

It is well known that the Philips iSite PACS stores the image data in a proprietary (iSyntax) format. The Philips PACS is not the first PACS to be deployed by the VA and it probably will not be the last. When the time comes to replace the iSite PACS with something else, all of that study data accumulated over the years will have to be migrated to that next system. That is going to cost both time and money.

A shared Disaster Recovery program is a great idea, but why deploy a DR solution that stores another copy of the study data in a proprietary format? It seems to me that the deployment of a Disaster Recovery solution is an excellent opportunity to create a second copy of the data in a PACS-neutral format. Start copying the historical data already stored in the iSite PACS to a Vendor-neutral Enterprise DR (archive) solution. Call it a “pro-active” data migration. Then continue to store all new study data accumulated by iSite to this Vendor-neutral DR solution.

When the time comes for any of the sites to move on to their next PACS, there would be no need to migrate that site’s study data over to the new PACS. A Vendor-neutral archive (server and storage) would be built and loaded with that site’s historical data (in a Vendor-neutral format) and then shipped to the site. This local facility server would interface to whatever new PACS is being deployed. The new PACS would not have to be configured with a long-term archive. There would be no need for the time-consuming and expensive data migration.

A Vendor-neutral Enterprise DR solution could also be shared with all those other VA facilities that do not have the Philips iSite PACS. What are those sites supposed to do for their DR solution? How many different DR solutions does the VA want to support? Could it be that all VA facilities will be encouraged to upgrade to the iSite PACS? No doubt that's the Philips plan.

Don’t misunderstand: I think iSite is one of the better PACS on the market, but data migration is an inherent problem with changing PACS, and in some cases even with the next-generation PACS of the same vendor (Siemens Magic to Siemens Syngo). It simply doesn’t make sense to build a DR strategy that doesn’t take into account the high probability that some other PACS will be deployed somewhere downstream, and thus require a sizeable data migration project. A sensible plan would take reasonable steps to avoid that problem.

It should not be a matter of money. Hardware is hardware. Granted, the Philips software license for that second copy of the data is probably less than what the Vendor-neutral Enterprise DR software will cost. But the cost of all those future data migration projects would more than likely cover the premium charged for a Vendor-neutral Enterprise DR solution that could be shared by every VA site today.

I’ve written a few other posts on this subject that you might find interesting.

PACS-neutral Enterprise Archive – Who will build it?
Looking for a PACS-neutral DICOM Archive?
An Enhanced DICOM Archive would be the ticket!
PACS Vendors think PACS-neutral Archive is crazy idea
SCAR ’06 Update

If you would like to have a tool to help you estimate the cost and time associated with your future data migration projects, just email me at graycons@well.com and ask for the Migration Prognosticator.

Test Server Shortfalls

I started reading a discussion thread last week on AuntMinnie.com dealing with downtime during PACS software upgrades. While reading comment after comment, I kept thinking about how test servers are intended to minimize that downtime. Then it occurred to me that there are very few PACS systems out there with Test Servers. First, they are expensive if they are configured to test more than a few modalities and more than a few display stations. Second, some PACS vendors do not even offer a Test Server.

A properly configured Test Server will accept all of the modality inputs and have a small Directory database and sufficient local storage to accommodate the studies that are acquired during testing. You don’t want to add test studies to the primary PACS database. It will have two network interfaces, so it can be used when the primary network is undergoing periodic scheduled maintenance. It will include a HIS/RIS interface component, so that functionality can be tested. It will include all of the display applications. In short, the real Test Server is as close to being a mirror of the production server’s interfaces and applications as you can make it. That’s the point of it being a Test Server. On the other hand, vendors should realize that a Test Server is just that, a Test Server. Placing it in service once a year to test a new release does not warrant pricing the software anywhere near that of the production server.
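One way to keep a Test Server honest about mirroring production is to diff the two configurations and flag every gap. Here is a minimal sketch of that audit; the field names and values are hypothetical examples, not any vendor's actual configuration schema.

```python
# Hypothetical configuration records for the two servers. The Test Server
# should mirror production's interfaces and applications, not its horsepower.
PRODUCTION = {
    "modality_inputs": {"CT", "MR", "CR", "US", "NM"},
    "his_ris_interface": True,
    "display_apps": {"diagnostic", "clinical", "web"},
    "network_interfaces": 2,
}

TEST_SERVER = {
    "modality_inputs": {"CT", "MR", "CR"},  # US and NM were never connected
    "his_ris_interface": True,
    "display_apps": {"diagnostic", "clinical", "web"},
    "network_interfaces": 2,
}

def mirror_gaps(prod: dict, test: dict) -> list:
    """List everything the Test Server fails to mirror from production."""
    gaps = []
    missing = prod["modality_inputs"] - test["modality_inputs"]
    if missing:
        gaps.append(f"missing modality inputs: {sorted(missing)}")
    for key in ("his_ris_interface", "display_apps", "network_interfaces"):
        if test[key] != prod[key]:
            gaps.append(f"{key} differs")
    return gaps

print(mirror_gaps(PRODUCTION, TEST_SERVER))
# prints ["missing modality inputs: ['NM', 'US']"]
```

Any non-empty result means an upgrade validated on the Test Server could still surprise you in production.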

Even though a true Test Server is configured as close to a Production Server as possible, that does not necessarily make it a back-up server. The Test Server is typically configured on a much smaller, less robust server platform than the Production Server. A Test Server configured with sufficient horsepower to be a Business Continuity Server would be considerably more expensive. A true Test Server and a Business Continuity Server differ primarily in the server hardware. While the concept of a Business Continuity Server is very valid in parts of the country that are at greater risk for natural disasters, most facilities would consider it a luxury. The true Test Server, however, should be a requisite in every system.