This chapter is related to the aims of Section H3(i) from the 2017 CICM Primary Syllabus, which expects the exam candidate to "describe the principles of dialysis and renal replacement fluid". Renal replacement fluid, taken literally, is a fluid similar in composition to extracellular fluid, which is used to replace the volume removed by ultrafiltration. All or only some of that volume can be replaced, depending on your desired net fluid removal rate. The more interesting question about replacement fluid is whether to give it before or after the filter. Each approach has its advantages and disadvantages.
From the CICM primary exam point of view, there has never been an SAQ on this topic, and it can be safely omitted from the study schedule. In the Part II exam, these issues were explored in Question 5.1 from the first paper of 2009; the college gave us a poor quality clip art drawing of a pre-diluted circuit and asked "What are the advantages of the replacement fluid administered as shown in the diagram?"
For purposes of Part II exam revision, pre- and post-dilution strategies were compared in a tabulated format. This is reproduced here to aid rapid revision.
Post-dilution (replacement fluid given after the filter):

Advantages:
- Clearance of middle molecules (those dependent on convection) is proportional to the ultrafiltration rate
- Clearance of small molecules (those dependent on diffusion) is greater because of the higher concentration gradient
- Less replacement fluid is required, which reduces the cost of treatment

Disadvantages:
- Filter lifespan might be shorter, especially if you don't want to use anticoagulation
- Maximum dose of dialysis is limited by the blood flow rate

Pre-dilution (replacement fluid given before the filter):

Disadvantages:
- Clearance of solutes might be slower

Advantages:
- Filter lifespan might be longer
- Theoretical maximum dose of dialysis is much higher
The time-poor exam candidate should probably not have wasted their time going over this material. Even the exam candidate with near-infinite time resources should probably stop reading at this point. What follows is a long and rambling exposition which is ultimately to nobody's benefit.
Assuming a solute has a sieving coefficient of 1.0, the post-filter blood and the effluent in CVVH should have roughly the same concentration of that solute. The blood returning to the patient will therefore still have a relatively high concentration of the solute. To dilute this returning blood (and to maintain volume in high volume ultrafiltration), replacement fluid needs to be given post-filter.
So, what are the advantages of doing it that way?
Clearance of solute is directly related to the ultrafiltration rate. Let's say your dialysis mode is pure haemofiltration. When you set the ultrafiltration rate to maximum (which corresponds to a filtration fraction of around 30%), you can be confident that you are removing 30% of the highly sieved solutes (i.e. ones with a sieving coefficient close to 1.0). For CVVHDF (a mixed modality), this statement is only valid for solutes which depend on convection for their clearance, i.e. middle molecules with a low diffusive flux and a high convective flux.
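This relationship can be sketched numerically. A minimal Python illustration (the function name and figures are my own, chosen for the example, not taken from any dialysis software):

```python
def convective_clearance(uf_rate_ml_hr, sieving_coefficient):
    """Convective clearance of a solute: the volume of plasma effectively
    cleared per hour is the ultrafiltration rate scaled by how freely the
    solute crosses the membrane (sieving coefficient, between 0 and 1)."""
    return uf_rate_ml_hr * sieving_coefficient

# A urea-like solute (sieving coefficient ~1.0) at 2000 ml/hr ultrafiltration:
print(convective_clearance(2000, 1.0))   # 2000 ml/hr
# A larger middle molecule, only partially sieved:
print(convective_clearance(2000, 0.7))   # 1400.0 ml/hr
```

In pure haemofiltration, the ultrafiltration rate is therefore the only dial you have for convective clearance, which is the point the paragraph above is making.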
The clearance of small solutes should be higher in CVVHDF, because the concentration gradient for those solutes is higher. The higher gradient between undiluted blood in the blood compartment and the dialysate compartment drives the diffusional clearance of the molecules which depend most on this method of clearance. In contrast, pre-dilution techniques send fairly dilute blood into the filter, which decreases this concentration gradient and diminishes the diffusive flux of these small molecules.
A smaller volume of replacement fluid is required. This is simple arithmetic, and is based mainly on the fact that there is a finite practical limit to the ultrafiltration rate, which is set by the circuit survivability at higher haematocrit values. Let's say you want to remove 200 ml/hr of fluid, and the circuit's 30% filtration fraction limit allows you an ultrafiltration rate of no more than 400 ml/hr. The post-filter replacement fluid rate is therefore going to be 200 ml/hr to keep a negative fluid balance. In contrast, in pre-dilution techniques the ultrafiltration rate is limited only by your imagination, and the rate of pre-dilution fluid delivery. Conceivably, you could have a 5200 ml/hr pre-dilution rate and a 5400 ml/hr ultrafiltration rate, still removing 200 ml/hr of fluid. The difference is cost: pre-mixed bags of sterile replacement fluid are not cheap. In fact, the companies which manufacture dialysis equipment mainly make their money from consumables, and much of a department's dialysis budget is used in this fashion. Ergo, post-dilution is cheaper.
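The arithmetic of that paragraph can be written out as a short sketch (the function name is my own; the numbers are the ones used in the text):

```python
def replacement_rate(uf_rate_ml_hr, net_removal_ml_hr):
    # Replacement fluid makes up the difference between what the
    # ultrafiltration pump removes and what you actually want removed
    # from the patient.
    return uf_rate_ml_hr - net_removal_ml_hr

# Post-dilution: 400 ml/hr ultrafiltration, 200 ml/hr net removal
print(replacement_rate(400, 200))    # 200 ml/hr of replacement fluid
# Pre-dilution achieving the same net removal at a huge exchange volume
print(replacement_rate(5400, 200))   # 5200 ml/hr of replacement fluid
```

The cost argument follows directly: for the same 200 ml/hr of net fluid removal, the pre-dilution example consumes twenty-six times more sterile replacement fluid.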
Filter lifespan can remain unaffected if protected by adequate anticoagulation. Fake news fear-mongering has given people an inappropriate impression of post-dilution techniques as being somehow deleterious to filter lifespan. The high end-filter haematocrit, those panic merchants say, is going to block your delicate hollow fibres and clag the filter earlier. Not so, it turns out. Nurmohamed et al (2011) ran pre- and post-dilution CVVH with systemic heparin aiming for an APTT of 55-60 (probably a lower APTT than that achieved by regional circuit heparin). There was no statistically significant difference in filter lifespan. Ergo, provided you have adequate anticoagulation it does not really matter where you shove your replacement fluid. Adequate is the operative word; Uchino et al (2004) had completely different conclusions (reduced filter lifespan with post-dilution) because they used only 5 units/kg/hr of heparin with an average APTT of around 38-41 seconds.
Some of these points have already been mentioned briefly in the section above, but they are worth elaborating upon.
The rate of ultrafiltration is limited by the blood flow rate. You can only ultrafilter up to a filtration fraction of 30%, because beyond that your haematocrit rises into the dangerous range of 0.50 and above, threatening to clog your filter with shredded red blood cells. Ergo, blood flow rates are the rate-limiting step for ultrafiltration and therefore convective solute clearance, when you use post-dilution. The limits on blood flow rates are set largely by what you're using for access (pump rate is not the limit, given that modern roller pumps can generate blood flows well in excess of your wildest dreams - the local Prismaflex module can happily pump 27 litres of blood per hour). However, this limitation is probably not a realistic concern. Consider: at a filtration fraction of 30% you've removed 30% of the blood volume as effluent, and with a typical blood flow rate of 150 ml/min that ends up being 2700 ml/hr of ultrafiltered effluent. That's not bad. Anybody complaining about only removing 2700 ml of fluid per hour has unrealistic expectations. If we use a 70 kg patient and effluent volume as a surrogate measure of dialysis dose, this "limit" ends up being a dialysis dose of 38.5 ml/kg/hr, roughly one and a half times the usual dialysis dose. The RENAL investigators (2009) demonstrated pretty clearly that anything over 25 ml/kg/hr does not improve survival.
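For the reader who wants to check the numbers, the calculation above can be reproduced in a few lines (the 30% filtration fraction limit is applied to whole-blood flow, as the text does):

```python
blood_flow_ml_min = 150          # a typical post-dilution blood flow rate
filtration_fraction_limit = 0.30 # the conventional circuit-survival ceiling
weight_kg = 70

# Maximum effluent (ultrafiltrate) rate achievable in post-dilution:
max_effluent_ml_hr = blood_flow_ml_min * 60 * filtration_fraction_limit
# Effluent volume per kg per hour, the usual surrogate for dialysis dose:
dose_ml_kg_hr = max_effluent_ml_hr / weight_kg

print(max_effluent_ml_hr)        # 2700.0
print(round(dose_ml_kg_hr, 1))   # 38.6
```

Which is the 38.5-or-so ml/kg/hr quoted above: well clear of the 25 ml/kg/hr beyond which RENAL found no survival benefit.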
With low blood flow rates, the optimal dialysis dose may not be achieved. This follows on from the previous point. Let's say you're using CVVH. To get your target of 25 ml/kg/hr, in a 70 kg person you'd want a total effluent rate of 1750 ml/hr. That equates to an ultrafiltration rate of 29 ml/min. If your filtration fraction is maximal (30%), your blood flow rate will have to be at least 97 ml/min. If for whatever reason your dialysis access is precarious, this average blood flow rate may not be achievable. To be sure, 100 ml/min seems like a low benchmark, but in reality the patient might be fidgety or underfilled, or stuck with a small vas cath, or one with unusually high resistance. Either way, you might struggle with even these low blood flow rates.
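Running the same arithmetic backwards gives the minimum blood flow rate for a target dose (variable names are my own; the figures are from the text):

```python
target_dose_ml_kg_hr = 25        # the RENAL-supported dose target
weight_kg = 70
filtration_fraction_limit = 0.30

# Required effluent rate for this patient:
effluent_ml_hr = target_dose_ml_kg_hr * weight_kg   # 1750 ml/hr
# Expressed as an ultrafiltration rate per minute:
uf_rate_ml_min = effluent_ml_hr / 60                # ~29.2 ml/min
# At the maximal 30% filtration fraction, the blood flow must be at least:
min_blood_flow_ml_min = uf_rate_ml_min / filtration_fraction_limit

print(round(min_blood_flow_ml_min))   # 97
```

So 97 ml/min is the floor below which a post-dilution CVVH circuit cannot deliver the target dose for this patient.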
Filter life may be degraded by high end-filter haematocrit. As already discussed above, this can be mitigated by anticoagulation, but this is not always possible. Conceivably, under some circumstances not even regional anticoagulation is possible. In such circumstances post-dilution replacement fluid will give rise to an increased risk of filter failure.
Here, one risks repeating oneself. The disadvantages of post-filter replacement fluid are the advantages of pre-filter replacement.
Ultrafiltration rate is not limited by the blood flow rate. Unlike the post-dilution technique, the ultrafiltration rate in pre-dilution is theoretically limited only by your imagination and manpower. Practically, the limitations are equipment-related. The replacement fluid roller pump maxes out at 8000 ml/hr, or about 133 ml/min. Thus, if the blood flow rate is around 130 ml/min (which is low-average) and the haematocrit is 0.30, the pre-filter haematocrit is dropped to about 0.15. The maximum effluent pump rate is 10,000 ml/hr, so it can certainly remove all of that pre-dilution fluid, and then some. If you aim for 2000 ml/hr fluid removal (which should haemoconcentrate the post-filter blood to a dangerous level), the dialysis dose (i.e. effluent rate) will therefore be 10,000 ml/hr, or about 143 ml/kg/hr. With neutral fluid balance as your goal, the maximum dialysis dose is therefore about 114 ml/kg/hr. At least, these are the realistic limits of dialysis dose using the locally available Prismaflex machine. As discussed above, this is not really an "advantage" of pre-dilution, because at no stage will anybody ever exceed even the relatively modest dose limits of post-dilution (38.5 ml/kg/hr, as discussed above).
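The pump-limit arithmetic can be laid out explicitly (the pump ceilings are the ones quoted in the text for the local Prismaflex; other machines will differ):

```python
weight_kg = 70
replacement_pump_max_ml_hr = 8000   # pre-dilution replacement pump ceiling
effluent_pump_max_ml_hr = 10000     # effluent pump ceiling

# Aggressive fluid removal (2000 ml/hr): the effluent pump runs flat out,
# so the dose is the full effluent pump capacity.
dose_removing_fluid = effluent_pump_max_ml_hr / weight_kg
# Neutral fluid balance: effluent can only match what the replacement
# pump puts in, so the dose is capped by the replacement pump instead.
dose_neutral_balance = replacement_pump_max_ml_hr / weight_kg

print(round(dose_removing_fluid))    # 143 ml/kg/hr
print(round(dose_neutral_balance))   # 114 ml/kg/hr
```

Both figures dwarf the ~38.5 ml/kg/hr ceiling of post-dilution, which is precisely why this theoretical advantage is clinically meaningless.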
Urea clearance is enhanced by elution of urea from RBCs. When one adds the pre-dilution fluid, one increases the urea concentration gradient between the blood and the intracellular compartment of red blood cells. Urea then diffuses into the extracellular compartment of the filter blood, where it is available for diffusion into the dialysis fluid or for ultrafiltration. Or so it is said. This is actually a totally speculative theoretical concept; as far as a half-hour of Googling can ascertain, nobody has ever accessed the filter to determine whether pre-dilution produces a movement of urea which is large enough to affect total mass transfer. This seems to be something inferred (e.g. by Brunet et al, 1999) from the different clearance rates of urea, creatinine and urate when comparing pre-dilution to post-dilution. The authors attributed the slight difference to "variable partition coefficients between red blood cells and serum".
Filter lifespan is increased with pre-dilution. Haematocrit starts low and returns to normal, instead of starting normal and increasing to high. As discussed already, this is really only an issue if you are unwilling or unable to use any sort of anticoagulation. Conceivably, with the use of pre-dilution one may be able to avoid anticoagulation. Over the course of many hours and days, the improved filter survival might translate into longer dialysis sessions and less time between circuits, thereby increasing the total daily clearance of toxins.
Clearance of small solutes is decreased, particularly for techniques reliant on some degree of diffusional clearance, i.e. CVVHD and CVVHDF. This is because the concentration gradient between the filter blood and the dialysate is decreased by the dilution of pre-filter blood. Without dilution, the pre-filter urea might be 30 mmol/L, and obviously zero in the dialysate (i.e. a gradient of 30 mmol/L). With dilution by 50%, the urea gradient would be 15 mmol/L. Because the concentration gradient is one of the most important variables which affect the diffusional flux of small solutes, the rate of clearance of such solutes is decreased in pre-dilution techniques. Brunet et al (1999) found that the reduction in clearance was 15% for urea, 18% for urates, and 19% for creatinine (when compared to an equivalent dose of post-dilution). This might be somewhat offset by the increased survivability of the filter, and is probably meaningless in the long term. If one is going to be on CRRT for 72 continuous hours, it matters little that the same solute clearance is achieved 4-5 hours sooner with one technique as compared to another.
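The gradient-halving effect of pre-dilution can be sketched as a simple mixing calculation (function name is my own; the flows are illustrative):

```python
def prediluted_concentration(c_blood_mmol_l, blood_flow_ml_min, predilution_ml_min):
    # Solute concentration after pre-filter dilution: the same solute mass
    # now travels in the combined flow of blood plus replacement fluid.
    return c_blood_mmol_l * blood_flow_ml_min / (blood_flow_ml_min + predilution_ml_min)

# Urea 30 mmol/L, blood flow 130 ml/min, pre-dilution fluid 130 ml/min:
print(prediluted_concentration(30, 130, 130))   # 15.0
```

With the dialysate urea at essentially zero, the diffusion gradient driving small solute clearance falls from 30 mmol/L to 15 mmol/L, which is the mechanism behind the reduced clearances Brunet et al observed.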