You are working as an FY2 in the Emergency Department and you go to see a 47-year-old man with chest pain. You really want to know if this chest pain is being caused by serious pathology and whether doing a high-sensitivity troponin test will help.
Learning Objectives
- Why we rule out the worst case scenario in emergency care, rather than make a diagnosis
- Diagnostic decision tools and tests we might use in acute and emergency care
- Understand simple statistics: sensitivity, specificity and likelihood ratios
Patients come to us wanting a diagnosis, but in Emergency Medicine we are often working to rule out the worst case scenario and tell them what they don’t have.
We often assume that patients in the ED also have a higher prevalence of disease than those presenting to general practice (although this may be a point that is worth further discussion).
For many of the presentations we see in emergency care, clinicians will have a list of a few conditions that we actively rule out. This may be done on the basis of history alone, after a history and examination, or after a history, examination and investigations. We will go through each of these in its own podcast in the future.
| Presenting Complaint | Conditions to rule out |
|---|---|
| Chest pain | Acute coronary syndrome, aortic dissection, pulmonary embolism, pneumonia, pneumothorax |
| Shortness of breath | Asthma/COPD, acute left ventricular failure, pulmonary embolism, pneumonia, pneumothorax |
| Headache | Subarachnoid haemorrhage, meningitis, space occupying lesion, giant cell arteritis |
You may think that only things like blood tests and X-rays are diagnostic tests, but in fact everything we do is a diagnostic test: whether that is asking a question in the history, listening to the chest or ordering a CT scan.
Each of these tests has its own accuracy and we can express this as the sensitivity and specificity.
Every time we ‘do a test’ that test can have one of four outcomes:
- true positive – the test is positive and you do have the disease
- false positive – the test is positive and you do not have the disease
- true negative – the test is negative and you do not have the disease
- false negative – the test is negative and you do have the disease
We can put this in a 2 x 2 table to help explain this further:

| | Test result positive | Test result negative |
|---|---|---|
| Disease present | True positive (a) | False negative (c) |
| Disease absent | False positive (b) | True negative (d) |
We can now use these four different values to calculate the sensitivity and specificity
Sensitivity
Sensitivity is the ability of a test to detect the patients who have a condition with a positive test: it's a measure of how good the test is at picking up disease.
To calculate this we take the number of patients who have the disease and test positive (the true positives) and divide by all the patients who have the disease (true positives and false negatives)
sensitivity = number of true positives/(number of true positives + number of false negatives)
We can see that if the number of false negatives decreases then the sensitivity will improve as it gets closer to 1.
A test that is sensitive is good at ruling out disease because more of the negative results are correct.
Specificity
Specificity is the ability of a test to detect the patients without a condition with a negative test: it's a measure of how good the test is at clearing those who are disease free.
To calculate this we take the number of patients who do not have the disease and test negative (the true negatives) and divide by all the patients who do not have the disease (true negatives and false positives)
specificity = number of true negatives/(number of true negatives + number of false positives)
Exactly as for sensitivity, we can see that if the number of false positives decreases then the specificity will improve as it gets closer to 1.
Thus, a test that is specific is good at ruling in disease because more of the positive results are correct.
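If it helps to see the arithmetic, here is a minimal sketch in Python (the helper names and cell counts are invented purely for illustration, not taken from any library or real dataset):

```python
# Sensitivity and specificity from the four cells of the 2 x 2 table.

def sensitivity(tp, fn):
    # true positives / (true positives + false negatives)
    return tp / (tp + fn)

def specificity(tn, fp):
    # true negatives / (true negatives + false positives)
    return tn / (tn + fp)

# An invented cohort: 100 patients with the disease, 900 without.
tp, fn = 95, 5    # cells a and c
tn, fp = 810, 90  # cells d and b

print(sensitivity(tp, fn))  # 0.95 - the test picks up 95% of those with the disease
print(specificity(tn, fp))  # 0.9  - the test clears 90% of those without it
```

Note how the false negatives only appear in the sensitivity calculation and the false positives only in the specificity calculation, which is exactly why one can be good while the other is poor.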
SpIN and SnOUT
The easiest way to remember this is Spin and Snout: Specific tests rule in; sensitive tests rule out.
Examples of sensitive and specific tests
Often tests that are sensitive are not specific, and vice versa. In fact, when we set ‘cut-offs’ for certain tests these are often chosen as a compromise between the two.
Specific test – an X-ray (radiograph) of a limb would be a good example of a specific test – if you see a fracture it is highly likely there is one there. Conversely, for some bones, not seeing a fracture doesn’t mean there isn’t one (the neck of femur is a good example of this).
Sensitive test – high sensitivity troponin (hsTrop). Yes, the name does give it away (although it is called a sensitive test for a different reason, but let’s not worry about that now). If you have a hsTrop below the cut-off then cardiac myocyte damage is much less likely. It doesn’t, though, tell you what is causing the damage if it is positive – that could be acute coronary syndrome or something else causing an oxygen supply/demand mismatch.
Surprisingly, it is pretty rare that we actually make a definite diagnosis: we are more likely making a list of differential diagnoses and assigning each a probability. We then need to balance the probability of the patient having a disease against the harm and benefit of any treatment we may or may not choose to give.
To some extent that tipping point for diagnosing should depend upon the condition we are looking at. At one extreme we often use the generic term for all non-specific paediatric illness (it’s a bit of a virus – and no, antibiotics won’t work); at the other extreme we want to be pretty certain that a diagnosis of myocardial infarction is correct, as the therapy for the condition (percutaneous coronary intervention – PCI) has a risk in itself. Setting the tipping point for labelling is very much disease specific, but as diagnosticians we need to have a good feel for the consequences of diagnosing (or not diagnosing).
Bearing all of this in mind, it is fair to say that we are calculating the likelihood of a patient having (or not having) a certain diagnosis…
We can now put all of this together to apply these principles to the patients we are caring for. Making a diagnosis (or estimating the probability of the patient having a condition) is dependent on all of the factors above: the pre-test probability (which may be the prevalence of disease in the group we are seeing) combined with the tests we then perform (never forget that this includes the history and examination, as well as ‘tests’).
We need a way of converting the pre-test probability into a post-test probability and this is where likelihood ratios come in.
The likelihood ratio assesses how good a test is using both the sensitivity and specificity. We can do this both for positive results (the change in likelihood of the patient having a certain disease – the positive likelihood ratio or LR+) and negative results (the change in likelihood of the patient not having a certain disease – the negative likelihood ratio or LR-).
A test which makes no difference at all has a likelihood ratio of 1 – the pre-test and post-test probability are the same. For LR+ the larger the number, the better the test for saying a patient has a condition; for LR- the smaller the number, the better the test for saying the patient hasn’t got the disease.
These are calculated like this….
LR+ = sensitivity/(1-specificity)
LR- = (1-sensitivity)/specificity
… but I would much rather you didn’t worry about the equations and concentrated on the concepts
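For those who do want to see the numbers, the two ratios drop straight out of the formulas above. A quick sketch (the function names and test characteristics are invented for illustration):

```python
def lr_positive(sens, spec):
    # LR+ = sensitivity / (1 - specificity)
    return sens / (1 - spec)

def lr_negative(sens, spec):
    # LR- = (1 - sensitivity) / specificity
    return (1 - sens) / spec

# An invented test with 95% sensitivity and 90% specificity:
print(round(lr_positive(0.95, 0.90), 2))  # 9.5   - a positive result shifts probability up a lot
print(round(lr_negative(0.95, 0.90), 3))  # 0.056 - a negative result shifts it down a lot
```

A highly sensitive test drives the numerator of LR- towards zero, which is the arithmetic behind SnOUT: the negative result is what does the work.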
This is all very well, but how on earth are we supposed to use all of this in the clinical environment?
Well, we have a bit of a cheat and can use a nomogram: you take your pre-test probability and, using the likelihood ratio, find the post-test probability.
Start by working out your pre-test probability – this could be the prevalence of the disease in your population, or your estimate after taking a history and putting all of the information you’ve gained together
Draw a line from your pre-test probability through the likelihood ratio for the test and this will give you the post-test probability
An example – d dimer in PE
Low pre-test probability – 1 in 20 (5% chance of having a PE)
The d-dimer is a sensitive test – it is helpful if it is negative and will help rule out the disease. Let’s say we have decided that the patient’s probability of having a PE is 5% – this isn’t low enough to rule out a serious disease like PE so we need to do a test to rule it out. A d-dimer of <500 ng/ml has a LR- of 0.05
Our post-test probability is now about 0.3%: the chance of the patient having a PE is about 1 in 300 and we would be happy to ‘rule out’ the condition based on that.
High pre-test probability – 1 in 4 (25% chance of having a PE)
Even though the d-dimer is a sensitive test, here, when the pre-test probability is higher, a negative test doesn’t bring the probability low enough to rule out a PE.
Our post-test probability is now about 1.5%: the chance of the patient having a PE is about 1 in 66 and most clinicians would not accept that and would want a further test to rule out the diagnosis.
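The nomogram is really just doing odds arithmetic for us: convert the probability to odds, multiply by the likelihood ratio, then convert back. A sketch reproducing the two d-dimer examples above (the function name is mine, and the small differences from the rounded figures in the text are just that – rounding):

```python
def post_test_probability(pre_test_prob, lr):
    # probability -> odds, multiply by the likelihood ratio, odds -> probability
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Low pre-test probability (5%) with a negative d-dimer (LR- 0.05):
print(round(post_test_probability(0.05, 0.05), 4))  # 0.0026 - about 0.3%, happy to rule out

# High pre-test probability (25%) with the same negative d-dimer:
print(round(post_test_probability(0.25, 0.05), 3))  # 0.016 - not low enough to rule out
```

The same negative result moves a 5% patient and a 25% patient by the same *ratio* of odds, but to very different final probabilities, which is why the pre-test probability matters so much.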
You are working as an FY2 in the Emergency Department and you go to see a 47-year-old man with chest pain. You really want to know if this chest pain is being caused by serious pathology and whether doing a high-sensitivity troponin test will help.
You go to see the patient and take a history, perform an examination and look at his ECG. Although the pain is somewhat suspicious in nature he has no previous history or risk factors and his examination (and ECG) are entirely normal. You decide to do a pair of high sensitivity troponins three hours apart and they are 4 ng/l and 6 ng/l respectively. Based on his low pre-test probability and a negative likelihood ratio of 0.06 you are happy to rule out ACS as a cause of his pain.