Over the last few days I’ve been having several discussions on Twitter about clinical tests, including the reliability, validity and accuracy of said tests. During these discussions, with some very clever clinicians, it became evident that clinicians may not have a good grasp of what constitutes a valid test, or of how to test with intent, using a Bayesian conditional probability approach, when testing a clinical hypothesis.
So I decided to write a numpty’s (that’s me) guide to clinical testing. So strap in and enjoy the journey.
When I originally trained, and on continuing education courses early in my career, I remember learning clinical tests on a simple level. Instructors would say “here is a test for shoulder impingement (e.g. the Hawkins-Kennedy or Neer’s test); if it’s positive, your patient has a shoulder impingement.” Despite evidence against this being correct, you can still find this information with a simple Google search on “shoulder impingement tests” 👇
Clinical tests should not be used to prove a hypothesis or diagnosis, they should be used with the sound clinical intent of assessing the shift in probability of your clinical hypothesis.
It is important to choose the best tests, ones that will improve the probability of your clinical hypothesis being correct. It is also important to remember that your clinical hypotheses should be based on sound clinical reasoning and on the patient’s historical narrative. Here is a link to a webinar I did on clinical reasoning in low back pain which gives examples of this https://youtu.be/8KZvRbmQu7s
So what do we need to be aware of when choosing a clinical test?
For most clinical tests we need them to have good clinical utility, based on good reliability and validity. That sounds complicated and technical. Simply put, reliability is the ability of a test to produce agreement, and validity is the ability of the test to do what it says on the tin: how good it is at finding the target pathology (the test’s sensitivity) and how good it is at detecting when you don’t have the target condition (the test’s specificity).
Here is a link to an old blog post I wrote showing how sensitivity and specificity are calculated http://retlouping.blogspot.com/2009/04/truth-in-diagnosis-simple-guide-to.html?m=1
Sensitivity of a test is basically a description of the true positive rate.
Specificity of a test is basically a description of the true negative rate.
The aim of a good test is to have both high sensitivity (Sn) and high specificity (Sp).
An old mnemonic was used to indicate that Sn and Sp could be used to rule conditions in and out: SnNout and SpPin. In simple terms, if a test has a high Sn it will be very good at detecting the target, so, counter-intuitively, it can be used to rule a target out when the test result is negative. Conversely, a test with high Sp is good at telling you when you don’t have the target, so when a test with high Sp is positive it can be used to rule the target in. I would propose that this simple view of testing does not improve your ability to test your clinical hypothesis. A combination of Sn and Sp can be used to calculate a ratio known as the likelihood ratio, which can then be used to calculate the change in probability when you perform your test, whether it is positive or negative.
(Some further reading here: https://en.m.wikipedia.org/wiki/Sensitivity_and_specificity#Medical_examples)
Likelihood ratios are useful as they combine the true positive and true negative rates of the test. Once calculated, they can be used with a Bayesian nomogram or an online calculator to work out the post-test probability of your clinical hypothesis, given that you know the pre-test probability, which is generally available from population data or from the occurrence of the target condition in your clinical population.
Calculating likelihood ratios: 👇
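For anyone who prefers to see the arithmetic spelled out, here is a minimal sketch of the standard likelihood-ratio formulas. The sensitivity and specificity values are illustrative, not taken from any particular test.

```python
def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) calculated from sensitivity and specificity."""
    # LR+ = true positive rate / false positive rate
    lr_positive = sensitivity / (1 - specificity)
    # LR- = false negative rate / true negative rate
    lr_negative = (1 - sensitivity) / specificity
    return lr_positive, lr_negative

# Illustrative example: a test with Sn = 0.80 and Sp = 0.90
lr_pos, lr_neg = likelihood_ratios(0.80, 0.90)
print(round(lr_pos, 2))  # 8.0
print(round(lr_neg, 2))  # 0.22
```

An LR+ of 8 means a positive result makes the target condition considerably more likely; an LR- of 0.22 means a negative result makes it less likely.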
Once you have calculated the likelihood ratios of your test (here is a link to an online calculator http://getthediagnosis.org/calculator.htm ) you can use them as a rough guesstimate of how a positive or negative test may alter the post-test probability of your clinical hypothesis. See the table below 👇
If you know the pre-test probability of the target hypothesis you can also get an accurate post-test probability using a Fagan (Bayesian) nomogram. I speak to this in the blog post I linked earlier.
The nomogram is easy to use (though you can also use the free online calculator posted above): find the pre-test probability on the left, find the likelihood ratio in the middle, and draw a line from one through the other, extending it to the post-test probability on the right.
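The nomogram is just a graphical shortcut for the underlying odds arithmetic: convert the pre-test probability to odds, multiply by the likelihood ratio, and convert back. A minimal sketch, with illustrative numbers:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayesian update: probability -> odds, apply the LR, odds -> probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative example: 30% pre-test probability, positive test with LR+ = 8
print(round(post_test_probability(0.30, 8), 2))  # 0.77
```

So a positive result on a strong test (LR+ of 8) lifts a 30% hunch to roughly a 77% probability, which is exactly the line you would draw on the nomogram.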
It is important to note that, because of Bayesian conditional probability, the post-test probability of your test now becomes the new pre-test probability for your next test. Hence be aware that if you use a test with a positive likelihood ratio below 1, you are lowering your new pre-test odds. See the example below.
Conditional probability is also why a cluster of average tests can produce better post-test odds than a single average test. Each test shifts the pre-test probability for the next test.
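This chaining of pre- and post-test probabilities can be sketched in a few lines. The LR+ of 2.0 and the 30% starting point are illustrative values for a cluster of "average" tests, each of which comes back positive:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayesian update: probability -> odds, apply the LR, odds -> probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Three average tests, each positive with a modest LR+ of 2.0,
# starting from an illustrative 30% pre-test probability.
prob = 0.30
for lr_positive in [2.0, 2.0, 2.0]:
    # each post-test probability becomes the pre-test probability of the next test
    prob = post_test_probability(prob, lr_positive)
print(round(prob, 2))  # 0.77
```

No single LR+ of 2 is impressive on its own, but three positive results in a row multiply the odds by 8, taking 30% to roughly 77%.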
I hope this all makes sense? It highlights that, in the old way of thinking, when a test came up negative you may just have chosen a new test, or several new tests, until one came up positive, “confirming” your hypothesis. It is important to remember that every test has an effect on the starting point of the next test.
Look at the data set below: it can be seen that, in the case of testing for SI joint pain, many of the listed tests have poor clinical utility.
Any test with a +LR below one is basically a useless test, as it lowers the post-test probability and thus the new pre-test probability for the next test.
It is also important to remember that if you have a very high pre-test probability of a target condition, based on the history received from the patient, a poor clinical test may not shift the post-test probability enough for you to reject the diagnosis. An example of this is with COVID-19 testing. 👇 This is why we should seek to understand the basics of clinical tests and their application to clinical scenarios.
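The same odds arithmetic shows why. In this sketch the 90% pre-test probability and the LR- of 0.3 are illustrative values for a strong history combined with a mediocre test:

```python
# Illustrative scenario: the history gives a very high pre-test
# probability (90%), and the test returns negative with a mediocre
# LR- of 0.3 (both values are assumptions for illustration).
pre_test_prob = 0.90
lr_negative = 0.3

pre_odds = pre_test_prob / (1 - pre_test_prob)  # 9.0
post_odds = pre_odds * lr_negative              # 2.7
post_test_prob = post_odds / (1 + post_odds)
print(round(post_test_prob, 2))  # 0.73
```

Even after the negative result, the probability is still around 73%, far too high to safely dismiss the diagnosis on the strength of that one test.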
I have tried to simplify testing to allow an understanding of the complexity of testing and the effect it can have on probability. I propose that we should use clinical tests sparingly and that all clinical tests should be seen as “special”. Ask yourself: “Will the test I am about to perform shift the probability of my clinical hypothesis significantly in a positive direction?” If the answer is “No”, then why perform the test?
As usual thanks for reading.
“Always test with intent.”