
AMA – Medical admission test is not a valid predictor of academic performance

MJA media release: The Undergraduate Medicine and Health Sciences Admission Test (UMAT) does not reliably predict academic performance at university, according to a study published in the Medical Journal of Australia.

In 2009, 14 universities in Australia and New Zealand used the UMAT as part of their selection processes for accepting students into medical degree programs.


The MJA is yet to release the online version of this study but when they do I'll post up the study's abstract and include it in the original post. It will be interesting to scrutinise their methods although, according to the AMA article, this study looks only at UMAT and how it correlates to GPA.
The research methods and statistics used would be interesting to see. Keep us posted Matt ^^
      I think we can all give them a collective "Duh".
        Oh really now? How interesting. -_-
See I've heard that apparently the NZ schools (Otago + Auckland) have a similar study about to be published that showed, while the correlation was weak, UMAT significantly improved the predictive ability of med selections. If indeed said study exists I'll be sure to post it on the forums as well; it would provide an interesting contrast.
So what if UMAT is not a valid test? It gets the government money and helps weed out most of the med hopefuls, which is why the schools implement it in the first place (personal opinion).
How is this even a legitimate research question? UMAT is (mostly) about people and emotions; medicine is (mostly) anatomy and physiology.

This (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.11.3641&rep=rep1&type=pdf), however, DID find something that correlates with academic performance in university (any guesses?)
So UMAT does not correlate with GPA... but GPA does not correlate directly with performance as an intern/doctor, am I right?
I've added the Abstract to the original post in this thread and note that the current issue of the MJA includes an editorial piece called "What's the matter with UMAT?"
      <font size="3">What’s the matter with UMAT?</font><br />
      <font size="3"><br /></font><br />
      Annette G Katelaris<br />
      MJA 2011; 194 (7): 330<br />
      <br />
      <br />
      The selection of medical students is an important issue for the community and the medical profession because of the high costs involved in medical education and the need for graduates to be good doctors. Since the 1970s, the method of selecting medical students in Australia has evolved from the use of purely academic criteria based on secondary school matriculation results to the use of interviews that assess personal characteristics, and more recently to tests of aptitude — the Undergraduate Medicine and Health Sciences Admission Test (UMAT) and Graduate Australian Medical School Admissions Test (GAMSAT).<br />
      <br />
      <br />
      The imperative for change in the selection process has been the perceived need for doctors to provide academically and clinically appropriate medical care in a professional and humane manner that is appropriate for the society in which they work. Additionally, reliance on results achieved in high school has been held to introduce considerable socioeconomic bias (BMJ 2002; 324: 952-957).<br />
      While some of the steps taken to reduce socioeconomic disadvantage are transparent, such as pathways designed to improve access of Indigenous and rural students to medical school, the rationale supporting the use of interviews and aptitude tests has not been well articulated. These methods add complexity and cost to the selection process, and their evaluation is a priority.<br />
      <br />
      <br />
      The Journal has published comment (<a href="http://www.mja.com.au/public/issues/188_06_170308/pow10038_fm.html">MJA 2008; 188: 323-324</a>) and research on the selection of medical students over many years, including papers on the role of the interview (<a href="http://www.mja.com.au/public/issues/188_06_170308/wil10810_fm.html">MJA 2008; 188: 349-354</a>), the effect of coaching (<a href="http://www.mja.com.au/public/issues/189_05_010908/gri10324_fm.html">MJA 2008; 189: 270-273</a>) and the role of the GAMSAT (<a href="http://www.mja.com.au/public/issues/186_03_050207/gro10631_fm.html">MJA 2007; 186: 120-123</a>). Unsurprisingly, there seems to be agreement that academic ability is a good predictor of completing medical school, but the minimum level of academic ability required is not certain. Studies assessing other selection methods suggest that the additional benefit conferred by the interview may be small and that of the GAMSAT may be negligible.<br />
      In this issue of the Journal (<a href="http://www.mja.com.au/public/issues/194_07_040411/wil11056_fm.html">&#8594; Predictive validity of the Undergraduate Medicine and Health Sciences Admission Test for medical students’ academic performance</a>), Wilkinson and colleagues contribute to this debate with the first peer-reviewed data on the predictive validity of the UMAT for medical students’ academic performance. The paper shares limitations of other studies in this area — it is a correlation study (and thus cannot prove causation), it assesses outcomes in a highly performing and selected cohort of potential students (“range restriction”) who might all be expected to perform well in medical school, and it does not evaluate the clinical performance of students after graduation.<br />
      <br />
      <br />
      Wilkinson et al’s finding that there is only weak correlation between UMAT results and performance in medical school makes it vital that research into selection processes continues. It remains to be seen whether the UMAT predicts clinical performance and contribution to the medical profession and the health of the community, but early indications seem to suggest it has little to offer.<br />
      <br />
      <br />
      <br />
      <br />
      Annette G Katelaris, MB BS, MPH, FRACGP, EditorMedical Journal of Australia, Sydney, NSW.<br />
      akatelarisATampco.com.au
        Well we didn't need a research paper to figure that one out :p It's great this is being looked into, and hopefully in the future there will be better systems put in place for applicant selection.
It's a poor study.

It looks at UQ direct entrants' performance in terms of GPA compared with UMAT. Because they are direct entrants they don't actually start medicine until their 3rd or 4th year, so the slight correlation in the first year of study is telling us UMAT selects students who are good at literature, music, arts etc., but not medicine.
> It's a poor study. It looks at UQ direct entrants' performance in terms of GPA compared with UMAT. [...]

Out of all the medical courses that use UMAT for entrance they chose to use a provisional course... why?! Where is the logic in that?

Why not Monash or UWS or JCU or UNCLE/UNE... and so forth?
Well, if they wanted to research it properly they should have looked at all undergraduate medical courses in Australia. Even though most people think UMAT is a bit of a wank :p
The UMAT has its value; I'm just not certain why anyone would think it predicts performance. The whole idea of the UMAT is to measure the sorts of things that can't be graded particularly well on a written exam or an OSCE. This research implies we should all conclude "the UMAT is useless" by using a completely irrelevant metric to judge it.
> The UMAT has its value; I'm just not certain why anyone would think it predicts performance. [...]

I think there needs to be some attempt made at validating the UMAT as a selection tool, but I agree that it tests something very different to written knowledge-based exams and different, as well, to OSCEs. There is no simple test of what makes a good medical student or a good doctor; the concept itself is too vague and complicated to be simplified down to a test result. This being the case, we're left with a situation where validating something like the UMAT is very difficult as well, because we have nothing to measure it against.

The question then becomes: given we have no way of validating the UMAT, and we know good grades in high school correlate (though not strongly) with good grades at university, should we not just be using grades to admit medical students? It is, at least, a relatively known entity.
Reading through the study properly, my conclusion is that, if anything, it (weakly) supports the value of the UMAT. Given the design of the study, it doesn't look like it could ever provide any useful information on the predictive validity of UMAT; to do that, you'd at least need to correlate only with med school marks, with particular focus on OSCEs (where you would expect previous academic marks to be less useful in the prediction of success). However, it does seem to suggest that UMAT does not correlate well at all with early academic results in pure science courses, and therefore suggests that UMAT adds value to admissions by providing information not already given by academics. (Whether that information is in fact useful remains to be seen...)

Thoughts?
> Reading through the study properly, my conclusion is that, if anything, it (weakly) supports the value of the UMAT. [...]

That's not an argument for the UMAT having some other form of worth. The UMAT may add some other sort of admission worth, but we can't comment on that based on the results of this study.

> I wonder what the results would be when looking at the later clinical years, where skills like empathy and relating to patients are slightly more important.

Grades in clinical years are definitely measuring something different to grades in preclinical years, but I think this study does make some comment on clinical years, finding no correlation (with the limitation that not all of the study participants have yet reached clinical years; most have, since the study period is 2005-2009).
> That's not an argument for the UMAT having some other form of worth. [...]

Perhaps I need to clarify a bit.

The way I read the study, it sounds like they define year 1 as year 1 of the pre-med degree undertaken (not year 1 of the med degree). Therefore the furthest any of the participants will have gotten is year 2 of the med degree (and there aren't many participants in the "year 4" research block). The vast majority of the participants in this study have GPAs that have not at all come from a med degree. Even where it does include marks from med school, they have been diluted by marks from a non-med course. Furthermore, it does not distinguish between med and non-med in "year 3" and "year 4" (where it could be either, depending on the length of the pre-med degree). All they've done is take a block of university grades, the majority of which were not awarded in med school, and try to correlate UMAT to them as a whole. Quite frankly, this seems like a shockingly poor design, and I wouldn't trust the results of it (whatever they may be).

However, as I see it, there are a number of factors UMAT must satisfy to prove its worth, one of these being a relatively low correlation with other measures used in admissions (so that we know it's adding new information/value). This study shows a low correlation with "year 1" and "year 2", which, given the status of these years as general uni pure science study, indicates (I feel) a low correlation between UMAT and pre-med academic marks, such as those used in med admissions. Therefore it meets this criterion, and if it can be shown to predict success in med (which we have no proper evidence on yet), this study indicates it is a potentially worthwhile addition to admissions.
> Perhaps I need to clarify a bit. The way I read the study, it sounds like they define year 1 as year 1 of the pre-med degree undertaken (not year 1 of the med degree). [...]

Without meaning to be contrary, because I think your post is quite considered, the UMAT not being correlated with GPA is not an argument that it is adding new information or value. It may or may not be adding new information or value, but the fact that it is not correlated with GPA is not evidence for or against this.

Otherwise, you make a good point about the GPA correlation being relevant to the cumulative years of study since high school (i.e. the study considers GPA from medsci, arts, etc., and the correlation diminishes completely once you get into the actual MBBS years). I agree that the study's results are difficult to interpret because of the study design, which doesn't really address the question.
I think this may be coming down to an issue of definitions. As I see it, for UMAT to add value it has to correlate with success in medicine, but not correlate strongly with other measures used in admissions (otherwise, even if it is a reasonable predictor, it still isn't adding much beyond what could have been deduced already). I agree with you that from this study we do not know if it is or is not adding value, because we do not know how well it predicts success in medicine.

However, had this study shown a strong correlation with "year 1" (bearing in mind that this is essentially equivalent to what we use in NZ to indicate academic ability for med admissions), then regardless of whether or not it is correlated with later success, I would have viewed that as casting doubt on its practical use (and ability to add value). Because this was not the result, this study has essentially shown that UMAT passes this first hurdle. If a subsequent study finds UMAT to be a good predictor of success in medicine, I think that the combination of such a study with this study would lead to stronger support for UMAT than such a study on its own. This is why I regard this study as providing some weak and limited support for UMAT (even though we still don't know if it adds value).

Does that make sense to you, or am I missing something here?
It makes sense to me, I understand what you're saying, but I don't think it's logical. It's a leap of logic to say that because UMAT doesn't correlate with GPA it's more likely to add some other value or information. Even if the UMAT did correlate with GPA, that doesn't mean it doesn't add some other information or worth, since the UMAT is multidisciplinary (verbal reasoning, non-verbal, understanding people).
          OK, fair point. However, this study found no decent correlation with GPA from any of the 3 sections, so I'm still not understanding your objection on that front.
It did find a correlation between GPA and Section 1 overall, and between first year GPA and Section 1. To be clear, though, I'm not disputing that UMAT does not predict GPA well.
> It did find a correlation between GPA and Section 1 overall, and between first year GPA and Section 1.

The correlation was significant, but definitely not strong enough to raise issues of multicollinearity.
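(A minimal sketch of the multicollinearity point, in case the term is unfamiliar: in a two-predictor regression, the variance inflation factor is 1/(1 − r²), where r is the correlation between the predictors. The r values below are made up, not the study's figures.)

[code]
# Variance inflation factor for a two-predictor regression: VIF = 1 / (1 - r^2).
# Weak predictor correlations leave VIF near 1 (no multicollinearity concern);
# only strong ones inflate it. The r values are hypothetical.
for r in (0.1, 0.3, 0.9):
    vif = 1 / (1 - r * r)
    print(f"predictor correlation r = {r}: VIF = {vif:.2f}")
[/code]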
          I'm not sure what you mean there.
> given we have no way of validating the UMAT, and we know good grades in high school correlate (though not strongly) with good grades at university, should we not just be using grades to admit medical students? [...]

Well... I can predict quite confidently, based on previous ability to juggle, that students who enter medicine will be able to juggle effectively. Unless we think juggling is important to medical admissions, though, I'm not suggesting it should be a criterion for entry. The whole idea behind the UMAT is that marks aren't actually that important and therefore they shouldn't be focused on any more than juggling. Academic performance is relevant, but we shouldn't be saying that just because it's measurable we are going to rely on a very poor measure of what makes a good doctor.
> Well... I can predict quite confidently, based on previous ability to juggle, that students who enter medicine will be able to juggle effectively. [...]

That's the thing though: what's to suggest the UMAT is any more useful than juggling ability in medical students? There's no way to validate its usefulness. Academic performance is relevant and we can reliably measure it. I agree it's not a great measure of what makes a good doctor, but it is at least evidence based and it is relevant. It's a bit like saying SSRIs aren't great at treating depression so let's use snake oil, because it sounds like it's important and SSRIs are obviously not an all-encompassing treatment.

P.S. I'm sort of just arguing logic here. My own opinions aren't necessarily reflected.
@ Jono:
I suspect that you are underestimating the importance of academic performance. Above a certain threshold, academic ability may not be substantially correlated with "being a good doctor", but no one seriously believes that "marks aren't actually that important" or that they are a "very poor measure of what makes a good doctor". This must be taken in the context that, almost universally, a high level of prior academic achievement is a prerequisite for entry into medicine regardless of entry method. There may not be much of a difference between someone who is academically 2 standard deviations above average compared to someone who is 3, but let's not pretend that we actually believe that someone who scores 2 standard deviations below average is likely to make an "equally good doctor", on average.

For instance, in the previous UQ study looking at graduate selection criteria, it is only previous GPA that was substantially correlated with academic achievement: http://www.mja.com.au/public/issues/188_06_170308/wil10810_fm.html

A small case-control study only, but poor academic performance in medical school was correlated with subsequent professional misconduct: http://www.bmj.com/content/340/bmj.c2040.full

I would also note that it is the prerogative of medical schools to choose students who are likely to do well academically in their program. Pragmatically, medicine is a rigorous academic course and the fewer students who fail, the better.

Regards.
I wonder what the results would be when looking at the later clinical years, where skills like empathy and relating to patients are slightly more important.
While the correlation was significant (i.e. suggesting an effect), it was also incredibly weak. In other words, it still suggested a very large potential for S1 to provide information not provided by 1st year GPA.

[offtopic]I'm sorry if this is getting overly argumentative by the way; I'm just curious to probe your line of thinking. Do tell me if it's getting tiresome.[/offtopic]
[offtopic]Matt, tired of MSO? Lolz.[/offtopic]

A low correlation does not suggest S1 provides other information. S1 could show a weak correlation and not provide any other information at all. It does suggest GPA is only very minimally explained by scores in Section 1 and that there are other things that fully explain GPA (which is not surprising).
By definition, a weak correlation between two things implies that each provides information the other doesn't. Turning your second statement around, in this case it implies that S1 scores are only minimally explained by GPA; therefore they are mainly influenced by some underlying variable not measured by GPA. The question is whether that underlying variable is one we want included in the med selections model (which is where more research needs to be done). This is coming back to the fundamental clash here. I believe that if the correlation had been large, that would imply that S1 can be largely explained by GPA, and therefore it doesn't reflect anything that hasn't already been measured. Because the correlation was small, it does reflect something unmeasured and therefore has greater potential to add something new and valuable to the selections model.

[offtopic]Haha... thought so :p[/offtopic]
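(To put a number on "minimally explained": the shared variance between two variables is r², so even a "significant" weak correlation leaves almost everything unexplained. A quick sketch with hypothetical r values, since the study's exact figures aren't quoted in this thread.)

[code]
# Shared variance for a few hypothetical correlation coefficients.
# r^2 is the fraction of variance in one variable explained by the other.
for r in (0.1, 0.2, 0.3):
    print(f"r = {r}: shared variance = {r * r:.0%}, unexplained = {1 - r * r:.0%}")
[/code]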
Sure, but there's no reason to think those other explanatory factors give any new information or value. Sex, for example, explains some of the differences, and that doesn't give us any new information or value. I guess you've already touched on this point in noting that variables may or may not be included in the medical admissions process.
My line of reasoning is: low correlation = more possible factors that UMAT could be measuring = more potential to make the admissions process more robust.

The thing is, say we find that both UMAT and GPA are reasonable predictors of success in medicine. If they are well correlated with each other, then it goes without saying that one will predict success in med if the other also does. If they are not, well, that's interesting, because it tells us that UMAT is tapping into something that GPA is not, but that can predict success in med, and that therefore the inclusion of UMAT is making the overall process more robust.

As I've said before, this is entirely dependent on data emerging that (unlike this study) actually helps to answer the question of whether UMAT can predict success in med. I just feel that by isolating UMAT from GPA, this study has made any future conclusions from such research all the more powerful.
> My line of reasoning is: low correlation = more possible factors that UMAT could be measuring = more potential to make the admissions process more robust.

Though possible, that isn't the most parsimonious explanation for low correlation. The low correlation detected might simply reflect that there is, in fact, little correlation. Indeed, there is no reason to believe in your supposition at all.

IMHO, your line of thinking suffers from the problem of a preformed hypothesis (that the UMAT is useful) that cannot be falsified. If a substantial correlation is found between UMAT and GPA, it could be claimed that this demonstrates that, by proxy, it is a useful predictor of academic ability. If a substantial correlation is not found, it is claimed that it is "detecting other things" and so also useful.

> The thing is, say we find that both UMAT and GPA are reasonable predictors of success in medicine. If they are well correlated with each other, then it goes without saying that one will predict success in med if the other also does. [...]

That reasoning is unsound. If both UMAT and GPA are correlated with "success in medicine" (let's assume for now that a valid metric exists), even if they were measuring different things which are entirely unrelated, then UMAT and GPA would similarly be correlated.

If A is correlated with C, and B is correlated with C, then there is a good chance that A will be correlated with B (depending on how you sample your population).
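(A toy simulation of this A/B/C point, assuming nothing about the real data: two independent predictors can each correlate with an outcome they jointly drive while remaining uncorrelated with each other. Standard library only; statistics.correlation needs Python 3.10+.)

[code]
# Two *independent* predictors A and B; outcome C is driven by both.
import random
from statistics import correlation

random.seed(1)
n = 10_000
A = [random.gauss(0, 1) for _ in range(n)]
B = [random.gauss(0, 1) for _ in range(n)]
C = [a + b + random.gauss(0, 1) for a, b in zip(A, B)]

print("r(A, C) =", round(correlation(A, C), 2))  # roughly 0.58
print("r(B, C) =", round(correlation(B, C), 2))  # roughly 0.58
print("r(A, B) =", round(correlation(A, B), 2))  # roughly 0.00
[/code]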
This is one of the problems with observational studies; you are limited in the statements of causation that you can make even when you find correlation. It says nothing about the direction of causality. Even if UMAT and GPA were strongly correlated, it does not necessarily imply that they are "measuring similar things" or that there are common causative factors. The reverse is also true: that they were not found to be correlated does not necessarily imply that they are "measuring different things".

Lastly (this has been covered in this thread though not explicitly stated), a statistically significant finding just means that a probable real difference has been found. This in itself isn't overly meaningful, as with sufficient numbers you can almost always find statistically significant differences, since no two samples are likely to be identical. What matters is the size of the correlation and, as has been stated, it is small.
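(A sketch of the sample-size point, using the standard t statistic for Pearson's r with a normal approximation for the p-value; the r value is hypothetical, not the study's.)

[code]
import math

def p_value_for_r(r: float, n: int) -> float:
    """Approximate two-sided p-value for H0: rho = 0, via the t statistic for
    Pearson's r and a normal approximation (adequate at these sample sizes)."""
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    return math.erfc(abs(t) / math.sqrt(2))

# The same weak correlation crosses p < 0.05 once n is large enough,
# without the association getting any stronger.
for n in (30, 500, 5000):
    print(f"r = 0.10, n = {n}: p ~ {p_value_for_r(0.10, n):.4f}")
[/code]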
        Regards.
vitualis - I stand by my argument, and certainly do not feel you've said anything to undermine it (though I do thank you for your reasoned contribution). However, before leaving this matter to rest, I'd just like to respond to a few things in your post that irk me a bit.

Firstly,

> The low correlation detected might simply reflect that there is, in fact, little correlation. Indeed, there is no reason to believe in your supposition at all.

A low correlation is low for a reason, i.e. variation in one variable explains little of the variation in the other variable. If this does not indicate that variable A reflects different underlying traits than variable B (which is a key principle underlying important statistical methods such as factor analysis), then what does it indicate? What is a different possible reason for a low correlation?

Secondly,

> IMHO, your line of thinking suffers from the problem of a preformed hypothesis (that the UMAT is useful) that cannot be falsified.

While you are perfectly entitled to think this, I stated previously that I would regard a strong correlation between UMAT and pre-med academic marks as undermining the usefulness of UMAT, so I really don't see what you're founding this accusation on as it relates to me.
Thirdly,

> That reasoning is unsound. If both UMAT and GPA are correlated with "success in medicine" (let's assume for now that a valid metric exists), even if they were measuring different things which are entirely unrelated, then UMAT and GPA would similarly be correlated. [...]

Right, I've just played around with SPSS and come up with the following hypothetical data set:

(GPA, UMAT, "MED")

(8.2, 62, 3.8)
(8.1, 62, 3.7)
(8.7, 56, 4.1)
(8.5, 55, 4.0)
(8.2, 58, 3.6)
(8.0, 61, 3.5)
(8.6, 61, 4.2)
(8.3, 61, 3.9)
(8.6, 61, 4.1)
(8.1, 60, 3.7)
(8.4, 57, 3.2)
(8.2, 63, 4.4)
(8.3, 59, 3.5)
(8.2, 60, 3.8)
(8.1, 62, 4.1)
(8.1, 60, 4.0)
(8.1, 58, 3.4)
(8.5, 59, 3.6)
(8.3, 55, 2.9)
(8.2, 59, 4.0)

Here both GPA and UMAT correlate positively with the "med" metric. Under your line of reasoning, they would therefore correlate positively with each other. In fact, they correlate negatively with each other. (You can try the analysis for yourself if you like.)
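(The same check can be run without SPSS; a minimal script over the data set above, standard library only, Python 3.10+.)

[code]
from statistics import correlation

rows = [
    (8.2, 62, 3.8), (8.1, 62, 3.7), (8.7, 56, 4.1), (8.5, 55, 4.0),
    (8.2, 58, 3.6), (8.0, 61, 3.5), (8.6, 61, 4.2), (8.3, 61, 3.9),
    (8.6, 61, 4.1), (8.1, 60, 3.7), (8.4, 57, 3.2), (8.2, 63, 4.4),
    (8.3, 59, 3.5), (8.2, 60, 3.8), (8.1, 62, 4.1), (8.1, 60, 4.0),
    (8.1, 58, 3.4), (8.5, 59, 3.6), (8.3, 55, 2.9), (8.2, 59, 4.0),
]
gpa, umat, med = (list(col) for col in zip(*rows))

# Per the post: GPA and UMAT should each correlate positively with "MED"
# while correlating negatively with each other.
print("r(GPA, MED)  =", round(correlation(gpa, med), 3))
print("r(UMAT, MED) =", round(correlation(umat, med), 3))
print("r(GPA, UMAT) =", round(correlation(gpa, umat), 3))
[/code]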
Now, I realise you acknowledge this in your second paragraph with the term "good chance", so I really don't see any basis for your first paragraph, which deals in absolutes and goes so far as to declare my reasoning "unsound" on the basis of the (incorrect) assumption that if UMAT and GPA are both positively correlated with med, they MUST be positively correlated with each other.

Finally,

> This in itself isn't overly meaningful, as with sufficient numbers you can almost always find statistically significant differences, since no two samples are likely to be identical.

Irrelevant, but this statement is blatantly false. Increasing sample size increases the power to find statistically significant results, but it does not increase the likelihood that a statistically significant result is false and so meaningless. The main risk for this comes with multiple comparisons.

I agree with you, though, that the concept of statistical significance is far less useful than it is often made out to be. I also agree with your above sentiments regarding use of academic marks in med admissions.
> A low correlation is low for a reason, i.e. variation in one variable explains little of the variation in the other variable. [...] What is a different possible reason for a low correlation?

There are many possible reasons for the low correlation: (i) random variation; (ii) a non-representative sample (UMAT actually IS correlated with GPA, but this sample only contained students with extremely high academic achievement; analogy: the correlation between serum cholesterol and cardiovascular disease is weak if your sample population ONLY contains people with elevated cholesterol); (iii) a non-homogeneous population (e.g., UMAT is positively correlated with GPA in one subgroup of students and negatively correlated with GPA in another); (iv) the UMAT actually does measure the traits that result in better academic performance, but its scoring/ranking is unreliable; (v) systematic error; etc.
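(Reason (ii), range restriction, is easy to demonstrate with simulated data: a genuine correlation shrinks once the sample is truncated to high scorers. A toy sketch, not modelled on the actual cohort; Python 3.10+.)

[code]
import random
from statistics import correlation

random.seed(0)
pairs = []
for _ in range(50_000):
    x = random.gauss(0, 1)                  # e.g. an admissions score
    y = 0.6 * x + 0.8 * random.gauss(0, 1)  # outcome; true correlation 0.6
    pairs.append((x, y))

selected = [(x, y) for x, y in pairs if x > 1.0]  # admit only the top ~16%

print("full sample r =", round(correlation(*zip(*pairs)), 2))     # ~0.6
print("restricted  r =", round(correlation(*zip(*selected)), 2))  # noticeably lower
[/code]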
With regards to "variable A reflects different underlying traits than variable B", I misunderstood your original statement further up this thread. Nevertheless, it is only a probabilistic statement; it is possible, maybe even probable, that the UMAT is measuring "different underlying traits".

However, as discussed above, that doesn't actually make it useful. Why don't we measure juggling performance as well?
> While you are perfectly entitled to think this, I stated previously that I would regard a strong correlation between UMAT and pre-med academic marks as undermining the usefulness of UMAT [...]

Hmm... I wonder if we've just argued past each other... "pre-med academic marks"? The study concerns the correlation between selection UMAT scores and actual in-course academic achievement and GPA. My argument is that the correlation (or lack thereof) between UMAT and academic achievement in the medical course is highly relevant. I had assumed, reading your posts, that you had argued the opposite, but now I see you were actually referring to "pre-med academic marks". I have little interest in those.

> Now, I realise you acknowledge this in your second paragraph with the term "good chance", so I really don't see any basis for your first paragraph [...]

As per the above paragraph.
> Irrelevant, but this statement is blatantly false. Increasing sample size increases the power to find statistically significant results, but it does not increase the likelihood that a statistically significant result is false and so meaningless.

I suspect that you didn't actually understand my last statement. By "meaningless", I am making a value judgement on whether the difference or association is anything we should care about, not on whether it is false. Perhaps a poor choice of words, but I'm invoking a similar concept to clinical significance. Statistical significance here is less important than the magnitude of the difference (or the magnitude of the association). As before, the likelihood of two populations actually being equivalent is close to zero, so by increasing the sample size you can almost always find a statistically significant result.

This study found a statistically significant result, yes, but the actual association is so weak as to be meaningless, insofar as it is not useful in the situation as it was actually applied.

I suppose that we will see what the data shows as the students in this study become more senior and finish the UQ undergrad program. We could be surprised, but I doubt it.

Cheers.
      Lol why is it so hard to figure this out? UMAT was only introduced to weed out Asians otherwise all medical and dental students would be Asians. My GP was the one that told me this. He said that before the UMAT there were a lot of Asians and the medical profession wasn't too happy about it. They had to invent some bs along the line of "your marks GPA/TER doesn't correlate with blah blah blah so we will introduce this test so that only people who are truly compassionate and care about medicine will enter the profession". LOL
Green Vuvuzela • April 6, 2011
> Lol why is it so hard to figure this out? UMAT was only introduced to weed out Asians, otherwise all medical and dental students would be Asians. [...]

You're joking, right?
> You're joking, right?

I hope so >.>
> Lol why is it so hard to figure this out? UMAT was only introduced to weed out Asians, otherwise all medical and dental students would be Asians. [...]

While I think the vast majority of members on the board will disagree with you on this, I also suspect that this is a myth that quite a lot of people have faith in. These people, and your GP it seems, have it wrong. The stereotype that Asians are all super-maths nerds with no people skills is compatible only with ignorance.
I have to agree with what Matt just said. Also, some people, regardless of whether they are Asian or not, do not get through the UMAT, which debunks the idea that UMAT simply weeds out the Asians. Although, I would think graduate entry would be a preferable alternative to this. ^_^
I wonder if Queensland will ditch UMAT. After all, they are quite gutsy ^_^. They took out interviews in 2008 and, although judging GAMSAT to be a poor indicator of academic performance, kept it only because there was no other way of selecting applicants with a range of undergrad backgrounds. The question is: how do you select the "right" medical students?

[MENTION=5402]crickethunger[/MENTION], seriously? As if, at the very least, UMAT can find out whether you are "truly compassionate"? Interacting with people, like now between you and me, can be a better indicator.
    UQ ditched the interviews because they were too expensive. They did so shortly after publishing a study which showed GAMSAT to be of little/no value and the interviews to be of minimal value - despite this, the interviews were canned and the GAMSAT remained. Admittedly the study itself had all the same flaws as this one (looking mainly at grades as an outcome).
I think that some of the criticisms against UQ are relatively unjustified. Yes, there is a limitation in using academic performance as an outcome measure, but what other reliable outcome measure can actually be used that is better? I would argue that there are none. As a teaching institution, UQ should be interested in academic performance.

For graduate entry, outcomes from interviews did not substantially predict performance, and they are expensive to run both from the perspective of monetary cost and time. We should not dogmatically hold onto the belief that interviews select "better" medical students. There is little evidence that this is true.

The GAMSAT serves a pragmatic purpose for the universities, which is why I suspect that it hasn't been abandoned.

Regards.
> For graduate entry, outcomes from interviews did not substantially predict performance, and they are expensive to run both from the perspective of monetary cost and time. [...]

Of note, there is arguably no possible way to effectively measure the outcomes of what an interview is trying to select for, and so the scanty evidence is not surprising. In medicine there are a number of things that would be impractical to develop an evidence base for but are nonetheless the standard of care.
I disagree with that, Matt.

Many assessments in later clinical years (e.g., short cases/long cases/OSCEs and vivas) have a large component that is determined by the ability to communicate effectively. Being an avid book learner will not in itself allow you to do well academically in the clinical components of medical school, and it was believed that interviews would select candidates who would eventually perform better in these components. In fact, the UQ study did find this; interview performance had the greatest association with clinical exams in the later years of medical school. The problem was that the size of the association was still very small and, if memory serves, smaller than the association of this assessment component with entry GPA.

Let's be frank. The interviews are not being used because we think they would select better medical students (and, hopefully, produce better doctors) in some ineffable way. They are being used for a specific purpose, and they should be fit for purpose. Time and cost are not limitless; every hour that an academic spends on selection interviews is an hour that he or she is not spending on actual educational activities.

Regards.
JeremiahGreenspoon • April 16, 2011
I've heard it said (from doctors) that interviews are a very effective way of weeding out Asian applicants in particular. That would never be defined as one of the purposes of the process, but it is not that surprising.
> I disagree with that, Matt. Many assessments in later clinical years (e.g., short cases/long cases/OSCEs and vivas) have a large component that is determined by the ability to communicate effectively. [...]

I am more inclined to explain away the results of the UQ study through a lack of reliability in examining the ability to communicate. Further, interviews and OSCEs also examine things other than the ability to communicate effectively, so it's not surprising that the association in these studies was small. I would be interested in long-term follow-up studies examining the MMI; the major selling point of this interview format is its intrinsic reliability. Of course, extrinsic reliability (i.e. when compared with other communication measures like OSCEs) may not be nearly as good.
I somewhat agree with you, but why then are we placing so much on this "ability to communicate"? I don't think that this ability on its own is important; rather, it is how this skill can be applied in actual clinical practice, and so the testing procedure is valid. It still stands that interview performance in the UQ study had no useful association with medical school performance, even in the domain where it is most likely to have a benefit. Unless you have a good reason to believe that interview performance has some substantial association with medical student capability that is somehow being missed by existing assessment criteria, I don't think that you can disregard this result quite so easily. Clearly, UQ did not.

Remember, these interviews are not being run on the general population. They are run on candidates who have already demonstrated a high level of academic achievement. I suspect that in this population, interviews are poor at ranking candidates.

I think that the MMI has promise, but I think we will have to wait and see its results. I suspect it may be better associated with performance, but MMIs are also more costly to run.

As for "weeding out" Asian candidates, this is certainly a popular view but one that I don't think is true, insofar as a conscious decision or intent.

Regards.
> I somewhat agree with you, but why then are we placing so much on this "ability to communicate"? [...] Unless you have a good reason to believe that interview performance has some substantial association with medical student capability that is somehow being missed by existing assessment criteria, I don't think that you can disregard this result quite so easily. Clearly, UQ did not.

This doesn't change my criticism, though: that panel interviews have poor intrinsic reliability and that OSCEs are measuring things outside of the ability to communicate, or even the ability to communicate in a clinical setting. I think the belief that interview performance has some association with medical student capability that is being missed by existing assessment criteria is one that must be held in order for medical schools to continue to undertake interviews. Without this belief it is surely inappropriate, but I agree it's a contentious issue.

That said, I don't disregard UQ's results; I think panel interviews are probably outdated and not useful. I am interested to hear more research on MMIs, especially in an Australian setting. Hopefully this is forthcoming, since so many schools are now adopting this interview style.

> As for "weeding out" Asian candidates, this is certainly a popular view but one that I don't think is true, insofar as a conscious decision or intent.

I really think one must be very ignorant of the people around them in order to maintain a view such as this.
> I really think one must be very ignorant of the people around them in order to maintain a view such as this.

I recall a discussion with a colleague from interstate about the supposed "Asian" bias of the UMAT/interview. In the mid 90s the UMAT/interview entrance criteria were introduced, and for a number of years a number of high-achieving students who achieved perfect scores in year 12 were denied entry into the medical schools of their home state. Sensationalist headlines followed, as these students were quickly snapped up by institutions like the University of Melbourne, with the associated outrage over the perception that the change in admissions was crippling the state through brain drain.

While I doubt that there was any intent to change the racial make-up of medical students, I do recall one particular state branch of the Australian Chinese Medical Association making a statement alleging that the interview system was racist, with one prominent surgeon threatening legal action. I suspect he might have had a family member who missed out, but this is mere speculation.

Interestingly, it doesn't even rate as an issue these days. I wonder if this has something to do with the rise in popularity of various UMAT and interview preparation courses, which probably arose as a legitimate way to circumvent the system. This, I feel, has probably changed the attitude towards the current process from being some insurmountable barrier to just another thing that can be studied for.
I think it'd be really quite difficult to create an intrinsically racist admissions process. On what point would you differentiate me and my Anglo-Saxon background from my friend whose parents are Sri Lankan? It's not like Asian people as a race are naturally better or worse at, say, spatial reasoning or comprehension or whatever else you choose to test. The interview process probably does have some subtle subconscious racism going on, but I think its effect would be small and, in the end, it would be unlikely to be a critical factor.
JeremiahGreenspoon • April 17, 2011
It's true that you couldn't be said to have developed an intrinsically racist selection process (via such subtle means). However, the interview does test for people skills/communication skills/sociability, whatever it would best be named. And in turn, there has always been a stereotype of the extremely high-achieving (and highly motivated, hard-working) Asian student who does not have some of those skills. It's arguable that the proportion of high-achieving Asian students who don't have them would be higher than among Western students, hence the interview could be considered a filter.

I don't know if there are any statistics for it, but Liquid8 makes a relevant point. As well, the discussion I was referring to was between two doctors, one pointing out to the other that the current crop of mostly Asian registrars (I think; not sure of the exact stage) in the hospital predated interviews being a widespread part of the admissions process, while the current crop of interns was more diverse. That example may well not be representative of medical grads in Australia on the whole though, and is just an anecdote...
> It's true that you couldn't be said to have developed an intrinsically racist selection process (via such subtle means). However, the interview does test for people skills/communication skills/sociability, whatever it would best be named. [...]

I think there is a children-of-migrants effect, wherein migrants prize education in their children, and through this a disproportionate (relative to the rest of the population) focus is put on achieving high results. These children may not develop the same social skills as children of a non-migrant background (often, I find, they have no problem with these skills), but that doesn't make the process racist. In fact, I think these students are probably just the more socially withdrawn of the migrant-children group; they have fewer distractions and so do well academically, but what lets them down is not their race but their social predisposition. You could argue interviews are thereby a form of social engineering, and in that case I would agree with you.
I was talking to a guy who runs UMAT workshops the other day, and he told me how he had coached a bunch of doctors who had no idea how to do even half of the questions that would typically be asked. Assuming all these doctors were reputable, I don't think a good UMAT score has much weight with regards to being a good doctor.

His justification for the UMAT scheme, however, was that it apparently helped decrease the levels of misdiagnosis, and that studies showed that those who could think critically in a short space of time (i.e. what UMAT "tests") made the fewest errors in diagnosing patients.

Thoughts?
> I was talking to a guy who runs UMAT workshops the other day, and he told me how he had coached a bunch of doctors who had no idea how to do even half of the questions that would typically be asked. [...]

I think the opinion of somebody who runs UMAT workshops on the usefulness of the UMAT is inherently biased.
While they may not be able to answer the questions given the confines of the UMAT, I doubt they'd have no idea how to approach such a scenario in real life. Medical school teaches you these things, just in case people are somewhat socially illiterate, and so that you have set guidelines to work by.