
MSO UCAT Statistics 2020

ucatboy

final year eek
Valued Member
(This post is best viewed on a computer screen)

TL;DR: full excel file of all the data is attached, mess around with it at your own leisurely pace 🥰 Unfortunately, MSO doesn't support uploading .xlsx (Excel) files, so the only option I had that preserved the layout and formatting was to compress it into a .zip file. Use an unzipping tool such as WinRAR to unzip it.

Background info
I thought I'd do a running tabulation of all of the scores reported on MSO as a way of procrastinating from my studies, so what you're seeing here is the result of 1.5 months of pure procrastination and falling behind in class 🤪 Here is just some background:
  • 155 data points (each corresponding to a unique MSO user) in total, taken between July 1 (earliest score report) and August 20, 2020 (latest score report as of 20/08/2020). This is a 68.5% increase on last year's 92 data points.
  • All data was sourced from publicly accessible threads on the forum, namely Post-UCAT Discussion 2020 and Predicting UCAT Scores from Mock Exam Results. This was done to avoid breaches of privacy or confidentiality from reporting scores that were addressed to me or other users via PM or profile post.
  • For the above reason, I've decided to attach usernames to each of the scores in my spreadsheet for convenience, as it is publicly accessible information, and doing so saves time tracing the score back to the user.
    • If your username is present in the excel spreadsheet and for whatever reason you don't feel comfortable having either your name or your scores on there, just chuck me a DM and I'll remove it and update the file.
  • Only personal scores provided by people with a registered MSO account were counted, as the whole point of this exercise is not to record every single UCAT score in existence (that's Pearson's job) but to make some fun comparisons between the scores voluntarily reported by MSO users (i.e. someone who goes to the lengths of creating an MSO account to post about their scores) and the average test taker (this data should hopefully be made available by Pearson by the end of the month).
    • This means that I unfortunately didn't include the scores any of you reported on behalf of your friends, but I really appreciate the gesture to help nonetheless (big thanks to a joke lol, rjexik97, cxl777 and nb, hopefully I didn't miss anybody!).

Key findings
  • 2979 mean, up 120 from last year (2859, n = 92).
  • 187 standard deviation, down 46 from last year (233).
What does this mean?
While the mean may have increased by a noticeable amount, the decrease in standard deviation "narrows" the bell curve, making scores far above or far below the mean less common.
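To make that concrete, here's a minimal sketch (my own illustration, not part of the MSO data) that assumes the reported scores are roughly normal and compares how often a score lands 300+ points above the mean under each year's SD:

```python
# Minimal sketch: assuming roughly normal scores, a smaller SD makes scores
# far from the mean rarer. The 300-point gap is an arbitrary example.
from statistics import NormalDist

def frac_beyond(sd, distance):
    """P(score is more than `distance` points above the mean) for a given SD."""
    return 1 - NormalDist(mu=0, sigma=sd).cdf(distance)

for year, sd in (("2019", 233), ("2020", 187)):
    print(f"{year} (SD {sd}): P(score > mean + 300) ≈ {frac_beyond(sd, 300):.1%}")
# 2019 (SD 233): ≈ 9.9%   2020 (SD 187): ≈ 5.4%
```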

HELP! DOES THIS MEAN THAT THE AVERAGE WILL BE UP 120 THIS YEAR?
NO.
With a sample of 155 out of a population of roughly 15000 test takers (literally 1%), we can't draw ANY conclusions of significance from the limited data that we have. There are many possible explanations for the increase in the MSO average from 2019 to 2020, one of which is reporting bias: users were less inclined to report scores below 2800 because the 2019/2020 admissions cycle set 2800+ as the "standard" for interview/offer eligibility. In 2019, 21/92 = 22.8% of all reported scores were lower than 2730, while in 2020 that figure has fallen to 11/155 = 7.1%, a reduction of just over two-thirds. Relax.
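If you want to check that arithmetic yourself, here's a quick sketch (illustration only; these are the self-selected MSO counts quoted above, not a random sample of all ~15000 test takers):

```python
# Reproducing the reporting-bias arithmetic: share of reported scores below 2730.
below_2730 = {"2019": (21, 92), "2020": (11, 155)}  # (scores < 2730, total reported)

props = {}
for year, (low, total) in below_2730.items():
    props[year] = low / total
    print(f"{year}: {low}/{total} = {props[year]:.1%} of reported scores below 2730")

reduction = 1 - props["2020"] / props["2019"]
print(f"Relative reduction: {reduction:.0%}")  # ~69%, i.e. just over two-thirds
```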

I've inserted a histogram containing all of the data below; it's not a perfect bell curve (due to the over-reporting of scores in the 2800 range, the "standard"), but it'll do:
[Histogram: distribution of all reported overall scores]

Also provided is a bar chart representing score intervals of 100:
[Bar chart: reported scores grouped into 100-point intervals]

As you can see, the over-reporting of scores in the 2800s and the under-reporting of 2900+ scores are made even more obvious here. The distribution looks pretty nice otherwise though.
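If you've unzipped the attached spreadsheet, here's a rough sketch of how both charts can be reproduced. The file and column names ("MSO UCAT Statistics 2020 by ucatboy.xlsx", "Overall") are assumptions on my part, so adjust them to whatever the actual file uses:

```python
# Sketch: rebuild the histogram and the 100-point-interval bar chart from the
# attached spreadsheet. File/column names are assumed -- check the real file.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("MSO UCAT Statistics 2020 by ucatboy.xlsx")
scores = df["Overall"]  # hypothetical column holding each user's total score

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

# Left: histogram of all overall scores
ax1.hist(scores, bins=20)
ax1.set(title="MSO UCAT 2020 overall scores", xlabel="Score", ylabel="Count")

# Right: counts per 100-point interval (2400-2500, 2500-2600, ...)
intervals = pd.cut(scores, bins=range(2400, 3600, 100)).value_counts().sort_index()
intervals.plot.bar(ax=ax2, rot=45)
ax2.set(title="Scores by 100-point interval", ylabel="Count")

plt.tight_layout()
plt.show()
```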

Section Score Analysis
Here comes the juicy part. Since it's my thread, I'm allowed to say whatever I want and there's nothing you can do to stop me :p (except maybe the mods lmao rip).

Here are the average scores for all four cognitive subtests and SJ:

VR: 655
DM: 750
QR: 809
AR: 768

SJ: 639


The fact of the matter is, no matter how you put it, there are SUBSTANTIAL disparities in achievement between VR and the rest of the cognitive subtests. Y'all were flaming me back in July for advocating for a more difficult QR section (how many times do I have to say that a harder test for you will be a harder test for EVERYONE ELSE, so your relative position won't be affected by much), but just consider the following:

QR (max = 900, n = 12)
  • We had 12, YES, 12x 900 QR scores reported on MSO, in addition to 26x 880 QR scores (the next highest score, as it appears that 890 cannot be obtained through scaled score rounding). Oh yeah, did I forget to mention we also had 24x 870 QR scores?? Literally 40% of ALL MSO users (62 of 155) reported a near-perfect to perfect QR score (30-32/32).
    • So now I pose the question: how do we differentiate between these 62 scorers (and most likely the hundreds more 870-900 QR scorers nationwide)? After all, the UCAT is meant to be an aptitude test; there should always be some degree of differentiation between the best and the next-best. The answer is, we can't. The only way to do this would be to either increase the difficulty or the quantity of questions, so that we achieve a greater splitting of the highest achievers (i.e. 900 scorers who finished 10 minutes early this year will be separated from 900 scorers who finished 5 minutes early, and so on). I remain firm in my stance - see the toy sketch just below these bullets.
  • 56.1% (87) of all MSO QR scores were 800 or above; this is in stark contrast to 3.9% (6) of all MSO VR scores being 800 or above.
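To make the "harder for everyone" point a bit more concrete, here's a toy sketch (completely made-up numbers, NOT MSO data): on an easy paper the top candidates all hit the raw-mark ceiling and become indistinguishable, while a harder paper spreads them out without changing anyone's relative position:

```python
# Toy ceiling-effect illustration with hypothetical candidates A-E.
# Higher `difficulty` lowers everyone's raw mark (out of 32) but preserves order.
ability = {"A": 0.99, "B": 0.95, "C": 0.93, "D": 0.80, "E": 0.60}  # made-up abilities

def raw_marks(difficulty):
    """Raw marks out of 32, capped at the ceiling."""
    return {name: min(32, round(32 * a / difficulty)) for name, a in ability.items()}

print(raw_marks(difficulty=0.93))  # easy paper: A, B and C are all capped at 32/32
print(raw_marks(difficulty=1.10))  # harder paper: the same candidates now separate,
                                   # and nobody's rank changes
```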

AR (max = 900, n = 6)
  • Same deal as QR, 6x 900 scores, 6x 890 scores, 6x 880 scores, 15x 870 scores, 6x 860 scores and 2x 850 scores.
  • 39.4% of all MSO AR scores were above 800.

DM (max = 900, n = 1)
  • 28.4% of all DM scores were above 800, but we did have our first 900 score report (and possibly the first in all of Australia as last year's highest was 890).

VR (max = 890, n = 1)
  • As I mentioned earlier, only 6 VR scores broke the 800 mark. While the 62nd ranked QR score was 870, the 62nd ranked VR score was 670, a whopping 200 lower. VR is, as always, much harder relative to the other sections, but I would argue that this is not necessarily a bad thing, as it allows the top-performing VR candidates (700+) to be differentiated from their peers, giving them a well-deserved boost to their overall score.

BONUS GRAPH
I also plotted the individual section scores (y-axis) against the overall cognitive score (x-axis), and added lines of best fit.
[Scatter plot: section scores vs overall cognitive score, with lines of best fit]

Interestingly, the four lines don't intersect each other at all within the range of the data, and they go in ascending order VR < DM < AR < QR. No surprises here, but look at the size of the vertical gap between the QR and VR lines of best fit - about a 150 gap at all scores. Using the equations, you can also predict the individual section scores when the overall score is known :p
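Here's a sketch of how those fits could be reproduced from the attached spreadsheet and used to predict section scores from an overall score. The column names ("Overall", "VR", "DM", "QR", "AR") are assumptions based on how I've described the data, so check the actual file:

```python
# Sketch: fit a line of best fit for each section against the overall cognitive
# score, then predict section scores for a given overall. Column names assumed.
import numpy as np
import pandas as pd

df = pd.read_excel("MSO UCAT Statistics 2020 by ucatboy.xlsx")
overall = df["Overall"]
sections = ["VR", "DM", "QR", "AR"]

fits = {s: np.polyfit(overall, df[s], deg=1) for s in sections}  # (slope, intercept)

def predict_sections(overall_score):
    """Rough section-score estimates for a given overall cognitive score."""
    return {s: round(float(np.polyval(fits[s], overall_score))) for s in sections}

print(predict_sections(3000))  # e.g. estimates for someone with a 3000 overall
```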

Anyways, that's the end of my lengthy post, congrats to Nardwuar for getting the highest MSO score this year (3410) and frays for coming in at a close second with 3380. Wishing everyone the best for their Year 12/uni studies, as well as interviews and place offers, y'all gonna make it 🥰🥰
 

Attachments

  • MSO UCAT Statistics 2020 by ucatboy.zip
    26.8 KB

Unluckydude

Regular Member
Thanks for the data. Do you have last year's section scores? If the average was higher this year due to changes in one or two sections, then I doubt the difference was a result of reporting bias.
 

ucatboy

final year eek
Valued Member
Thanks for the data. Do you have last year's section scores? If the average was higher this year due to changes in one or two sections, then I doubt the difference was a result of reporting bias.
Unfortunately I do not :( The entire Post-UCAT Discussion 2019 thread was taken down earlier this year for repeated references to prep companies (it was literally one of the conversation starters lmao). For what it's worth, I don't recall any particular section average being much lower or higher last year (lots of 800+ QR scores, single-digit 800+ VR scores), so the 120 overall increase in the average could just be attributed to a small increase in each section (roughly 30 points per cognitive subtest).
 

dande

Regular Member
Fact of the matter is, no matter how you put it, there are SUBSTANTIAL disparities in achievement between VR and the rest of the cognitive subtests. Y'all were flaming back in July for advocating for a more difficult QR section (how many times do I say that a harder test for you will be a harder test for EVERYONE ELSE, so your relative position won't be affected by much), but just consider the following:

Oh my, I remember that. This mans legit predicted the future.
 

