The Realities of Studying and Practicing Medicine

Havox

Sword and Martini Guy!
Emeritus Staff
Basically the main challenge of internship is adjusting to the work environment itself, and the expectations of the role, both of which are very different to being a medical student.
Basically this. By the end of internship, and for most of residency, I found the job pretty cruisey. The first few months of adjustment can be rough, though - expect to get things wrong, and to be told off when you do.
 

MyHeadee

Member
I'm not sure if this belongs here, but here goes.

I just watched a Bloomberg video about "China's race for AI supremacy", and it got me thinking. Everyone says that doctors, nurses and all workers who require a form of physical and emotional interaction with their "customers" will have high levels of job security in the future, regardless of significant technological progression. But do you really think that will be the case? Especially when considering the exponential growth of AI learning and the seemingly growing (at least anecdotally) preference among newer generations for tech over people?

I feel like there's a significant divide that grows as the years go on, in terms of the reliance on/need for technology in our everyday life, with younger kids simply taking tech for granted and so incorporating it much more into their daily functioning, as if it becomes second nature. For me personally, I think I "depend" on it a lot more than most people older than myself, but less than those younger.
When looking around at younger kids (and I don't mean to sound like I'm that old or wise lol, I'm only 17), there's a strong pattern of them picking technology over interactions with others. Most of the time when I walk into maccas, there are significantly more people ordering off the screens than at the counter with another person, and I find this particularly true for those who are younger (guilty 🙋🏽‍♂️). Many younger siblings of friends prefer to blindly scroll through social media rather than see their friends in person. And when I've raised this with some of my friends and peers, a significant portion of them would, given the choice, pick intelligent technology over a human treating them. I'm sure there are countless other examples. I just feel like in the not-too-distant future, when it comes to getting a diagnosis or treatment, people would prefer some form of AI to help them, meaning even the health industry becomes, I guess, "at risk" in terms of job security.

Hopefully my thoughts on this aren't naive. Just curious to hear other people's thoughts on the topic (although fingers crossed that they conflict with mine, because I sure hope that docs/nurses and those affiliated don't become redundant in the future).
 

Nonstandard983

Regular Member
I'm not sure if this belongs here, but here goes.

I just watched a Bloomberg video about "China's race for AI supremacy", and it got me thinking. Everyone says that doctors, nurses and all workers who require a form of physical and emotional interaction with their "customers" will have high levels of job security in the future, regardless of significant technological progression. But do you really think that will be the case? Especially when considering the exponential growth of AI learning and the seemingly growing (at least anecdotally) preference among newer generations for tech over people?

I feel like there's a significant divide that grows as the years go on, in terms of the reliance on/need for technology in our everyday life, with younger kids simply taking tech for granted and so incorporating it much more into their daily functioning, as if it becomes second nature. For me personally, I think I "depend" on it a lot more than most people older than myself, but less than those younger.
When looking around at younger kids (and I don't mean to sound like I'm that old or wise lol, I'm only 17), there's a strong pattern of them picking technology over interactions with others. Most of the time when I walk into maccas, there are significantly more people ordering off the screens than at the counter with another person, and I find this particularly true for those who are younger (guilty 🙋🏽‍♂️). Many younger siblings of friends prefer to blindly scroll through social media rather than see their friends in person. And when I've raised this with some of my friends and peers, a significant portion of them would, given the choice, pick intelligent technology over a human treating them. I'm sure there are countless other examples. I just feel like in the not-too-distant future, when it comes to getting a diagnosis or treatment, people would prefer some form of AI to help them, meaning even the health industry becomes, I guess, "at risk" in terms of job security.

Hopefully my thoughts on this aren't naive. Just curious to hear other people's thoughts on the topic (although fingers crossed that they conflict with mine, because I sure hope that docs/nurses and those affiliated don't become redundant in the future).
I think it's always a possibility. However, a lot of jobs will be automated well before medicine is, so if it ever gets to the point where medicine is completely automated, we're in some serious strife when it comes to jobs hahaha
 

breadman

Regular Member
That's actually a really interesting point and one which is definitely relevant in today's society. However, I just came across an article which explains why AI or any other advanced technology won't be able to 'replace' or make the roles of doctors and other healthcare professionals 'redundant' in the future.

I'll attach a link below, but basically, the 5 reasons it goes through are:

1) Empathy cannot be replaced or replicated by AI or machines

2) Doctors are able to show versatility and work 'against the conventional norm', if needed

3) In order for these complex technologies to even be used, doctors and other professionals are required

4) There will always be certain tasks and jobs which only humans will be able to do

5) With the advancements in AI and technology, it shouldn't be considered as 'AI vs human', but rather, 'AI collaborating with human'

 

MyHeadee

Member
That's actually a really interesting point and one which is definitely relevant in today's society. However, I just came across an article which explains why AI or any other advanced technology won't be able to 'replace' or make the roles of doctors and other healthcare professionals 'redundant' in the future.

I'll attach a link below, but basically, the 5 reasons it goes through are:

1) Empathy cannot be replaced or replicated by AI or machines

2) Doctors are able to show versatility and work 'against the conventional norm', if needed

3) In order for these complex technologies to even be used, doctors and other professionals are required

4) There will always be certain tasks and jobs which only humans will be able to do

5) With the advancements in AI and technology, it shouldn't be considered as 'AI vs human', but rather, 'AI collaborating with human'

I thought about some of those too, particularly the 1st point that you've written. However, I'm saying that (in my hypothesis at least) people in the future, particularly the younger generations, will prefer interacting with artificial intelligence over human interaction, even with the empathy, love and care that comes with being treated by a person. The values, and what a person actually wants from their treatment, might be changing because of this reliance on tech?

Also, this is all under the assumption that AI cannot do those things. The capabilities of a human, even one who is extremely trained and nuanced by the end of a degree and/or specialisation, are still prone to significant constraints. We are, after all, just human; mistakes happen and judgement is often clouded by pressure. Don't get me wrong, I'm sure AI would also be susceptible to these things, since it is constructed by us in the first place. But even with AI collaborating with humans, that would still make the job less secure, no? I'm sure that if AI got to a stage where it is capable of diagnosis/treatment, it would be able to replace multiple people at once, meaning the demand for healthcare workers plummets. Maybe what constitutes a doctor, in terms of their roles and expectations in the workplace, changes, but still, this would inevitably result in them being less in demand (maybe software technicians are the new doc in the future 😎).
 

breadman

Regular Member
I thought about some of those too, particularly the 1st point that you've written. However, I'm saying that (in my hypothesis at least) people in the future, particularly the younger generations, will prefer interacting with artificial intelligence over human interaction, even with the empathy, love and care that comes with being treated by a person. The values, and what a person actually wants from their treatment, might be changing because of this reliance on tech?

Also, this is all under the assumption that AI cannot do those things. The capabilities of a human, even one who is extremely trained and nuanced by the end of a degree and/or specialisation, are still prone to significant constraints. We are, after all, just human; mistakes happen and judgement is often clouded by pressure. Don't get me wrong, I'm sure AI would also be susceptible to these things, since it is constructed by us in the first place. But even with AI collaborating with humans, that would still make the job less secure, no? I'm sure that if AI got to a stage where it is capable of diagnosis/treatment, it would be able to replace multiple people at once, meaning the demand for healthcare workers plummets. Maybe what constitutes a doctor, in terms of their roles and expectations in the workplace, changes, but still, this would inevitably result in them being less in demand (maybe software technicians are the new doc in the future 😎).
That's a fair argument you make. I like to consider the role of a doctor as 2 halves (although in reality it comprises much more): how they communicate with patients, and their 'active' roles in a clinical setting outside of patient interaction, i.e. the more hands-on tasks.

I definitely agree that when it comes to more practical tasks which require dexterity and precision, further development of AI may make things much more time-effective and remove the need for multiple healthcare professionals to collaborate. Basic examples would include surgery (elective or emergency) or simpler procedures like examinations or blood tests.

However, in situations like having to break bad news to a patient (e.g. cancer, a terminal illness, or the tragic passing of a family member), the most essential trait needed by anyone, let alone a doctor, would 100% be empathy and compassion. Who knows, AI may be enhanced to a stage where it can convey empathy and almost completely resemble the characteristics of a doctor or a healthcare professional. I do think, though, that it will be quite a while before we get to this stage, due to the programming which will have to be integrated into its development. I also think the ability to show empathy and compassion derives from a human's past experience of doing so, as I'm sure almost everyone has, at some moment in their life, witnessed someone go through something very sad and emotional.
 

chinaski

Regular Member
I thought about some of those too, particularly the 1st point that you've written. However, I'm saying that (in my hypothesis at least) people in the future, particularly the younger generations, will prefer interacting with artificial intelligence over human interaction, even with the empathy, love and care that comes with being treated by a person.
I think that's probably the weak bit of your argument. Younger generations are yet to interact with artificial intelligence in the way you are imagining, so you have no way of knowing what they'd prefer given the choice. The current preference of young people today to spend inordinate time on the internet/gaming/whatever still amounts to a preference for human interaction. The individuals with whom you're gaming, communicating online and the like are people, not artificial intelligence.

Medicine is as much a humanities discipline as it is a scientific entity. I don't see AI surpassing the ability of humans in that regard, nor do I think people will get to the point of preferring contact and communication with AI over that of fellow human beings. If we do, I sincerely hope I'm long dead by then - what an awful prospect to think of.
 

MyHeadee

Member
I think that's probably the weak bit of your argument. Younger generations are yet to interact with artificial intelligence in the way you are imagining, so you have no way of knowing what they'd prefer given the choice. The current preference of young people today to spend inordinate time on the internet/gaming/whatever still amounts to a preference for human interaction. The individuals with whom you're gaming, communicating online and the like are people, not artificial intelligence.

Medicine is as much a humanities discipline as it is a scientific entity. I don't see AI surpassing the ability of humans in that regard, nor do I think people will get to the point of preferring contact and communication with AI over that of fellow human beings. If we do, I sincerely hope I'm long dead by then - what an awful prospect to think of.
Have you seen any glimpses, over your working career, of redundancies/inefficiencies in healthcare that could be cut out by tech in the future? It would be cool to hear how advancements might have changed things :)
 

chinaski

Regular Member
Have you seen any glimpses, over your working career, of redundancies/inefficiencies in healthcare that could be cut out by tech in the future?
No, not really. Put it this way: hospital systems not uncommonly use obsolete IT platforms like Windows XP and still rely on fax machines for outside communications. We're a millennium away from routinely having investment in infrastructure which is contemporaneously efficient and up to date, let alone from adopting really cutting-edge (and prohibitively expensive) AI technologies.
 

LMG!

MBBS IV
Administrator
Have you seen any glimpses, over your working career, of redundancies/inefficiencies in healthcare that could be cut out by tech in the future? It would be cool to hear how advancements might have changed things :)
Where I am (a capital city hospital), we are still doing paper notes and hand-written drug charts for inpatients… (just as an example of how ‘advanced’ we are becoming)
 

chinaski

Regular Member
Yeah, but things like e-records and prescribing systems are not really "AI" in the context of this discussion.
Regardless, even when those systems do exist, they invariably create their own inefficiencies so you're just swapping one old inefficient system for a newer, slightly less inefficient system.
 

threefivetwo

less gooo
Hopefully my thoughts on this aren't naive. Just curious to hear other people's thoughts on the topic, (although fingers crossed that these thoughts are conflicting to mine, because I sure hope that docs/nurses and those affiliated don't become redundant in the future).
Some interesting things that you may want to read:
I reckon that AI is a buzzword that's been thrown around too much recently, when in fact it isn't quite there yet. As an example, I was experimenting with OpenAI's GPT-3 model recently. It's probably as cutting edge as generally available AI gets right now - yet it still requires you to direct its input, train it on data, and adjust parameters. Even so, it's prone to repeating output and running off on tangents. AI is best applied to things with specific use cases where a human is in the loop to correct errors. There will always be a need for people to code the algorithms and train AIs on input, and when it comes to something like Medicine, that inherently involves specialists.
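
For anyone curious what "adjusting parameters" looks like in practice, here's a rough sketch of the kind of call I was playing with - this assumes the old (pre-1.0) openai Python package and its legacy Completion endpoint, and the prompt and parameter values are purely illustrative:

```python
import openai  # legacy (pre-1.0) SDK exposing the GPT-3-era Completion endpoint

openai.api_key = "sk-..."  # your own API key

response = openai.Completion.create(
    engine="text-davinci-002",   # a GPT-3-era model name
    prompt="List three common causes of microcytic anaemia:",
    max_tokens=100,
    temperature=0.7,             # higher values give more varied output
    frequency_penalty=0.8,       # pushes back against the repetition I mentioned
    presence_penalty=0.3,        # discourages rehashing the same topics
)

print(response["choices"][0]["text"])
```

Even with all that tuning, a human still has to pick the prompt, sanity-check the output and decide whether it's safe to act on - which is exactly the "human in the loop" point.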

I do think that AI will take a significant role in Medicine and healthcare at some point, but it'll be to improve the accuracy of a diagnosis, speed up workflows for identifying things and such: think of it as a partnership. Doctors will always be there to provide input, verify, make on-the-fly decisions based on intuition and play the empathy role. In the US there have been cases where robots with tablets mounted on them were driven into a patient's room to deliver a cancer diagnosis - as you can imagine it didn't go down well, due to the sense of detachment it created, and I don't imagine it'd be any better if it were an 'AI'.

As a Zoomer myself I'm not sure the technology vs. F2F interaction divide has played out as much as you think. I actually think there's a greater awareness of the importance of real-world interactions, self-care and mental health that has spread through things like Instagram and TikTok, and anecdotally many of my friends would prefer meeting up in person rather than hopping on a Zoom call. I think you might be mistaking the use of new social media for avoiding interaction, when it can be a medium of self-expression (albeit one with many problems).
 

MyHeadee

Member
No, not really. Put it this way: hospital systems not uncommonly use obsolete IT platforms like Windows XP and still rely on fax machines for outside communications. We're a millennium away from routinely having investment in infrastructure which is contemporaneously efficient and up to date, let alone from adopting really cutting-edge (and prohibitively expensive) AI technologies.
Well then, I guess technological advancement threatening doctors' job security won't be an issue for the next century. Yay!
Some interesting things that you may want to read:
I reckon that AI is a buzzword that's been thrown around too much recently, when in fact it isn't quite there yet. As an example, I was experimenting with OpenAI's GPT-3 model recently. It's probably as cutting edge as generally available AI gets right now - yet it still requires you to direct its input, train it on data, and adjust parameters. Even so, it's prone to repeating output and running off on tangents. AI is best applied to things with specific use cases where a human is in the loop to correct errors. There will always be a need for people to code the algorithms and train AIs on input, and when it comes to something like Medicine, that inherently involves specialists.

I do think that AI will take a significant role in Medicine and healthcare at some point, but it'll be to improve the accuracy of a diagnosis, speed up workflows for identifying things and such: think of it as a partnership. Doctors will always be there to provide input, verify, make on-the-fly decisions based on intuition and play the empathy role. In the US there have been cases where robots with tablets mounted on them were driven into a patient's room to deliver a cancer diagnosis - as you can imagine it didn't go down well, due to the sense of detachment it created, and I don't imagine it'd be any better if it were an 'AI'.

As a Zoomer myself I'm not sure the technology vs. F2F interaction divide has played out as much as you think. I actually think there's a greater awareness of the importance of real-world interactions, self-care and mental health that has spread through things like Instagram and TikTok, and anecdotally many of my friends would prefer meeting up in person rather than hopping on a Zoom call. I think you might be mistaking the use of new social media for avoiding interaction, when it can be a medium of self-expression (albeit one with many problems).
I am under the assumption that technological growth continues exponentially; we now have phones with multiple times the power of gaming consoles released less than a decade ago. Although, I am very amateur in my understanding of how AI ties in with these advancements. Granted, trying to model a 'life-like' being which is capable of diagnosis and treatment doesn't sound very easy, or achievable in a short time frame. If it were to materialise and become a threat to job security, I'm sure everyone aspiring to be a doc, or reading this, would be considering retirement, whatever career we end up in (or maybe I'm completely wrong and AI progress is just like hardware progress, in which case I guess it could be a lot sooner than we think). But with that said, as chinaski mentioned, if hospitals are still currently using fax machines... well then... it might be centuries before hospitals get on board with it lol.
 

Benjamin

ICU Reg (JCU)
Emeritus Staff
Have you seen any glimpses, over your working career, of redundancies/inefficiencies in healthcare that could be cut out by tech in the future? It would be cool to hear how advancements might have changed things :)

Off-tangent from the rest of the thread but here goes anyway.

There are a couple of examples within the ICU, but the reality is that most of them are not in regular use. A reasonable overview of AI / machine learning in the ICU is this article: Artificial Intelligence in the Intensive Care Unit

Out of the technologies listed there, the only ones I have seen in clinical practice / used myself are closed-loop ventilation systems. In short, there are hundreds of variables that change on a moment-to-moment basis when it comes to ventilation - far too many for a human being to actually parse and use to make changes. The result is that ventilation as performed by a clinician is fairly "dumbed down", and usually the ventilation settings remain fairly static / would be changed every few hours or when there is an obvious change in the patient's ventilation requirements. On top of that, typically the changes that occur are fairly "gross": pressure being delivered, total tidal volume, rate of respiration; it is uncommon to frequently adjust things like inspiratory time / flow rates / expiratory cut-off times etc.

This is further complicated by the relatively newer trend (last 10-15 years) of keeping patients fairly awake in the ICU despite undergoing mechanical ventilation, which usually requires a patient-controlled ventilation mode / non-mandatory or mixed ventilation mode. In this setting - and in the setting of weaning ventilatory support - dyssynchrony between the ventilator and the patient can occur.

In terms of machine learning, there are numerous systems that have been developed to identify dyssynchrony, and a number of them have been demonstrated to perform substantially better than clinicians at the bedside. Similarly, there are now "closed-loop" ventilation modes on most modern ventilators where a range of patient values are input and the ventilator makes the changes to achieve those goals. Again, the ventilator has been shown to outperform clinicians in achieving certain goals: the closed-loop system is better at achieving protective lung ventilation in ARDS, has a shorter duration of ventilation when weaning difficult-to-wean patients, and so on.
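
To give a rough sense of what "closed-loop" means here: measure, compare against a target, nudge a setting, repeat. The snippet below is a deliberately toy proportional controller I've made up for illustration (hypothetical numbers, a single variable), not any vendor's actual algorithm, which juggles far more variables plus alarms and safety limits:

```python
# Toy illustration of closed-loop ventilation: nudge pressure support towards a
# lung-protective tidal-volume target each breath. Purely illustrative numbers.

TARGET_VT_ML_PER_KG = 6.0      # common lung-protective tidal volume target
GAIN = 0.5                     # how aggressively pressure support is adjusted
PS_MIN, PS_MAX = 5.0, 25.0     # plausible pressure-support bounds (cmH2O)

def adjust_pressure_support(current_ps_cmh2o, measured_vt_ml, ideal_body_weight_kg):
    """Return the next pressure-support setting based on the last measured breath."""
    vt_per_kg = measured_vt_ml / ideal_body_weight_kg
    error = TARGET_VT_ML_PER_KG - vt_per_kg        # positive = under-ventilating
    new_ps = current_ps_cmh2o + GAIN * error       # simple proportional correction
    return max(PS_MIN, min(PS_MAX, new_ps))        # clamp to a safe range

# Example: 70 kg ideal body weight, currently on 12 cmH2O, last breath was 350 mL
print(adjust_pressure_support(12.0, 350.0, 70.0))  # -> 12.5, i.e. nudged upwards
```

The real systems run this sort of adjustment continuously across many more variables (rate, inspiratory time, FiO2 and so on), which is part of why they can out-tune a clinician making gross changes every few hours.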

A number of the trials in the past have been considered "negative" due to not demonstrating mortality benefit but the reality is that in ICU it is very unlikely that a single intervention will ever demonstrate significant mortality benefit. Similarly, most of these older trials were done based on what were potentially the wrong goals - i.e. looking at tidal volume instead of mechanical power delivery in ARDS etc.

Overall, it's unlikely that closed-loop ventilatory modes will become the 'norm' in the coming years, at least in part due to them only being available on newer ventilators. I use these modes when they are available to me for patients weaning off the ventilator and for those with a high amount of dyssynchrony, and I try them out in patients that require a lot of sedation +/- paralysis just to ventilate, but it has not changed the amount of work I need to do, as there is still a need for clinician oversight, just like with any ventilation mode.

Another good paper reviewing closed-loop systems is here: The dawn of physiological closed-loop ventilation—a review
 

Caffeine

Regular Member
Is a general practitioner a 9-5 job?
Whilst there is generally freedom to decide, it is also quite dependent on location and practice. For example, some GPs work weekends in addition to weekdays and can have extended hours, such as until 7:30pm. This, however, depends on the practice and also personal circumstance. From what I have been told, it also depends a lot on the financial situation and patient flow, with GPs in metro locations having to work longer at times to make up for income loss. It does, however, in general have much better stability in hours than most other specialties.
 

JackRussel

Lurker
Off-tangent from the rest of the thread but here goes anyway.

There are a couple of examples within the ICU, but the reality is that most of them are not in regular use. A reasonable overview of AI / machine learning in the ICU is this article: Artificial Intelligence in the Intensive Care Unit

Out of the technologies listed there, the only ones I have seen in clinical practice / used myself are closed-loop ventilation systems. In short, there are hundreds of variables that change on a moment-to-moment basis when it comes to ventilation - far too many for a human being to actually parse and use to make changes. The result is that ventilation as performed by a clinician is fairly "dumbed down", and usually the ventilation settings remain fairly static / would be changed every few hours or when there is an obvious change in the patient's ventilation requirements. On top of that, typically the changes that occur are fairly "gross": pressure being delivered, total tidal volume, rate of respiration; it is uncommon to frequently adjust things like inspiratory time / flow rates / expiratory cut-off times etc.

This is further complicated by the relatively newer trend (last 10-15 years) of keeping patients fairly awake in the ICU despite undergoing mechanical ventilation, which usually requires a patient-controlled ventilation mode / non-mandatory or mixed ventilation mode. In this setting - and in the setting of weaning ventilatory support - dyssynchrony between the ventilator and the patient can occur.

In terms of machine learning, there are numerous systems that have been developed to identify dyssynchrony, and a number of them have been demonstrated to perform substantially better than clinicians at the bedside. Similarly, there are now "closed-loop" ventilation modes on most modern ventilators where a range of patient values are input and the ventilator makes the changes to achieve those goals. Again, the ventilator has been shown to outperform clinicians in achieving certain goals: the closed-loop system is better at achieving protective lung ventilation in ARDS, has a shorter duration of ventilation when weaning difficult-to-wean patients, and so on.

A number of the trials in the past have been considered "negative" due to not demonstrating mortality benefit but the reality is that in ICU it is very unlikely that a single intervention will ever demonstrate significant mortality benefit. Similarly, most of these older trials were done based on what were potentially the wrong goals - i.e. looking at tidal volume instead of mechanical power delivery in ARDS etc.

Overall, it's unlikely that closed-loop ventilatory modes will become the 'norm' in the coming years, at least in part due to them only being available on newer ventilators. I use these modes when they are available to me for patients weaning off the ventilator and for those with a high amount of dyssynchrony, and I try them out in patients that require a lot of sedation +/- paralysis just to ventilate, but it has not changed the amount of work I need to do, as there is still a need for clinician oversight, just like with any ventilation mode.

Another good paper reviewing closed-loop systems is here: The dawn of physiological closed-loop ventilation—a review
It may have no effect on mortality, but it makes patients and doctors much more comfortable :)
 