The Neurobiology and Superficial Traits of Psychopathy


The word ‘psychopath’ is often used as a throwaway term for someone violent and cruel, but this is a reductive view of the disorder. Psychopathy also tends to be confused with sociopathy – in an interview, the psychologist Ramani Durvasula, PhD, summarised “the key difference: a psychopath is born, and a sociopath is made” (2). While psychopathy does have environmental factors, it also has a strong genetic component – and, interestingly, individuals can be genetically predisposed to psychopathy yet have the disorder remain dormant unless it is exacerbated by external factors. Psychopathy is also more common than typically believed: it is estimated to affect 1% of the global population and is observed to be more prevalent in men than in women. Notably, psychopathy affects between 15 and 25% of the prison population worldwide, implying an association between the disorder and criminality (3).

Psychopathic Traits

Psychopathy is characterised by various personality traits and behaviours – especially a lack of empathy, impulsivity, pathological lying, manipulative behaviour, and high intelligence (1). A reduced empathetic response is considered the most common psychopathic trait and can be observed through a willingness to engage in anti-social behaviour, a disregard for the impact of one’s actions on other people, and a decreased physiological response to emotional stimuli (3). The latter is believed to be due to hyporeactivity of the autonomic nervous system in psychopathic individuals compared with non-psychopathic individuals. However, psychopathy is not in itself an ‘official’ psychiatric diagnosis; the term Antisocial Personality Disorder is used instead.

(8) Further, psychopathy and sociopathy are frequently used interchangeably, especially in the media and film – and while these conditions share similarities, there are several differences. The main connection between psychopathy and sociopathy is engagement in anti-social behaviour – for instance, physical violence, harassment, vandalism, and other more serious offences (9). Other key similarities include aggression, deceitfulness, irresponsibility, impulsivity, and a lack of remorse and guilt. As mentioned earlier, the consensus is that psychopaths are born, while sociopaths become so as a result of environmental factors. Difficulty forming emotional attachments is a common psychopathic trait, as is appearing charming and trustworthy to others. Psychopaths also tend to be more strategic when engaging in anti-social behaviour, so as to minimise the risk to themselves – yet they feel little or no guilt about the repercussions of their actions for others. Sociopaths are generally more erratic than psychopaths and act more impulsively. Like psychopaths, sociopaths can also struggle to form emotional attachments – but this is not the case for all.

The Neurobiology of Psychopathy

Psychopathy is also believed to be strongly associated with the amygdala, and it has been hypothesised that amygdalar changes could underlie the deficient processing of fear-related responses in psychopathic individuals (4). In a study entitled ‘Localization of Deformations Within the Amygdala in Individuals With Psychopathy’ (5), research was carried out to detect anatomical abnormalities of the amygdala in psychopathic individuals. The study examined 27 psychopathic individuals, whose amygdalar volumes were determined using volumetric analysis and surface-based mesh modelling, so that any regional surface abnormalities could be detected. The results showed that the individuals with psychopathy had significantly lower bilateral amygdalar volumes than the control group – 17.1% lower on the left and 18.9% lower on the right. As the amygdala is necessary for the feeling of fear and for fear conditioning, abnormalities in this anatomical structure may explain the lack of fear conditioning and of response to dangerous situations in psychopathic individuals. Moreover, the amygdala is also involved in social interaction and moral reasoning – so structural abnormalities here may explain why psychopaths can fail to recognise emotions in others and show poor moral judgement.

In addition, psychopathic individuals can present with dysfunction of the ventromedial prefrontal cortex (vmPFC), which can impair emotion and emotion regulation (6). The neuroscientist Antonio Damasio carried out research investigating the connection between damage to the vmPFC and various deficits of emotion and decision-making. Associations were found between vmPFC dysfunction and diminished shame, guilt, and empathy, as well as irritability and irresponsibility. The findings of this study also showed that both psychopaths and patients who had suffered vmPFC damage had reduced autonomic arousal to emotional stimuli – showing that there are neurological explanations for the lack of fear and of acknowledgement of consequence in psychopathic individuals.

Another region of the prefrontal cortex potentially involved in psychopathy is the anterior cingulate cortex (ACC). Activity in the ACC has been associated with functions such as pain, empathy, negative affect, and performance. Patients with lesions of the ACC have often been shown to exhibit greater irritability than those without, in addition to social disinhibition (6).

The case of the railroad construction worker Phineas Gage in the 19th century (7) further supports the link between prefrontal-cortex dysfunction or injury and psychopathic traits. In a horrific accident on the railroad, an iron rod was driven through Gage’s head, damaging his prefrontal cortex. Gage survived, but the most notable effect of the accident was a substantial change in his personality: he had been kind and dependable, but became impulsive, rude, and disrespectful. These changes helped to identify the function of the prefrontal cortex, and Gage’s new personality traits closely resemble those of individuals with psychopathy. After a number of similar cases during the 20th century in which damage to the prefrontal cortex resulted in personality changes, the term ‘pseudopsychopathy’ was coined to describe this association.

Diagnosis of a Psychopath

An important diagnostic tool for psychopathy is the Hare psychopathy checklist, created by the psychologist Dr Robert Hare (11). This checklist has since been revised and is now formally known as the Hare PCL-R (Hare Psychopathy Checklist – Revised). The PCL-R consists of two parts: an interview and a review of the patient’s history. It evaluates the extent to which an individual fulfils twenty psychopathic traits, such as lacking remorse or guilt and having a grandiose sense of self-worth; the figure below categorises all of the traits examined. For each of the twenty traits the individual demonstrates, they are given a score between 0 and 2, depending on how strongly it applies to them. On completion of the PCL-R, the individual therefore has a score between 0 and 40 – 0 meaning they have no psychopathic tendencies or traits, and 40 meaning they are the paragon of psychopathy. A score greater than 30 indicates that the individual is a psychopath – and so qualifies for a diagnosis, though the diagnosis would be of Antisocial Personality Disorder. (10, 12)
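As a toy illustration of the arithmetic just described, the PCL-R’s scoring scheme can be sketched in a few lines of Python (the function names are invented for this sketch; real scoring requires a trained clinician, and this only models the numbers):

```python
# Hypothetical sketch of the PCL-R scoring arithmetic described above.

def pclr_total(item_scores):
    """Sum twenty item ratings, each 0 (absent), 1 (partial), or 2 (present)."""
    if len(item_scores) != 20:
        raise ValueError("the PCL-R rates exactly twenty traits")
    if any(score not in (0, 1, 2) for score in item_scores):
        raise ValueError("each trait is rated 0, 1, or 2")
    return sum(item_scores)  # the total therefore falls between 0 and 40

def exceeds_cutoff(total, cutoff=30):
    """A total greater than the cutoff (30, as described above) qualifies."""
    return total > cutoff
```

For example, an individual rated 2 on every trait reaches the maximum score of 40, while a profile of mostly 0s and 1s falls well below the cutoff.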

Case Studies of Psychopathic Traits and Individuals

Hervey Cleckley, M.D., was an American psychiatrist and arguably the most influential historical figure in the field of psychopathy (13). Cleckley researched psychopathy throughout his academic career and compiled 15 case studies of prototypic psychopaths in his book ‘The Mask of Sanity’ (14). One of these case studies is that of Tom, described by Cleckley as an intelligent and healthy young man whose family hoped he would be diagnosed with some psychiatric disorder so that he would not serve jail time for his stealing. Cleckley describes how Tom would often skip school – suggestive of the psychopathic traits of a need for stimulation and irresponsibility – and would frequently steal from his family, embodying the trait of criminal versatility. Furthermore, as a child Tom often engaged in delinquent behaviour, including shoplifting, setting fire to a privy in his local area, and throwing rocks at squirrels in the park – behavioural problems from an early age that are also listed as psychopathic attributes in the Hare Psychopathy Checklist (Revised). As a teenager, Tom’s behaviour worsened, escalating from petty theft to stealing cars and breaking into homes. Additionally, Tom was described as lying pathologically, and doing so with sufficient charm to be convincing. For the next several years, up to the age of 21 when Cleckley met him, Tom spent frequent spells in prison for stealing, initiating fights, and various other instances of anti-social behaviour. The other fourteen case studies in the book detail people exhibiting very similar behaviour and characteristics. In his analysis of these cases, Cleckley wrote: “some of these patients I believe are definitely psychopaths but to a milder degree”.
However, Cleckley also wrote that a person who engaged in anti-social behaviour was not definitively a psychopath, and through his research and analysis he compiled ideas about psychopaths as specific archetypes – such as ‘the psychopath as a gentleman’ and ‘the psychopath as a scientist’. Overall, ‘The Mask of Sanity’ formed the basis of what is now known as ‘psychopathy’ and led to considerable development in the study of the condition.

Another example of case studies of psychopathy can be found in the article ‘Incurable Psychopaths?’ by Marianne Kristiansson, MD (15). The first case detailed is that of a 38-year-old man who exhibited hyperactivity and restlessness, had engaged in criminal behaviour for many years, and had 13 convictions, including for assault. He had a history of drug and alcohol abuse and was admitted for a forensic psychiatric evaluation after being suspected of assault again. He was examined using the Hare Psychopathy Checklist – Revised (PCL-R) and scored 36. He was consequently diagnosed with Antisocial Personality Disorder and was thereafter treated with lithium. This treatment appeared to stabilise him and led to a drop in his criminal behaviour and restlessness. Modern treatments for Antisocial Personality Disorder, however, often do not involve medication and instead rely on therapy, such as cognitive behavioural therapy.


In the modern world, the term ‘psychopath’ tends to be used more in the legal sense – often in forensic psychiatric evaluations of criminals – than in the medical sense, which is why psychopathy carries a stigma of being equated with violent criminality. While the two are not mutually exclusive, psychopaths can lead relatively normal lives without anyone, including themselves, being aware that they are a psychopath. Arguably this is difficult to achieve, though, as psychopaths often lack the ability to form long-term emotional connections – and since such connections are a fundamental part of human nature, this can complicate and isolate their lives. Further, while the previous diagnosis of psychopathy as Antisocial Personality Disorder was based solely on the superficial traits listed in the PCL-R, modern research into the neurobiology of the condition can make diagnosis more accurate. Moreover, it shows that being a psychopath is an innate condition rather than solely the product of the individual’s environment – and that the prefrontal cortex is the region of the brain most closely linked with psychopathy to date. Despite this progress, the quote from Cleckley’s ‘Mask of Sanity’ still stands: “I do not believe that the cause of the psychopath’s disorder has yet been discovered and demonstrated. Until we have more and better evidence than is at present available, let us admit the incompleteness of our knowledge and modestly pursue our inquiry.” (14)

Samara Macrae, Youth Medical Journal 2022


1. Psychiatric Times: “The Hidden Suffering of the Psychopath”

2. YouTube – MedCircle: “Narcissist, Psychopath, or Sociopath: How to Spot the Differences”

3. ScienceDirect: “Psychophysiology of Mental Health” by B.F. O’Donnell, W.P. Hetrick

4. Journal of Young Investigators: “The Fear Factor: Fear Deficits in Psychopathy as an Index of Limbic Dysregulation”

5. Journal of the American Medical Association: “Localization of Deformations Within the Amygdala in Individuals With Psychopathy”

6. US National Library of Medicine: “The role of prefrontal cortex in psychopathy”

7. Smithsonian Magazine: “Phineas Gage: Neuroscience’s Most Famous Patient”

8. Mental Health America: “Psychopathy vs Sociopathy”

9. WDH: “Examples of antisocial behaviour”

10. Encyclopaedia: “Hare Psychopathy Checklist”

11. Wikipedia: “Robert D. Hare”

12. The European Journal of Psychology Applied to Legal Context: “A contrastive analysis of the factorial structure of the PCL-R: Which model best fits the data?”

13. APA PsycNet: “Cleckley’s psychopaths: Revisited”

14. ‘The Mask of Sanity’ by Hervey Cleckley, M.D. (5th edition)

15. ResearchGate: “Incurable Psychopaths?” by Marianne Kristiansson, MD

Health and Disease Neuroscience

Magic Mushrooms: A Revolutionary Clinical Treatment


Psychedelics, also known as hallucinogens, are a class of drugs that can cause changes in perception, mood, and cognitive processes when taken (1). The most well-known psychedelic is psilocybin – the key compound in magic mushrooms, which are renowned for the hallucinogenic effects they induce when ingested (2). There are over 180 species of mushrooms that contain psilocybin, found all over the world (2). The most widespread of these in Europe is the liberty cap mushroom (Psilocybe semilanceata), which is small but incredibly potent (2).

LSD (lysergic acid diethylamide) is another psychedelic, derived from the ergot fungus that infects rye. Psychedelics have been used for centuries, predominantly in relation to mysticism and spirituality among non-Western cultures (1). Recently, however, there has been a re-emergence of psychedelic drugs in medicine, specifically as a treatment for conditions such as depression, anxiety, and obsessive-compulsive disorder (OCD). Psychedelics have further clinical applications in psychiatry, such as in psychotherapy (3). Despite these promising prospects – particularly for psilocybin – there are currently laws prohibiting the distribution and sale of such substances, and as a result research into their clinical usage is heavily restricted.

How Psilocybin Affects the Brain

Once ingested, psilocybin is rapidly metabolised in the body into psilocin, a chemical with psychoactive properties. Psilocybin is therefore considered a prodrug: it is relatively inactive until converted into an active drug (psilocin) after ingestion. Both psilocybin and psilocin are psychedelic tryptamines that are structurally similar to serotonin, a hormone with a multitude of functions including stabilising mood, aiding digestion, and controlling sleep (4). This similarity allows psilocin to bind to the same receptors in the brain as serotonin – one receptor of particular importance being the 5-HT2A receptor (2).

It is when psilocin binds to the 5-HT2A receptor that its hallucinogenic properties take effect in the brain. Activation of this specific receptor is associated with cognitive changes – including tactile perceptions (such as a feeling of warmth), auditory and visual hallucinations, and synaesthesia (the amalgamation of senses) (12).

Serotonin is often colloquially referred to as the ‘happiness molecule’: when it binds to its receptors it can regulate mood, and having stable, healthy levels of serotonin should mean feeling happier and more emotionally stable (4). However, when psilocin binds to the receptors that serotonin binds to, the same effect is not induced. Instead, psilocin can impact other regions of the brain, and this is why psilocybin is being researched as a treatment for conditions such as depression. This approach is known as psilocybin-assisted therapy and is conducted using dried magic mushrooms (6).

The Chemical Structure of Psilocybin and Psilocin

[Figure: the chemical structures of psilocybin (left) (5) and psilocin (right) (6)]

Psilocybin to Treat Depression

Johns Hopkins University has conducted research into the clinical use of psychedelics for treating depression and is continuing to pursue this. In 2016, Johns Hopkins carried out research in which psilocybin was used to relieve anxiety and depression in patients who had recently been diagnosed with terminal cancer (7).

(8) In November 2020, a new study by Johns Hopkins Medicine suggested that psilocybin could be highly effective in treating individuals with chronic depression. This finding was based on a clinical trial involving 24 patients, all with major depressive disorder (MDD). The patients were selected randomly and all met a list of eligibility criteria: a diagnosis of MDD, not currently taking antidepressant medication, no history of a psychotic disorder, and no prior hospitalisation.

(8) The 24 participants each underwent two psilocybin sessions. In the first, a dose of 20 mg per 70 kg of body weight was administered in a gelatine capsule alongside approximately 100 ml of water; in the second, 30 mg per 70 kg was given by the same method. Some participants began treatment immediately while others waited eight weeks, with the groups selected randomly. The severity of the participants’ depression was assessed before and after the trial using the GRID-Hamilton Depression Rating Scale (GRID-HAMD). The immediate-treatment group was assessed again at weeks 1 and 4, while the delayed-treatment group was assessed at weeks 5 and 8, so that the intervals between stages corresponded across the two groups. Of the 24 participants, 12 at week 1 and 17 at week 4 showed over a 50% reduction in their GRID-HAMD score, and 14 participants at week 1 and 13 at week 4 were in remission from their major depressive disorder.
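The dosing and outcome arithmetic reported above can be made concrete with a short sketch (illustrative only; the function names are invented, and the ‘response’ criterion is the >50% GRID-HAMD reduction the study used):

```python
# Weight-proportional dosing: the study quotes doses per 70 kg of body weight.
def scaled_dose_mg(dose_per_70kg_mg, weight_kg):
    """Scale a dose expressed as mg per 70 kg to a participant's weight."""
    return dose_per_70kg_mg * weight_kg / 70.0

# Response criterion: a reduction of more than 50% from the baseline score.
def is_response(baseline_score, follow_up_score):
    """True if the GRID-HAMD score fell by more than half."""
    return (baseline_score - follow_up_score) / baseline_score > 0.5

# Example: the second session's 30 mg/70 kg dose for an 84 kg participant.
print(scaled_dose_mg(30, 84))   # 36.0 mg
print(is_response(24, 10))      # True: a drop from 24 to 10 exceeds 50%
```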

(8) Ketamine, a dissociative drug, has also previously been studied for its effects on treating severe depression. However, the effects of ketamine have been found to typically last only up to two weeks, while psilocybin therapy has been shown to remain effective for at least four weeks. Furthermore, ketamine is considered more addictive than psilocybin – so the latter potentially carries a lower risk as a treatment.

In a TED talk (14), the clinical psychologist Rosalind Watts discusses her part in a psychedelic research group at Imperial College London investigating the use of psilocybin to treat depression. This clinical study was conducted in 2016 and consisted of 20 participants, all with treatment-resistant depression – individually they had each tried between 3 and 11 different antidepressants, none of which they had found effective. The participants were all given high doses of psilocybin in a therapeutic session, and six months after the first dose 6 of them remained in remission from depression, with no symptoms. For 11 of the participants, depressive symptoms were significantly reduced for 2 months following the psilocybin treatment before beginning to return. Compared with antidepressant medications, which have to be taken daily and take several weeks to become effective, psilocybin as a treatment for depression is incredibly promising – not only remaining active for longer periods but also reducing symptoms almost immediately.

Psilocybin to Treat Anxiety

Psilocybin has also been researched as a treatment for anxiety, particularly in conjunction with terminal illness. For instance, a pilot study published in 2011 examined the use of psilocybin to treat anxiety in patients with advanced-stage cancer between June 2004 and May 2008 (9). The study consisted of twelve participants, all of whom had advanced-stage cancer and a diagnosis of one of the following conditions: acute stress disorder, adjustment disorder with anxiety, anxiety disorder due to cancer, or generalised anxiety disorder. Each participant took part in two treatment sessions; they were informed beforehand that they would receive active psilocybin (0.2 mg/kg) in one session and a placebo (niacin, 250 mg) in the other. The study was conducted as a double-blind trial. (9) The participants were assessed with a variety of measures, including the Brief Psychiatric Rating Scale and the State-Trait Anxiety Inventory (STAI), before the treatment sessions, two weeks after the sessions, and then monthly for the following six months. The study yielded some positive results, including the majority of participants describing improved mood two weeks after receiving the psilocybin dose, and a ‘sustained decrease in STAI trait anxiety was observed for the entire 6-month follow-up’. However, the article documenting the study noted that trials involving more participants and a higher dose of psilocybin would likely produce more significant reductions in anxiety.

Psilocybin to Treat Obsessive Compulsive Disorder

Moreover, psilocybin has also been investigated as a treatment for obsessive-compulsive disorder (OCD). A double-blind study carried out between November 2001 and November 2004, led by Dr Francisco A. Moreno, Professor of Psychiatry at the University of Arizona College of Medicine (10), examined the effects of psilocybin on symptoms of OCD (11). There were nine participants, all of whom had been diagnosed with OCD and had no other major psychiatric disorder. Each participant underwent four sessions of exposure to psilocybin. The dosages were very low (25 µg/kg), low (100 µg/kg), medium (200 µg/kg) or high (300 µg/kg), and each participant received one of each dose. The low, medium, and high dosages were administered in that order, but the very low dosage was given at a randomised point for each participant. There was a period of one week between each session.

(11) The participants were all examined before the study began using the Yale-Brown Obsessive Compulsive Scale (YBOCS), and after the psilocybin sessions all participants were reported to exhibit significant decreases in their OCD symptoms. The size of the decrease varied widely among participants, ranging from 23% to 100%.

Legality of Magic Mushrooms and Psilocybin

(13) Psilocybin is derived from what are colloquially known as ‘magic mushrooms’, which are prohibited in most countries. The 1971 UN Vienna Convention attempted to introduce stricter legislation regarding synthetic drugs like LSD, and this led to laws in many countries banning many drugs and other hallucinogenic substances. Examples include the US Psychotropic Substances Act and the UK Misuse of Drugs Act. (13) Currently, psilocybin is legal only in the British Virgin Islands (though only for personal use), the Bahamas, Samoa, the Netherlands, Nepal, Brazil, and Jamaica. Even though psilocybin is obtained from magic mushrooms, many countries have different laws for the two substances, and these laws are often not straightforward. For instance, in Portugal magic mushrooms are decriminalised, so one can possess them but, if caught, must undergo court-mandated rehabilitation. In Mexico, psilocybin is illegal to possess or sell unless it is for sacramental usage. And in the Czech Republic, it is not a criminal offence to grow a ‘small’ quantity of magic mushrooms for personal use, but possessing a ‘large’ quantity is illegal – yet the exact amounts that constitute ‘small’ or ‘large’ are not clearly defined.


Despite psilocybin (and magic mushrooms) being strictly prohibited in many parts of the world, this is slowly changing – for instance, in 2020 the US state of Oregon decriminalised the personal possession of all drugs (15). The global attitude towards unconventional treatment methods, such as psychedelics in clinical settings, is shifting – which may mean further research into psychedelic therapy is carried out in the future. Psilocybin clearly has a profound effect on reducing symptoms of conditions such as depression, anxiety, and OCD – and this is incredibly important, as current medications for these conditions are not always effective and can take a long time to work. Psilocybin in magic mushrooms has been used for centuries in traditional practices in many cultures, yet only recently has its application in a professional setting been considered. The small numbers of participants in psilocybin treatment studies to date have exhibited significant positive results, and future clinical trials and studies will hopefully extend these benefits to many more individuals.


1. Alcohol and Drug Foundation (ADF): “Psychedelics”

2. Drug Science: “Psilocybin (Magic Mushrooms)”

3. Royal College of Psychiatrists: “From Sacred Plants to Psychotherapy – The History and Re-Emergence of Psychedelics in Medicine” by Dr Ben Sessa

4. Hormone Health Network: “What is Serotonin?”

5. Psychedelic Science Review: “Psilocybin”

6. Psychedelic Science Review: “Psilocin”

7. Johns Hopkins Medicine: “Psychedelic Treatment with Psilocybin Relieves Major Depression, Study Shows”

8. Journal of the American Medical Association; Psychiatry: “Effects of Psilocybin-Assisted Therapy on Major Depressive Disorder”

9. Journal of the American Medical Association; Psychiatry: “Pilot Study of Psilocybin Treatment for Anxiety in Patients With Advanced-Stage Cancer”

10. The University of Arizona; Cancer Center: “Francisco A Moreno, MD”

11. National Institutes of Health; National Library of Medicine: “Safety, tolerability, and efficacy of psilocybin in 9 patients with obsessive-compulsive disorder”

12. Synthesis: “Psilocin”

13. EntheoNation: “Magic Mushroom Legality Around the World”

14. TEDx Talks: “Can Magic Mushrooms Unlock Depression | Rosalind Watts | TEDxOxford”

15. US News: “Oregon Just Decriminalized All Drugs – Here’s Why Voters Passed This Ground-breaking Reform”

Biomedical Research

Thalidomide: Horrifying Tragedy of the Past, Auspicious Treatment of the Future


Thalidomide is a medicinal drug that was developed in the 1950s by the West German company Chemie Grünenthal, and was sold and distributed in 46 countries, marketed by 14 pharmaceutical companies (1). The drug had catastrophic impacts on people’s lives, and it was only 5 years after thalidomide became widely available over the counter that the connection was made between the drug and birth defects in the children of pregnant women who took it. The ‘thalidomide scandal’, as it became known, led to a series of legal battles and settlement disbursements, as well as huge changes in the way drug trials are now conducted and drug safety is assessed.

Intended use

The original intended use of thalidomide was as a sedative or tranquiliser, but it then began to be used to treat a variety of other illnesses – such as colds, flu, nausea, and morning sickness in pregnant women (1). While thalidomide was being researched and developed into a drug suitable for human use, it did not undergo any clinical trials involving humans. Instead, testing was carried out solely on animals, and early in this research rodents were reportedly unaffected by doses of thalidomide over 600 times the normal human dose (2). It was only after the link between thalidomide and birth defects was made that questions were asked about how the drug had been made available for human use at all – especially as it was readily available from pharmacies without a prescription. These queries led to an investigation, which discovered that extensive tests beyond animal testing had not taken place – and thus the drug should never have been declared safe. Nevertheless, thalidomide had been deemed harmless to humans and was licensed in Germany in July 1956 (1).

Treatment for morning sickness

Although the most infamous large-scale side effect of thalidomide is the foetal birth defects it caused, the drug also caused a multitude of other serious health problems – which Chemie Grünenthal ignored even as reports about them began inundating the company from as early as 1959. One such side effect is peripheral neuritis (3), a type of nerve injury that can occur anywhere in the body. The damage starts with a tingling sensation in the feet and hands, followed by numbness and a feeling of cold. Severe muscular cramps are often another symptom of peripheral neuritis, and other effects can include limb weakness and loss of coordination. While some of these symptoms can be improved or removed completely with treatment, more commonly they are irreversible.

While thalidomide was not intended specifically as a treatment for morning sickness in the early stages of pregnancy, the drug was found to be relatively effective in relieving it. When thalidomide started to be produced and sold in the UK from 1958, it was manufactured by The Distillers Company (Biochemicals) Ltd. One of the brand names under which it was sold was Distaval, and its advertising stated: “Distaval can be given with complete safety to pregnant women and nursing mothers without adverse effect on mother or child.” (1) Not only was there no evidence for this claim, but pregnant women were taking the drug under false guidance – and many were even prescribed it.

Chemistry of the drug

The chemical formula of thalidomide is C13H10N2O4, and its scientific name is α-(N-phthalimido)glutarimide (3).
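As a quick check of the formula above, the molar mass of thalidomide can be computed from standard atomic weights in a few lines of Python (the composition dictionary simply restates C13H10N2O4):

```python
# Compute the molar mass of thalidomide from its formula, C13H10N2O4.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
THALIDOMIDE = {"C": 13, "H": 10, "N": 2, "O": 4}

molar_mass = sum(ATOMIC_WEIGHTS[element] * count
                 for element, count in THALIDOMIDE.items())
print(round(molar_mass, 2))  # 258.23 g/mol
```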


Thalidomide exists as two isomers, which are mirror images of each other. The (R)-enantiomer has sedative effects, which explains why thalidomide was so effective as a medication for sedation and tranquilisation. The (S)-enantiomer, however, is teratogenic. Teratogenic drugs are agents that can affect an embryo’s or foetus’s development and cause congenital malformations (5). Thalidomide sold as medication is a mixture of the two forms, because the isomers interconvert under biological conditions – rendering any attempt to separate them before the drug is distributed and used ineffective.

The infamous consequences of thalidomide

Tragically, thalidomide is best known for the widespread birth defects it caused – and the first child affected by it was born on 25th December 1956 to an employee of Chemie Grünenthal. If a pregnant mother took thalidomide, it could cause a range of disabilities including, but not limited to: shortened or missing limbs, sensory impairment, facial palsy, damage to or absence of the eyes and ears, brain damage, and impacts on the skeletal structure (6). The location of the birth defect on the body depended on the day or days on which the pregnant mother took thalidomide – and even a single day could be the difference between missing limbs and brain damage. In a video produced by the Science Museum about what it is like to be affected by thalidomide, Dr Martin Johnson (Chairman of The Thalidomide Trust) explains how the days on which thalidomide was taken determined specific congenital defects (7). If thalidomide was taken around day 20 of the pregnancy, it would cause central brain damage in the child; if taken on day 21, the eyes would be affected; and if taken on days 22 to 23, the ears and face would be affected, with hearing almost certainly greatly impaired. Thalidomide was only found to affect the foetus if the mother took the drug between 20 and 37 days after conception – outside this window, it had no reported effect (1).
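The timing described by Dr Johnson can be condensed into a toy lookup (day boundaries are taken only from the account above and are approximate; the days 24–37 entry is a simplification covering the limb defects mentioned earlier):

```python
# Illustrative mapping of exposure day (after conception) to reported defect.
def thalidomide_effect(day):
    """Return the defect associated with taking the drug on a given day."""
    if not 20 <= day <= 37:            # the sensitive window was days 20-37
        return "outside the sensitive window: no effect reported"
    if day == 20:
        return "central brain damage"
    if day == 21:
        return "damage to the eyes"
    if day in (22, 23):
        return "damage to the ears and face"
    return "limb and other structural defects"  # simplification for days 24-37
```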

Thalidomide became available as a medication in 1956, initially only in Germany, and became obtainable throughout much of Europe and some countries in Asia (such as Japan) later in the 1950s. On the 26th of November 1961, thalidomide, under all brand names, was formally withdrawn by its creator Chemie Grünenthal. In the less than five years during which this drug was widely distributed and sold, it is estimated that over 100,000 babies were affected by it globally – approximately half of whom died only months after birth1. Fewer than 3,000 of those affected as babies are alive today.

Individuals affected by thalidomide

The Thalidomide Trust, a charity which supports those born with birth defects as a result of thalidomide, carried out a survey amongst its beneficiaries, which showed that over 90% of them often experience severe and/or continuous pain. Poor mental health – especially depression, anxiety, and loneliness – is, unfortunately, another health problem typically associated with living with thalidomide-related disabilities.

Louise Medus was one of the babies affected by thalidomide who survived beyond the first few months of infancy. Louise was born on the 23rd of June 1962 to her father David Mason and her mother Vicki Mason – who had been prescribed thalidomide during her pregnancy only weeks before the drug was recalled en masse. Louise states in an article in the Guardian8: “Like the other parents of thalidomide babies, I’m sure they [her parents] were expecting a fully formed baby and some of us didn’t have arms, some of us didn’t have legs, some of us didn’t have arms or legs.” Louise published her memoir in 1988, entitled ‘No Hand To Hold & No Legs To Dance On’.

As a result of the fatal and irreversible damage caused by thalidomide, large settlements were due to those affected. The Distillers Company (Biochemicals) Ltd, the company that sold thalidomide as a drug in the UK, agreed to a final settlement in 1973 to pay damages to 429 people in the UK who had been affected by thalidomide from birth. Additionally, the Thalidomide Trust was established, through which thalidomide survivors are able to receive support and annual grants6.

In New Zealand and Australia, Diageo (the parent company of The Distillers Company (Biochemicals) Ltd) paid AU$89 million in compensation to approximately 100 individuals9. As recently as 2014, a court in Spain ruled that the company Grünenthal (another seller of thalidomide) must pay €35 million to 22 people affected there. Yet there are approximately 180 people in Spain still seeking compensation.

In June 1961, an article promoting the taking of thalidomide in the third trimester of pregnancy was published, claiming to be written by an ‘R.O. Nulsen, M.D.’ but in fact written by the medical director of Chemie Grünenthal10. In the early 1960s, when birth defects were being noticed and linked to thalidomide, Chemie Grünenthal obscured these findings from public view and refused to take responsibility for the drug that caused them. It was around six months after the article attributed to Dr Nulsen that this evidence was made public in Germany.

Horrifyingly, the drug company Chemie Grünenthal did not publicly apologise to those affected by thalidomide until 2012 – approximately 50 years after the drug was first released.

Current research and future treatment prospects

Despite the tragic results that transpired from its earlier use, thalidomide continues to be researched and now shows promising results in treating conditions such as certain cancers and leprosy. According to the Mayo Clinic11, thalidomide has been found to help regulate the body’s immune system, control inflammation, and slow the formation of new blood vessels – which cancers utilise in order to grow and spread throughout the body. On the basis of research like this, the US Food and Drug Administration (FDA) has approved thalidomide for use in treating erythema nodosum leprosum (skin lesions caused by leprosy) and multiple myeloma.

Additionally, in 1964, a doctor in Jerusalem called Dr Jacob Sheskin administered thalidomide to a patient with leprosy as a sedative, to help the patient sleep2. Surprisingly, the effects of the thalidomide were remarkable – the leprosy appeared to have been cured. However, the condition returned once the patient stopped taking thalidomide, and so it was determined that thalidomide was suppressing the disease rather than curing it.

Another example of the positive effects of thalidomide is its use as a treatment against mantle cell lymphoma11. In 1996, a man named Garry Edling was diagnosed with this condition, but the cancer became progressively worse despite five rounds of chemotherapy and a stem cell transplant. Garry was treated by the consultant Dr Simon Rule, who prescribed him thalidomide based on Dr Sheskin’s approach: to improve symptoms and prospectively alleviate pain. Garry received thalidomide as part of a drug trial, as the drug remains unlicensed for this use in the UK and can only be prescribed with the utmost caution. When he began taking the drug, Garry’s tumours began shrinking, and Dr Rule said, “his response is nothing short of remarkable”. On the other hand, Garry has suffered side effects such as muscular pain and numbness in his hands and feet – yet this is the cost of prolonging his life.

Consequential legislation

The thalidomide tragedy also led to the creation of the 1968 Medicines Act in the UK, as the UK government wanted stronger control over the regulation of the drug industry. This act classes medical drugs into three categories12: general sales list medicines, pharmacy medicines, and prescription-only medicines. General sales list medicines can be sold in any shop, while pharmacy medicines can only be sold in a pharmacy – though a prescription is not necessary. Prescription-only medicines are the category with the highest level of restriction: they can only be sold by a pharmacist if prescribed by a doctor.

In the USA, after thalidomide was prohibited from being distributed and sold, the Kefauver-Harris Amendments were made in 1962 to the 1938 Food, Drug, and Cosmetic Act13. These instituted strict guidelines for the drug approval process in the United States – crucially requiring drugs to be proven safe as well as effective before being approved for medical use.


To conclude, thalidomide is an infamous drug with a fatal and horrifying history, having caused thousands of birth defects and infant deaths. The drug was not researched and studied thoroughly enough to warrant being approved, licensed, and widely distributed – and even prescribed – to unknowing individuals. In spite of this horror, the thalidomide tragedy has led to stronger legislation to improve the safety of future drugs and prevent the same consequences from occurring again. Furthermore, thalidomide is now being utilised effectively to treat conditions like leprosy and is yielding positive results. The consequences of thalidomide are tragically irreversible, but fortunately this drug can now be directed towards improving lives – and it is hoped that another tragedy like it will never arise.

Samara Macrae, Youth Medical Journal 2022


1.   Science Museum: “Thalidomide” –

2.   BBC one: “The return of Thalidomide” –

3.   Victims Association of Canada: “What is Thalidomide?” –

4.   Wikimedia Commons: “Thalidomide enantiomers” –

5.   MedicineNet: “Medical Definition of Teratogenic drugs” –

6.   The Thalidomide Trust: “About Thalidomide” –

7.   Science Museum: “What it’s like to be affected by Thalidomide” –

8.   The Guardian: “My thalidomide family: Every time I went home I was a stranger” –

9.   The Conversation: “Why thalidomide survivors have such a tough time getting compensation” –

10.   Retro Report: “Thalidomide: Return of an Infamous Pill” –

11.   Mayo Clinic: “Thalidomide: research advances in cancer and other conditions” –

12.   DrugWise: “Drug laws” –

13.   The Embryo Project Encyclopaedia: “US Regulatory Response to Thalidomide (1950-2000)” –

Health and Disease

HIV/AIDS is More Than a Disease: Epidemiology, Stigma, and Future Targets


HIV, or Human Immunodeficiency Virus, is a highly stigmatised disease that, if not treated for a significant period, can develop into AIDS (Acquired Immunodeficiency Syndrome). HIV is currently classed as a ‘global epidemic’ by the World Health Organisation due to the huge number of people affected: approximately 38 million people globally are living with HIV today1. In 2018 alone, around 770,000 individuals died from AIDS2. Many people with HIV, unfortunately, have minimal access to any form of prevention or treatment – and even where treatment is available, it may not be economically accessible, or the severe prejudice against HIV and AIDS may prevent people with these conditions from seeking medical help. Additionally, there is still no cure for either HIV or AIDS. These issues are compounded by the fact that HIV disproportionately affects developing and emerging countries – eastern and southern Africa are the most affected regions, where an estimated 54% of all people with HIV live. South Africa specifically has the highest prevalence of HIV, with 7.5 million people living with the virus. After eastern and southern Africa, the western and central regions of Africa are the most severely affected, with approximately 4.9 million people with HIV1.

The Origin of HIV

While the Human Immunodeficiency Virus was only first identified and diagnosed in people in the 1980s, it is suggested that it originated in the 1920s in the Democratic Republic of Congo3. HIV developed from SIV (Simian Immunodeficiency Virus), a virus contracted by monkeys and apes which, like HIV, attacks the immune system of these primates. Both HIV and SIV are primate lentiviruses, and they share neuropathological features, including causing white matter lesions, subtle white matter astrocytosis, and viral macrophages invading the brain4. The strain of SIV which can infect humans is known as SIVcpz, and it is not fully known how this viral strain was transferred from chimpanzees to humans. One theory is that humans hunted and ate chimpanzees affected by SIV, or that infected chimpanzee blood entered open wounds on the humans while they hunted5. The SIVcpz then mutated inside the human host cells to produce a new strain: HIV-1. There are multiple different strains of HIV-1, divided into four main groups: M, N, O, and P. HIV-1 Group M is the most studied and most widespread strain of HIV to date5.


The most commonly known method of HIV transmission is sexual, through semen or vaginal secretions passing from the infected host to an unaffected individual – but the virus can also be transmitted through other bodily fluids, including blood (for instance during blood transfusions) and breast milk. An infected mother can also transmit HIV to her child during pregnancy, across the placenta, and during delivery. This is called perinatal transmission and is the main way in which children are infected with HIV, though it is decreasing in prevalence due to medical developments. If a pregnant mother takes HIV medicine daily throughout her pregnancy, and the child is given HIV medicine for 4 to 6 weeks following delivery, the risk of the child contracting HIV is below 1%6.

Furthermore, HIV can be transmitted via contaminated needles – predominantly through intravenous drug use, but potentially also through tattoo needles, for example. The latter, however, is very rare – and indeed there are no known cases of HIV being transmitted in this way6. It remains a possible route of transmission, though, as unsterilised needles could be contaminated with the blood of a host infected with HIV.

Additionally, HIV is not spread through shaking hands, hugging, sweat, or saliva, nor through the air – though these were, and still are, wrongly perceived as methods of transmitting HIV7. This misconception exacerbates the stigma surrounding those living with HIV and AIDS, as it can lead them to feel isolated if people purposefully avoid any close or physical contact with them.


After an individual contracts HIV, they will likely experience a flu-like illness between 2 and 6 weeks following infection, typically lasting only 1 to 2 weeks8. Approximately 80% of people who contract HIV experience this, with symptoms such as fever, sore throat, rash, muscle pain, joint pain, tiredness, and swollen glands. As these symptoms are not limited to HIV, people may not realise they are infected – and afterwards, HIV often does not cause symptoms for many years. This is a key reason why HIV is so underdiagnosed: the virus actively damages the host’s immune system while they still feel and appear healthy, and this process can last up to ten years. When the immune system has been significantly damaged, other symptoms can follow, such as weight loss, night sweats, chronic diarrhoea, and recurrent infections. Improved diagnosis and earlier treatment of HIV can prevent the disease from causing greater damage and developing into AIDS.

As HIV is a virus, it must bind to host cells when transmitted to an individual, since viruses are unable to replicate outside of living cells. HIV specifically binds to T-helper cells – a type of white blood cell also referred to as CD4 cells – and integrates its genetic material into the DNA inside the cell9. The HIV life cycle has seven stages10. The first stage is the binding of HIV to receptors on the cell surface membrane of a CD4 cell. The second stage is fusion, where the HIV envelope fuses with the cell membrane of the CD4 cell as the HIV particle enters the cell. Reverse transcription is the next stage and involves an HIV enzyme called reverse transcriptase, which converts HIV RNA into HIV DNA, allowing the HIV DNA to bind with the genetic material of the CD4 cell. Integration is the fourth stage, in which integrase (another HIV enzyme) is released within the nucleus of the CD4 cell so that the HIV DNA can fuse with the cell’s DNA. Replication is the following stage, which consists of HIV utilising the CD4 cell’s machinery to synthesise HIV proteins. During the sixth stage (assembly), new viral proteins and HIV RNA move away from the nucleus and towards the surface of the cell; it is at this point in the cycle that immature HIV is assembled. In the final stage (budding), the assembled viral particles leave the CD4 cell and release protease (an additional HIV enzyme). This enzyme breaks up the immature viral particles to form mature, infectious viral particles.

A person is considered to have AIDS when the CD4 cell count drops below 200 cells per cubic millimetre of blood. In a healthy person, a CD4 cell count is between 500 and 1600 cells per cubic millimetre of blood. A person can also be considered to have AIDS when they develop one or more opportunistic infections – due to their severely weakened immune systems11. Common opportunistic infections include a salmonella infection, where bacteria affect the intestines, and toxoplasmosis, which is a parasitic infection of the brain12.

Stigma and Global Crisis

HIV and AIDS are shrouded in stigma, especially because common methods of contracting HIV are sexual intercourse and intravenous drug use. It was during the AIDS epidemic of the 1980s in the United States that cases were first reported on a large scale. This crisis was the first large-scale instance of HIV and AIDS being recognised, and there was an arduous struggle to quickly determine the causes, risk factors, and modes of transmission for these conditions. One of the groups with the highest prevalence of HIV and AIDS was gay men. In 1983, the groups considered most ‘at risk’ of contracting HIV were colloquially referred to as the ‘4H Club’13, consisting of gay males, haemophiliacs, heroin users (as well as other intravenous drug users), and those of Haitian origin.

Heroin and other intravenous drug users were at risk of being infected with HIV because blood from an infected host could be transmitted to them via a contaminated needle. People with haemophilia could also easily become infected when they received the clotting factors they lacked from donated blood – which was not screened for HIV and thus could be infectious. In 1985, screening of donated blood was introduced, greatly decreasing the transmission of HIV via this treatment for haemophilia14.

Since 1982, the prevalence of AIDS in Haiti has been higher than in any other country in the Caribbean13. Being Haitian or of Haitian descent does not increase the risk of becoming infected with HIV – there is no genetic risk factor – and the modes of transmission for this virus are the same for Haitians as for all other people. It is suggested that HIV and AIDS were, and are, widespread in Haiti due to migrants arriving there from the Democratic Republic of Congo, where HIV is thought to have originated.

Gay men were considered the group most at risk of HIV and AIDS during the 1980s AIDS epidemic in the United States, and this disparity has not yet disappeared. In June 1982, several cases of severe immune deficiency were reported amongst the gay male population of Southern California, which led to the disease initially being called ‘gay-related immune deficiency’, or GRID15. Of the approximately 1.2 million people in America living with HIV in 2018, 740,400 were gay or bisexual men. A key reason for this is that anal sex carries the highest risk of transmitting HIV of any form of sexual intercourse, as the thin rectal lining makes it easier for HIV to enter the body6.


While there remains no cure for either HIV or AIDS, the former can be treated using antiretroviral therapy (ART)16. The key purpose of this treatment is to reduce the individual’s viral load to an undetectable level – and if this can be maintained, the individual will have an almost zero risk of transmitting HIV to partners (who do not have HIV) via sexual intercourse. ART consists of a combination of HIV medications that must be taken daily, which work by preventing HIV particles from multiplying. This reduces the number of HIV particles in the body, thus reducing the viral load. ART should preferably begin as soon as possible after infection, though this can be difficult if the affected person is unaware they have HIV. Combivir, a combination of two antiretroviral drugs taken as part of ART, was approved by the FDA in September 199715.

Targets and Future Development

The significant social stigma surrounding HIV and AIDS undoubtedly persists, yet there is an increasingly global movement to tackle it and advocate for improved treatment – possibly even a cure in the future. The Joint United Nations Programme on HIV/AIDS, known as UNAIDS, was established in 1996 to coordinate responses to HIV and AIDS across the UN15. There are currently global targets in place to work towards the eventual goal of ending HIV and AIDS. These form part of the Sustainable Development Goals (SDGs), where target 3.3 is to ‘end AIDS as a public health threat by 2030’17. Target 16 is ‘Peace, justice and strong institutions, including reduced violence against key populations and people living with HIV’18. The Millennium Development Goals were outlined by the United Nations in 2000 and include targets addressing HIV and AIDS – for instance, goal 6 is to ‘combat HIV/AIDS, malaria and other diseases’19. Perhaps the most challenging targets outlined by the UN concerning HIV and AIDS were within the ‘Getting to Zero’ strategy for 2011 to 2015, whose objective was to achieve zero new HIV infections, zero AIDS-related deaths, and zero discrimination against those with HIV or AIDS20.


Arguably, the stigma surrounding HIV and AIDS – together with limited access to healthcare in less economically developed regions of the world, where HIV and AIDS tend to be more prevalent – is one of the main limiting factors in the global fight against these conditions. Prominent societal figures such as Freddie Mercury and Diana, Princess of Wales have been instrumental in addressing this societal prejudice. Freddie Mercury, the lead singer of the British band Queen, died from AIDS. Princess Diana opened an AIDS ward in Middlesex Hospital, London, in 1987, and was photographed shaking hands with a person who had AIDS21. This was to break down the idea that HIV or AIDS is spread through skin contact, and to work towards reducing the social isolation that many people living with HIV or AIDS are often forced to endure. The association of HIV/AIDS with intravenous drug use and with sexual intercourse between men has worsened the public view of these conditions due to homophobia and general discrimination. These global issues need to be confronted alongside research into further treatment for HIV and AIDS, and potentially a cure.

Samara Macrae, Youth Medical Journal 2022


1.   KFF: “the Global HIV/AIDS Epidemic” –

2.   Wikipedia: “Epidemiology of HIV/AIDS” –

3.   Faria, N.R. et al (2014) ‘The early spread and epidemic ignition of HIV-1 in human populations’ Science 346(6205):56-61

4.   National Library of Medicine: “Comparison of simian immunodeficiency virus and human immunodeficiency virus encephalitides in the immature host” –

5.   Avert: “Origin of HIV & AIDS” –

6.   Centers for Disease Control and Prevention: “Ways HIV Can Be Transmitted” –

7.   Centres for Disease Prevention and Control: “Ways HIV is Not Transmitted” –

8.   NHS: “Symptoms: HIV and AIDS” –

9.   Avert: “How HIV Infects the Body and the Lifecycle of HIV” –

10.   National Institute of Health: National Institute of Allergy and Infectious Diseases: “HIV Replication Cycle” –

11.   HIV gov: “What are HIV and AIDS” –

12.   HIV gov: “Opportunistic infections” –

13.   Microbiology Book: “Microbiology and Immunology On-Line” –

14.   The New York Times: “Hemophilia and AIDS: Silent Suffering” –

15.   Avert: “History of HIV and AIDS Overview” –

16.   HIV info/ “HIV Treatment: The Basics” –

17.   United Nations: “Sustainable Development Goals” –

18.   UNAIDS: “HIV Preventions 2020 Road Map” –

19.   United Nations: “Millennium Development Goals” –

20.   UNAIDS: “2011-2015 Strategy/ Getting to Zero” –

21.   Tatler: “How Diana, Princess of Wales was instrumental in trying to stop the stigma against HIV/AIDS” –

Biomedical Research

Classifying Blood Groups – and the Danger of the Unknown

The concept of blood groups is well known, and there are a total of eight main blood types within the ABO group: Type O, Type A, Type B, and Type AB, each of which can be either positive or negative. The blood type an individual has is determined by the genes they inherit from their parents: the gene for type O blood is recessive, while the genes for types A and B are both dominant. Blood types are categorised based on the antibodies in the plasma and the antigens on the cell surface membrane of the erythrocytes. Knowledge of the different blood types plays a vital role in medicine – one significant reason being the need to ensure blood compatibility for transfusions, as incompatibility could result in the death of the recipient.

However, beyond the ABO group, there are 34 other blood group systems, and within these systems there are over 300 variants. Examples of such lesser-known and rarer blood group systems are the MNS blood group system and the Duffy Blood Group.

Blood Group ABO

Blood transfusions were carried out before the discovery of human blood groups in 1900, but it was not understood why some were unsuccessful while others were fatal. It was in 1900 that Karl Landsteiner, working at the University of Vienna, discovered blood groups through experimentation1. Landsteiner took blood samples from his staff members and mixed them together, finding in some cases that the erythrocytes agglutinated when combined with the blood serum from a different person. The results of these initial experiments led him to conclude that there were three blood types – A, B, and C (which would later be renamed O) – and this became known as the ABO system. The blood group AB was discovered in 1901. In 1930, Landsteiner was awarded the Nobel Prize in Physiology or Medicine2.

Classifying Blood Groups

Blood types are classified into distinct groups based on the presence and form of antibodies and antigens in the blood. Antigens are structures found on the cell surface membrane of cells – in this case erythrocytes – and they can trigger an immune response. As part of this immune response, antibodies are produced by lymphocytes and bind to specific antigens. Some antibodies attack antigens by disabling processes in the cells to which they are attached, while others cause the foreign cells to clump together, facilitating their eradication, for example by phagocytosis.

A person with blood type A will have Anti-B antibodies in their blood plasma, and A antigens. A person with blood type B will have Anti-A antibodies in their blood plasma, and B antigens. A person with blood type AB will have both A and B antigens but no Anti-A or Anti-B antibodies in their blood plasma. On the other hand, a person with blood type O will have both Anti-A and Anti-B antibodies in their blood plasma, but no A or B antigens on their erythrocytes. Due to the presence of these antibodies and antigens, not all blood types are compatible with one another. For instance, if a person with blood type B received a blood transfusion from a donor with blood type A, the Anti-A antibodies in the recipient’s plasma would attack the erythrocytes from the donor blood. This would cause the erythrocytes to clump together, leading to clots and ultimately culminating in the death of the recipient.

Blood type AB is considered to be the universal recipient due to the absence of Anti-A and Anti-B antibodies in the plasma of individuals with this blood type: when donor blood enters the circulatory system, these antibodies are not present to attack the erythrocytes. Conversely, blood type O is considered to be the universal donor, as there are no A or B antigens to be recognised or to stimulate an immune response3.

However, the rules of blood compatibility also depend on whether the individual has a positive or negative blood type. This is determined by the presence of the rhesus protein – named after the rhesus monkey, which also carries genes coding for this protein4. The rhesus protein is otherwise known as the D antigen or the Rh factor; an absence of Rh antigens means that a person is Rh negative, while their presence makes a person Rh positive. Just as the ABO blood group is inherited from parents, the positive or negative Rh factor is also genetically inherited. For an individual to have an Rh-negative blood type, they must inherit a negative Rh factor from each parent – meaning both parents must carry at least one negative Rh factor in their genetic material. As the negative Rh factor is recessive, an Rh-negative blood group is less common than an Rh-positive one4.
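The ABO and Rh compatibility rules described above are mechanical enough to be sketched in code. The following is a minimal illustrative sketch only (not clinical software); the function name can_receive and the string encoding of blood types (e.g. 'AB+') are choices made here for the example.

```python
# Sketch of the ABO/Rh compatibility rules described above:
# a recipient's antibodies must not match any antigen on the donor's
# erythrocytes, and an Rh-negative recipient must not receive
# Rh-positive blood. Illustrative only, not a clinical tool.

ABO_ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def can_receive(recipient: str, donor: str) -> bool:
    """Check whole-blood compatibility, e.g. can_receive('AB+', 'O-')."""
    r_abo, r_rh = recipient[:-1], recipient[-1]
    d_abo, d_rh = donor[:-1], donor[-1]
    # A recipient carries antibodies against every ABO antigen they lack.
    antibodies = {"A", "B"} - ABO_ANTIGENS[r_abo]
    if antibodies & ABO_ANTIGENS[d_abo]:
        return False  # recipient antibodies would agglutinate donor cells
    # Rh-negative recipients can only receive Rh-negative blood.
    if r_rh == "-" and d_rh == "+":
        return False
    return True

print(can_receive("AB+", "O-"))  # True: AB+ is the universal recipient
print(can_receive("B+", "A+"))   # False: Anti-A antibodies attack donor cells
print(can_receive("O-", "O+"))   # False: Rh-negative recipient, Rh-positive donor
```

Note how the 'universal donor' and 'universal recipient' roles fall out of the rules automatically: type O carries no ABO antigens for any recipient's antibodies to attack, and type AB carries no ABO antibodies to attack any donor's cells.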

Rh Protein and Pregnancy

An Rh-positive individual is able to receive blood from an Rh-positive or Rh-negative donor, but an Rh-negative person can only receive Rh-negative blood. This presents a problem during pregnancy and delivery, when the mother and the foetus may have opposing Rh groups. For example, if the mother is Rh negative but the foetus is Rh positive, and the mother is exposed to the foetus’s blood during birth (or at any point during the pregnancy), this can have fatal consequences: the mother’s body would be stimulated to produce antibodies against the Rh antigens. In subsequent pregnancies, these antibodies would attack the erythrocytes of the foetus, should it also have Rh-positive blood. This occurs because the mother’s antibodies can cross the placenta and reach the foetus, where they attack its erythrocytes. The result is a condition called ‘haemolytic disease of the new-born’ (HDN), which can cause anaemia, seizures, jaundice, or brain damage, or potentially even kill the foetus5.

The dangers posed by opposing Rh groups of mother and foetus during pregnancy and delivery – as well as in later pregnancies – can now be greatly reduced. This is done by injecting the Rh-negative pregnant woman with Rh antibodies, which prevents the immune response that the Rh antigens from the foetus’s blood could otherwise trigger in the mother. These antibodies are injected as ‘RhoGam’, which was approved by the FDA in 1968. RhoGam is only given if the pregnant mother is Rh negative but her foetus is Rh positive, and the injection is typically given early in the third trimester of the pregnancy. The mother can also receive a second RhoGam injection within 72 hours of giving birth.

Before the development and use of RhoGam, HDN affected approximately 1% of all new-born infants and led to death in 1 out of every 2,200 births, according to an article entitled ‘Management of pregnancies with RhD alloimmunisation’6. The article also states that “in England and Wales, about 500 foetuses develop haemolytic disease each year, and about 25-30 babies die from haemolytic disease of the new-born”.

MNS and Duffy Blood Groups:

The MNS blood group is characterised by the presence of MNS antigens, which are carried by glycophorin proteins located on the cell surface membrane of erythrocytes. The MNS antigens are carried by glycophorins A and B, which can also act as receptors for pathogens such as Plasmodium falciparum, one of the deadliest malarial parasites. The MNS blood group was discovered shortly after the ABO blood group, in 1927, when the M and N antigens were identified. It took approximately a further 20 years for the S and s antigens to be detected. While the M, N, S, and s antigens are the most frequently occurring within the MNS blood group, there are more than 40 other antigens7.

The Duffy blood group is a classification determined by the presence of the Duffy glycoprotein – whose antigens are known as Fy antigens – on the cell surface membranes of erythrocytes. Like the glycophorins carrying the MNS antigens, the Duffy glycoprotein is a receptor for a malarial parasite, in this case Plasmodium vivax8. This means that individuals lacking the Duffy antigens could potentially be immune to this strain of malaria, as the Plasmodium vivax parasites would be unable to bind to Duffy antigens that are absent. The Duffy glycoprotein is also a receptor for chemicals which cells secrete as a result of inflammation. Fy antigens can additionally be located on endothelial cells, on epithelial cells in the alveoli, and in the collecting ducts of nephrons. The four possible Fy phenotypes are Fya+b+, Fya+b-, Fya-b+, and Fya-b-, and the first Duffy antigen discovered was Fya, in 19509.


While blood groups are generally well known, there are many more blood types and variants within these groups that have been newly discovered and are less commonly known. Despite the fact that the ABO blood group system is at least 20 million years old, and has mutated and been inherited since its development, it remains a scientific mystery why exactly humans (and other primates) have distinct blood types in the first place. It is hypothesised that the frequency and distribution of blood types worldwide is linked to where diseases and infectious organisms are endemic. For instance, in an article titled “Why do people have different blood types?”10, Harvey G. Klein (Chief of the Department of Transfusion Medicine for the National Institutes of Health) comments that people with blood type A are more susceptible to smallpox. Klein notes the correlation between this and the higher frequency of blood type B across China, India, and Russia – regions where there were prolific epidemics of smallpox in the past. The full extent of why a variety of blood groups exists remains unknown, but linking it to evolution and the global distribution of both people and pathogens seems a promising theory.

Samara Macrae, Youth Medical Journal 2022


1.   US National Library of Medicine National Institutes of Health: “A Brief History of Human Blood Groups” –

2.   National Library of Medicine: “A brief history of human blood groups” –

3.   Smithsonian Magazine: “The Mystery of Human Blood Types” –

4.   Carter BloodCare Blog: “The Significance of Being Rh Negative or Rh Positive” –

5.   Dallas Obgyn PA: “RhoGam: The triumph of medical science over Rh disease” –

6.   US National Library of Medicine National Institutes of Health: “Management of pregnancies with RhD alloimmunisation” –

7.   NCBI: “The MNS blood group” –

8.   NCBI: “The Duffy blood group” –

9.   Britannica: “Duffy blood group system” –

10.   Scientific American: “Why do people have different blood types?” –


Dissociative Identity Disorder: Exploring the Reality Behind Having Multiple Personalities


Multiple personality disorder is the term that was previously used to describe what is now known as dissociative identity disorder (DID). This is a psychological condition which the brain instigates as a method of self-preservation, and it is often the result of prolonged and habitual abuse. DID is, according to WebMD, “a severe form of dissociation”1 in which an individual becomes mentally disconnected from their thoughts, memories, and even their self-identity. Although severe, this is one way that the human body tries to protect itself from traumatic and difficult situations – by shutting the primary consciousness away and creating other consciousnesses to deal with the present trauma. When one of these personalities is in control of the individual’s body, this is referred to as ‘fronting’. According to the American Psychiatric Association2, approximately 90% of those in Europe, Canada, and the United States suffering from DID have experienced abuse and neglect during childhood. Sufferers of DID have at least two separate and distinct personalities or consciousnesses, and each personality cannot remember what happened while it was not the fronting consciousness.

DID is an example of a dissociative disorder, and sufferers of such disorders perpetually feel disconnected from their reality. Approximately 2% of the US population have dissociative disorders3, and women are more likely than men to have such conditions. The three primary dissociative disorders are dissociative identity disorder (DID), depersonalisation/derealisation disorder, and dissociative amnesia. Post-traumatic stress disorder (PTSD) and acute stress disorder share similar symptoms with dissociative disorders – including memory loss and depersonalisation – but are not classified as dissociative disorders in their own right.

An article published in the International Journal of Social Psychiatry4, entitled “Dissociative Disorders in a Psychiatric Institution in India – a Selected Review and Patterns over a Decade”, discusses research into dissociative disorders. The study examined patterns among patients with dissociative disorders across ten years, covering inpatients and outpatients who attended a psychiatric hospital between 1999 and 2008. It found that between 1.5 and 15.0 per 1,000 outpatients, and between 1.5 and 11.6 per 1,000 inpatients, were diagnosed with dissociative disorders. The review concluded that “dissociative motor disorder and dissociative convulsions are the most common disorders” and that such conditions, including DID, are especially under-diagnosed outside of Western regions.

What Causes DID?

Although there is no single definitive cause of DID, the main factor associated with the condition is severe and repeated abuse – physical, emotional, or sexual. This abuse often begins in childhood, and because it is typically (though not always) carried out by a family member, the child often has no safe refuge from it – circumstances which can cause the child to develop DID. In rarer cases, a person can develop DID after experiencing a violent and traumatic event, such as living in a combat zone.

Signs and Symptoms

For a person to be diagnosed with DID, they must have at least two distinct personalities. The predominant identity of the individual is known as the ‘core’ identity, and the additional personalities created are the ‘alters’5. Someone with DID can have many alters – possibly over 100. Where there are many, these alters tend to vary in age, gender, ethnicity, and character even within a single person – and for some people with DID, the alters are able to interact with one another.

The main symptom of DID is sudden and involuntary transitions between these alters. As a result, the core identity can be left with many long-term gaps in their memory, as they can only remember details from the periods when they were fronting. Self-harm and suicide attempts are unfortunately very prevalent amongst DID sufferers, and over 70% of outpatients with DID have attempted suicide at least once3.

Some common symptoms of DID include sudden flashbacks, feeling detached from one’s own body or out-of-body experiences, hallucinations, and being unaware of one’s surroundings – for example, finding yourself in a place with no memory of how you got there6. Long periods of memory loss are also typical for DID sufferers; this is known as dissociative amnesia, a type of memory loss greater than ordinary forgetfulness. Dissociative fugue is an episode of amnesia that can cause a person to forget personal information or to experience emotional detachment7. In addition to these symptoms, sufferers of DID may also endure mood swings, anxiety, panic attacks, unexplained phobias, insomnia, night terrors, migraines, severe pain anywhere on the body, sexual dysfunction, and an increased likelihood of developing eating disorders.

Not only does DID cause intense emotional and psychological difficulty, but it can also physically change the brain. A paper published in The American Journal of Psychiatry8 examined the results of a study involving 38 women: 15 had DID, while the other 23 had neither DID nor any other psychiatric condition. Each woman underwent MRI scanning to measure the volumes of her hippocampus, which plays a central role in memory, and her amygdala, which is involved in processing emotions. Comparing the two groups showed that hippocampal volume was 19.2% smaller, and amygdalar volume 31.6% smaller, in the women with DID. Overall, this suggests that people with DID generally have lower hippocampal and amygdalar volumes than people without the condition. This is consistent with the impacts on memory and emotion common among sufferers of DID – who typically have long periods of memory loss or significant gaps in their memory, in addition to frequent mood swings and swiftly changing emotions.

In a paper published in the National Library of Medicine9, a study measured hippocampal volume in 21 women who had been severely sexually abused during childhood and 21 women who had not been abused. Again, magnetic resonance imaging (MRI) was used to determine the hippocampal volume for each of the 42 women. The results showed that left-sided hippocampal volume was, on average, 5% smaller in the women who had been sexually abused. The same measurement was made for the right-side hippocampus, and the article states, “hippocampal volume was also smaller on the right side, but this failed to reach statistical significance.” These results show how abuse – of any kind, though in this research specifically sexual abuse – is associated with physical changes in the brain. As discussed previously, sustained abuse can cause DID to manifest, and sufferers of that condition present with similar changes in hippocampal volume.

Intervention and Treatment

Like many other psychological conditions, DID has no cure, but several treatment methods have proved effective. Treatment for DID can take many years, and the most common method is psychotherapy. Throughout this process, the aim is to work with the patient so that their individual alters can merge to form a single, cohesive identity5. This is an arduous process, as it involves the patient working through the trauma and/or abuse that caused them to develop DID in the first place. Family therapy can also be helpful, as it can educate friends and family members about the difficulties of living with DID and how best to support the person. Less frequently, clinical hypnosis is used, through which patients can access repressed memories experienced by one of their alters that they cannot otherwise recall when a different alter is fronting6. Cognitive behavioural therapy is another commonly used treatment for DID.


The American Psychiatric Association led a question-and-answer panel with an expert in psychiatry: Dr. David Spiegel, Professor and Associate Chair of Psychiatry & Behavioural Sciences at Stanford University School of Medicine10. When asked, “are people with dissociative identity disorder often misdiagnosed?”, Dr. Spiegel said: “they are sometimes misdiagnosed as having schizophrenia. Another common misdiagnosis is borderline personality disorder.” Later in the article, Dr. Spiegel says, “typically those with dissociative identity disorder experience symptoms for six years or more before being correctly treated.” This shows not only how difficult it is to obtain treatment, but also that, even when treatment is given, it is frequently based on a misdiagnosis. If a person with DID is misdiagnosed as having schizophrenia, they may be prescribed antipsychotic medication, which dulls their emotions – a blunting that can be mistaken for therapeutic benefit and so lead to further dose increases. Dr. Spiegel remarks that “dissociation is a common coping mechanism,” noting that “many rape victims experience the crime as though they were floating above their bodies.” While dissociation is a natural human response to a traumatic event, it is when the trauma is sustained and repeated that dissociation can develop into DID.

Individuals’ Stories

Jeni Haynes is a woman with DID who has over 2,500 distinct personalities, or alters – though only six predominant ones. Jeni developed DID as a result of an intensely traumatic childhood, in which she was subjected to horrific physical and sexual abuse by her father, Richard Haynes. In the trial against Richard Haynes for this abuse, Jeni testified through her multiple personalities, allowing each one to front in turn. After the core personality of Jeni Haynes, there was Symphony, a four-year-old girl who endured much of the abuse and whom Jeni described as her most significant alter. Jeni addressed the court, telling them: “Symphony intended to testify in court for the whole thing. When my father raped Jennifer Haynes, he raped Symphony.” Jeni’s other alters included an eleven-year-old boy called Judas and a 17-year-old boy called Muscles11. This was the first trial in Australia in which an individual was allowed to give evidence through their alters. While Richard Haynes was convicted, that does not mean Jeni now has a normal life. Her DID causes her to struggle every day, and in her victim impact statement she said, speaking for herself and all of her alters: “we have to hide our multiplicity and strive for a consistency in behavior. Having 2,500 different voices, opinions and attitudes is extremely hard to manage”12.

Another example of an individual’s struggle with DID is that of a 25-year-old soldier. In an article written by the American Psychiatric Association13, she is referred to only as “Sandra”, a pseudonym to maintain confidentiality. Sandra was hospitalized due to sudden behavioral changes and episodes of acute memory loss. Under clinical hypnosis, it was discovered that she had a series of significant gaps in her memory, and she was also found to have swift and severe changes in her emotions. She then began psychotherapy, working through the memories of sexual abuse she had endured since the age of 11. Sandra was diagnosed with DID, and she continues to have psychotherapy as well as take antidepressants. She reportedly rarely dissociates now and has been able to establish stable relationships.


To conclude, dissociative identity disorder (DID) is a psychiatric condition that can present immense difficulty to those who have it: sufferers can feel completely disconnected from their surroundings, and it can be terrifying to be unable to remember what has happened to you, or to feel that you lack autonomy over your own body. Furthermore, DID is hugely under-diagnosed and misdiagnosed, which can delay the treatment people need – already a lengthy process of many years at the least. Greater awareness of DID is needed, as some people do not believe it is even a genuine condition. Educating people about DID means they can recognize when friends or family members show signs of the condition and can potentially help them attain treatment and support faster than if those individuals had waited to seek help themselves.

Samara Macrae, Youth Medical Journal 2022


1.   WebMD: “Dissociative Identity Disorder (Multiple Personality Disorder)” –

2.   American Psychiatric Association: “What Are Dissociative Disorders?” –

3.   Cleveland Clinic: “Dissociative Disorders” –

4.   International Journal of Social Psychiatry: “Dissociative Disorders in a Psychiatric Institution in India – a Selected Review and Patterns over a Decade” –

5.   Cleveland Clinic: “Dissociative Identity Disorder (Multiple Personality Disorder)” –

6.   American Association for Marriage and Family Therapy: “Dissociative Identity Disorder” –

7.   Healthline: “Dissociative Identity Disorder” –

8.   The American Journal of Psychiatry: “Hippocampal and Amygdalar Volumes in Dissociative Identity Disorder” –

9.   National Library of Medicine: “Hippocampal volume in women victimised by childhood sexual abuse” –

10.   American Psychiatric Association: “Expert Q&A: Dissociative Disorders” –

11.   The Sydney Morning Herald: “Woman to use multiple personalities in evidence against abusive father” –

12.   BBC News: “Dissociative Identity Disorder: The woman who created 2,500 personalities to survive” –

13.   American Psychiatric Association: “Patient Story: Dissociative Disorders” –


The 10/90 Gap and Its Impact on Malaria

By Samara Macrae

Published 11:18 EST, Thurs October 21st, 2021


‘Epidemic’ is defined by the Oxford English Dictionary as “a widespread occurrence of an infectious disease in a community at a particular time”. Malaria is classed as an epidemic, currently affecting over 100 countries, predominantly in tropical regions – and it is the fourth highest cause of death in children under the age of five1. Despite the extensive death toll, and the havoc wreaked on socio-economic conditions in areas of outbreak, such epidemics receive significantly less media coverage and humanitarian attention because they affect the developing world. Even after the original health threat has been managed, developing countries continue to face serious long-term effects compared with more economically developed ones. This is captured by what activists refer to as the ‘10/90 Gap’: the observation that only 10% of global health research funding is allocated to the diseases responsible for 90% of preventable deaths worldwide. The diseases that receive only this small share of research funding are often referred to as ‘neglected diseases’, typically because they predominantly affect lower-income countries where poverty and malnutrition are rife – conditions which exacerbate the spread of life-threatening communicable diseases. According to the World Health Organisation, 45% of the ‘disease burden’ in the poorest and most under-developed countries is derived from poverty. Therefore, these diseases cannot merely be treated medically; the deeper social problems in the affected regions need to be tackled as well.

Malaria is an example of a disease whose impact can be understated and disregarded due to societal prejudice against less economically developed regions of the world. Malaria mostly affects tropical regions, including (but not limited to) large zones of Africa – particularly sub-Saharan Africa – South America, the Dominican Republic, the Caribbean, and Central America2. While there are recorded cases of malaria in developed countries such as the USA and the UK, these are almost exclusively the result of travellers returning from countries where malaria is prevalent. Malaria disproportionately affects Africa: in 2019, 94% of malaria cases and deaths occurred on the African continent3. Nearly 50% of the global population was at risk of contracting malaria in 2019, and that same year there were approximately 229 million recorded cases of the disease worldwide. Despite the fact that malaria is both curable and preventable, it receives significantly less funding than diseases and conditions which disproportionately affect the developed world – such as obesity, the fifth most important risk factor for disease in developed countries. In 1986, the US spent approximately $39 billion on tackling obesity, and this had risen to $190 billion by 20054.

How malaria affects the body:

Malaria is caused by Plasmodium parasites, and there are five different species which can cause malaria in humans – the two most predominant being P. falciparum and P. vivax3. The former is the main cause of malaria in Africa, south-east Asia, and the Pacific, while the latter presents the greater threat in South and Central America. These Plasmodium species are transmitted by an animal vector: the female Anopheles mosquito. In rarer instances, malaria can be transmitted by sharing unsterilised needles, via blood transfusion, or from mother to foetus. When a female Anopheles mosquito takes a blood meal from a host infected with malaria, it injects saliva into the host’s skin while sucking blood through its proboscis. This saliva contains anticoagulant and mild anaesthetic compounds, which is why it is difficult to notice when the mosquito is feeding. As the mosquito takes up the infected blood, the male and female gametes of the malaria-causing Plasmodium fuse in the mosquito’s stomach. Cell division then takes place, leading to the formation of thousands of immature malarial parasites, which go on to invade the mosquito’s salivary glands. When the mosquito next takes a blood meal from an uninfected host, it injects its saliva – now containing the parasites – into the host’s skin. The parasites are thus able to infect the host’s bloodstream, resulting in the manifestation of the disease. One reason the majority of malaria cases and deaths occur in Africa is the long lifespan of the African vector species of female Anopheles mosquitoes: the parasite has a longer time to develop inside the mosquito, and so more parasites can be produced.


The symptoms of malaria typically appear 10 to 15 days after an individual is bitten by an infected mosquito3. Early symptoms can include headache, fever, and chills – all typically mild, and so not necessarily immediately recognisable as malaria. However, the severity of the disease soon increases: failure to receive treatment within 24 hours of the first symptoms of P. falciparum infection can allow the disease to progress swiftly, often culminating in death. Severe cases of malaria can also involve severe anaemia, liver damage, and multiple-organ failure. Children, especially those under five years of age, are the group most greatly affected by malaria – in 2019, 67% of all malaria deaths globally (approximately 274,000 people) were children below five. Other high-risk groups are pregnant women, non-immune migrants and travellers, and individuals with HIV or AIDS.


As malaria is transmitted by mosquito vectors, the most straightforward and accessible method of prevention is the mosquito net – made more effective still when treated with insecticide. This, in conjunction with personal use of insect repellent, has been shown to reduce the risk of malarial infection by up to 80%5. Wearing clothing that limits skin exposure can also reduce the chance of receiving a potentially fatal mosquito bite. Individuals travelling to countries where malaria occurs can obtain medication to prevent them from contracting the disease.

Malarial chemoprophylaxis is only available in European countries, and only for travellers to countries where malaria is prevalent – not for the inhabitants of the affected areas themselves6. Malarial chemoprophylaxis is classified into three groups in order to determine the most suitable drug for an individual; the drug recommended depends on factors such as the duration of potential exposure, the traveller’s age, and the climate of the destination. Antimalarial tablets can reduce the chances of becoming infected with malaria by approximately 90%7. The main types of antimalarial medication are atovaquone plus proguanil, doxycycline, mefloquine (Lariam), and chloroquine plus proguanil. Chloroquine plus proguanil is still available for travellers but is now rarely recommended due to its ineffectiveness against P. falciparum; it can still be prescribed if the individual is visiting an area where this parasite is less common, such as Sri Lanka.

Recently, there has been promising research into a new malaria vaccine, developed at the University of Oxford’s Jenner Institute. In a small clinical trial involving 450 children, this vaccine showed up to 77% efficacy – a dramatic increase on the current vaccine’s efficacy8. Undoubtedly, larger clinical trials are needed to ensure the safety and effectiveness of this vaccine, but surely this research should be pushed ahead – as the Covid-19 vaccines were – when hundreds of thousands of people die every year from malaria. This is not to dispute the urgent global need for vaccines against Covid-19, but when a disease is as widespread and life-threatening as malaria, it demands the same urgency to tackle its relentless annual death toll. Yet this is not the case, and one key reason is that malaria mainly affects lower-income, under-developed countries.

There is presently only one approved vaccine against malaria, sold under the brand name Mosquirix9. It requires four injections and, even then, offers only approximately 30% protection against severe malaria, and only for up to four years – raising questions as to whether the vaccine is cost-efficient. Additionally, there are concerns over its safety: in a clinical trial of Mosquirix, the children who received the vaccine had a risk of contracting meningitis ten times higher than the children who received the placebo. While there is insufficient evidence to establish causation, this finding has raised doubts about the vaccine’s safety and has, as a result, impeded its rollout. The new malaria vaccine could gradually replace Mosquirix, meaning that more individuals are protected against this disease.


Malaria is a curable disease and is treated using antimalarial drugs. The most common of these is chloroquine phosphate, but unfortunately this treatment is gradually being rendered ineffective by the increasing resistance of malarial parasites to it. Another type of antimalarial treatment is artemisinin-based combination therapy (ACT); an ACT is a combination of antimalarial drugs and is used mostly where there is resistance to chloroquine phosphate. Primaquine phosphate is another frequently used antimalarial drug, as is quinine sulfate with doxycycline10. Noticeably, many antimalarial drug names contain ‘quine’ or ‘quinine’, as they are often based on quinine, a chemical compound naturally derived from the bark of the cinchona tree.

Currently, the Mayo Clinic is carrying out a clinical trial entitled “A Study to Evaluate Intravenous Artesunate to Treat Severe Malaria in the United States”11, which hopes to make intravenous artesunate available for the treatment of severe malaria. As yet, there are no publications from this clinical trial.

The global management of malaria:

Currently, there are global initiatives attempting to end the spread of malaria. One such example is the Mekong Malaria Elimination (MME) programme organised by the World Health Organisation (WHO)12. The MME programme is working towards eliminating malaria in Myanmar, Cambodia, the Lao People’s Democratic Republic, China, Vietnam, and Thailand; it began in 2017 in response to the increasing ineffectiveness of certain antimalarial drugs caused by drug-resistant malarial parasites. On the 3rd of November 2020, Cambodia committed to completely eradicating P. falciparum by 202313. Dr Li Ailan, the WHO Representative to Cambodia, stated: “Cambodia, being very close to the goal, can be the first country in the region to eliminate P. falciparum malaria, serving as a champion in the Greater Mekong Subregion.” As part of this ambitious commitment, three main interventions have been set out: the distribution of mosquito screens and nets; weekly household fever screening, with any individual who has a fever being tested for malaria and treated if positive; and improved preventive measures for travellers to areas at risk of the spread of malaria.


Arguably, malaria as a disease is, in theory, relatively easy to prevent and treat: through insecticides, mosquito nets, vaccination, and antimalarials. In reality, the countries most affected by the disease do not have sufficient funds to provide these, let alone tackle the deep-rooted problems of poverty, poor access to clean drinking water and sanitation infrastructure, malnutrition, and food insecurity – which collectively establish an environment in which diseases like malaria can reach epidemic levels. There needs to be a greater collective, global effort to tackle so-called ‘poverty-related diseases’ – malaria, but also cholera, typhoid, and diphtheria, to name a few – as there was with smallpox and as there is now with Covid-19. That is not to criticise developed countries for using their funds to further healthcare research and medicine at home, as this is essential for advancing medicine, and understandably they want to improve their own healthcare first and foremost. Nor is it to say that wealthier nations should extensively increase the amount of aid they give to ‘fix’ the healthcare problems of other nations – but, when aid is given, it should perhaps be directed towards addressing the larger, longer-term socio-economic problems that allow such diseases to take hold.

When aid is given to reduce the spread of malaria in the worst-impacted regions, it should simultaneously be used to improve the conditions which allow the disease to manifest and become so widespread. However, simply providing money to ‘fix’ these socio-economic problems is not a straightforward answer, as it ignores factors such as corruption, debt, and the prioritisation of immediate humanitarian aid over longer-term social problems – which are harder to fix because of their longevity and severity. Referring back to the ‘10/90 Gap’, the balance between funding for diseases mainly affecting the developed world and for the ‘neglected diseases’ more prevalent in the developing world needs to be redressed. When the global disease burden is significantly greater in less developed regions, more aid needs to be directed there, rather than disproportionately towards less widespread and less prevalent diseases that impact more developed regions.

There is no single, clear solution to the problem of malaria – or, more broadly, to the ‘10/90 Gap’. However, it is undeniable that if malaria were as rampant across the developed world as it presently is in the developing world, there would be a global upheaval to tackle the disease, and the funding gap would not be so significant.

Samara Macrae, Youth Medical Journal 2021


1.   Medscape: “What is the mortality rate of malaria?” –

2.   Travel Health Pro: “Malaria” –

3.   World Health Organisation: “Malaria Key Facts” –

4.    Harvard TH Chan; School of Public Health: “Obesity Prevention Source” –

5.   Hill N, Lenglet A, Arnéz AM, Carneiro I. Plant based insect repellent and insecticide treated bed nets to protect against malaria in areas of early evening biting vectors: double blind randomised placebo controlled clinical trial in the Bolivian Amazon. BMJ. 2007;335(7628):1023.

6.   European Centre for Disease Prevention and Control: “Facts about malaria” –

7.   NHS: “Antimalarials” –

8.   TheScientist: “New Malaria Vaccine Shows Most Efficacy of Any to Date: Small Trial” –

9.   Sciencemag: “First malaria vaccine rolled out in Africa – despite limited efficacy and nagging safety concerns” –

10.   Mayo Clinic: “Malaria” –

11.   Mayo Clinic: “A Study to Evaluate Intravenous Artesunate to Treat Severe Malaria in the United States” –

12.   World Health Organisation: “Mekong Malaria Elimination Programme” –

13.   World Health Organisation: “Cambodia commits to eliminating Plasmodium falciparum malaria” –


The present and future role of 3D printing in medicine

By Samara Macrae

Published 11:23 EST, Weds October 13th, 2021


3D printing, also known as additive manufacturing, is a process that holds enormous potential – it is not only currently used in medicine but will undoubtedly continue to revolutionise the field in future. 3D printing began in the 1980s, and the technology has since been implemented across various areas of medicine – for example, medical imaging data can often be fed into a 3D printer to form a physical model of the digital image. In 2016, the use of 3D printing in medicine was valued at $713.3 million, and this is predicted to rise to $3.5 billion by 20251. Within medicine, 3D printing can additionally be used to produce implants, as well as in bioprinting. Other major applications include producing artificial human organs for transplants and making surgical procedures faster and more efficient.

How does 3D printing work?

Before a physical 3D model can be created, a graphic model must first be designed, using programs such as TinkerCAD or Fusion360. This digital model then needs to be divided into layers – a process called ‘slicing’2 – because the printer cannot process a 3D model in its entirety; it builds the object layer by layer. Once sliced, the design for each individual layer is fed into the printer, typically via a USB stick or wirelessly. 3D printing is an example of an additive process, in which a 3D object is created by depositing many layers of material on top of one another; each layer is a cross-section of the object being created. 3D printing began as a prototyping technique but has expanded into large-scale manufacturing because of how rapid the process is compared with other forms of industrial production.3 Manufacturing with a 3D printer can also be cheaper, as design iterations are easier and there is no need for expensive tooling or high labour costs to manage the machines. 3D printing is utilized extensively in car manufacturing, to produce individual vehicle parts on demand and en masse, and in a multitude of other industries, including aviation and consumer products such as eyewear and footwear.

Bioprinting and organ transplantation:

Bioprinting is an additive manufacturing process, similar to 3D printing, through which cells and other biomaterials are 'printed' to create biological structures in which living cells are able to divide and multiply [4]. The cells used to create complex bodily structures – such as skin, bones, and other organs – can be extracted directly from a patient. Adult stem cells can also be used, and they are cultivated into a bioink: a material used to produce artificial living tissue via 3D printing. Bioink can consist solely of the cells but can also contain a carrier material – typically a biopolymer gel – which provides a 3D framework to which the cells can attach and across which they spread as they multiply [5]. With this scaffold in place, the cells can be moulded into the desired shape.

Bioprinting is an active area of current research: Swansea University in the UK has recently developed a bioprinting process [6] by which bone matrix can be artificially produced using a regenerative biomaterial composed of calcium phosphate, polycaprolactone, gelatine, agarose, and collagen alginate. This could be used to correct severe and complex bone fractures, where otherwise the missing or damaged bone would be replaced with synthetic materials as part of the surgical procedure known as 'bone grafting'. If the 3D-printed bone matrix is used instead, over time it will fuse with the patient's bones, resulting in greater strength than synthetic materials would provide.

The prospects of bioprinting extend further still: for instance, the development of artificial corneas. Globally, there were approximately 12.7 million people in 2013 awaiting a corneal transplant, with 7 million of these individuals in India alone [7]. In South Korea in 2019, there were approximately 2,000 people requiring a cornea donation, and the average wait time for surgery there is six years. This is due to the lack of cornea donations in the country as well as the problems associated with the synthetic corneas currently available. These synthetic corneas are made from recombinant collagen or other chemical substances, such as synthetic polymers, and one predominant problem with them is that they are not always transparent after being implanted. This is because the natural structure of the cornea – a lattice of collagen fibrils, which gives the cornea its transparency – cannot yet be replicated synthetically.

However, a research team at the Pohang University of Science and Technology in South Korea, in conjunction with researchers at the Kyungpook National University School of Medicine, has worked to 3D print a cornea [8]. This was done using a tissue-derived bioink, making the cornea biocompatible with an individual's eye. Bioprinting was utilised to create this artificial cornea in such a way that its transparency is akin to that of a natural human cornea: while developing it, the joint research teams noticed that the collagen fibrils produced by the 3D bioprinting process formed a pattern similar to the lattice found in human corneas.

In other areas of bioprinting, developing artificial organs suitable for transplantation remains a more distant hope [9]. An example is a research project using 3D bioprinting of stem cells to create artificial, biocompatible kidney tissue, led by the Murdoch Children's Research Institute (MCRI) in Australia alongside the American biotech company Organovo. A bioink created from stem cells was printed to produce an artificial kidney approximately the size of a human fingernail. Despite the small size, these bioprinted kidneys contained structures very similar to those of human kidneys – including identifiable nephrons and a visible division between the cortex and medulla. While further research is needed before artificial kidneys are suitable for human transplantation, these kidneys are already functional for drug testing, predominantly for toxicity, in place of animal testing. Professor Melissa Little from the MCRI stated: "The pathway to renal replacement therapy using stem cell-derived kidney tissue will need a massive increase in the number of nephron structures present in the tissue to be transplanted." The research is auspicious but requires considerably more time and effort.

The use of 3D printing in surgery:

3D printing is already in use for many surgical procedures, and this will continue to increase as the technology develops. An example is using 3D printing to create patient-specific implants (PSIs), which are the exact complementary shape for the patient – for instance, craniomaxillofacial reconstruction implants, used predominantly in head and neck surgery [10]. Conventional implants have to be bent into shape during surgery, which is time-consuming and likely to place unnecessary stress on the implant, as it must be bent multiple times. In an article published on ScienceDirect entitled 'A Systematic Approach for Making 3D-Printed Patient-Specific Implants for Craniomaxillofacial Reconstruction' [10], the researchers describe an approach to this form of surgery which has resulted in 41 successful surgeries using 3D-printed patient-specific implants. The approach begins with using SolidWorks software to create a graphic design model to print. The 3D-printed product then undergoes a series of treatments – including heat and tension treatments – before being sterilised and used in surgery. The article furthermore states that the use of these 3D-printed patient-specific implants "reduces surgery time and shortens patient recovery time".

A specific example of a 3D-printed patient-specific implant is a lower jaw implant created for a child in China in 2018 [11]. This child had a mandibular tumour which, if removed, would cause a severe facial malformation – yet it needed to be removed, as he struggled greatly with tasks such as talking, eating, and even opening his mouth. He therefore underwent surgery in which the tumour was removed and the excised part of his lower jaw was replaced with a titanium-alloy implant, 3D printed using models of the child's jaw to give a patient-specific fit.

A further example is a 3D-printed patient-specific implant of the ossicles, in 2019. This implant was, again, made of titanium, and replaced the ossicles of a patient who had lost their hearing after they were damaged in a car accident. The medical team carrying out this innovative surgical procedure was led by Professor Mashudu Tshifularo of the University of Pretoria in South Africa, and as a result of this work, the patient's hearing was restored [11]. This 3D-printed middle-ear replacement surgery was the first in the world, and according to the news platform 'Good Things Guy', Professor Tshifularo said: "By replacing only the ossicles that aren't functioning properly, the procedure carries significantly less risk than known prostheses and their associated surgical procedures" [12].


To conclude, while the technology of 3D printing in medicine will certainly progress in the future, it is already in use and being researched further. The promising nature of this process means that surgical procedures can continue to become safer and more time-efficient, and there are hopes of artificially creating biocompatible tissues and organs. This could revolutionise organ transplantation – not only reducing waiting times but also decreasing the risk of rejection. Furthermore, this technology could mean that implants fit patients better – significant given that hip and knee replacements are among the most common surgical procedures performed worldwide. The research possibilities for this technology are boundless, and it is one of many examples of computer technology merging with – and arguably coming to dominate – the field of medicine in order to improve every aspect of it.

Samara Macrae, Youth Medical Journal 2021


  1. Medical Device Network: "3D printing in the medical field: four major applications revolutionising the industry"
  2. Interesting Engineering: "How Exactly Does 3D Printing Work?"
  3. 3DPrinting.COM: "What is 3D Printing?"
  4. Cellink: "Bioprinting Explained"
  5. All3DP: "What Exactly is Bioink?"
  6. Medical Device Network: "The future of bioprinting: A new frontier in regenerative healthcare"
  7. JAMA Network: "Global Survey of Corneal Transplantation and Eye Banking"
  8. Medical Device Network: "3D-printed artificial corneas could replace donor transplants"
  9. XINHUANET: "Aussie research on bioprinting mini kidney raises hope for lab-grown transplantation"
  10. ScienceDirect: "A Systematic Approach for Making 3D-Printed Patient-Specific Implants for Craniomaxillofacial Reconstruction"
  11. 3Dnatives: "Top 12 3D Printed Implants"
  12. AFROTECH: "Mashudu Tshifularo Makes History By Performing World's First 3D-Printed Middle-Ear Transplant"

The Complexities of Colour Blindness and its Impacts on Individuals

By Samara Macrae

Published 12:33 PM EST, Tue August 17, 2021


While commonly referred to as 'colour blindness', colour vision deficiency is the term used when an individual's colour vision is impaired such that they may not be able to distinguish between certain colours [1]. Strictly speaking, colour blindness is only where the individual cannot see any colour at all and their vision is exclusively in black and white (monochromacy). This condition is very rare, while colour vision deficiency (commonly called colour blindness) can affect up to 1 in 12 men and 1 in 20 women. The most common forms are protanopia and deuteranopia.

Monochromacy, or complete colour blindness, can be caused by two of the individual's three sets of cones (short-wavelength, medium-wavelength, and long-wavelength) either not functioning correctly or simply not being present in the retina [2]. This results in the individual not being able to see the full spectrum of colour that a person with normal (trichromatic) vision can. Achromatopsia is where there are no functional cone cells at all, and so vision is only in varying shades of black and white.

Variations of Colour Deficiency Vision

Red-green colour blindness is the most common form of colour blindness and is divided into two types: protan colour blindness, a reduced sensitivity to red light, and deutan colour blindness, a reduced sensitivity to green light [3]. Colour vision is controlled by cones in the retina, the layer of the eye onto which light is focused; when some of these cones are ineffective or not present, the individual's colour vision is affected.

Protanopia is the result of missing long-wavelength cones (L-cones) in the retina and affects 1.01% of men but only 0.02% of women. People with protanopia are 'dichromats': their cones can only detect short and medium wavelengths. Red-green colour blindness can also occur when L-cones are defective but still present (protanomaly), in which case individuals can have varying severities of colour deficiency – they are referred to as anomalous trichromats, as their cones can still detect short, medium, and long wavelengths.

Deuteranopia, the second form of red-green colour blindness, is also called green-blindness [4]. In cases of deuteranopia, the medium-wavelength-sensitive cones are missing, and so the individual can only differentiate between two or three different hues (typically blue, yellow, and brown), while a person with normal vision can distinguish between the seven hues of visible light. As the specific cones are missing entirely in deuteranopia, people with this condition are dichromats. The anomalous trichromats here are individuals with deuteranomaly (green-weakness), in which the green-sensitive cones are deficient. Deuteranomaly can be very mild – it covers any form of colour vision deficiency between (very close to) normal vision and deuteranopia – and affects 5% of the global male population, but only 0.35% of the global female population.

Tritanopia (blue-yellow colour blindness) is where the short-wavelength cones are missing or otherwise impaired [5]. In tritanopia these cones are completely absent, leaving only long- and medium-wavelength cones in the retina, so individuals with tritanopia are dichromats. Tritanomaly is where the short-wavelength cones are deficient in some way, often due to a mutation.
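The taxonomy above follows a simple rule: which of the three cone types are functioning determines the named condition. As a purely illustrative sketch (the names and groupings are from this article; real diagnosis is clinical, and the anomalous forms are omitted for simplicity):

```python
# Map which cone types function -> the condition names used in the text.
# S/M/L = short-, medium-, long-wavelength cones. Illustrative only.

def classify(cones):
    """cones: set of functioning cone types, a subset of {'S', 'M', 'L'}."""
    missing = {"S", "M", "L"} - set(cones)
    if not missing:
        return "trichromat (normal colour vision)"
    if len(missing) == 1:
        name = {"L": "protanopia", "M": "deuteranopia", "S": "tritanopia"}
        return name[missing.pop()] + " (dichromat)"
    if len(missing) == 2:
        return "monochromacy"
    return "achromatopsia"  # no functional cones at all

print(classify({"S", "M"}))  # protanopia (dichromat)
```

One missing cone type gives a dichromat; two give monochromacy; none functioning at all gives achromatopsia, matching the descriptions above.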


Colour blindness is a sex-linked genetic disorder, carried on the X chromosome. This is why men are more likely to have colour blindness: they have only one X chromosome and so require only one recessive allele coding for colour blindness. Women have two X chromosomes, and so two recessive alleles are needed – meaning that if a woman has the allele for colour blindness on only one X chromosome, she will be a carrier but will not have the condition herself. As a result, a man with colour blindness can pass the allele on to his daughter, who inherits one X chromosome from him and one from her mother, but cannot pass it on to his son, who inherits an unaffected Y chromosome from him. Unlike protanopia and deuteranopia, tritanopia/tritanomaly is not a sex-linked trait – it is carried on chromosome 7 rather than the X chromosome – and thus men and women are affected equally by it, though it is a rare form of colour vision deficiency. Additionally, though less commonly, colour blindness or colour vision deficiency can also result from damage to the eye or optic nerve, and so is not necessarily congenital.
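The inheritance pattern described above can be worked through with a small Punnett-square calculation. This sketch (a toy model, with 'Xc' standing for an X chromosome carrying the recessive colour-blindness allele) crosses a carrier mother with a colour-blind father:

```python
from itertools import product

def offspring(mother, father):
    """All equally likely child genotypes from the parents' sex chromosomes."""
    return [tuple(sorted([m, f])) for m, f in product(mother, father)]

def phenotype(genotype):
    """Classify a child as affected, carrier, or unaffected."""
    sex = "boy" if "Y" in genotype else "girl"
    # Affected only if every X chromosome carries the allele (recessive).
    affected = all(c == "Xc" for c in genotype if c != "Y")
    carrier = sex == "girl" and genotype.count("Xc") == 1
    return sex, "affected" if affected else ("carrier" if carrier else "unaffected")

# Carrier mother ('X', 'Xc') crossed with colour-blind father ('Xc', 'Y'):
children = [phenotype(g) for g in offspring(("X", "Xc"), ("Xc", "Y"))]
for child in children:
    print(child)
```

Each of the four outcomes is equally likely: every daughter inherits the father's affected X (so she is at least a carrier, and affected if her mother also passed the allele on), while a son's status depends only on his mother, since he receives the father's Y.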


While people who have colour blindness or colour vision deficiency may never realise they have the condition, tests to ascertain whether a person is colour blind are widely accessible. An example is the Ishihara Test for Colour Blindness. This diagnostic test was created by Dr Shinobu Ishihara, an ophthalmologist, who was asked by the Japanese Army (in which he served as a military doctor) to devise a test to use on conscripts. The Ishihara Test can be used to detect red-green colour blindness and deficiencies, but not the rarer blue-yellow form [6]. In this test, an individual is shown a series of plates, each a large circle made up of many small coloured circles. Within each large circle, some of the smaller circles are coloured differently so as to form a specific number, which is different for each image. Whether or not a person can make out the number in each image helps to indicate whether they have normal vision or a colour vision deficiency.


Colour blindness is incurable, although some forms of colour deficiency can be lessened using corrective lenses or glasses. Dr Ivan Schwab, Professor of Ophthalmology at the University of California, says that such glasses or lenses "[enhance] the distinction between red and green" for the person wearing them, although full colour vision is not achievable with them [7]. He also states: "Colour blindness glasses are made with certain minerals to absorb and filter out some of the wavelengths between green and red that could confuse the brain". With fewer confusing wavelengths reaching the person's cones, colours can become easier to distinguish. However, these corrective lenses and glasses have no effect on the optic nerve, brain, or cone cells; furthermore, they are often expensive yet can yield minimal results, and they can worsen vision at night because they work by reducing the amount of light entering the eyes.

Difficulties and Lack of Accessibility

Although many individuals with colour blindness or colour vision deficiency, as well as charities such as Colour Blind Awareness, are campaigning for these conditions to be classified as disabilities, they currently are not. Under the UK Equality Act 2010, a disability is defined as "a physical or mental impairment that has a 'substantial' and 'long-term' negative effect on your ability to do normal daily activities" [8]. A simple but common example of how colour blindness can affect everyday life is a person with deficient colour vision being unable to differentiate between unripe and ripe fruit, or raw and cooked meat. Children especially can struggle in education as a result of colour vision deficiency – exam papers in particular may not be fully accessible to them – and later in life, a person with colour blindness cannot become a pilot or enter the army. Colour vision deficiency can also limit other career options, particularly jobs involving heavy machinery, aviation, or driving. The consequences can even be fatal: to a person with protanopia, red and green traffic lights can all look white or pale yellow, making road accidents more likely. In Australia, since 1994, individuals with either protanopia or protanomaly have not been able to obtain a driving licence due to this increased risk.


To conclude, colour blindness is a complex condition which is frequently misunderstood due to its multiple variations. There are also frequent misconceptions as to what 'colour blindness' actually entails – strictly, it is not any difficulty distinguishing between colours, but rather no colour vision at all. Education systems should work to diagnose more children who have colour vision deficiency or colour blindness, as these conditions can impede daily life and schoolwork if children are unaware of them – and education facilities cannot make schoolwork more accessible without this knowledge. Improved testing and earlier diagnosis – especially in the early years – can also mean that individuals do not suddenly find themselves unable to pursue a specific career path or obtain a driving licence later in life because they were unaware of their condition. While not life-threatening, colour blindness and colour vision deficiency can have significant impacts on daily life, and simply diagnosing these conditions earlier can help improve accessibility in all aspects of life for these individuals.

Samara Macrae, Youth Medical Journal 2021


1.   Mayo Clinic: "Colour Blindness"

2.   Cambridge Cognition: "Could Colour Blindness be Affecting the Results of your Study?"

3.   Colblindor: "Protanopia"

4.   Colblindor: "Deuteranopia"

5.   Colblindor: "Tritanopia"

6.   Eye Magazine: "Nine decades on, a Japanese army doctor's invention is still being used to test colour vision"

7.   American Academy of Ophthalmology: "Do Colorblindness Glasses Really Work?"

8.   GOV.UK: "Definition of disability under the Equality Act 2010"