Biomedical Research

mRNA Vaccines


mRNA vaccines are one of many types of vaccine used to provide artificial active immunity. Despite having been in development for several decades, the technology only gained widespread attention with its use in Moderna’s and Pfizer-BioNTech’s COVID-19 vaccines. Following their success during the pandemic, clinical research into mRNA vaccines has accelerated, with the ambition of creating cancer vaccines. 

How the COVID-19 mRNA vaccine works

In the COVID-19 vaccine, the genetic material coding for the spike protein on the surface of the SARS-CoV-2 virus is isolated and used to create an mRNA molecule. The mRNA is then packaged into either a viral vector or a lipid nanoparticle so that, once inside the body, it can enter cells. There, the mRNA is translated at ribosomes to produce spike proteins, which are displayed on the cell surface membrane. This stimulates an immune response, producing both antibodies and memory cells, allowing a faster and stronger secondary immune response upon infection.1

Using mRNA vaccines against cancer

More recently, there has been an influx of clinical trials aimed at utilising mRNA vaccines to fight cancer. Rather than preventing the development of cancer, these vaccines are designed to help the immune system recognise and remove it. There have been two approaches to this, with some companies focussing on a general vaccine and others on a personalised one.

A general vaccine would contain mRNA coding for proteins commonly found across a range of cancers, stimulating an immune response against the respective antigens once they are synthesised. The two main advantages of this type of vaccine are its low cost and the scope for large-scale production and distribution. Current trials are targeting cancers such as advanced melanoma, prostate cancer and ovarian cancer. 

However, greater strides have been made in creating personalised vaccines, which code for neoantigens: proteins specific to an individual’s cancer. Since each cancer tends to present uniquely, it is important to identify the relevant antigens to ensure a high level of effectiveness; however, this process can be expensive, costing several thousand pounds per dose. Ongoing phase II trials with personalised vaccines are targeting diseases such as melanoma and colorectal cancer.2

The image below, from BioNTech, summarises how a personalised cancer vaccine is created and subsequently stimulates an immune response:

There are numerous options for the delivery format of an mRNA vaccine. One is introducing the mRNA into dendritic cells (DCs), in a process known as transfection, creating a cell-based cancer vaccine. Another is injecting the mRNA directly into the body without a carrier; whilst this technique is cheaper and quicker, the unprotected mRNA is prone to degradation by enzymes, limiting its benefits. Currently, the most promising delivery method mirrors that of the COVID-19 vaccines: lipid nanoparticles. These are easily taken up by cells through endocytosis, followed by endosomal escape, enabling the mRNA to be translated at ribosomes to produce the desired antigen.3

Advantages and disadvantages

One of the greatest advantages of mRNA vaccines is the speed with which they can be developed; Moderna, for example, completed the whole process from design to manufacturing in just seven weeks. To adapt the technology to another disease, all that needs to be changed is the mRNA base sequence, so that the desired protein is produced. Although this makes it seem as if we could develop mRNA vaccines for every disease, the necessary, lengthy clinical trials mean that very few are approved for use today.4

Another advantage is that the vaccine cannot integrate into the genome, as the mRNA is broken down quickly after translation and cells lack the enzyme reverse transcriptase needed to convert it into DNA. This removes the risk of insertional mutagenesis (mutations caused by the insertion of extra DNA into the genome), which could otherwise have drastic effects on polypeptide synthesis and impact phenotype.5

However, one of the main limitations of mRNA is its instability at higher temperatures, and therefore the need for it to be transported and stored in freezers. This is a major obstacle for those living in rural areas, as well as in developing countries, both because of high transportation costs and a lack of suitable infrastructure. Another limitation is the potential for long-term impacts, which are yet to be seen, as the technology is relatively new compared to other forms of vaccine.4


Overall, the field of mRNA vaccine research is continually growing, with large strides being made towards a future where we can use it to treat a range of diseases. Although none have yet been approved for cancer treatment, there are several ongoing trials by a number of pharmaceutical companies, many of which are showing promising results. This technology has the potential to transform the way we tackle cancer and is definitely something to look out for in the near future.


  1. Understanding COVID-19 mRNA vaccines [Internet]. [cited 2023 Mar 11]. Available from: 
  2. Sanderson K. How close are we to developing an mRNA cancer vaccine? [Internet]. The Pharmaceutical Journal. 2022 [cited 2023 Mar 11]. Available from: 
  3. Vishweshwaraiah YL, Dokholyan NV. mRNA vaccines for cancer immunotherapy. Frontiers in Immunology. 2022;13. 
  4. mRNA vaccines – here’s everything you need to know [Internet]. World Economic Forum. [cited 2023 Mar 11]. Available from: 
  5. Lorentzen CL, Haanen JB, Met Ö, Svane IM. Clinical advances and ongoing trials of mRNA vaccines for cancer treatment. The Lancet Oncology. 2022;23(10). 
Biomedical Research

HeLa Cells: Exploring Their Importance and Ethical Issues


For many decades, scientists had attempted to create a cell line that could multiply indefinitely, a feat which, if achieved, could revolutionise science. Every endeavour, however, ended in the same futile outcome. That changed in 1951, when 31-year-old Henrietta Lacks visited Johns Hopkins for cervical cancer treatment and a surgeon took a sample of her tumour¹. Little did anyone know that this sample would grow to become the first immortal human cell line (HeLa) and contribute so much to modern medicine.

Contributions to medicine: A timeline

Following the revelation that the HeLa cell line could multiply at a prolific rate and stay alive in culture for long periods of time, its popularity grew. In fact, it is estimated that if all the HeLa cells ever grown were laid in a line, they would wrap around the Earth three times! Samples were sent across the globe for scientists to use in research, the results of which transformed treatment methods, many of which we still use today. Below is a short timeline covering a few of the many ways in which the cells revolutionised science.

1953: The polio vaccine

During the development of the polio vaccine, scientists required cells on which to test its efficacy, and HeLa cells were found to be more susceptible to polio infection than other cell lines. The Tuskegee Institute, also the site of the infamous syphilis studies, was chosen as the location of a factory dedicated to manufacturing the cells. HeLa cells were cultured at a never-before-seen industrial scale, enabling rapid clinical trials and rollout of the vaccine². Since its implementation, two of the three strains of the disease have been globally eradicated, wild poliovirus cases have fallen by 99%, and millions of lives have been saved³.

1964: Treatment for blood disorders

Hydroxyurea is now a common treatment for a range of blood disorders, from cancer to sickle cell anaemia, and testing on HeLa cells laid the groundwork for its approval. A group of researchers found that the drug was not only able to reduce cancerous growth rates, but also to prevent ‘sickle-shaped’ red blood cells⁴. It achieves this by increasing levels of foetal haemoglobin, making the erythrocytes larger and more flexible and alleviating symptoms of the disease⁵.

1983: HPV/cervical cancer

In the 1980s, a virologist analysed HeLa cells and found that they contained HPV-18, which had caused Henrietta Lacks’ cervical cancer by switching off a tumour suppressor gene. This particular strain is considered one of the most dangerous, and further research enabled the development of the HPV vaccine, which is now commonly administered to teenage girls⁶.

2020: Covid-19

To this day, HeLa cells are being used in research, and they were particularly useful in studying the infectivity of the SARS-CoV-2 virus. A 2020 study found that the virus infected these cells far less readily than expected, which prompted research into the cause. It was already known that other coronaviruses enter cells via a molecule called ACE2, commonly found on the surface of some body cells. Modifying the HeLa cells to produce and display this molecule enabled the virus to infect and replicate inside them⁷. Studies such as this paved the way for the development of the vaccine, which is estimated to have saved 20 million lives in one year⁸. 

These are just a few of the many ways in which HeLa cells have been used in scientific research to further our understanding of disease. However, over the years, the cells have been subject to controversy due to ethical concerns.

Ethical issues:

The story of Henrietta Lacks was largely unknown until it was brought to public attention by author Rebecca Skloot through her bestseller ‘The Immortal Life of Henrietta Lacks’. The book raised awareness of informed consent in scientific research: when the tissue sample was taken from Lacks, there was no legal or ethical requirement for doctors to obtain permission from the patient¹. As a result, it was only 25 years after Henrietta Lacks’ death that her family found out about the cell line, by which point the cells had contributed to the polio vaccine and cancer research, and had even been sent to space!

Since the 1950s, the situation around informed consent has vastly improved, with much tighter regulation of patients’ rights. Yet this leads one to wonder what would have happened had informed consent been required at the time. If Henrietta had had the choice and declined, would we be living in a very different world? Would some of the advancements that we take for granted today not exist? The trade-off between ethics and public benefit remains controversial, but greater awareness enables better decisions to be made. 

Additionally, there stands the question of who should benefit financially from the cells. Whilst large pharmaceutical companies profited from research on HeLa cells, the Lacks family remained in poverty, unable to afford proper health coverage or support themselves. Whether the family should profit from the cells remains controversial; however, a stride was made in 2013 through an agreement with the National Institutes of Health (NIH), under which two members of the Lacks family are now involved in deciding who is permitted to use HeLa genome data⁹.


Overall, the HeLa cell line has played a major role in the development of modern medicine, aiding us in curing and alleviating a range of diseases. Whether helping to eradicate polio or treat blood disorders, there is no doubt that many lives have been saved. However, this does not mean that the ethical issues should be overlooked, and although informed consent has come a long way since the 1950s, we still face several challenges. Notably, in developing countries there have been cases of violation of consent, often involving the misinforming of patients about the potential dangers associated with procedures and trials. Looking to the future, we need to aim for a balance between obtaining tissues ethically and still having a supply large enough to conduct meaningful research. Whether this can be achieved remains unknown, but at least we are taking strides in the right direction.

Nyneisha Bansal, Youth Medical Journal 2023


1. Skloot R. The immortal life of Henrietta Lacks. New York: Crown Publishers; 2010.

2. Turner T. Development of the Polio Vaccine: A Historical Perspective of Tuskegee University’s Role in Mass Production and Distribution of HeLa Cells. Journal of Health Care for the Poor and Underserved. 2012;23(4a):5-10.

3. Poliomyelitis (polio) [Internet]. [cited 24 August 2022]. Available from:

4. Significant Research Advances Enabled by HeLa Cells [Internet]. Office of Science Policy. [cited 24 August 2022]. Available from:

5. [Internet]. [cited 24 August 2022]. Available from:

6. Samuel L. 5 important ways Henrietta Lacks changed medical science [Internet]. STAT. 2017 [cited 24 August 2022]. Available from:

7. Jackson N. Vessels for Collective Progress: the use of HeLa cells in COVID-19 research – Science in the News [Internet]. Science in the News. 2020 [cited 24 August 2022]. Available from:

8. COVID-19 vaccines saved an estimated 20 million lives in 1 year [Internet]. CIDRAP. 2022 [cited 24 August 2022]. Available from:

9. Frequently Asked Questions | Johns Hopkins Medicine [Internet]. [cited 26 August 2022]. Available from:

Biomedical Research Health and Disease

Stem Cells: Why Are They So Crucial In Medicine?

Stem cells constitute an intriguing and promising field of medicine because of their ability to regenerate and heal damaged tissue. This article covers the biology of stem cells, their pros and cons, and their promise for furthering the study of medicine.


Stem cells are the raw materials of the body: the cells that give rise to all other cells with specialized roles. Under the appropriate conditions, whether in the body or in a laboratory, these unique human cells divide to generate new cells known as daughter cells. These daughter cells can either become new stem cells or become specialized cells with a more narrowly defined function, such as bone, brain, heart, or blood cells. This can include everything from muscle cells to brain cells, and in some situations stem cells can also repair damaged tissues. No other cell in the body has the capacity to naturally produce different cell types. Because of these distinguishing traits, stem cells have historically been assumed to be everlasting and ageless.

Types of Stem Cells 

Every organ and tissue in your body is built on stem cells. There are numerous types of stem cells that originate in various parts of the body or form at various stages of our lifetimes. These include embryonic stem cells, which exist only during the early stages of development, as well as various tissue-specific (or adult) stem cells, which appear during fetal development and remain in our bodies throughout our lives. All stem cells have the ability to self-renew (produce copies of themselves) and differentiate (develop into more specialized cells). Aside from these two fundamental capacities, stem cells differ greatly in what they can and cannot do, and in the conditions under which they can do specific things. There are three fundamental types of stem cells: embryonic stem cells, adult stem cells, and induced pluripotent stem cells.

  • Embryonic Stem Cells

Embryonic stem cells are extracted from the blastocyst, a mostly hollow ball of cells that, in humans, forms three to five days after an egg cell is fertilized by a sperm. A human blastocyst is around the size of the dot above this “i.” During normal development, the cells within the inner cell mass give rise to the more specialized cells that form the complete body: all of our tissues and organs. When scientists extract the inner cell mass and cultivate it in a particular laboratory environment, the cells retain the qualities of embryonic stem cells. Embryonic stem cells are pluripotent, meaning they can give rise to every cell type in the fully developed body except the placenta and umbilical cord. These cells are extremely significant because they provide a renewable supply for researching normal development and disease, and for evaluating drugs and other therapies. Human embryonic stem cells have predominantly been produced from blastocysts created by in vitro fertilization (IVF) that were no longer required for assisted reproduction. (3) Scientists seek to learn more about how these cells differentiate as they develop. As we learn more about these developmental processes, we may be able to apply them to stem cells produced in a lab and perhaps regenerate tissues such as the liver, intestines, nerves, and skin for transplantation. (4)

Figure 1: The Journal Of Clinical Investigation 

  • Adult Stem Cells

Adult stem cells, also known as somatic stem cells, are undifferentiated cells found in a variety of organs in the bodies of nearly all organisms, including humans. They have been found in tissues including skin, heart, brain, liver, and bone marrow, and unlike embryonic stem cells, which can become any cell in the body, they are normally restricted to becoming cell types of the tissue or organ in which they reside. Adult stem cells are therefore considered multipotent, meaning they can differentiate only into certain types of body cells, not into any type of cell. (6) These cells, which can live in tissue for decades, replace cells that are lost as needed, such as in the daily generation of new skin in humans. (5) Most adult tissues, including bone marrow and fat, contain a tiny number of these stem cells. Compared to embryonic stem cells, adult stem cells have a more limited potential to give rise to various bodily cells, and until recently they were thought capable of producing only similar types of cells; for example, researchers previously believed that stem cells found in bone marrow could only give rise to blood cells. However, new research reveals that adult stem cells may generate a wider variety of cell types; for instance, bone marrow stem cells might be able to develop into heart or bone cells. This research has resulted in early-stage clinical trials to assess usefulness and safety in humans. (1)

Using genetic reprogramming, scientists have successfully turned ordinary adult cells into stem cells. By altering the genes that adult cells express, researchers can reprogram them to behave like embryonic stem cells. This new technology may allow reprogrammed cells to be used instead of embryonic stem cells, and may prevent immune system rejection of the new stem cells. However, scientists are not yet sure whether using altered adult cells could have adverse effects in humans. Researchers have been able to transform ordinary connective tissue cells into functioning heart cells; in experiments, animals with heart failure that were injected with fresh heart cells showed improved heart function and survival time. (1)

Types of Adult Stem Cells 

  • Hematopoietic Stem Cells (Blood Stem Cells)
  • Mesenchymal Stem Cells
  • Neural Stem Cells
  • Epithelial Stem Cells
  • Skin Stem Cells

Figure 2: ScienceDirect

  • Induced Pluripotent Stem Cells 

Induced pluripotent stem (iPS) cells are lab-engineered cells that have been transformed from tissue-specific cells, such as skin cells, into cells that behave like embryonic stem cells. They are a middle ground between adult stem cells and embryonic stem cells. iPS cells are generated by inserting embryonic genes into a somatic cell (such as a skin cell), causing it to revert to a “stem cell like” state. iPS cells are important tools for scientists to learn more about normal development and about disease onset and progression, and for creating and testing new medications and therapies. While iPS cells share many of the same properties as embryonic stem cells, such as the potential to give rise to all cell types in the body, they are not identical, and scientists are trying to figure out what these distinctions are and what they mean. For example, the first iPS cells were created by inserting extra copies of genes into tissue-specific cells using viruses; researchers are now exploring a variety of alternative methods for creating iPS cells, with the goal of eventually using them as a source of cells or tissues for medical therapies. (3) These cells, like ESCs, are considered pluripotent. This method of genetic reprogramming to make embryonic-like cells, reported in 2007, is still new and requires several more years of research before it can be used in clinical therapy. (4) 

The Importance Of Stem Cells

Stem cells may benefit your health in a variety of ways and through a variety of novel treatments in the future. Researchers believe stem cells will be used to help build new tissue. For example, healthcare providers may one day be able to treat patients with persistent heart disease by cultivating healthy heart muscle cells in the lab and transferring them into damaged hearts. Other treatments could target type 1 diabetes, spinal cord injury, Alzheimer’s disease, and rheumatoid arthritis. New drugs could also be evaluated on cells derived from pluripotent stem cells.

Stem cells, with their unique regenerative powers, hold fresh promise for treating diseases such as diabetes and heart disease. However, substantial work remains to be done in the laboratory and clinic to understand how to employ these cells in cell-based therapies to cure disease, a field often known as regenerative or reparative medicine. Laboratory studies of stem cells allow scientists to understand the cells’ basic properties and what distinguishes them from other types of cells. Scientists are already employing stem cells in the lab to test novel medications and to create model systems for studying normal growth and determining the causes of birth defects. Stem cell research continues to enhance our understanding of how an organism grows from a single cell and how healthy cells replace damaged cells in adult organisms. It is one of the most exciting areas of modern biology, yet, as with many burgeoning domains of scientific endeavor, it raises new questions as quickly as it generates new discoveries. (4)

Although adult stem cell research is encouraging, adult stem cells might not be as adaptable and resilient as embryonic stem cells. The potential for using adult stem cells to cure diseases is constrained by the fact that not all cell types can be produced from them. Adult stem cells are also more likely to contain abnormalities, whether caused by chemicals or other environmental hazards, or by errors made during replication. On the other hand, adult stem cells have been discovered to be more versatile than previously imagined. (1)

Stem Cell Line

A stem cell line is a collection of cells grown in vitro that all descend from a single initial stem cell. The cells in a stem cell line continue to multiply without differentiating into other types of cells. Ideally, they continue to produce more stem cells and remain free of genetic defects. Groups of cells can be extracted from a stem cell line and shared with other researchers, or frozen for future use.

Potential Therapies using Stem Cells 

Stem cell therapy, a form of regenerative medicine, uses stem cells or their derivatives to stimulate the body’s natural healing process in diseased, dysfunctional, or injured tissue. It is the next step beyond organ transplantation, employing cells rather than donor organs, which are in short supply. Researchers cultivate stem cells in a lab, where the cells can be manipulated to specialize into particular cell types, such as heart muscle cells, blood cells, or nerve cells. The specialized cells can then be injected into the patient; if the patient has heart disease, for example, the cells might be injected into the heart muscle, where the transplanted healthy heart muscle cells could help repair the injured muscle.

Embryonic Stem Cell (ESC) Therapies

ESCs have the potential to treat some diseases in the future. Scientists are still learning how ESCs differentiate, and once this process is better understood, the objective is to apply that knowledge to direct ESCs to develop into the cell type required for patient therapy. Diseases being targeted by ESC therapy include diabetes, spinal cord injury, muscular dystrophy, heart disease, and vision and hearing loss. (4) 

Adult Stem Cell Therapies

For more than 40 years, bone marrow and peripheral blood stem cell transplants have been used to treat blood disorders such as leukemia and lymphoma, among others. Scientists have since discovered that stem cells can be found in nearly all parts of the body, and research is ongoing into how to identify, remove, and multiply these cells for future use in therapy. Scientists hope to develop treatments for conditions such as type 1 diabetes, and to restore cardiac muscle after a heart attack. They have also demonstrated the potential to reprogram ASCs to transdifferentiate (revert to a cell type other than the one they were replenishing in the local tissue). (4)

Induced Pluripotent Stem Cell Therapies

Therapies based on iPSCs are intriguing because a recipient’s own somatic cells can be reprogrammed to an “ESC-like” state, and the required cell types can then be produced by differentiating these cells. This appeals to physicians because it avoids the issues of histocompatibility and lifelong immunosuppression that arise when donor stem cells are used in transplants. iPS cells are pluripotent cells that mirror most ESC traits, but they do not currently carry the ethical baggage of ESC research and use, because iPS cells are not derived from embryos and have not been induced to form the outer embryonic layer essential for development into a human individual. (4)

Pros and Cons of Stem Cells

Figure 3: University of Nebraska Medical Center: Types of Stem Cells

Potential Problems in using Stem Cells

Stem cells require considerably more research before their use can be expanded. Scientists must first discover more about how embryonic stem cells develop; this will teach them how to control which kinds of cells are produced from them. Using adult pluripotent stem cells presents difficulties as well: these cells are challenging to cultivate in a lab, so researchers are looking for better approaches to growing them. The body also contains only trace amounts of these cells, and they are more likely to contain DNA defects.

Another issue is that the embryonic stem cells currently available are likely to be rejected by the body. Additionally, some people believe that using stem cells derived from embryos violates moral principles. For embryonic stem cells to be effective, researchers must be confident that they will develop into the required cell types. Researchers have identified ways to direct stem cells to become specific types of cells, such as directing embryonic stem cells to become heart cells, and research in this field is ongoing. Embryonic stem cells can also grow erratically or spontaneously specialize into certain cell types, so researchers are investigating how to control their proliferation and differentiation. Finally, embryonic stem cells could set off an immunological reaction, in which the recipient’s body attacks the stem cells as foreign invaders, or they could simply fail to work as they should, with unknown repercussions. Researchers are still investigating how to prevent these potential complications. (1)

Akshaya Ganji, Youth Medical Journal 2022


  1. Mayo Clinic: Stem cells: What they are and what they do-
  2. Stanford Medicine: What Are Stem Cells?- 
  3. A Closer Look at Stem Cells: Types of Stem Cells- 
  4. University of Nebraska Medical Center: Types of Stem Cells- 
  5. University of Notre Dame: Adult Stem Cells- 
  6. Yo Topics: What is a stem cell?- 
Biomedical Research Narrative Neuroscience

Brain Organoids: A Narrative Review of Potential, Limitations and Future


The rapid development of stem cell technology has opened up unprecedented avenues for studying human neurodevelopment. One such avenue is the study of brain organoids, or “mini-brains”: three-dimensional, stem-cell-derived suspension cultures capable of self-assembling into organized forms with features resembling the human brain. 

While considerable progress had been made with in vitro organoid models of other systems (namely the intestine, pituitary and retina), three-dimensional culture modelling of the brain long remained out of reach, until a breakthrough study in 2013. In this study, led by postdoctoral researcher Madeline Lancaster, scientists developed innovative new methods to generate “cerebral” organoids, inspired by past work in the field, with a focus on improving conditions for the growth and higher-level development of cells. ‘Organoids’, in this sense, refers to stem-cell-derived, three-dimensional cultures that self-organize to some extent and include multiple cell types and features of a particular organ. The developing tissues were placed in a rotational bioreactor and, within a few weeks, yielded organoids containing anatomical brain structures resembling those of a 9-week-old human foetus. In the years since, developments in stem cell research have allowed other teams of researchers to give cerebral organoids increasing degrees of structural complexity: from transplanting small organoids into mice to expose them to a greater supply of blood vessels, to making several organoids that mimic various parts of the brain and combining them for more complex cytoarchitecture. This provides immense potential for the study of human foetal brain development, neurodevelopmental disorders and degenerative diseases. 

However, it remains unclear precisely which cell types arise in these brain organoids, how much individual organoids vary, and whether mature neuronal networks can form and function within them. Many limitations and hurdles lie in the way of this novel field’s growth, and further ethical questions await on the matters of sentience and autonomy.

Technical Advances and Methodology 

To make an organoid in 2013, Lancaster’s team began with embryoid bodies: floating aggregates of cells that resemble embryos. These could be obtained either from natural embryonic stem cells (from the inner cell mass of a blastocyst) or from induced pluripotent cells, made from adult cells (typically skin cells) treated with four crucial biochemical factors that reprogram them to forgo their original function and behave like embryonic cells. (See: Fig. 1) These embryoid bodies were differentiated into neural tissue and then transferred into three-dimensional gel matrix droplets. Once the aggregates had reached a certain size, they were placed in a rotational bioreactor, where they were spun to enhance the flow of nutrients into the medium without being shaped by the constraint of a vessel such as a Petri dish. 

With minimal external interference, this approach gave the human pluripotent stem cells the greatest freedom to self-organise and construct themselves, producing cerebral organoids exhibiting a variety of cell lineage identities, ranging from the forebrain, midbrain and hindbrain to the retina, choroid plexus and mesoderm. 

This is known as the ‘unguided approach’ to producing cerebral organoids. Although its cell-type diversity offers a unique opportunity to model interactions between different regions of the brain, the high degree of variability and unpredictability presents significant challenges for reproducibility and systematic studies.

On the other hand, in the ‘guided’ or ‘directed’ method for generating brain organoids, small molecules and growth factors are applied throughout the differentiation process to instruct human pluripotent stem cells to form cells and tissues resembling particular regions. These directed cultures can generate mixtures of cell types in relatively consistent proportions, with less variation between batches. However, they typically contain relatively small neuroepithelial structures, and their architecture is often not well defined. Nevertheless, the guided method remains the most common way of generating brain organoids today. 

There are also advanced techniques that allow for greater complexity. These include fused organoid technologies, in which pluripotent stem cells are differentiated into region-specific organoids separately and then fused together, yielding a structure with multiple distinct regional identities in a controlled manner. An example would be fused dorsal and ventral forebrain organoids, which together form an ‘assembloid’. Such structures reveal how migrating interneurons connect and form microcircuits. 

The choice between guided and unguided methodologies depends on the focus of the investigation. Unguided organoids are suitable for exploring cell-type diversity during whole-brain development, brain region-specific organoids better mimic brain cytoarchitecture with less heterogeneity, and assembloids allow the interactions between different brain regions to be investigated.

With many routes available for obtaining organoids that can act as ‘models’, the logical next question is how well they can in fact model the brain, and what new avenues of treatment and application this capability might open.  

Potential Application 

As the organoids contain striking architectures strongly reminiscent of the developing human cerebral cortex (evolutionarily the most complex tissue), they hold great potential for the effective modelling of neurodevelopmental brain disorders. As in the native brain, the cortical areas segregate into different layers, with radial glial cells dividing in the ventricular and subventricular zones to give birth to the neurons from which the larger cerebral cortex is built. 

This process presents fascinating opportunities for the study and treatment of microcephaly in particular. Microcephaly is a developmental condition in which the brain of a young infant remains undersized, producing a small head and debilitation. Mouse models are unsuitable for replicating the condition: mice lack the developmental stage, possessed by primates such as humans, in which the enlarged cerebral cortex forms, so the stage at which microcephaly is expressed simply does not exist in them. In this instance, brain organoids provide the ideal model for study. 

Other studies involving brain organoids have provided glimpses into the cellular and molecular mechanisms of brain development. For example, forebrain organoids derived from the cells of individuals with ASD (autism spectrum disorder) display an imbalance in the proportions of excitatory and inhibitory neurons. Organoids have also attracted great interest as potential models of neurodegenerative disease, even though attempts so far have had minimal success. This is mainly because many neurodegenerative diseases, such as Alzheimer’s, are age-related and late-onset; brain organoids, which mimic embryonic brain development, may not possess the characteristics needed to reproduce such late-stage pathology. 

In addition to genetic disorders, brain organoids can also provide models for neurotrophic pathogens such as the Zika virus. When brain organoids are exposed to the Zika virus, it preferentially infects neural progenitor cells, suppressing their proliferation and increasing cell death, ultimately leading to drastically reduced organoid size. The organoids also display a series of other characteristics identified in congenital Zika syndrome, such as thinning of the neuronal layer, disruption of apical surface junctions and dilation of the ventricular lumens. This provides direct evidence of a causal relationship between exposure to the Zika virus and the development of harmful neurological conditions. In this way and many others, brain organoids offer optimistic prospects for the study of various neurodevelopmental diseases—though not without some considerations. 


The fundamental factor preventing organoids from fully replicating the late stages of human brain development is their size. Cortical organoids are far smaller than the full human cerebral cortex. Whereas a cortical organoid can at most expand to approximately 4 mm in diameter and contain 2–3 million cells (about the size of a lentil), the human neocortex is about 15 cm across, with the thickness of its grey matter alone being 2–4 mm; the difference in cell number spans four to five orders of magnitude. Furthermore, owing to the lack of a circulatory system, the limited metabolic supply and the physical distance over which oxygen and nutrients must diffuse, the viable thickness of organoids is restricted.  
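To put that scale gap into rough numbers, the sketch below compares the organoid cell count quoted above with the commonly cited figure of roughly 86 billion neurons in the adult human brain. The neuron count is an assumed figure from the wider literature, used here purely for illustration:

```python
import math

organoid_cells = 2.5e6   # upper-end organoid size quoted above (~2-3 million cells)
brain_neurons = 86e9     # commonly cited adult human brain neuron count (assumed figure)

fold_difference = brain_neurons / organoid_cells
print(f"fold difference in cell number: ~{fold_difference:,.0f}")
print(f"orders of magnitude: ~{math.log10(fold_difference):.1f}")
```

Even this neuron-only comparison lands in the tens of thousands, i.e. four to five orders of magnitude; counting glial cells as well would push the gap higher.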

Notably, cortical folding (gyrification) remains an unachieved ‘holy grail’ for cortical organoids. Gyrification is an essential and unique stage in the development of the human cortical brain in which the cerebral cortex experiences rapid growth and expansion. Under the stresses of spatial confinement, the cortical layer buckles into wave-like structures, with outward ridges known as gyri and inward furrows called sulci. This stage is unique to humans and some other primates, and is theorised to be essential to complex behaviours such as language and social communication. In contrast, the brains of small mammals such as rodents exhibit little to no gyrification—and neither do cerebral organoids. This may be because organoids cannot reach the stage at which gyrification occurs (the demarcation of ‘primary’ and ‘secondary’ gyri does not occur in humans until the second and third trimesters, later than most brain organoids can replicate). Attempts have been made to induce ‘crinkling’ or ‘pseudo-folding’ during early organoid differentiation, but these have not led to the formation of gyrus- and sulcus-like structures. 

A better understanding of the mechanism by which gyrification occurs could lead to refinements of existing methodologies to engineer the phenomenon in cerebral organoids; however, it is unlikely that the current organoid structure can fully replicate the folding of the human neocortex any time soon. Statistical analyses have suggested that the degree of folding across mammalian species scales with the surface area and thickness of the cortical plate, and organoids—at least in their current form—may simply be too small to achieve this result.

Given these limitations, many ethical considerations concerning sentience and consciousness remain premature. The vast majority of scientists and ethicists agree that consciousness has never been generated in a lab. Still, concerns over lab-grown brains have highlighted a blind spot: neuroscientists have no agreed-upon definition or measurement of consciousness. Certain experiments have nevertheless drawn scrutiny. In August 2019, a paper in Cell Stem Cell reported the creation of human brain organoids that produced co-ordinated waves of activity resembling those seen in premature babies. Although the activity was rudimentary, it prompted a wave of questions about ethics, autonomy and ownership. In any case, the waves continued for only a few months before the team shut the experiment down. Though moderate electrical activity can be a sign of consciousness, the vast majority of brain organoids developed today are far too unsophisticated to be considered conscious, autonomous beings.


Despite compelling data and innovative methodology, the formation of ‘a brain in a dish’ remains out of reach. Current brain organoid models are far from reproducing the complex, six-layered architecture of their natural counterpart, even a foetal one. Presently, the organoids stop growing after a certain period of time, and areas mimicking different brain regions are randomly distributed, often lacking the shape and spatial organisation seen in a sophisticated brain. Furthermore, the absence of a circulatory system means their interiors often accumulate dead cells deprived of oxygen and nutrients. 

Yet even with significant limitations, the potential of cerebral organoids is great. For certain questions, this innovation provides a model with which to study early human brain development and the progression of neurodevelopmental disorders. The brain organoid field has made exciting leaps, empowering researchers with new tools to address old questions, and while there is a long path before a more faithful in vitro representation of the developing human brain is reached, it is worth remembering that no model is ever likely to be perfect. 

Ishika Jha, Youth Medical Journal 2022



Biomedical Research

AI and Moral Status


The question of moral status has plagued humans since Aristotle in the fourth century BCE [1]. In broad terms, it is the status attributed to an entity if and only if its “interests morally matter to some degree for the entity’s own sake” [2]. In other words, an entity with moral status has certain interests that must be taken into account when deciding what is owed to it. Moral status has become increasingly important in areas such as bioethics, medical ethics, and even environmental ethics. For instance, is it morally correct to perform experiments on mice for the sake of scientific advancement? Are we right to consume farmed animal products? Does a human embryo hold any moral status, and how does this play a role in abortion rights? These questions are all worth debating, especially considering that humans make up just 0.01% of life on Earth [3]. Yet until recently, these questions have been limited to biological beings. As science and technology continue to advance, philosophers and scientists are expanding their scope to consider novel and unfamiliar beings, namely, artificial intelligence. 

Artificial Intelligence

Artificial Intelligence (AI) is a branch of computer science concerned with creating smart machines with human capabilities. AI already exists in today’s world, from Netflix recommendations to Alexa and Siri. But when it comes to moral status, experts are focusing on a type of AI called Artificial General Intelligence (AGI). AGI introduces the capacity to perform any intellectual task with the efficiency of a human, including thinking rationally and acting humanly. While currently non-existent, a number of companies and researchers are working on creating such an entity [4]. 

Since the beginning of AI as a field, academic scholars have argued that the brain can be simplified to a systematically engineered structure of complex nonlinear systems [5]. Considering how closely neuroscientists and engineers work together on technology, this comes as no surprise. For instance, the processes AI systems use have become more human-like in the way they integrate sensory information and learn from previous mistakes. Moreover, numerous neurological terms, such as neural networks and input/output channels, suggest that AI is becoming more and more anthropomorphic [6].

Moral Status

Currently, our conception of moral status remains largely binary – an entity is either deemed worthy, or not. However, the absence of universal criteria has made it difficult for experts to determine which entities, including AGI, deserve this significant label. Most scientists and philosophers name sentience and sapience as the two main factors to consider. Loosely defined, sentience is the capacity for qualia, meaning the capacity to experience pain and suffering. Sapience, on the other hand, is a set of abilities attributed to higher intelligence, including reasoning, responsiveness, and self-awareness [7]. 

Nevertheless, the only model of sentience and sapience we have is our own, making it challenging to fathom what these phenomena might look like in other beings [8]. Therefore, while research focuses on understanding sentience, sapience, moral status, and how the three intertwine, it is equally important for experts to consider the public when forming their decisions. 

The Importance of Public Opinion

Unfortunately, there is very little research on the public’s opinion of an AGI, or any similar entity, holding moral status. In fact, a recent research paper outlined the lack of information, finding only 294 relevant research or discussion pieces on the topic [1]. This is extremely concerning. Granting AGI moral status may initiate changes to social and legal systems, meaning that society would have to increase its interactions with AGI immensely; these interactions may appear or feel oddly intimate [7]. Not knowing where the public stands could lead to future problems.  

Figure 1: Sophia the robot [10]

Hanson Robotics’ Sophia serves as an example. Launched in 2016, Sophia eventually became the first robot in history to be granted citizenship of a country. She is capable of engaging in general conversation, human-like movement, and the expression of emotion. While this may seem like AGI behaviour, numerous experts have confirmed that it is not; the technology simply isn’t there yet. But her appearance and speech continue to fool the public, many of whom describe Sophia’s citizenship with adjectives like ‘weird’ and ‘creepy’. Others have pointed out that Sophia received better rights than women in Saudi Arabia because she did not have to wear a headscarf, among numerous other reasons [9]. The citizenship also sparked a discussion of whether to refer to Sophia as “she” or “it”, suggesting that the line between humans and robots is only getting blurrier [10].   

This example shows the importance of gauging public opinion. In the future especially, experts must consider the public’s view before attributing moral status to AGI. Additionally, previous studies have shown that attitudes towards proposed technologies can be influenced by the way they are framed and contextualised [11]. To lessen the chance of public outrage, professionals must also consider how they introduce the idea of moral status in AGI to society. 


With the rapid advancements in neuroscience and technology, it seems almost inevitable that Artificial General Intelligence will come to be. As a result, moral status needs to be considered; it influences the way we interact with our environment and ourselves, ultimately contributing to our definition of morality. Thus, it is extremely important that experts factor in the public’s opinion when making their decision. After all, the introduction of AGI will affect society at large. 

Saanvi, Youth Medical Journal 2022


[1] Harris, Jamie, et al. (20 July 2021). The Moral Consideration of Artificial Entities: A Literature Review. Retrieved 20 June 2022.

[2] Stanford Encyclopedia of Philosophy (3 March 2021). The Grounds of Moral Status. Retrieved 20 June 2022.

[3] Ritchie, Hannah (24 April 2019). Humans make up just 0.01% of Earth’s life – what’s the rest? Retrieved 20 June 2022.

[4] Javatpoint. Types of Artificial Intelligence. Retrieved 20 June 2022.

[5] Long, Lyle N., & Troy D. Kelley (February 2010). Review of Consciousness and the Possibility of Conscious Robots. Retrieved 20 June 2022.

[6] Tyagi, Neelam (27 February 2022). When Artificial Intelligence (AI) And Neuroscience Meet. Retrieved 22 June 2022.

[7] Hurley, M. (2021). Should AI Have Moral Status? The Importance of Gauging Public Opinion. The Neuroethics Blog. Retrieved 1 July 2022.

[8] Bostrom, Nick, & Eliezer Yudkowsky (2011). The Ethics of Artificial Intelligence. Retrieved 1 July 2022.

[9] Skynet Today (2016). Sophia the Robot, More Marketing Machine Than AI Marvel. Retrieved 6 July 2022.

[10] Weller, Chris (26 October 2017). We couldn’t figure out whether to call the first robot citizen ‘she’ or ‘it’ — and it reveals a troubling truth about our future. Retrieved 6 July 2022.

Biomedical Research Health and Disease

The Evolution of Sulfonylureas as Hypoglycaemic Drugs Over Time, Their Mechanisms, and How They Treat Symptoms of Type II Diabetes Mellitus


Type 2 diabetes mellitus can be a difficult disease to live with and can severely affect one’s quality of life. Diabetes mellitus is a chronic condition in which the body cannot regulate blood glucose levels; the two main types are type 1 and type 2, caused respectively by an inability to produce insulin (type 1) or by ineffectiveness of the insulin produced (type 2). Type 2 diabetes, or non-insulin-dependent diabetes mellitus, can occur as a result of lifestyle factors, such as diet and obesity, which lead to insulin resistance or an inability to produce as much insulin as necessary. Currently, there are 4.1 million people in the UK with diabetes, with 90% of these cases due to type 2 diabetes, and it is estimated that 1 in 10 adults will develop type 2 diabetes by 2030 (Iacobucci, 2021).

One treatment for type 2 diabetes is the use of sulfonylureas – a group of oral drugs with hypoglycaemic effects (the ability to lower blood glucose levels). Since their discovery in the 1940s, medicinal chemists have changed the structure of these drugs to make them more effective for clinical use. These modifications have led to more favourable properties in metabolism, potency, efficacy and safety, making the drugs a more effective, safe and convenient treatment for type 2 diabetes mellitus. They will be discussed later in this article.

This article will explain the chemistry of sulfonylureas, the pharmacology behind them and how they have changed over time to make them more effective in the treatment of type 2 diabetes mellitus.

Type 2 Diabetes Mellitus Cause

Type 2 diabetes occurs when there is a deficiency in insulin secretion by the β-cells of the pancreas, or when cells develop a resistance to insulin action (Galicia-Garcia, et al., 2020). This is usually due to obesity and an unhealthy lifestyle, including lack of exercise and a diet high in fat and sugar. Insulin is a peptide hormone secreted by the pancreatic β-cells. It lowers blood glucose levels by stimulating the conversion of glucose in the blood into glycogen, which is stored in muscle, fat and liver cells. A deficiency of insulin, or resistance to it, therefore leads to hyperglycaemia (high blood glucose levels), owing to the reduced ability to convert glucose into glycogen. This can produce symptoms such as vomiting, dehydration, confusion, increased thirst and blurred vision, to name a few.

Physiology Behind Insulin Secretion and Structure

To understand the pharmacology of the sulfonylurea compounds, one must first understand the physiology behind the secretion of insulin.

As stated above, insulin is a peptide hormone, so it is made from a polypeptide chain. Transcription of the insulin gene (found on chromosome 11) occurs and the resulting mRNA strands are translated to produce two peptide chains. These chains are held together in a quaternary structure by two disulfide bonds to form the hormone insulin (Brange & Langkjoer, 1993).

Insulin secretion must be tightly controlled to maintain efficient glucose homeostasis, so it is regulated precisely to meet demand. The β-cells of the pancreas contain glucose transporter 2, a carrier protein that allows facilitated diffusion of glucose molecules across the cell membrane; these transporters allow glucose to be detected and to enter the β-cells. As cytoplasmic glucose levels rise, the β-cells respond by increasing oxidative metabolism, raising the level of ATP in the cytoplasm (Fridlyand & Philipson, 2010). This ATP binds to ATP-sensitive K+ channels on the cell membrane, causing them to close. K+ ions can no longer leave the cell, so they build up inside it, depolarising the cell. The increasingly positive membrane potential opens voltage-gated Ca2+ channels, producing an influx of Ca2+ ions that further depolarises the cell and triggers the release of insulin, packaged in secretory vesicles, by exocytosis (Fu, et al., 2013).
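As a purely illustrative summary, the cascade above can be sketched as a chain of threshold steps, each gating the next in the order described. The numbers are arbitrary placeholders, not physiological values:

```python
def beta_cell_insulin_release(glucose: float) -> bool:
    """Toy model of glucose-stimulated insulin secretion.

    Each step gates the next, mirroring the order of events in the text.
    The scaling factor and threshold are arbitrary illustrative values.
    """
    # 1. GLUT2 lets glucose in; oxidative metabolism raises cytoplasmic ATP.
    atp = glucose * 0.5                # arbitrary scaling

    # 2. Sufficient ATP closes the ATP-sensitive K+ channels.
    katp_closed = atp > 2.5            # arbitrary threshold

    # 3. Trapped K+ depolarises the membrane...
    depolarised = katp_closed

    # 4. ...opening voltage-gated Ca2+ channels; the Ca2+ influx
    #    triggers exocytosis of insulin-containing vesicles.
    ca_influx = depolarised
    return ca_influx

print(beta_cell_insulin_release(8.0))   # high glucose: True (insulin released)
print(beta_cell_insulin_release(4.0))   # low glucose: False (no release)
```

Sulfonylureas, covered in the pharmacology section, effectively force step 2 directly by binding the channel, independently of glucose.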

Pharmacology of Sulfonylureas

Sulfonylureas act inside the pancreatic β-cells. The ATP-sensitive K+ channel carries sulfonylurea receptors, to which the drug binds, causing the channel to close. The cascade of events that follows mimics the response to glucose uptake described above and leads to the release of insulin by the β-cell (Panten, et al., 1996).

This process allows more insulin to be released, lowering blood glucose levels when insufficient insulin is produced naturally. Sulfonylureas are only effective in type 2 diabetes, where insulin production is not impaired (as it is in type 1 diabetes); rather, it is the release of, or resistance to, insulin that is affected.

Common Chemistry of all Sulfonylureas

All sulfonylurea drugs are characterised by their common sulfonylurea group. This functional group allows this unique family of compounds to bind to the sulfonylurea receptors (SUR) on ATP-sensitive K+ channels, giving them their hypoglycaemic properties. The common structure of sulfonylureas is shown in figure 1 (Fvasconcellos, 2011), with the blue R groups indicating replaceable side chains, which vary between successive drug developments and give each drug slightly different properties. Over time, scientists have improved the drugs’ efficacy by changing these side groups, and research has led to the development of further drugs in the same pharmacological group with altered side chains. These changes have modified properties such as potency, metabolism, half-life, tolerance and safety, making the drugs more effective for clinical use.


Figure 1 Sulfonylurea functional group

History and development of the drugs and their chemical structure

Sulfanilamide and IPTD

In 1935, a French research team discovered the active chemical in the antibiotic prontosil, known as sulfanilamide (Sorkhy & Ghemrawi, 2020). Sulfanilamide proved a poor antibiotic, so derivatives of it were synthesised and tested. One of these, p-amino-sulfonamide-isopropylthiodiazole (IPTD), used as an antibiotic for the treatment of typhoid in 1942, revealed unexpected hypoglycaemic side effects, discovered by the French physician Marcel Janbon (Quianzon & Cheikh, 2012). At the time, however, scientists could not identify how these side effects arose.

In 1946, Auguste Loubatières investigated the effect of IPTD on dogs. He administered the drug to fully and partially pancreatectomised dogs and found it ineffective in the former but effective in the latter. This led to his conclusion that the drug’s hypoglycaemic property was due to its ability to stimulate insulin secretion directly from the pancreatic β-cells (Loubatières-Mariani, 2007).


The first sulfonylurea to be marketed as a diabetes drug was Carbutamide. It was synthesised in East Germany by Ernst Carstens, and in the early 1950s clinical trials of this sulfanilamide derivative were carried out by Hellmuth Kleinsorge for the treatment of urinary tract infections. During treatment, however, side effects of hypoglycaemia were noted (Kleinsorge, 1998) – similar to those experienced by patients treated with IPTD for typhoid in 1942.

These findings were presented to Erich Haak, of the East German Ministry of Health, in 1952, which ultimately culminated in the drug being banned. Haak later moved to West Germany, where he patented the drug for testing as an antibacterial, without disclosing its hypoglycaemic side effects. Karl Joachim Fuchs, a doctor involved in this drug testing, noticed symptoms of ravenous hunger and euphoria after taking the drug himself, which were found to be due to hypoglycaemia. Subsequent studies reached the general conclusion that Carbutamide was most effective in people over 45 years of age who had had diabetes for less than 5–10 years and had not used insulin for more than 1–2 years (Tattersall, 2008). The use of Carbutamide was short-lived, as it was found to have fatal side effects in a small number of people, including toxic effects on bone marrow (National Center for Biotechnology, 2005).

The structure of Carbutamide is shown in figure 2 (Anon., 2021). Attached to the benzene ring on the left-hand side of the sulfonylurea functional group is an amine group; attached to the second amine group, on the right side of the functional group, is a four-carbon chain. As mentioned previously, it is the sulfonylurea functional group that gives rise to the drug’s hypoglycaemic effects. Carbutamide was the first drug to contain this functional group (seen in figure 1), and it marked the beginning of many discoveries in the treatment of non-insulin-dependent diabetes mellitus.

Figure 2 Structure of Carbutamide


After the discovery of Carbutamide’s fatal side effects, the next sulfonylurea to be synthesised was Tolbutamide, one of the first sulfonylureas marketed for the control of type 2 diabetes, in 1956 in Germany (Quianzon & Cheikh, 2012). The chemical structure changed only minimally in this next development: the amine group on the left-hand side of Carbutamide was swapped for a methyl group to give Tolbutamide, shown in figure 3 (Anon., 2021), which helped reduce the drug’s toxicity. However, as a result, Tolbutamide was metabolised too quickly (Monash University, 2021), leading to low levels of the active drug in the blood. The drug’s efficacy was therefore lower than expected, and it had to be administered twice a day – an inconvenience for patients.


Figure 3 Structure of Tolbutamide


It was soon discovered that the methyl group attached to the benzene ring in Tolbutamide was the site of its metabolism (Monash University, 2021), so medicinal chemists replaced it with a chlorine atom in the next drug, Chlorpropamide (see figure 4) (Anon., 2021). This reduced metabolism, giving the drug a longer half-life, so it was not cleared as quickly from the body. Indeed, a University of Michigan study found that chlorpropamide serum concentration declined from about 21 mg/100 ml at 15 minutes to about 18 mg/100 ml at 6 hours, whereas tolbutamide serum concentration fell more rapidly, from about 20 mg/100 ml at 15 minutes to about 8 mg/100 ml at 6 hours. Under these experimental conditions, tolbutamide disappeared from the blood approximately 8 times faster than chlorpropamide (Knauff, et al., 1959). This meant less frequent dosing with chlorpropamide, making the drug much more convenient for patients treating type 2 diabetes. However, further research subsequently revealed that, due to the longer half-life of chlorpropamide, the hypoglycaemic effects were compounded and lasted longer than previously expected (Sola, et al., 2015). This meant that Chlorpropamide could not be safely administered for the treatment of type 2 diabetes.
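To see what these serum concentrations imply, the two data points can be turned into rough half-life estimates. This is a minimal sketch assuming simple first-order (single-exponential) elimination, which is a simplification the original study does not itself claim:

```python
import math

# Rough first-order elimination estimate from the two serum
# concentrations quoted above. Illustrative only: real
# pharmacokinetics are more complex than a single exponential.
def half_life(c_start, c_end, hours_elapsed):
    # C(t) = C0 * exp(-k t)  =>  k = ln(C0 / C) / t
    k = math.log(c_start / c_end) / hours_elapsed
    return math.log(2) / k

dt = 6 - 0.25  # from 15 minutes to 6 hours

t_tolbutamide = half_life(20, 8, dt)      # roughly 4 hours
t_chlorpropamide = half_life(21, 18, dt)  # roughly a day

print(f"tolbutamide    t1/2 = {t_tolbutamide:.1f} h")
print(f"chlorpropamide t1/2 = {t_chlorpropamide:.1f} h")
```

On these two points alone the elimination rate constants differ by roughly a factor of six; the "approximately 8 times faster" figure quoted by Knauff et al. presumably reflects their fuller data set.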



Figure 4 Structure of Chlorpropamide


Glibenclamide is the first of what are known as the second-generation sulfonylureas. Introduced for use in 1984, these largely replaced the first-generation drugs (Carbutamide, Tolbutamide, Chlorpropamide etc.) in routine use for type 2 diabetes. Due to their increased potency and shorter half-lives, lower doses could be administered and the drugs only had to be taken once a day (Tran, 2020). These second-generation sulfonylureas have a more hydrophobic right-hand side, which results in an increase in their hypoglycaemic potency (Skillman & Feldman, 1981). In Glibenclamide, the left-hand side of the drug changed drastically from Chlorpropamide, as seen in figure 5 (Anon., 2021). This suggested to medicinal chemists a vast number of possible changes that could be made to the drug simply by varying the left- and right-hand sides, offering better potency, safety, efficacy and convenience (Monash University, 2021). However, the metabolism of the drug varied between patients, and this, in addition to increased hypoglycaemia and an increased incidence of cardiovascular events (Scheen, 2021), means the drug is not a first-choice recommendation for treating type 2 diabetes.


Figure 5 Structure of Glibenclamide


Glipizide, shown in figure 6 (Anon., 2021), shares the same hydrophobic right-hand structure as Glibenclamide, but a few changes to the left-hand group result in faster metabolism. It has similar potency to Glibenclamide; however, the duration of its effects was found to be much shorter (Brogden, et al., 1979). Glipizide has the shortest elimination half-life of all the sulfonylureas, reducing the risk of the long-lasting hypoglycaemic side effects found in earlier drugs (Anon., 2022).


Figure 6 Structure of Glipizide


Gliclazide is the sulfonylurea most commonly used in current medicine for the treatment of non-insulin-dependent diabetes mellitus; it is part of the World Health Organisation's most recent list of essential medicines (World Health Organisation, 2021). The chemical structure of Gliclazide can be seen in figure 7 (Anon., 2021). Interestingly, medicinal chemists returned to the use of a methyl group on the left-hand side of the drug, last seen in Tolbutamide. As mentioned before, the left-hand group attached to the benzene ring is responsible for the metabolism of the compound. Returning to a methyl group allows faster metabolism of the drug, which helped remove the unwanted prolonged hypoglycaemic side effects, especially in elderly patients (Monash University, 2021). The right-hand group of Gliclazide comprises two hydrophobic rings which, as mentioned previously, are responsible for its increased potency. Gliclazide has also been shown to be one of the most effective sulfonylureas: Harrower reports three studies concluding that gliclazide is a potent hypoglycaemic agent that compares favourably with others of its type (Harrower, 1991).


Figure 7 Structure of Gliclazide


Sulfonylureas are one of several groups of drugs used to treat type 2 diabetes. Through research and trials, they have developed significantly over time, to become one of the most prescribed medications in the effective treatment of type 2 diabetes. 

The sulfonylureas discussed above represent the most significant developments in the group since its initial discovery. Other sulfonylureas have been synthesised and tested over the years, such as tolazamide and acetohexamide, but these are less commonly prescribed because of their disadvantages in potency and safety. The discovery that the left- and right-hand sides of the drugs' common structure could be modified has led to many new forms within this class, with varying potency, metabolism, efficacy and safety. This experimentation with chemical structure over time has produced more effective treatments for the disease. Currently, Glipizide and Gliclazide are the two most commonly prescribed sulfonylureas, due to their high potencies and suitable half-lives combined with minimal side effects. They now provide an effective treatment that helps reduce the symptoms of type 2 diabetes and thus improves quality of life for those living with the disease.

AliMahdi Meghji, Youth Medical Journal 2022


Anon., 2021. Carbutamide. [Online]
Available at:
[Accessed 27 March 2022].

Anon., 2021. Chlorpropamide. [Online]
Available at:
[Accessed 29 March 2022].

Anon., 2021. Gliclazide. [Online]
Available at:
[Accessed 30 March 2022].

Anon., 2021. Glipizide. [Online]
Available at:
[Accessed 29 March 2022].

Anon., 2021. Glyburide. [Online]
Available at:
[Accessed 29 March 2022].

Anon., 2021. Tolbutamide. [Online]
Available at:
[Accessed 29 March 2022].

Anon., 2022. Glipizide. [Online]
Available at:
[Accessed 29 March 2022].

Brange, J. & Langkjoer, L., 1993. Insulin structure and stability, Bagsvaerd: Novo Research Institute.

Brogden, R. N. et al., 1979. Glipizide: a review of its pharmacological properties and therapeutic use. Drugs, 18(5), pp. 329-353.

Fridlyand, L. E. & Philipson, L. H., 2010. Glucose sensing in the pancreatic beta cell: a computational systems analysis. Theoretical Biology and Medical Modelling, 7(1), p. Article 15.

Fu, Z., Gilbert, E. R. & Liu, D., 2013. Regulation of Insulin Synthesis and Secretion and Pancreatic Beta-Cell Dysfunction in Diabetes. Current Diabetes Reviews, 9(1), pp. 25-53.

Fvasconcellos, 2011. General structural formula of a sulfonylurea, highlighting the functional group that gives the class its name and the side chains that distinguish its various members., s.l.: Wikipedia.

Galicia-Garcia, U. et al., 2020. Pathophysiology of Type 2 Diabetes Mellitus. International Journal of Molecular Sciences, 30 August, 21(17), p. 2.

Harrower, A. D., 1991. Efficacy of gliclazide in comparison with other sulphonylureas in the treatment of NIDDM. Diabetes research and clinical practice, 14(2), pp. 65-67.

Kent, M. Advanced Biology. Oxford University Press.

Kleinsorge, H., 1998. Carbutamide–the first oral antidiabetic. A retrospect. Experimental and clinical endocrinology & diabetes : official journal, German Society of Endocrinology [and] German Diabetes Association, 106(2), pp. 149-151.

Knauff, R. E., Fajans, S. S., Ramirez, E. & Conn, J. W., 1959. Metabolic studies of chlorpropamide in normal men and in diabetic subjects. Annals of the New York Academy of Sciences, 74(3), pp. 603-617.

Lacobucci, G., 2021. The British Medical Journal. [Online]
Available at:
[Accessed 2 March 2022].

Loubatières-Mariani, M.-M., 2007. The discovery of hypoglycemic sulfonamides. Journal de la Société de Biologie, 201(-), pp. 121-125.

Monash University, 2021. The Science of Medicines MOOC, Melbourne: Future Learn.

National Center for Biotechnology, 2005. PubChem Compound Summary for CID 9564, Carbutamide. [Online]
Available at:
[Accessed 18 March 2022].

Panten, U., Schwanstecher, M. & Schwanstecher, C., 1996. Sulfonylurea receptors and mechanism of sulfonylurea action. Experimental and clinical endocrinology & diabetes: official journal, German Society of Endocrinology [and] German Diabetes Association, 104(1), pp. 1-9.

Quianzon, C. C. L. & Cheikh, I. E., 2012. History of current non-insulin medications for diabetes mellitus. Journal of Community Hospital Internal Medicine Perspectives, 2(3), p. 19081.

Scheen, A. J., 2021. Sulphonylureas in the management of type 2 diabetes: To be or not to be?. Diabetes Epidemiology and Management, Volume 1, p. Article 100002.

Skillman, T. G. & Feldman, J. M., 1981. The pharmacology of sulfonylureas. The American journal of medicine, 70(2), pp. 361-372.

Sola, D. et al., 2015. Sulfonylureas and their use in clinical practice. Archives of medical science, 11(4), pp. 840-848.

Sorkhy, M. A. & Ghemrawi, R., 2020. Treatment: Projected Modalities for Antimicrobial Intervention. Microbiomics – Dimensions, Applications, and Translational Implications of Human and Environmental Microbiome Research, -(-), pp. 279-298.

Tattersall, R., 2008. Discovery of the sulphonylureas. TATTERSALL’S TALES, 7(2), p. 74.

Tran, D., 2020. Oral Hypoglycemic Agent Toxicity. [Online]
Available at:
[Accessed 27 March 2022].

World Health Organisation, 2021. WHO model list of essential medicines – 22nd list, 2021. [Online]
Available at:
[Accessed 30 March 2022].

Biomedical Research

Dostarlimab: Hope or Hype?


Cancer is very often placed at the forefront of medical research, and with an estimated 1 in 2 people expected to develop cancer at some point in their lives,1 it is becoming increasingly important that novel drugs and therapies are discovered to mitigate its impact. Over the years, we have seen the development of powerful treatments, from chemotherapy to radiotherapy; more recently, however, there has been a rise in the use of immunotherapy. One recent form of immunotherapy, a drug called Dostarlimab, has taken the medical world by storm after a small study reported a 100% complete clinical response.

How does it Work?

The drug works by enhancing the body's immune response against tumour cells. It does this through two proteins, PD-L1 and PD-L2 (programmed death ligands 1 and 2), which typically weaken our immune response when bound to a complementary receptor on a T-cell. This interaction plays an important physiological role in preventing excessive destruction of non-harmful cells, as well as preventing the onset of autoimmune diseases.2

However, some tumour cells express these proteins on their surface; when they bind to a T-cell, the cell-mediated immune response is inhibited and the cancer cells remain undestroyed.3 Dostarlimab is a monoclonal antibody that binds to PD-1, the complementary receptor on the T-cell, and therefore prevents the interaction between the tumour and T-cells. This restores recognition of the tumour and reactivates cytotoxic activity, allowing the cancer cells to be attacked.

Fig: Images showing the interactions without (left) and with (right) Dostarlimab3

Usage of the Drug

Although the drug has only recently risen to fame in mainstream media, it had already begun rollout across the NHS in February 2022 as a treatment for endometrial cancer.4 Alternative options such as surgery and chemotherapy tend to be more invasive and often leave patients with a poor prognosis, which is why Dostarlimab is such an innovative drug. It requires only four half-hour sessions over a 12-week period, offering patients quicker, safer and more effective treatment.

More recently, a small trial involving 12 rectal cancer patients saw a 100% remission rate.5 The patients involved suffered from a particular subset of rectal cancer caused by mismatch repair deficiency (cells with many DNA mutations), which responds to Dostarlimab's blockade of the PD-1 receptor on T-cells. Despite the small sample size, the result was reported with a 95% confidence interval, and the absence of severe side effects suggests that the drug holds a lot of potential.
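As a rough illustration of what a 95% confidence interval means for a 12-out-of-12 result, the exact (Clopper-Pearson) interval for the true response rate can be computed directly. This is an illustrative sketch, not the trial's own statistical analysis:

```python
# Exact (Clopper-Pearson) 95% confidence interval for a proportion
# when all n patients respond: the interval is [(alpha/2)**(1/n), 1].
# Illustrative only; not taken from the trial publication.
import math

n = 12          # patients in the trial
alpha = 0.05    # for a 95% confidence interval

lower = (alpha / 2) ** (1 / n)
print(f"95% CI for the true remission rate: [{lower:.0%}, 100%]")
```

In other words, even with only 12 patients, a 12/12 response is enough to place the true remission rate above roughly 74% with 95% confidence, which is why such a small trial still carries real statistical weight.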

Limitations of the Drug

Although the drug has only been proven effective against one particular form of the disease, an estimated 5-10% of rectal cancers are due to mismatch repair deficiency.6 With over 700,000 people diagnosed with rectal cancer each year,7 even a small proportion of those cases being treated would represent a significant triumph.

However, the results of this trial must not be taken as a definitive yes for the use of Dostarlimab; a follow-up study with a larger sample size would increase the validity and reliability of the findings. Additionally, the patients were followed up for between 6 and 25 months5 to assess any recurrence, but ideally longer follow-up would allow researchers to better ascertain the long-term efficacy. A further obstacle that may hinder large-scale rollout is cost, a particular challenge in countries where private healthcare is dominant. According to the New York Times,8 each dose costs $11,000, and with several doses required over a six-month period, the drug may prove unaffordable for many.

Such limitations are not insurmountable, as solutions exist to tackle them. For example, government subsidies would not only allow larger studies to be completed but also increase research into cost reduction. Whilst this presents an opportunity cost for a country's government, extra funding for the healthcare sector leads to better survival rates, which benefits the economy, creating a positive multiplier effect.


The future of Dostarlimab seems exciting and may change the way in which we treat rectal cancer. Not only is it an innovative way to treat cancer, its potential benefits to the fields of endometrial and mismatch-repair-deficient cancers are immense. However, in the near future, further trials, or extensions of ongoing ones, are warranted to determine whether the drug is a viable treatment, alongside solutions that address cost reduction.

The unprecedented results of the trial have been groundbreaking for the medical sector, and provide a great sense of hope that we will continue to discover cancer treatments. Nonetheless, whether it proves to be a miracle cure or not, it is fair to say that immunotherapy in itself has been revolutionary to the world of medicine, and the research gained from such studies conducted will prove to be valuable in the long term. 

Nyneisha Bansal, Youth Medical Journal 2022


1. Cancer [Internet]. 2022 [cited 15 June 2022]. Available from:

2. Touboul R, Bonavida B. YY1 expression and PD-1 regulation in CD8 T lymphocytes. YY1 in the Control of the Pathogenesis and Drug Resistance of Cancer. 2021:289-309.

3. How JEMPERLI works [Internet]. Jemperli. 2022 [cited 15 June 2022]. Available from:

4. England N. NHS England » New life-extending drug for advanced womb cancer to be rolled out on the NHS [Internet]. 2022 [cited 15 June 2022]. Available from:

5. Cercek A, Lumish M, Sinopoli J, Weiss J, Shia J, Lamendola-Essel M et al. PD-1 Blockade in Mismatch Repair–Deficient, Locally Advanced Rectal Cancer. New England Journal of Medicine. 2022.

6. Promising rectal cancer study [Internet]. ScienceDaily. 2022 [cited 15 June 2022]. Available from:

7. Colorectal Cancer – Statistics [Internet]. Cancer.Net. 2022 [cited 15 June 2022]. Available from:

8. Kolata G. A Cancer Trial’s Unexpected Result: Remission in Every Patient [Internet]. 2022 [cited 15 June 2022]. Available from:

Biomedical Research Commentary

CRISPR Gene Editing: From novel treatment to reality

Originally released in the year 2000, the Marvel blockbuster film series features a team of genetically mutated beings called the X-Men. It seems that every time a new X-Men movie is released on the big screen, the world looks to science to answer the age-old question: “Is the creation of such mutants a possibility?”.

With the endless developments in genetic engineering and the demonstration of CRISPR-Cas9 gene editing in 2012, it is difficult not to wonder if the creation of such mutants in our reality is possible. Yet, much sooner than we expected, these so-called “superhumans” are already walking amongst us, with a range of remarkable traits, including unusual strength, speed, and cognitive endurance, that increasingly mirror the superhuman powers seen on the big movie screens.

To understand the science behind these superhumans, we must first understand the basis of gene editing, which forms the foundation and function of CRISPR-Cas9.

What is CRISPR-Cas9?

CRISPR-Cas9 is a new and unique form of gene editing that allows medical scientists to edit parts of the genome by removing, adding, or altering sections of the DNA sequence [1]. First demonstrated as a gene-editing tool in 2012, it has been at the frontier of genomic research and a hot topic within the medical community due to its simplicity, versatility, and precision. Its low cost has ultimately made it more desirable than previous methods of DNA editing, including transcription activator-like effector nucleases (TALENs) and zinc-finger nucleases (ZFNs), which are far less cost-effective and accessible [2].

So why is CRISPR-Cas9 gene editing relevant to us right now?

The answer lies in the enormous potential of CRISPR gene editing for treating a wide range of life-threatening medical conditions that have a genetic basis, such as cancer, hepatitis B, and high cholesterol. For example, the excess fatty deposits in major blood vessels associated with high cholesterol could be addressed through genetic engineering techniques that “turn off” the genes regulating cholesterol levels in our body [6]. A 2021 study published in Nature revealed that knocking out the protein PCSK9 with CRISPR reduced LDL cholesterol in monkeys by around 60% for at least 8 months [3]. Although it is likely to be many years before such treatments are routinely available, this kind of breakthrough in our primate relatives is impressive. As much current research is focused on ex-vivo or animal models, the intention is to use the technology to treat diseases in humans that cannot be addressed with routine drugs and medications.

How does this form of gene editing work?

The foundation of CRISPR-Cas9 is formed from two key molecules that introduce a mutation into the targeted DNA: the Cas9 enzyme and a guide RNA (gRNA). The guide RNA has bases complementary to the target DNA sequence in the genome, which helps the gRNA bind to the correct region of DNA. The Cas9 enzyme follows the gRNA and acts as a pair of molecular scissors, cutting both strands of the DNA and allowing sections of DNA to be added or removed [1][4].
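The targeting step can be sketched in a few lines of code. The toy model below assumes the canonical Cas9 requirements (a 20-base protospacer matching the guide, immediately followed by an “NGG” PAM motif, with the cut about 3 bases upstream of the PAM); the sequences and the function name are invented for illustration:

```python
# Toy model of how a Cas9/guide-RNA complex locates its target site.
# Assumes the canonical requirements: a 20-base protospacer matching
# the guide, immediately followed by an "NGG" PAM; the blunt cut falls
# about 3 bases upstream of the PAM. All sequences are invented.

def find_cas9_target(genome, guide):
    """Return the index of the first guide match with an NGG PAM, else None."""
    n = len(guide)
    for i in range(len(genome) - n - 2):
        protospacer = genome[i:i + n]
        pam = genome[i + n:i + n + 3]  # the 3 bases just downstream
        if protospacer == guide and pam[1:] == "GG":  # "NGG": any first base
            return i
    return None

guide = "GTTACCGGATCAAGCTTACC"               # 20-base guide (invented)
genome = "ATAT" + guide + "TGG" + "CCGATTA"  # target followed by a TGG PAM

site = find_cas9_target(genome, guide)
cut = site + len(guide) - 3  # cut position, ~3 bases upstream of the PAM
print(f"protospacer found at index {site}; cut before index {cut}")
```

A real guide tolerates very few mismatches near the PAM, and it is near-matches elsewhere in the genome slipping through that produce the “off-target” cuts discussed later in this article.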

At this point, the cell recognises the damage to its DNA and works to repair it, and scientists can exploit this repair machinery to introduce one or more new genes into the genome. The genetic makeup then differs from the “normal” human genome, producing mutations and noticeable changes in phenotype, such as those seen in naturally occurring “super-variants” including the super-sprinter variant (ACTN3), the super-sleeper mutation (hDEC2), and the super-taster variant (TAS2R38) [5][7].

There is also extensive research being put into eliminating the “off-target” effects of CRISPR, where the Cas9 enzyme makes cuts at a site other than the intended one, introducing a mutation in the wrong region. Whilst some of these changes are inconsequential to the overall phenotype, they may affect the structure and function of another part of the genome. It is suggested that the use of Cas9 enzymes that only cut a single strand of target DNA as opposed to the double-strand may be the solution to eliminate this problem [4].

The next generation of enhanced individuals?

Though alteration of the human genome is very much already a reality, the creation of ‘mutant’ individuals with more fantastical powers, such as Wolverine’s healing factor and animal-keen senses, or the Scarlet Witch’s telekinesis and matter manipulation, remains purely fictional. As of right now, the use of CRISPR in medicine is solely therapeutic, used for repairing or altering innate mutations rather than creating them. Yet it can be argued that these genetic changes give patients better DNA than they were born with, making them the first generation of genetically modified humans to walk the earth – mutants indeed.

In the X-Men franchise, all mutants carry an ‘X-gene’ which bestows upon them their aforementioned abilities. Unfortunately, no such gene exists – our phenotype arises from a much more complicated relationship between genes and presenting characteristics, and the effects of current gene editing pale in comparison to what is shown in blockbuster movies. That said, hope is not lost: extensive research and development within this field continually offer the possibility of giving individuals similar ‘powers’ to those of the X-Men and Professor X on an increasingly real scale. 

Below are some examples of X-Men superpowers alongside their real-world human genetic mutation counterparts [5][7]:

X-Men ability           Existing human genetic variation
Animal-keen senses      hDEC2 (super-sleeper mutation)
Super-speed             ACTN3 (super-sprinter variant)
Super-strength          LRP5 (unbreakable bone mutation)
Enhanced senses         TAS2R38 (super-taster variant)

Do scientists think it is possible for some of these powers to be attributed to genetic mutation? The simple answer is yes. But unsurprisingly, the uncertainty and unpredictable nature surrounding new treatments will always generate some degree of ethical controversy in the scientific community, and CRISPR is no different. The use of CRISPR technology in medicine will undoubtedly become more mainstream in the near future, and once the door is open for genetic modifications to embryos, babies, and adults alike, there is no going back. As with many medical technologies in the past, human health and safety may fail to be at the forefront of CRISPR’s use, leading to all kinds of unnecessary complications. The impact that CRISPR-Cas9 will have on the medical field, now and in the next generation, is undeniable – whether it’s curing a rare form of cancer or creating the first generation of real-life X-Men [8].

There are many unanswered questions surrounding this topic, and this is unlikely to change. But as the research continues and our questions go on, I would like to leave you with only one… What would your superpower be?


Works Cited

1. 2022. What is CRISPR-Cas9?. [online] Available at: [Accessed 15 March 2022].

2. Beumer, K.J., Trautman, J.K., Christian, M., Dahlem, T.J., Lake, C.M., Hawley, R.S., Grunwald, D.J., Voytas, D.F. and Carroll, D. (2013). Comparing Zinc Finger Nucleases and Transcription Activator-Like Effector Nucleases for Gene Targeting in Drosophila. G3: Genes|Genomes|Genetics, [online] 3(10), pp.1717–1725. doi:10.1534/g3.113.007260.

3. 2022. [online] Available at: [Accessed 15 March 2022].

4. 2022. [online] Available at: [Accessed 15 March 2022].

5. Business Insider. 2022. 8 genetic mutations that can give you ‘superpowers’. [online] Available at: [Accessed 15 March 2022].

6. 2022. Gene tweak creates supermouse – and prevents diabetes | New Scientist. [online] Available at: [Accessed 15 March 2022].

7. Business Insider. 2022. 8 genetic mutations that can give you ‘superpowers’. [online] Available at: [Accessed 15 March 2022].

8. Pinkstone, J., 2022. Human beings could achieve immortality by 2050. [online] Mail Online. Available at: [Accessed 15 March 2022].

Biomedical Research Commentary Health and Disease

Behind the Controversial and Forbidden Technique of Gene Editing


Gene editing has become one of the biggest names in the biotechnology industry. On the surface, it seems like a tool that can help prevent genetic diseases; dive deeper, however, and it can be very unpredictable, causing abnormal and irregular outcomes in subjects. The technique, classified as forbidden in many parts of the world, is highly controversial, and users of such technology have been imprisoned. Many nations around the world are still researching gene editing, so perhaps one day it could be a safe and reliable tool to bring an end to the rise of genetic diseases.

History Behind Gene Editing

To fully understand how the concept of gene editing was first derived, we have to look into its history. Research into genetics took off in the 1950s and 1960s; discoveries in this period paved the way for the future study of genetics and biotechnology.

It all started with the discovery of the double helix structure of DNA in 1953 by James Watson and Francis Crick, based on the work of their colleague Rosalind Franklin. The discovery of the double helix was a defining moment in the history of genetics. It was followed in 1958 by Arthur Kornberg’s isolation of DNA polymerase from bacterial extracts; within a year he was able to successfully synthesize DNA in vitro for the first time.

Moving into the 1960s, a landmark came in 1962 with the work of scientist Osamu Shimomura, later built upon by researchers Martin Chalfie and Roger Tsien. The gene coding for the green fluorescent protein (GFP) found in the jellyfish Aequorea victoria was successfully fused with another gene producing a protein of interest (POI). Because GFP glows when exposed to blue-wavelength light, researchers could see which cells produced the POI, revealing its location and allowing it to be tracked in cells.

Following this, the discovery of DNA ligase in 1967 was a pivotal point in molecular biology, since DNA ligase is essential for the repair and replication of DNA in all organisms, processes on which gene editing is based. This was soon followed by the discovery of restriction enzymes, which identify and cut foreign DNA.

It wasn’t until the 1970s, though, that genetic engineering took off. During that decade, Paul Berg succeeded in creating recombinant DNA from more than one species, in what became known as the “cut and splice” technique: DNA was cut from two viruses to create sticky ends, the fragments were incubated so the ends could anneal on their own, and DNA ligase was added to seal the sticky ends together. The understanding formed in this period of how restriction enzymes cut DNA, and of how host DNA protects itself, is the basis for the modern genetic engineering therapies being developed today, such as CRISPR, which we dive deeper into in this article.

Innovation and Controversy Behind CRISPR Gene Editing

CRISPR gene editing is based on the CRISPR-Cas systems, such as CRISPR-Cas9. These are adaptive immune systems that protect prokaryotes from bacteriophages: they work by cleaving the nucleic acids of invading phages, protecting prokaryotes from viral infection. Over time, the use of CRISPR-Cas9 turned to gene editing. The technique was thrust into the spotlight in 2012, when George Church, Jennifer Doudna, Emmanuelle Charpentier, and Feng Zhang modified targeted regions of genomes using gene editing.

CRISPR stands for clustered regularly interspaced short palindromic repeats, which are repeating DNA sequences in the genomes of prokaryotes. They were first identified in the bacterium E. coli in 1987. When these CRISPR systems were first discovered, they were thought only to repair DNA in prokaryotes as part of a defence mechanism against bacteriophages. In 2012, however, it was discovered that by designing a “guide” RNA, a specific region in a genome could be targeted, and that the CRISPR-Cas9 system could be used as a cut-and-paste tool to modify genomes. The system could be used to introduce new genes, remove old ones, and even activate or silence genes.

CRISPR-Cas9 has been used to switch off genes that limit the production of lipids in microalgae, leading to increased lipid production and higher yields of biofuel. In the near future, the technique may even cure genetic disorders such as sickle-cell anaemia and cystic fibrosis, and there is already a wide range of applications of CRISPR-Cas9 in diseases such as cancer.

Even though the system has many positive and revolutionary applications in healthcare, there is still a lot of controversy surrounding its ethics. One concern is that this powerful new technology is very vulnerable to misuse. For example, Chinese scientist He Jiankui announced that he had genetically modified twins before birth using CRISPR to make them resistant to HIV, which resulted in a three-year prison sentence. The effects of such technology are far too uncertain, and it should not be used to make heritable changes to a human’s DNA, though non-heritable changes can be argued for. The procedure was also medically unnecessary, as far safer and more certain methods of preventing HIV transmission already existed. Given gene editing’s unpredictable and unknown effects, it is logical and ethical to be wary of it.

Why is CRISPR Gene Editing Forbidden + Discussion

So why is this revolutionary technique forbidden throughout the world? The main reason is simply that the technique is too risky in embryos intended for implantation; even if it were ever approved, the technology would still be permitted only in certain circumstances.
Although CRISPR can precisely edit the genome of an individual, many unwanted changes have been observed in subjects’ genes, producing unpredictable outcomes among the cells of the embryo. So the question stands: is this method of gene editing necessary? Is it an ethical solution to preventing genetic diseases despite its uncertainty and unpredictability? These questions are the exact roadblocks on the journey to the future of gene editing.


“Full Stack Genome Engineering.” Synthego.

Ng, Written by Daphne. “A Brief History of CRISPR-Cas9 Genome-Editing Tools.” Bitesize Bio, 29 Apr. 2021,

Hunt, Katie. “What Is CRISPR and Why Is It Controversial?” CNN, Cable News Network, 7 Oct. 2020,

Human Germline and Heritable Genome Editing: The Global Policy …
Ledford, Heidi. “’CRISPR Babies’ Are Still Too Risky, Says Influential Panel.” Nature News, Nature Publishing Group, 3 Sept. 2020.

Biomedical Research

Antibiotic Resistance: The Quiet Crisis


Since Alexander Fleming’s discovery of penicillin in 1928, antibiotics have systematically changed and revolutionized the field of medicine. These antibiotic drugs, or antimicrobial substances, are widely used throughout medical treatment to prevent infections by inhibiting the growth and survival of bacteria. However, as the use of antibiotics becomes ever more mainstream, even reaching consumer shelves as over-the-counter medicines, so does the risk of bacteria gaining resistance to them.


The penicillin "wonder drug" transformed modern medicine and saved millions of lives. It was first widely prescribed during the Second World War to control infections in wounded soldiers. Only a few years later, however, penicillin resistance became a serious problem in many clinics and health organizations. In response to the spread of penicillin-resistant bacteria, a new line of beta-lactam antibiotics was developed, restoring confidence in antibiotic therapy. Antibiotics have not only played a pivotal role in saving patients' lives but have also enabled key medical and surgical breakthroughs. They have successfully prevented or treated infections in patients undergoing chemotherapy, in those with chronic diseases such as end-stage renal disease or rheumatoid arthritis, and in those who have undergone complex procedures such as organ transplants or cardiac surgery.

The Quiet Crisis

The world was warned of the looming antibiotic resistance crisis as early as 1945, when Sir Alexander Fleming expressed his concerns about an age of antibiotic abuse: "[the] public will demand [the drug and] … then will begin an era … of abuses" (Ventola 2015). Despite the pleas of Fleming and many other scientists, antibiotics continue to be overused worldwide. The CDC has classified a growing list of bacteria as urgent, serious, or concerning threats to healthcare systems and their patients.

Additionally, resistance genes can easily spread from one bacterial species to another through a process known as horizontal gene transfer (HGT). As the primary mechanism for spreading resistance, HGT is defined as the "movement of genetic information between organisms". Because of HGT and the hereditary passing of genes to offspring (vertical gene transfer), eliminating bacteria that carry resistance genes has become a seemingly intractable problem for healthcare professionals. In countries such as India, the antibiotic resistance crisis has grown so severe that simple wounds can lead to deadly infections.

The crisis is further perpetuated by problems such as inappropriate prescribing, extensive agricultural use, and the scarcity of new antibiotics. Incorrectly administered antibiotics continue to drive the spread of microbial resistance. As Ventola notes, "Studies have shown that treatment indication, choice of agent, or duration of antibiotic therapy is incorrect in 30% to 50% of cases." Inappropriately administered antibiotics offer limited medical benefit while exposing patients to antibiotic-related risks such as drug-induced liver injury. They can also trigger genetic alterations in bacteria, including changes in gene expression and HGT, that promote increased bacterial virulence and resistance.

Furthermore, antibiotics are widely used in animals to stimulate growth and prevent infection, accounting for over 80% of antibiotics sold in the United States. Antimicrobial treatment of livestock is intended to improve the animals' overall health, resulting in higher yields and better-quality output. However, bacteria in these livestock gain resistance to the ingested antibiotics, and that resistance can then be transferred to the humans who eat the meat of the butchered animals. Agricultural antibiotic use also affects the environmental microbiome: up to 90% of the drugs administered to livestock are excreted in urine and stool and then broadly disseminated through fertilizer, freshwater, and runoff. This exposes bacteria in the surrounding environment to growth-inhibiting substances, altering the local ecology by raising the ratio of resistant to susceptible bacteria.


Although the antibiotic resistance crisis can seem unsolvable, everyone can play a part by consuming fewer antibiotics and using them only when genuinely needed. In addition, bacteriophages, viruses that infect and kill bacteria, appear to be a promising alternative that could help relieve pressure on the antibiotic resistance crisis.

Works Cited

  1. Bohan, J. G., Cazabon, P., Hand, J., Entwisle, J., Wilt, J. K., & Milani, R. V. (2019, February 13). Reducing inappropriate outpatient antibiotic prescribing: Normative comparison using unblinded provider reports. PubMed. Retrieved February 25, 2022.
  2. Romero-Calle, D., Benevides, R. G., Góes-Neto, A., & Billington, C. (2019, September 4). Bacteriophages as alternatives to antibiotics in clinical care. PubMed. Retrieved February 25, 2022.
  3. Ventola, C. L. (2015, April). The antibiotic resistance crisis. PubMed. Retrieved February 25, 2022.
  4. World Health Organization. (2020, July 31). Antibiotic resistance. Retrieved February 25, 2022.