Artificial intelligence in medicine: opportunities and challenges

Jan Przybylowicz
18 min read · Jul 14, 2021
Figure 1. Revolution in medicine is coming (Anch, 2019).

A vision of intelligent machines assisting humans in everyday life has been widely explored by writers, directors, and researchers for decades. However, due to the lack of appropriate infrastructure, it remained mere science fiction until recently. Rapid technological advancements have equipped computers with unparalleled processing power that allows them to run complex algorithms and process millions of data points within seconds. Hence, the field of artificial intelligence (AI) has become of broad and current interest in academia and business, inspiring hope for ground-breaking discoveries and optimal execution of routine tasks.

A system is perceived to act intelligently when it does what is appropriate for its goal and circumstances, responds to changing environments and objectives, and learns from experience to improve its reliability in the future (Poole, 1998). In an effort to create such systems, different subsets of AI have been developed, most notably machine learning (ML), which is based on algorithms that can be trained on data without relying on explicit pre-programmed rules (Pyle & San José, 2015). This is illustrated in Figure 2.

Figure 2. Artificial intelligence, machine learning and deep learning (World Wide Web Foundation, 2017).

In the era of data abundance and sophisticated statistical methods, machine learning has proven its potential and found applications across many industries; autonomous vehicles and Netflix’s personalised movie recommendations serve as examples. Naturally, many attempts have been made to leverage AI and ML to revolutionise healthcare as well. Unfortunately, in this domain machine learning has so far failed to fully deliver on its promise, as there is a lack of substantial evidence for its effectiveness, especially with regard to health outcomes (Cabitza, Rasoini & Gensini, 2017). Nevertheless, healthcare attracts unprecedented attention and massive funding in this respect, and this hype is predicted to grow even further, as presented in Figure 3.

Figure 3. Explosive growth of the AI health market (Collier Matt, Fu Richard & Yin Lucy, 2017).

How is it that, despite cutting-edge technology and considerable capital injections, medical practice has not evolved noticeably towards automation in recent years? How can digital solutions be introduced effectively? These questions are important to answer for a number of reasons:

1) Human life is at stake. Doctors, scientists, and policymakers are ethically obliged to make every effort to maximise the potential of available technologies for improvement of quality of care.

2) The financial burden of healthcare is enormous and ever-growing (Keehan et al., 2015). Automation of certain aspects of clinical work could substantially reduce the cost of provided services. According to Accenture, combined savings from AI health applications could reach $150 billion by 2026 in the US alone (Collier Matt, Fu Richard & Yin Lucy, 2017).

3) There are many important areas of research in medicine other than artificial intelligence and machine learning, so government-funded AI ventures must be backed by convincing evidence.

It is therefore essential to understand the potential benefits and current limitations of AI and ML and put them in the context of clinical care. No matter how disruptive this technology is, the complicated nature of practising medicine, the intricacies of healthcare organisations, and the wishes of patients and doctors have to be taken into account to obtain the desired outcomes. The aim of this post is to explore crucial performance, social, and ethical issues related to the use of AI and ML in healthcare and follow up with solutions.

Performance

Who does it better: a man or a machine?

It is vital to understand how machine learning works in order to identify the areas in which it can bring the most value. Broadly speaking, ML can be divided into two types: supervised learning and unsupervised learning. They are compared in Figure 4.

Figure 4. Comparison of supervised and unsupervised machine learning (Zhou Linda, 2018).

In supervised learning, the goal is to develop a function that maps inputs to outputs on the basis of sample data, in order to best predict a known target. It is focused on classification and prediction (Deo, 2015), which hints that its main uses in medicine would be prognosis and diagnosis. Indeed, prognosis for coronary heart disease using the Framingham Risk Score (Kannel et al., 1975), or an automated ECG machine trained to detect a limited set of pre-programmed diagnoses from one-dimensional time signals, are relatively simple examples of supervised learning commonly seen in medical practice. Advanced supervised learning techniques create even more possibilities. For instance, they have helped clinicians identify rational targets for intervention in complex diseases such as diffuse large B-cell lymphoma, the most common lymphoid malignancy among adults (Shipp et al., 2002). They can also draw on rich electronic health records to enable early detection of disease onset (Choi et al., 2017).
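
As a toy illustration of this paradigm (not a clinical tool; the features, values, and labels below are entirely made up), a supervised model can be fitted to labelled examples and then asked to predict the label of an unseen case:

```python
# A minimal supervised-learning sketch: the model learns a mapping from
# labelled inputs (risk factors) to a known output (a diagnosis label).
# All numbers are synthetic and purely illustrative.
from sklearn.linear_model import LogisticRegression

# Each row: [age in years, systolic blood pressure]; label 1 = disease present
X_train = [[40, 120], [50, 140], [60, 160], [35, 115], [70, 170], [45, 125]]
y_train = [0, 1, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # supervised: learns from input-output pairs

# Predict the label for a previously unseen patient
print(model.predict([[65, 165]]))
```

The essential point is that the target is known in advance: the algorithm is explicitly told which examples are ‘disease’ and which are not, and its only job is to reproduce that distinction on new cases.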

In unsupervised learning, on the other hand, the outputs to predict are not specified (Deo, 2015). As seen in Figure 4 above, algorithms establish naturally occurring patterns in unlabelled, unclassified data, producing clusters that share similar characteristics. The outcomes can be very challenging to interpret, yet they provide a powerful tool for uncovering hidden relationships within multidimensional datasets. For example, an unsupervised method enabled the identification of certain shared biomolecular events in cancer (Wei-Yi Cheng, Tai-Hsien & Anastassiou, 2013). The achievements in applying AI and ML to medicine are very promising. Not only does this technology have the potential to improve present-day clinical practice, but it is also paving the path for emerging fields such as precision medicine (Mesko, 2017) (Krittanawong et al., 2017).
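
The contrast with the supervised case can be shown in a similarly small sketch (the measurements are synthetic and merely stand in for, say, gene-expression profiles): no labels are provided, and the algorithm is left to discover the grouping on its own.

```python
# A minimal unsupervised-learning sketch: k-means clustering receives no
# labels and groups samples purely by similarity. Data are synthetic.
from sklearn.cluster import KMeans

# Hypothetical two-dimensional measurements for six patient samples
samples = [[0.10, 0.20], [0.15, 0.25], [0.12, 0.22],
           [0.90, 0.80], [0.85, 0.90], [0.95, 0.85]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(samples)

# Samples 0-2 receive one cluster label and samples 3-5 the other;
# interpreting what the clusters *mean* is left to the human expert.
print(kmeans.labels_)
```

Note that the algorithm only reports that two groups exist; whether those groups correspond to anything biologically meaningful is exactly the interpretive work described above.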

It seems that computers are able to arrive at a decent diagnosis faster and more cheaply. They do not get tired, bored, or distracted. They do not rely on fallible memory but utilise millions of accurate data points instead. Do we still need doctors, then? The short answer is yes. Medicine is a very complex discipline that demands an extensive combination of hard and soft skills. Every patient who visits a clinic has a different history and different symptoms. What is more, disease pathogenesis is often a heterogeneous and multifactorial process. Thus, most problems in healthcare do not belong to those with a limited number of known variables that can be easily encoded into software, which is often the case in other industries. Presenting a great degree of uncertainty, medicine is as much art as it is science. What has always been common knowledge among physicians still poses a great challenge for ML researchers and developers.

There are still many technical limitations of machine learning in medicine. From a mathematical point of view, ML attempts to arrive at optimal solutions to tasks with many possible solutions (so-called ill-posed problems) (Taylor, 2006). In essence, if the pursuit of a solution is unconfined, the established decision rule will ‘overfit’ the training data. This is especially problematic with more complex decision rules, which risk binding too strongly to the multidimensional training dataset and therefore failing to generalise to a new dataset. In other words, one must choose between simple yet generalisable machine learning solutions and complex but unpredictable ones. This pertains to the fundamental problem of appropriate feature selection in machine learning (Iguyon & Elisseeff, 2003) and can be extremely risky in a medical context, where one error may cost patients’ lives. Doctors are superior in this respect, as their reasoning is much more finely tuned and they are able to constantly reconsider their decisions. What is more, due to the dynamic nature of patient management, statistical models in ML can hardly account for unexpected changes in a patient’s condition, such as a lack of response to treatment or rapid deterioration. As long as artificial intelligence can only provide probabilistic outputs but is unable to answer the question ‘How do I effectively manage this patient throughout their journey?’, its value in clinical practice is restricted. Moreover, new patterns in data exposed by machine learning may indeed be exciting and helpful, but they do not provide any explanations on their own and do not necessarily point towards a cause-effect relationship between two variables, a coveted concept in modern medicine. Therefore, any observations must be further investigated by human specialists.
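
The overfitting trade-off described above can be demonstrated in a few lines. In this sketch (with synthetic, truly linear data), a high-degree polynomial fits the noisy training points almost exactly, yet a plain straight line predicts new data better:

```python
# Overfitting sketch: a complex model binds too strongly to the training
# set and generalises worse than a simpler one. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, 10)  # truly linear, plus noise
x_test = np.linspace(0, 1, 50)
y_test = 2 * x_test                             # noise-free ground truth

simple = np.polyfit(x_train, y_train, deg=1)    # generalisable straight line
complex_ = np.polyfit(x_train, y_train, deg=9)  # interpolates the noise

err_simple = np.mean((np.polyval(simple, x_test) - y_test) ** 2)
err_complex = np.mean((np.polyval(complex_, x_test) - y_test) ** 2)
print(err_simple, err_complex)  # the simple model has the lower test error
```

The degree-9 polynomial has enough freedom to pass through every noisy training point, which is precisely why it oscillates wildly between them and fails on unseen inputs; the same mechanism underlies complex decision rules failing on a new patient cohort.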

Despite the aforementioned challenges, it must be recognised that advanced statistical methods and elaborate, well-crafted algorithms do exist and perform relatively well. Unfortunately, the biggest problem with the use of artificial intelligence in medicine lies at its source: the data (Wachter, 2015). Medical information is overwhelmingly fragmented and noisy, i.e. very important bits of data are incorrect or missing, and records are flooded with irrelevant or indecipherable inputs. This phenomenon stems from multiple causes. Firstly, conditions for data collection in the clinical setting are rather adverse: doctors are usually very busy and cannot pay enough attention to accurate and complete documentation of patient notes. Moreover, patients’ multifaceted situations largely dictate what type of information is gathered, resulting in very haphazard databases. An absence of standardisation does not help. Different electronic health records are used across institutions, let alone countries (Kellermann & Jones, 2013), significantly affecting data interoperability. On top of that, medical records are not routinely shared between healthcare providers, due to legitimate legal and privacy concerns as well as the vested interests of private parties. Until these problems are addressed, progress in the adoption of AI in healthcare is likely to be hindered.
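
A small, hypothetical sketch makes the fragmented-data problem concrete: before any training can begin, missingness has to be quantified, and the naive remedy of discarding incomplete records can wipe out most of the dataset.

```python
# Sketch of the fragmented-data problem: hypothetical patient records
# with missing values and inconsistent coding, as commonly found in
# electronic notes. Field names and values are invented.
import pandas as pd

records = pd.DataFrame({
    "age":         [54, None, 67, 41],
    "systolic_bp": [140, 150, None, None],
    "smoker":      ["yes", "Y", None, "no"],  # inconsistent coding too
})

# Quantify missingness per field before any model training
print(records.isna().mean())

# One common (lossy) remedy: drop incomplete rows; here most data vanish
complete = records.dropna()
print(len(complete), "of", len(records), "records are complete")
```

With only one of four records fully populated, any model trained on the ‘clean’ subset sees a tiny, biased sample, which is exactly why data quality, not algorithm quality, is so often the binding constraint.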

Why not both!

The combined potential of state-of-the-art algorithms and powerful computers gives machine learning a technical opportunity to approximate and even outperform human healthcare providers in many cognitive tasks. Even verdicts based on holistic assessment and ‘eyeball tests’, which appear to put well-rounded doctors above computers, fall short under objective statistical analysis (Jain, Duval & Adabag, 2014). We tend to overestimate the capabilities of human intuition and experience. Nevertheless, machine learning focuses solely on data at the expense of context (Cabitza, Rasoini & Gensini, 2017). This greatly affects the value of the output diagnoses, therapies, and prognoses, which miss relevant psychological, social, relational, and organisational issues. These important factors are difficult to incorporate into machine learning due to their qualitative rather than quantitative nature. Displaying complementary advantages and limitations, doctors and computers should be perceived as team players rather than direct competitors, contrary to speculation in the media asking whether AI will replace doctors (BBC, 2018).

Figure 5. Results produced by machine learning must undergo evaluation by a human expert (Chollet, 2017).

To overcome ML’s tendency to overfit the data and miss clinical context, a doctor should oversee and evaluate the results produced by a machine, as depicted in Figure 5 above. This will ensure very high sensitivity, lower cost, and saved time, while preserving specificity and patient safety (Deo, 2015). Even the biggest corporate players in the field of machine learning, such as IBM Watson, stress that intelligent systems are designed to augment clinical practice and cannot work independently (Wachter, 2015). It is therefore essential to engage doctors at every stage of product development. Healthcare organisations and practitioners should figure out how to make this cooperation as productive as possible. It certainly involves opening the black box that machine learning is (Jastrzebski, Nov 16, 2018). The software should be able to provide rational explanations for the outputs it produces and utilise intuitive visualisation tools that allow medical practitioners to better understand the effect of different exposure variables (Cabitza, Rasoini & Gensini, 2017). Such improvements, however, would not exempt clinicians from acquiring strong skills in the critical and analytical evaluation of AI solutions. Institutionally, the medical school curriculum should be updated for the future generation of doctors to include at least basic IT training. Ideally, it could entail coding and software development skills as well as an introduction to advanced statistics. Universities worldwide might consider extending their offer of supplementary IT-related courses, such as medical doctorate degrees in artificial intelligence. Intercalated degrees in Management and Biomedical Engineering (Imperial College London) or the iBSc in Mathematics, Computers, and Medicine (University College London), designed specifically for medical students, are also excellent examples to follow.
As Alan Perlis, one of the pioneers of computer science, said: ‘In man-machine symbiosis, it is man who must adjust: the machines can’t.’ (Perlis, 1982). However, a shift towards greater reliance on digital solutions should not affect doctors’ knowledge and professionalism; any clinical deskilling must be prevented at all cost (Hoff, 2011).

The issues described earlier, related to the quantity and quality of data available for training machine learning programs, need to be addressed as well. Development of national frameworks for the interoperability of electronic health records would allow the creation of large, uniform databases. There are three dimensions in which standardisation must be achieved: how data is sent and received, the format and structure of the information, and the terminology used within messages (Kellermann & Jones, 2013). Health IT systems must be easy to use, to encourage the collection of complete and accurate data without slowing doctors down (Campbell et al., 2006). Companies should work more closely with public health providers, who have the capacity to provide comprehensive datasets gathered from millions of patients. Understandably, an appropriate incentive model could be created to ensure that all parties involved, i.e. clinics, AI companies, and patients, benefit from sharing their data. Even if some of these ideas are implemented in practice, the quality of datasets will improve but is probably still going to be far from perfect. It remains a challenge for researchers to further investigate methods of data processing and apply them in the real world. For example, pre-existing databases could undergo extensive data mining in order to create new bits of information (Fayyad & Uthurusamy, 1996).
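
To make the standardisation dimensions concrete, here is a deliberately simplified sketch (all field names, codings, and clinics are hypothetical) of the kind of mapping layer an interoperability framework would formalise: two incompatible record formats are converted to one shared schema and terminology before the data can be pooled.

```python
# Hypothetical record harmonisation: two clinics export the same facts
# under different field names, units, and codings; a mapping layer
# converts both to a shared schema before the data can be pooled.

SHARED_TERMS = {"m": "male", "male": "male", "f": "female", "female": "female"}

def normalise_clinic_a(rec):
    # Clinic A: weight stored in kilograms, sex coded "m"/"f"
    return {"weight_kg": rec["weight"], "sex": SHARED_TERMS[rec["sex"]]}

def normalise_clinic_b(rec):
    # Clinic B: weight stored in pounds, sex spelled out under "gender"
    return {"weight_kg": round(rec["weight_lb"] * 0.4536, 1),
            "sex": SHARED_TERMS[rec["gender"].lower()]}

pooled = [normalise_clinic_a({"weight": 82.0, "sex": "m"}),
          normalise_clinic_b({"weight_lb": 150.0, "gender": "Female"})]
print(pooled)
```

Real frameworks obviously operate at a far larger scale and with controlled vocabularies, but the principle is the same: agree on one schema, one set of units, and one terminology, then translate every source into it.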

The conclusion follows that, despite some limitations, machine learning has a great capacity to optimise and automate specific tasks drawing on elaborate algorithms and large datasets. The best outcomes, however, are achieved when this potential is used to complement the physician’s skills and expertise.

Social impact

Redefining the role of the doctor

Diagnosis has been the foundation of medical practice for ages. In his bestselling book How We Die (Nuland, 1994), the Yale surgeon and Pulitzer Prize finalist Sherwin Nuland respectfully termed diagnosis ‘The Riddle’ and confessed:

I capitalise it so there will be no mistaking its dominance over every other consideration. The satisfaction of solving The Riddle is its own reward, and the fuel that drives the clinical engines of medicine’s most highly trained specialists. It is every doctor’s measure of his own abilities; it is the most important ingredient in his professional self-image.

Technical analysis of machine learning has unveiled its ability to take a solid chunk of diagnostic work away from doctors. Practitioners may be concerned about diminishing public respect for their profession, as well as its general purpose. Worse yet, they might fear that they will eventually lose their jobs completely.

However, doctors do not need to worry about running out of responsibilities. They are still the decision makers, responsible for the final diagnosis and treatment. Machine learning solutions will only improve confidence and decrease the time required to make a clinical judgement, which in turn will boost the doctor’s efficiency and allow them to help more patients than ever. Moreover, a growing and ageing global population puts so much pressure on healthcare providers that it is going to be difficult for them to keep up anyway. According to the analysis by Accenture, presented in Figure 6, by 2026 artificial intelligence will address only 20% of unmet clinical demand.

Figure 6. Both doctors and AI will be necessary to meet an increasing clinical demand. Graph is not to scale and is illustrative (Collier Matt, Fu Richard & Yin Lucy, 2017).

The paradox: AI making healthcare more human

It is worth considering whether diagnosis is actually the most important aspect of the care provided by doctors. Patients come to the clinic for numerous reasons. Not only do they want to understand and treat their condition, but they also need someone to listen, support, and provide guidance. In many cases, there is no definite diagnosis or effective treatment for a patient’s condition, and the supportive role of the medical team becomes crucial. Partial automation of clinical and administrative tasks will hopefully allow physicians to spend more time with patients in a meaningful, personal way. Interestingly, focusing on patients and getting to know them better may bring about remarkable clinical outcomes. The patient-centred approach reduces the number of symptoms experienced by patients and improves diagnostic efficiency, adherence to treatments, recovery, and, finally, satisfaction with the service (Murtagh, 2015) (Clever et al., 2008).

Therefore, a potential transformation of the doctor’s professional profile from mainly diagnostic to more supportive could be genuinely valuable and delightfully humane. To achieve this, all doctors must learn to appreciate the value of holistic care, and patients have to maintain the trust they place in doctors, even when some of their most impressive clinical skills are supplemented by silicon wafers. In the light of this revolution, physicians will hopefully invest more in human skills and find the use and development of data-driven software rewarding.

Ethical considerations

Safety and responsibility

Since artificial intelligence is apparently taking healthcare by storm, it is necessary to reflect on the ethical issues that might arise. To what extent should we trust ‘thinking computers’? A recent case from the New York-based Memorial Sloan Kettering Cancer Center shows that excessive reliance on computationally derived solutions can precipitate tragic consequences. Namely, the hospital deployed IBM’s Watson, which reportedly created ‘unsafe and incorrect’ treatment plans for patients suffering from cancer (Spitzer, 2018). The tech giant was accused of training the system on hypothetical cases instead of real patient data and of ignoring guidelines. It is unacceptable to let situations like this happen, as patient safety is paramount. Being a digital tool, AI can instinctively be perceived as safer than, for instance, novel medicines that directly enter the human body. However, this perception is false, as misleading recommendations can do a lot of harm.

Assuming that doctors remain the final decision makers, they should also accept ultimate responsibility for patient well-being. However, this should not exempt AI vendors from the obligation to present exhaustive evidence and obey the rules of good practice in marketing and sales. Policymakers are now facing the challenging task of regulating a rapidly evolving market of ‘intelligent’ solutions. On top of the safety issues, they need to polish the legal frameworks regarding the secure maintenance of large databases, while simultaneously allowing patients to retain full control over their private information (Neame, 2013).

Care for everyone

The digital nature of artificial intelligence gives real hope for reducing healthcare inequalities worldwide, as it can potentially increase the reach and quality of healthcare, especially in remote locations. The value of bringing specialist knowledge to scale is enormous. Speaking of global health, AI can also effectively protect whole populations. For example, computers have learned to recognise weather and land-use patterns correlated with the spread of dengue fever, a disease that half of the world’s population is at risk of acquiring (Hornyak, 2017).

To achieve such results globally, companies and researchers must remember to address the needs of poor regions with their machine learning tools and try to provide them at an affordable rate.

Final thoughts

It is evident that artificial intelligence is on its way to becoming a key player in modern healthcare. Bearing this in mind, it is crucial to take appropriate steps to ensure that the wide adoption of AI will actually be favourable for patients. As of today, even though there are many successful applications of machine learning to medical data, the vast majority of them have not contributed substantially to clinical care. Therefore, this post analysed and compared the cognitive capacities of computers and humans, provided an insight into the social and ethical issues related to the beneficial implementation of machine learning, and suggested a set of strategies to change the status quo.

Machine learning teaches programs to recognise patterns, which can later be used to approximate the doctor’s performance or guide further research. In order to build less ‘narrow’ AI models in medicine, new relevant features in patient datasets need to be identified. The interplay of supervised and unsupervised learning can therefore lead to amazing discoveries and identification of molecular pathways triggering a disease that are beyond human perception. However, according to Deo (2015):

This raises a question about the underlying pathophysiologic basis of complex disease in any given individual: is it sparsely encoded in a limited set of aberrant pathways, which could be recovered by an unsupervised learning process (albeit with the right features collected and a large enough sample size), or is it a diffuse, multifactorial process with hundreds of small determinants combining in a highly variable way in different individuals? In the latter case, the concept of ‘precision medicine’ is unlikely to be of much utility.

Therefore, more research into the intricacies of complex disease could be done to explore the opportunities and challenges related to such an exciting field as precision medicine. Moreover, the scope of this post covered predominantly machine learning applied directly to clinical tasks. There are many possible uses of artificial intelligence that were not investigated, such as medical education (Kolachalama & Garg, 2018) or the automation of administrative tasks (Smith, 2018), and these are also central to the overall improvement of healthcare. The social and ethical issues described in this post are also very important; hence, more detailed legal insight and proposed payment schemes could be developed to provide tangible solutions to these problems.

In his book ‘The Digital Doctor’, Robert Wachter (2015) used Ernest Hemingway’s phrase ‘gradually and then suddenly’ to cleverly describe the probable timeline of the AI revolution in healthcare. What will the medicine of the future offer, though? No one knows for sure, but let’s hope for the best.

References

  1. Anch. (2019) Artificial Intelligence In Medicine: How AI Can Benefit The Healthcare Industry. Available from: https://robots.net/ai/artificial-intelligence-in-medicine/.
  2. BBC. (2018) Could artificial intelligence replace doctors? Available from: https://www.bbc.co.uk/news/av/technology-44795307/could-artificial-intelligence-replace-doctors.
  3. Cabitza, F., Rasoini, R. & Gensini, G. F. (2017) Unintended Consequences of Machine Learning in Medicine. Jama. 318 (6), 517–518. Available from: http://dx.doi.org/10.1001/jama.2017.7797.
  4. Campbell, E. M., Sittig, D. F., Ash, J. S., Guappone, K. P. & Dykstra, R. H. (2006) Types of unintended consequences related to computerized provider order entry. Journal of the American Medical Informatics Association. 13 (5), 547–556.
  5. Choi, E., Schuetz, A., Stewart, W. F. & Sun, J. (2017) Using recurrent neural network models for early detection of heart failure onset. Journal of the American Medical Informatics Association. 24 (2), 361–370. Available from: doi: 10.1093/jamia/ocw112.
  6. Chollet, F. (2017) The limitations of deep learning. Available from: https://blog.keras.io/the-limitations-of-deep-learning.html.
  7. Clever, S. L., Jin, L., Levinson, W. & Meltzer, D. O. (2008) Does Doctor–Patient Communication Affect Patient Satisfaction with Hospital Care? Results of an Analysis with a Novel Instrumental Variable. Health Services Research. 43 (5), 1505–1519. Available from: doi: 10.1111/j.1475-6773.2008.00849.x.
  8. Collier Matt, Fu Richard & Yin Lucy. (2017) Artificial Intelligence: Healthcare’s New Nervous System.
  9. Deo, R. C. (2015) Machine Learning in Medicine. Circulation. 132 (20), 1920–1930. Available from: doi: 10.1161/CIRCULATIONAHA.115.001593.
  10. Fayyad, U. & Uthurusamy, R. (1996) Data mining and knowledge discovery in databases. Communications of the ACM. 39 (11), 24–26. Available from: doi: 10.1145/240455.240463.
  11. Hoff, T. (2011) Deskilling and adaptation among primary care physicians using two work innovations. Health Care Management Review. 36 (4), 338. Available from: doi: 10.1097/HMR.0b013e31821826a1.
  12. Hornyak, T. (2017) Mapping Dengue Fever Hazard with Machine Learning. Available from: https://eos.org/articles/mapping-dengue-fever-hazard-with-machine-learning.
  13. Iguyon, I. & Elisseeff, A. (2003) An introduction to variable and feature selection. Journal of Machine Learning Research. 3 1157–1182.
  14. Imperial College London. Intercalated BSc programme. Available from: https://www.imperial.ac.uk/medicine/study/undergraduate/intercalated-bsc-programme/.
  15. Jain, R., Duval, S. & Adabag, S. (2014) How accurate is the eyeball test?: a comparison of physician’s subjective assessment versus statistical methods in estimating mortality risk after cardiac surgery. Circulation.Cardiovascular Quality and Outcomes. 7 (1), 151. Available from: doi: 10.1161/CIRCOUTCOMES.113.000329.
  16. Jastrzebski Stanislaw. (Nov 16, 2018) How Neural Networks Begin to Learn. Science: Polish Perspectives. Nov 16–17, 2018, University of Oxford.
  17. Kannel, W. B., Doyle, J. T., McNamara, P. M., Quickenton, P. & Gordon, T. (1975) Precursors of sudden coronary death. Factors related to the incidence of sudden death. Circulation. 51 (4), 606. Available from: http://circ.ahajournals.org/cgi/content/abstract/51/4/606.
  18. Keehan, S. P., Cuckler, G. A., Sisko, A. M., Madison, A. J., Smith, S. D., Stone, D. A., Poisal, J. A., Wolfe, C. J. & Lizonitz, J. M. (2015) National health expenditure projections, 2014–24: spending growth faster than recent trends. Health Affairs (Project Hope). 34 (8), 1407–1417. Available from: doi: 10.1377/hlthaff.2015.0600.
  19. Kellermann, A. L. & Jones, S. S. (2013) What it will take to achieve the as-yet-unfulfilled promises of health information technology. Health Affairs (Project Hope). 32 (1), 63. Available from: doi: 10.1377/hlthaff.2012.0693.
  20. Kolachalama, V. B. & Garg, P. S. (2018) Machine learning and medical education. Npj Digital Medicine. 1 (1), 1–3. Available from: doi: 10.1038/s41746-018-0061-1.
  21. Krittanawong, C., Zhang, H., Wang, Z., Aydar, M. & Kitai, T. (2017) Artificial Intelligence in Precision Cardiovascular Medicine. Available from: http://www.sciencedirect.com/science/article/pii/S0735109717368456.
  22. Mesko, B. (2017) The role of artificial intelligence in precision medicine. Expert Review of Precision Medicine and Drug Development. 2 (5), 239–241. Available from: doi: 10.1080/23808993.2017.1380516.
  23. Murtagh, G. (2015) Clinical Communication: Course Guide. London, Imperial College London School of Medicine.
  24. Neame, R. (2013) Effective Sharing of Health Records, Maintaining Privacy: A Practical Schema. Online Journal of Public Health Informatics. 5 (2), 217. Available from: doi: 10.5210/ojphi.v5i2.4344.
  25. Nuland, S. B. (1994) How we die. London, Chatto & Windus.
  26. Perlis, A. J. (1982) Special Feature: Epigrams on programming. ACM SIGPLAN Notices. 17 (9), 7–13. Available from: doi: 10.1145/947955.1083808.
  27. Poole, D. (1998) Computational intelligence: a logical approach. New York; Oxford, Oxford University Press.
  28. Pyle, D. & San José, C. (2015) An executive’s guide to machine learning. Available from: https://www.mckinsey.com/industries/high-tech/our-insights/an-executives-guide-to-machine-learning.
  29. Shipp, M. A., Ross, K. N., Tamayo, P., Weng, A. P., Kutok, J. L., Ricardo C.T. Aguiar, Gaasenbeek, M., Angelo, M., Reich, M., Pinkus, G. S., Ray, T. S., Koval, M. A., Last, K. W., Norton, A., Andrew Lister, T., Mesirov, J., Neuberg, D. S., Lander, E. S., Aster, J. C. & Golub, T. R. (2002) Diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning. Nature Medicine. 8 (1), 68. Available from: doi: 10.1038/nm0102-68.
  30. Smith, M. (2018) Bradford, GE Healthcare Announce AI-Powered Hospital Command Center, First of its Kind in Europe. Available from: https://blog.thecamdengroup.com/blog/topic/command-center.
  31. Spitzer, J. (2018) IBM’s Watson recommended ‘unsafe and incorrect’ cancer treatments, STAT report finds. Available from: https://www.beckershospitalreview.com/artificial-intelligence/ibm-s-watson-recommended-unsafe-and-incorrect-cancer-treatments-stat-report-finds.html.
  32. Taylor, P. D. (2006) From patient data to medical knowledge: the principles and practice of health informatics. London, BMJ.
  33. University College London. iBSc Mathematics, Computers and Medicine. Available from: https://www.ucl.ac.uk/infection-immunity/study/ibsc-mathematics-computers-and-medicine.
  34. Wachter, R. M. (2015) The digital doctor: hope, hype, and harm at the dawn of medicine’s computer age. New York; London, McGraw-Hill Education.
  35. Wei-Yi Cheng, Tai-Hsien, O. Y. & Anastassiou, D. (2013) Biomolecular events in cancer revealed by attractor metagenes. PLoS Computational Biology. 9 (2), e1002920. Available from: doi: 10.1371/journal.pcbi.1002920.
  36. World Wide Web Foundation. (2017) Artificial Intelligence: The Road Ahead in Low and Middle-Income Countries.
  37. Zhou Linda. (2018) Simplify Machine Learning Pipeline Analysis with Object Storage. Available from: https://blog.westerndigital.com/machine-learning-pipeline-object-storage/supervised-learning-diagram/.
