Software Architect: Hyunsuk Frank Roh, MD


Publication

  -  1st Author in SCI Journals

  -  Protocols.io

  -  Acknowledged

The asterisk (*) denotes his corresponding authorship.



Table of Contents

Software Architecture

Software Architecture (Conceptualized in 2013)

Software Architecture (The 2024 Edition)


Robotic Surgery

RCT Meta-analysis: Robotic vs. Laparoscopic Surgery (Frank, 2018)

Robotic surgery cost, under the hood

My general subjective opinion on surgical robotics


ECMO (ExtraCorporeal Membrane Oxygenation)

ECMO meta-analysis on hazard ratios: Cardiopulmonary Diseases (Frank, 2020)

ECMO meta-analysis on hazard ratios: Respiratory failure (Frank, 2020)


My Thoughts about Relevant Books, Films, and Media

Artificial Intelligence

'AI Ethics' by Mark Coeckelbergh

'Virtual Reality' by Samuel Greengard

'Intellectual Property Strategy' by John Palfrey

'Cloud Computing' by Nayan B. Ruparelia

'The Internet of Things' by Samuel Greengard


Contact


Acknowledgment


Contents

- Infrastructural Aspects of Software Component (in association with Device)

   (1) Device Interface
   (2) Waveform Analyses
   (3) Hemodynamics
   (4) Medical Statistics
   (5) Machine Learning

The idealism of hemodynamic software

The complexity of hemodynamic models has prevented clinicians from drawing insights from them when relating clinical issues to the models. Visualization is the most persuasive way to illustrate a hemodynamic equation, and simulation is needed to show how the equation's behavior changes as its coefficients are manipulated. Thus, the success of hemodynamic software depends on how easily the hemodynamic model can be visualized and how effectively clinicians can draw insights from it.

Additionally, it would be better if the following conditions were fulfilled: (1) an engineer takes care of CPU time and memory management when combining and implementing the numerous hemodynamic models published so far; (2) the simulation software provides an alternative interface beyond the GUI, enabling experts to work more flexibly with the hemodynamic model; and (3) components such as the device interface, medical statistics, and artificial intelligence are coherently integrated in order to facilitate hemodynamic research.
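As a minimal illustration of this visualization-and-simulation argument, the sketch below integrates a two-element Windkessel model (a textbook lumped-parameter model, not one of the project's own models) and re-plots the pressure waveform as one coefficient, the peripheral resistance R, is manipulated; all parameter values are illustrative.

    import numpy as np
    import matplotlib.pyplot as plt

    def windkessel(R=1.0, C=1.5, heart_rate=75, n_beats=5, dt=1e-3):
        """Two-element Windkessel: dP/dt = (Q_in(t) - P/R) / C, integrated
        with a simple Euler loop (R in mmHg*s/mL, C in mL/mmHg)."""
        period = 60.0 / heart_rate
        t = np.arange(0.0, n_beats * period, dt)
        phase = t % period
        # Toy inflow: a half-sine ejection during the first third of each beat.
        q_in = np.where(phase < period / 3,
                        300.0 * np.sin(np.pi * phase / (period / 3)), 0.0)
        p = np.empty_like(t)
        p[0] = 80.0
        for i in range(1, len(t)):
            p[i] = p[i - 1] + (q_in[i - 1] - p[i - 1] / R) / C * dt
        return t, p

    for R in (0.8, 1.0, 1.3):        # manipulate one coefficient of the equation
        t, p = windkessel(R=R)
        plt.plot(t, p, label=f"R = {R}")
    plt.xlabel("time (s)"); plt.ylabel("aortic pressure (mmHg)"); plt.legend()
    plt.show()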

Infrastructural aspects of each component

Each component serves as a basis upon which the other components can be built. This circular data flow in the architecture diagram will eventually contribute to the development of the other components synergistically. In other words, since the final overall goal of this software project is to facilitate data flow according to the software architecture, progress in one part of the development will benefit the other parts of the research.

The hemodynamic workbench software will be implemented to provide the following infrastructural functionalities: (1) to receive signals from the hemodynamic instrument; (2) to extract the necessary information by wavelet analyses; (3) to understand the data according to the hemodynamic model and simulation; (4) to provide medical statistics; and (5) to perform actions through a reinforcement learning process.

Why the thoracic cavity for hemodynamic software and robotic surgery?

The thoracic cavity is intriguing for its demanding physiological and computational potential. Physiologically, the lungs and the heart are directly governed by the laws of physics: the hemodynamics of blood circulation and respiration in relation to auscultation, electrocardiography, ECMO, and anesthetic machines. Computationally, a kernel-level device driver and a Bayesian-based machine learning algorithm can be employed for (1) monitoring the states of the thoracic organs, (2) computer-assisted hemodynamic modeling and simulation, and (3) machine learning for information processing. In addition, the thoracic cavity is ideal for a specialty that sits on the cusp between surgery and engineering, offering intellectually and technically challenging surgical robotic R&D projects on organs encased by bone, which are best accessed and manipulated by a thin robotic hand instrument with ergonomic advantages. This will widen the indications for robotic cardiovascular surgery with new surgical procedures that integrate various additional hemodynamic devices and computational support.

"Surgeons must progress beyond the traditional techniques of cutting and sewing that have been their province since surgeons were barbers to a future in which approaches involving minimal access to the abdominal cavity are only the beginning." - Pappas et al. (2004) N Engl J Med.


(1) Device Interface index

The device driver interface component will enable the software to access raw data directly from a device. Biomedical companies seem to welcome the idea of enabling third parties to write software for their devices, exemplified by 3M providing an SDK (Software Development Kit) that allows people to write software for its Bluetooth stethoscope. However, my ultimate goal is to go one step further by implementing a kernel-level device driver that would connect devices more fundamentally than existing SDKs do and, therefore, to establish an integrative and flexible hemodynamic workbench.
    Some EKG classification articles (Lee, 2013) (Lihuang, 2010) relied exclusively on the MIT-BIH arrhythmia database, the standard test material, to evaluate their arrhythmia detection algorithms. However, to the best of our knowledge, those researchers' having to work exclusively with the MIT-BIH arrhythmia database may be at least partially attributed to the difficulty of acquiring new raw EKG datasets in the absence of an open-source device interface for EKG instruments. Therefore, if this software can receive the raw EKG stream over a WiFi or USB connection from instruments, future engineers will be able to acquire additional test material by directly collecting further raw EKG data alongside the corresponding EKG diagnoses.
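To make the data-collection idea concrete, here is a hypothetical sketch: no real device protocol is specified above, so it assumes a device that streams one ASCII sample per line over a USB-serial port (via the pyserial package) and appends each sample, with an empty diagnosis column to be filled in later, to a local CSV file.

    import csv
    import serial  # pyserial; the port name and line framing below are assumptions

    PORT, BAUD = "/dev/ttyUSB0", 115200

    def record_ekg(path="ekg_raw.csv", n_samples=5000):
        """Append n_samples raw readings to a CSV for later labelling."""
        with serial.Serial(PORT, BAUD, timeout=1) as dev, \
             open(path, "a", newline="") as f:
            writer = csv.writer(f)
            for _ in range(n_samples):
                line = dev.readline().decode("ascii", errors="ignore").strip()
                if line:
                    writer.writerow([line, ""])  # raw sample, diagnosis added later

    record_ekg()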
    Nonetheless, companies would be cautious about opening their device protocols for such a kernel-level device interface, since doing so might change their marketing strategies and policies. Therefore, the continuous, long-term improvement of Project nGene.org®, gaining agreement on its clinical pragmatism and embracing clinicians' needs by providing an easy environment in which to write their own scripts, will have to be prioritized over this kernel component.


(2) Waveform Analyses index

"(2) Waveform Analyses" component pre-processes the raw wavelet data directly from the devices via the "(1) Device interface" component. In order to handle the raw wavelet dataset, such as EKG, lung and heart sounds, etc., two core algorithms have been chosen to be common denominating features: Independent Component Analysis (ICA) separates the mixed wavelets, whereas Support Vector Machine (SVM) classifies things after being trained.
    Its benefit can be illustrated by how this feature may change the existing workflow. These machine-learning components can be used tentatively until a more precise implementation of wavelet classification becomes available. For example, a machine-learning algorithm for classifying EKGs would be no match for manually written conditional statements implementing the Sokolow-Lyon criteria for left ventricular hypertrophy (LVH) (Sokolow, 1949); it would be nonsensical to train an SVM to decide whether the sum of the S wave in V1 and the R wave in V5 or V6 exceeds, specifically, 35 mm. However, until such manually implemented code is developed for each criterion, it may be better to employ machine-learning features to accommodate the wavelets and accelerate research and development in the meantime.
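The kind of manually written conditional statement referred to above can be as short as the following sketch, which checks the Sokolow-Lyon voltage criterion; the lead amplitudes are assumed to have already been measured, in millimetres.

    def sokolow_lyon_lvh(s_v1_mm: float, r_v5_mm: float, r_v6_mm: float) -> bool:
        """Sokolow-Lyon voltage criterion: S(V1) + max(R(V5), R(V6)) > 35 mm."""
        return s_v1_mm + max(r_v5_mm, r_v6_mm) > 35.0

    print(sokolow_lyon_lvh(s_v1_mm=14.0, r_v5_mm=24.0, r_v6_mm=20.0))  # True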
    As an example of embedding this software into the educational CPR kit mentioned above, the AED (Automated External Defibrillator) algorithm requires distinguishing normal EKGs from various arrhythmias. However, since the MIT-BIH "arrhythmia" database does not contain a normal EKG dataset, the "(1) Device Interface" component can be used to collect raw normal EKG data. Once normal EKG data with diagnoses have accumulated, an SVM can be trained to classify whether a rhythm calls for defibrillation, synchronized cardioversion, or no shock, or is normal, until a more accurate manually programmed classification algorithm is developed.
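A sketch of that interim classifier is shown below: a multi-class SVM trained on already-extracted beat features and the four management categories named above. The feature extraction step and a labelled dataset are assumed to exist; the random placeholder arrays here only demonstrate the training interface.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    LABELS = ["defibrillate", "synchronized_cardioversion", "non_shockable", "normal"]

    # X: (n_beats, n_features) feature matrix; y: integer labels 0..3 (placeholders).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((400, 12))
    y = rng.integers(0, 4, size=400)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))   # ~0.25 on random placeholders
    print("first prediction:", LABELS[int(clf.predict(X_te[:1])[0])])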


(3) Hemodynamics index

Project nGene.org® intends to facilitate research on hemodynamic models, not only to better understand the physiology but also to gain further insights into improving the models. Numerous equations have already been published, with more to come, and it may be too late if we simply wait for an echocardiography manufacturer's engineer to implement a module for the equation we need. Unless the software is open-sourced, it cannot possibly keep pace with the insights generated during research. Yale's NEURON simulator is open-sourced with a GUI for simulating neuronal networks; however, in my opinion, no matter how flexibly a software architect implements a GUI, it cannot be on a par with the flexibility and creativity of the new equations and insights clinicians will have in the future.
    Therefore, Project nGene.org® tries to circumvent this problem by integrating R scripting so that clinicians can add their own equations and test them on the fly during echocardiographic measurements. At the same time, I believe its popularity will depend on how easy and generic it is for clinicians to add to and modify the source code. Since clinicians have little time to spend on learning, the environment must be intuitive enough that they are willing to invest their time; I think they will do so only if they can grasp it intuitively.
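The sketch below illustrates one way such on-the-fly scripting could look, assuming the rpy2 bridge between Python and R; the function name, the measurement names, and the example expression (a simplified Bernoulli estimate of pulmonary artery systolic pressure) are all illustrative, not part of the project's actual interface.

    import rpy2.robjects as robjects

    def evaluate_user_equation(r_expression: str, measurements: dict) -> float:
        """Push named echo measurements into the embedded R session and
        evaluate a clinician-supplied R expression, returning a scalar."""
        for name, value in measurements.items():
            robjects.globalenv[name] = robjects.FloatVector([float(value)])
        return float(robjects.r(r_expression)[0])

    # A clinician prototypes a simplified estimate, 4 * v^2 + RAP, on the fly.
    pasp = evaluate_user_equation("4 * tr_vmax^2 + rap",
                                  {"tr_vmax": 2.8, "rap": 8.0})
    print(round(pasp, 1))   # about 39.4 mmHg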


(4) Medical Statistics & (5) Machine Learning index

"(4) Medical Statistics" is something that I do, not as a destination, but as a necessary step. To put it straightforwardly, the ultimate goal is "(5) Machine Learning". "(5) Machine Learning" component is pushed back on the priority list in the Masterplan Chart, because the software is designed to provide the following different types of dataset for the machine-learning algorithms: (i) Directly from hardware via the kernel program part, "(1) Device Interface"; (ii) Indirectly processing the wavelets raw data from instruments, "(2) Waveform Analyses"; (iii) Parsing and processing articles, especially meta-analysis and survival curve data, "(4) Medical Statistics", via a semantic web.
    The semantic web is well suited to medicine for several reasons: (1) it is flexible enough to integrate other semantic webs, so it can serve as a knowledge database enriched with numerical information; (2) this numerical information, in network form, can be fed into Bayesian-based machine learning; and (3) meta-analysis is one of the most specialized forms of information available in medicine, and deriving the hazard ratio from a survival curve for meta-analysis was, in my opinion, the most difficult methodology and the most challenging technical barrier in building a semantic web database.
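Once each study's hazard ratio and variance have been extracted (from the survival curve or otherwise), the pooling step itself is a short computation. The sketch below shows fixed-effect inverse-variance pooling of log hazard ratios; the study values are placeholders rather than data from any of the papers discussed here.

    import math

    studies = [  # (log HR, variance of log HR) -- placeholder values
        (math.log(0.80), 0.04),
        (math.log(1.10), 0.09),
        (math.log(0.65), 0.06),
    ]

    weights = [1.0 / v for _, v in studies]
    pooled = sum(w * lhr for (lhr, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))

    print(f"pooled HR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(pooled - 1.96 * se):.2f}"
          f"-{math.exp(pooled + 1.96 * se):.2f})")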




Software Architecture (The 2024 Edition)

As both a medical doctor and a software engineer, with experience in echocardiography and serving as an IRB chair, I bring a unique, chimeric perspective to the development of Project nGene.org®. This dual expertise is crucial in navigating the challenges outlined in three seminal works: The Mythical Man-Month, The Innovator's Prescription, and Crossing the Chasm.

The Mythical Man-Month: In the interdisciplinary world of software and medicine, I have learned that communication is key to bridging the gap between different fields—what I call the "Apple and Orange" problem. This lesson was driven home by my experiences and reinforced by Fred Brooks' The Mythical Man-Month. Brooks warns that simply adding more manpower to a project often increases complexity rather than reducing it. As a chimera, trained in both fields, I strive to minimize this intercommunication complexity, ensuring that the app remains manageable and effective without the need to constantly increase resources.

The Innovator's Prescription: The Project nGene.org app is not designed to guarantee perfect accuracy in recognizing visual or auditory data through its camera or microphone. Instead, drawing from The Innovator's Prescription, the app's primary objective is to disrupt traditional clinical workflows by simplifying and democratizing complex medical processes. My goal is to enhance the clinical experience, making it more efficient and cost-effective, while keeping the app accessible to a broader audience. Additionally, by making parts of the codebase open-source, we are fostering a collaborative environment that invites continuous innovation and improvement.

Crossing the Chasm: Finally, in alignment with Geoffrey Moore's Crossing the Chasm, this app is strategically focused on identifying and capturing its niche market within the healthcare industry. By targeting a specific segment that values innovation, efficiency, and cost-effectiveness, the app aims to establish a strong foothold and gradually expand its user base. I am committed to ensuring that the app not only provides core technology but also offers a comprehensive ecosystem of support and services. This approach ensures seamless integration into existing clinical workflows, addressing the pragmatic needs of a broader user group and facilitating the app's transition from early adopters to the early majority.

The software project is meticulously crafted, with each component acting as a foundational pillar for subsequent innovations, establishing a circular data flow within its architectural framework. This methodology is anticipated to synergistically propel the evolution of the platform's elements. The project's paramount objective is to refine data circulation to mirror its architectural blueprint, ensuring that progress in one domain reciprocally amplifies research endeavors across the board. The hemodynamic workbench software is poised to offer essential functionalities: (1) capturing signals from hemodynamic instruments, (2) distilling vital information via wavelet analyses, (3) decoding data through hemodynamic models and simulations, (4) compiling medical statistics, and (5) executing actions based on a reinforcement learning algorithm.

Implementing the software marks the recrystallization of my professional journey, serving as a compass to navigate my career. This endeavor will not only guide me toward new horizons but also enrich my understanding for further development, ultimately fulfilling my life's purpose and enhancing my sense of satisfaction.


Why the thoracic cavity for hemodynamic software and robotic surgery?

The thoracic cavity, encasing critical organs such as the heart and lungs, presents a unique intersection of physiology and technology, demonstrating the profound influence of physical laws on biological functions. From a computational perspective, the integration of kernel-level device drivers with machine learning algorithms offers transformative potential in thoracic medicine. These technologies enable continuous monitoring of thoracic organ states through advanced waveform analyses, including ECG and ventilation monitoring waveforms (pressure, flow, volume), and auscultated mixed heart and lung sounds. Such detailed data acquisition is crucial for effective decision-making and patient management in real-time scenarios. The computational modeling capabilities, particularly in hemodynamic simulations, are further enhanced by incorporating echocardiography data. This integration is especially pivotal in addressing complex conditions like pulmonary hypertension, where accurate hemodynamic models can significantly improve the outcomes of interventions such as congenital heart defect surgeries in neonates. By simulating various physiological conditions, surgeons and clinicians can predict the effects of surgical interventions, thereby planning surgeries with higher precision and better prognostic outcomes. Moreover, the field of robotic surgery in the thoracic cavity is advancing rapidly, driven by machine learning algorithms that learn from thousands of surgeries performed by human doctors. This data not only informs the development of autonomous surgical robots but also supports the creation of new surgical techniques that integrate hemodynamic devices and computational support. The advent of slender robotic hand instruments designed specifically for the ergonomic constraints of thoracic surgery further underscores the technical sophistication in this field.

"Surgeons must progress beyond the traditional techniques of cutting and sewing that have been their province since surgeons were barbers to a future in which approaches involving minimal access to the abdominal cavity are only the beginning." - Pappas et al. (2004) N Engl J Med.


(1) Interface index

(2) Waveform Analysis index

(3) Hemodynamics index

The integration of computational modeling and simulation has revolutionized the field of hemodynamics, transforming the way cardiovascular conditions are studied and treated. The dynamic and interactive nature of hemodynamic simulations, as discussed in "Computational Thinking" by Peter J. Denning and Matti Tedre, goes beyond the capabilities of traditional graph drawing, which often falls short when dealing with the complex, variable nature of biological systems. Unlike static graphs that display a fixed dataset, simulations provide a real-time, interactive platform that allows researchers to modify parameters and observe how these changes affect the cardiovascular system. This interactivity is crucial for a detailed understanding of how blood flow and pressure react to various physiological changes, making simulations an indispensable tool in predicting the effects of alterations within the cardiovascular system and aiding in the development of effective treatments for heart diseases.

Advanced modeling and simulation techniques are particularly impactful in addressing the challenges of congenital heart defects (CHD) and pulmonary arterial hypertension (PAH). For instance, the development of logistic-based equations for estimating Pulmonary Artery Pressure (PAP), as noted in Project nGene.org®, underscores the practical application of theoretical models in a clinical setting. These simulations enable the visualization and analysis of cardiovascular responses to treatments in a risk-free environment, which is especially crucial in designing interventions for vulnerable populations such as neonates with CHD. The traditional approach to surgical interventions, fraught with significant risks, highlights the need for non-invasive methods facilitated by simulations. By simulating specific cardiovascular conditions associated with CHD and PAH, Project nGene.org® not only provides insights into the intricate factors influencing patient outcomes but also enhances the potential for successful treatments while minimizing risks.
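Purely as an illustration of what a "logistic-based" estimator can look like, the sketch below maps a linear combination of echo-derived predictors through a sigmoid onto a bounded pressure range. The predictors, coefficients, and bounds are hypothetical placeholders and are not the equations developed in Project nGene.org®.

    import math

    def logistic_pap_estimate(tr_vmax, rvot_at, pap_min=10.0, pap_max=90.0,
                              b0=-4.0, b1=2.0, b2=-0.02):
        """tr_vmax: tricuspid regurgitation peak velocity (m/s);
        rvot_at: RV outflow tract acceleration time (ms).
        All coefficients are illustrative placeholders."""
        z = b0 + b1 * tr_vmax + b2 * rvot_at
        sigmoid = 1.0 / (1.0 + math.exp(-z))
        return pap_min + (pap_max - pap_min) * sigmoid

    print(round(logistic_pap_estimate(tr_vmax=3.2, rvot_at=80), 1))  # about 65 mmHg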

The ongoing initiative to harness hemodynamic modeling and simulation in the development of neonatal CHD surgery simulations exemplifies the shift towards simulation-based planning and execution of surgical interventions. This approach not only refines the understanding and management of PAH within the context of CHD but also pioneers new methodologies for surgical planning. By creating highly accurate, virtual models where surgical strategies can be tested and refined, simulations ensure the highest level of safety and efficacy in neonatal CHD treatments.


(4) Medical Statistics & (5) Machine Learning index

Integrating "(4) Medical Statistics" into my work is not merely a destination but a vital step towards a broader objective: mastering "(5) Machine Learning". This component is strategically deferred in the Masterplan Chart, as the software is intricately designed to curate diverse datasets for machine learning algorithms through various means: (i) directly from hardware via the kernel in the "(1) Device Interface"; (ii) by processing raw wavelet data from instruments in "(2) Waveform Analyses"; and (iii) by parsing and analyzing medical literature, particularly meta-analyses and survival curve data, through "(4) Medical Statistics", utilizing a semantic web (or Web 3.0) approach. Initially, the semantic web seemed perfectly aligned with medical applications for several reasons: (1) Its inherent flexibility facilitates the integration of multiple semantic webs, creating a comprehensive knowledge database enriched with numerical data. (2) This numerically dense network is ideal for Bayesian-based machine learning applications. (3) Specifically, meta-analysis represents a form of highly specialized information within the medical domain, where deriving hazard ratios from survival curves posed a significant technical challenge and a methodological bottleneck in developing a semantic web database.

However, the rapid evolution of machine learning algorithms necessitated a shift in methodological approach. Acknowledging the advancements in deep neural networks and linear algebra techniques, especially Singular Value Decomposition (SVD), these methods now appear more apt for these objectives. This change in methodology is driven by the emerging efficiencies and capabilities of these algorithms in machine learning, signifying a pivotal adaptation to the evolving landscape of data analysis. This recalibration of approach, moving from a Bayesian-based semantic web to emphasizing deep learning and SVD, reflects a commitment to leveraging the most effective and advanced methodologies available in the field of machine learning. It underlines readiness to adapt and evolve in response to the dynamic nature of technological advancement and the continuous quest for more refined and powerful analytical tools.
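As a small example of the SVD techniques mentioned above, the sketch below computes a truncated SVD of a measurement matrix with NumPy and reports how faithfully a rank-k approximation reconstructs it; the matrix shape and contents are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 30))             # e.g. 200 records x 30 features

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 5                                          # keep the top-k singular values
    X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k approximation

    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    rel_err = np.linalg.norm(X - X_k) / np.linalg.norm(X)
    print(f"rank-{k}: {explained:.1%} of variance kept, relative error {rel_err:.2f}")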

The reconsideration of Bayesian algorithms also draws from a historical challenge in the field of artificial intelligence. Despite the Bayesian approach's flexibility and appeal, its application is marred by complexity in calculations beyond simple, restrictive assumptions. This complexity often necessitates approximation methods or sampling, which, while practical, diverge from dealing with the real posterior distribution directly. Further complicating the landscape was the neural network's initial inability to solve the exclusive OR (XOR) problem, a straightforward task achievable with basic digital logic gates but unattainable by a single-layer perceptron. Although it was known that multi-layer perceptrons could theoretically execute such tasks, the lack of effective training methods led to significant disillusionment and a temporary retreat from neural network research. This historical bottleneck highlights the limitations of early machine learning approaches and underlines the strategic pivot towards more advanced and capable methodologies, such as deep learning, that have since overcome these early challenges. (On February 5th, 2024, this segment of the software architecture underwent a revision to include sophisticated deep learning and SVD techniques.)
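The XOR bottleneck described above can be made concrete in a few lines: a single linear threshold unit cannot separate XOR, but a hand-wired two-layer perceptron (hidden units computing OR and NAND, then an AND output) reproduces it exactly, which is precisely the capability early researchers could not yet train automatically.

    import numpy as np

    def step(x):
        return (x > 0).astype(int)

    def xor_mlp(x1, x2):
        x = np.array([x1, x2])
        h = step(np.array([x.sum() - 0.5,           # hidden unit 1: OR
                           -x.sum() + 1.5]))        # hidden unit 2: NAND
        return int(step(np.array([h.sum() - 1.5]))[0])   # output: AND of hidden units

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_mlp(a, b))
    # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0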


Robotic Surgery

RCT Meta-analysis: Robotic vs. Laparoscopic Surgery (Frank, 2018)

RCT Meta Analysis

Importance This review provides a comprehensive comparison of treatment outcomes between robot-assisted laparoscopic surgery (RLS) and conventional laparoscopic surgery (CLS) based on randomized controlled trials (RCTs).
Objectives We employed RCTs to provide a systematic review that will enable the relevant community to weigh the effectiveness and efficacy of surgical robotics in controversial surgical fields, both overall and for each individual surgical procedure.
Evidence review A search was conducted for RCTs in the PubMed, EMBASE, and Cochrane databases from 1981 to 2016. Among a total of 1,517 articles, 27 clinical reports with a mean sample size of 65 patients per report (32.7 patients who underwent RLS and 32.5 who underwent CLS) met the inclusion criteria.
Findings CLS shows significant advantages in total operative time, net operative time, total complication rate, and operative cost (p < 0.05 in all cases), whereas the estimated blood loss was less in RLS (p < 0.05). In subgroup analyses, the conversion rate in colectomy and the length of hospital stay in hysterectomy statistically favor RLS (p < 0.05).
Conclusions Despite higher operative cost, RLS does not result in statistically better treatment outcomes, with the exception of lower estimated blood loss. Operative time and total complication rate are significantly more favorable with CLS.

Robotic surgery cost, under the hood

Regarding the cost-effectiveness of robot-assisted laparoscopic surgery (RLS), it is generally perceived as more expensive. This perception raises questions about the viability of further employing RLS, especially amid concerns over its advantages in complications, conversion rates, and the extended operative time. However, from a patient's perspective, although numerous articles have closely compared the total operative costs between RLS and conventional laparoscopic surgery (CLS), finding a common objective ground is complicated—not to mention considering the exchange rate at the time of surgery (Morino, 2006). Moreover, the information may not be practically relevant to patients, as the total operation cost does not directly correlate with the actual payment by patients due to varying insurance policies across different companies, hospitals, and countries. Aboumarzouk et al. highlighted in their meta-analysis that the so-called 'total cost' fails to account for the 'social cost analysis', which considers the benefits of quicker recovery and shorter convalescence (Aboumarzouk, 2012).

Similarly, from the hospitals' perspective, the profitability of RLS should take into account not only the quantitative aspects such as the cost of equipment, operation time, training surgeons for both CLS and RLS considering their respective learning curves, and the impact of RLS's longer operative time on hospital revenue, hospital stay, blood loss, and insurance policies, but also qualitative factors. These include the surgeon's safety from infections like HIV, repeated radioactive exposure from bedside X-rays, and the comfort of surgeons during surgery by allowing them to sit. Lin et al. also noted that insufficient data and significant heterogeneity due to differences in skill, the extent of lymph node dissection, and the duration of the learning curve preclude a comprehensive meta-analysis of cost-effectiveness (Lin, 2011). Moreover, the unique capability of RLS for remote surgery in scenarios like war and rural areas should not be overlooked. Furthermore, it is empirically understood that the cost of new technology tends to decrease over decades. From the perspective of the public or investors in surgical robotics, it is advisable to consider these underlying factors when evaluating the cost-effectiveness of robotic surgery.

My general subjective opinion on surgical robotics

It may be surprising that the criticisms leveled at laparoscopic pioneers between the 1950s and 1990s bear a striking resemblance to those currently directed at surgical robotics. Most of the criticisms of conventional laparoscopic surgery (CLS), including 'higher complication rates than laparotomies ... attributable mainly to inexperience, and [e]ach procedure normally done via laparotomy [being] re-invented [with] trial and error,' (Page, 2008) are similarly applicable to robot-assisted laparoscopic surgery (RLS). Despite the harsh criticisms in the late 20th century, CLS has now become widely acknowledged as an indispensable surgical method (Pappas, 2004). Thus, mirroring the history of CLS, there remains the potential for RLS to achieve better clinical outcomes in the future, as knowledge and experience continue to accumulate through trial and error across society. This is especially relevant considering that the industry has now entered the era of Industry 4.0, or robotics.


ECMO (ExtraCorporeal Membrane Oxygenation)

ECMO meta-analysis on hazard ratios: Cardiopulmonary Diseases (Frank, 2020)

Extracorporeal membrane oxygenation meta-analysis of time-to-event data in cardiopulmonary disease in adults

In recognition of the benefits of extracorporeal membrane oxygenation (ECMO)[1], clinical outcomes have been the subject of multiple meta-analyses. Previous meta-analyses of ECMO treatment reported forest plots based on relative risks. Unlike a hazard ratio (HR), a relative risk does not consider the time to event and censoring and runs the risk of not using all the available information[2]. In other words, with respect to the patient mortality, the relative risk between ECMO and no-ECMO patient groups cannot avoid overlooking the critical factor of how ECMO has influenced the timing of each event or patient death over the course of disease progression.
   Previous meta-analyses have focused on a single indication presumably because, given the wide range of potential applications for ECMO, studying a particular patient population separately is a crucial step in terms of reducing confounding factors. The present study endeavors to investigate ECMO indications of cardiopulmonary disease as a whole and to list the findings of ECMO mortality in individual indications as subgroup analyses. This was done to ensure that a positive result of a particular indication is not automatically applied to a different patient population that may not have the same benefit, and thereby to prevent a potentially unnecessary intervention. Based on the ECMO indications[3, 4], the present study applies time-to-event data to evaluations of both the overall and individual cardiopulmonary indications of ECMO in adult patients in relation to relevant meta-analyses.
   To the best of our knowledge, the present meta-analysis is the first attempt to use time-to-event HR data to illustrate a forest plot of all-cause mortality from the use of ECMO in adult patients, in terms of both overall cardiopulmonary indications and individual indications as a subgroup analysis. As shown by the results of the overall analysis, across various indications of ECMO in cardiopulmonary diseases in adults, outcomes favored neither the ECMO group nor the no-ECMO group. However, as to the subgroup analyses, the reduction in mortality in the ECMO group was found in respiratory failure, whereas increased mortality in the ECMO group was noted in post-LTx, bridge to HTx, and post-HTx.
   These results should be understood not only in the context of weighing the benefits and adverse effects of ECMO, but also in consideration of patient selection issues. We could not help but notice the propensity to allocate ECMO treatment to patients in poor condition. In other words, the no-ECMO groups were selected and specified as groups of patients who required no invasive support[23, 24, 49]. Presumably, this was so because, in daily practice, ECMO is used in desperate cases such as cardiogenic shock where, without ECMO implantation, the mortality is critically high. This discriminate propensity of ECMO allocation appears to reflect the wide recognition of the benefits of ECMO treatments[1] but, at the same time, indicates a patient selection bias issue for a meta-analysis of retrospective studies. Therefore, in addition to the intrinsic benefits and adverse effects of ECMO treatment, the biased allocation of ECMO based on patient condition as a whole appeared to contribute to the aforementioned results.
   In this regard, the significant reduction in mortality of the ECMO group in patients with respiratory failure, compared with the no-ECMO group, is worthy of mention. That is, against the patient selection biases that presumably favor a superior outcome in the no-ECMO group, the significantly improved patient outcomes of the ECMO group in respiratory failure are evident. Our result favoring the ECMO group in respiratory failure is consistent with previous meta-analyses for H1N1 pneumonia[65] and ARDS[66]. It can be tentatively proposed that the inclusion of the two RCTs, which are less apt to be influenced by patient selection bias, may contribute to the significant reduction in mortality in the forest plot owing to the increased statistical power of the pooled studies. In addition, Annich et al. stated that the majority of patients with respiratory failure, including ARDS, have been well supported with veno-venous (V-V) ECMO[1]. In this regard, the increased likelihood of normal cardiac function in respiratory failure could enable the more frequent use of V-V ECMO (or its exclusive use[22]), which could avoid the complications of veno-arterial (V-A) ECMO, such as systemic embolization, arterial trauma, and increased left ventricular afterload[67, 68]. However, in consideration of the numerous possible confounding factors and heterogeneities that may have influenced the mortality results, this hypothesis needs to be examined through more meticulous reasoning that reveals which factors contributed to this deviation of the respiratory failure subgroup analysis from the overall global analysis.
   Although we are aware that other ECMO meta-analyses conducted database searches on PubMed, EMBASE, Cochrane, and so forth, we searched the PubMed database only[69], for the following reasons. During the pilot study, we found that this study required quite an inclusive keyword search covering the various cardiopulmonary ECMO indications, compared with meta-analyses of a single indication, as manifested by the total number of articles we worked with. In addition, unlike meta-analyses of relative risks and mean differences, the full text was laboriously required to confidently decide to exclude an article, because the survival analysis is usually not the main topic of the referenced study but typically comprises just one line of hazard ratio information in a results table or a single Kaplan-Meier survival curve figure. Nonetheless, we acknowledge that the risk of missing appropriate articles by not searching multiple databases could have lowered the reliability of our study[70].
   Whenever HRs and their variances were not reported explicitly, we estimated them from the information reported in the studies. Therefore, the significance of the results in the forest plot may have been diminished by our estimates of the HRs and variances. In further research investigating the mortality associated with ECMO use, explicit reporting of numerical hazard ratios should be encouraged to facilitate later meta-analysis.
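For readers curious what such indirect estimation can look like in practice, the sketch below follows the spirit of the widely used Parmar/Tierney approximations (it is not necessarily the exact calculation used in the paper): an approximate log HR and its variance are recovered from a reported two-sided log-rank p-value, the total number of events, and the arm sizes.

    from math import sqrt
    from scipy.stats import norm

    def lnhr_from_logrank(p_two_sided, total_events, n_ecmo, n_control,
                          favors_ecmo=True):
        """Approximate log HR and variance from a two-sided log-rank p-value."""
        z = norm.ppf(1.0 - p_two_sided / 2.0)
        if favors_ecmo:                  # HR < 1 means lower hazard with ECMO
            z = -z
        var_oe = total_events * n_ecmo * n_control / (n_ecmo + n_control) ** 2
        ln_hr = z / sqrt(var_oe)         # (O - E)/V with O - E = z*sqrt(V)
        return ln_hr, 1.0 / var_oe       # log HR and its variance

    ln_hr, var = lnhr_from_logrank(0.03, total_events=60, n_ecmo=45, n_control=55)
    print(round(ln_hr, 3), round(var, 3))   # roughly -0.563 and 0.067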







ECMO meta-analysis on hazard ratios: Respiratory failure (Frank, 2020)

Extracorporeal membrane oxygenation meta-analysis of time-to-event data in respiratory failure in adults

In recognition of the benefits of extracorporeal membrane oxygenation (ECMO) [1], clinical outcomes have been the subject of multiple meta-analyses. Respiratory failure incorporates 'oxygenation failure', the failure to acquire oxygen, and 'ventilatory failure', the failure to eliminate carbon dioxide [2], which are exemplified, respectively, by the ECMO indications of "acute respiratory distress syndrome" (ARDS) and "hypercapnic respiratory failure" [3, 4]. The controversial efficacy of ECMO with respect to patient mortality in respiratory failure has been statistically assessed by previous meta-analyses based on relative risks [5-9].
   Unlike a hazard ratio (HR), the relative risk does not consider the time to event or censoring and runs the risk of not using all the available information [10]. In other words, with respect to patient mortality, the relative risk between ECMO and non-ECMO patient groups cannot avoid overlooking the critical factor of how ECMO has influenced the timing of each event or patient death over the course of disease progression. In consideration of heterogeneities such as veno-arterial (VA) and veno-venous (VV) types, this present study applies time-to-event data to evaluations of the utility of ECMO in patients with respiratory failure.
   To the best of our knowledge, the present meta-analysis is the first attempt to use time-to-event data to illustrate a forest plot of mortality from the use of ECMO in adult patients, comprising both the VA type and a majority of the VV type, in respiratory failure of both 'oxygenation failure' and 'ventilatory failure', compared against a non-ECMO group. When the analysis was confined to VV-ECMO only, a significant reduction in mortality was also noted.
   These results should be understood not only in the context of weighing the benefits and adverse effects of ECMO, but also in consideration of patient selection issues. Although the propensity to allocate ECMO treatment to patients in poor condition was not explicitly documented in the referenced studies [27-31], the non-ECMO groups were reportedly selected and specified as groups of patients who required no invasive support [33-35]. This discriminate propensity of ECMO allocation appears to reflect the wide recognition of the benefits of ECMO treatments [1] but, at the same time, indicates a patient selection bias issue for a meta-analysis of retrospective studies. Therefore, in addition to the intrinsic benefits and adverse effects of ECMO treatment, the biased allocation of ECMO based on patient condition as a whole appeared to contribute to the aforementioned results.
   In this regard, the significant reduction in mortality of the ECMO group in patients with respiratory failure, compared with the non-ECMO group, is worthy of mention. Although VV-ECMO could avoid the complications of VA-ECMO, such as systemic embolization, arterial trauma, and increased left ventricular afterload [36, 37], even VV-ECMO alone is still associated with a risk of haemorrhage [27, 28, 30] and circuit-associated complications [5]. That is, against the known complications of ECMO and the patient selection biases that presumably favor a superior outcome in the non-ECMO group, the significantly improved patient outcomes of the ECMO group in respiratory failure are evident. Our result favoring the ECMO group in respiratory failure is consistent with previous meta-analyses for H1N1 pneumonia [7] and ARDS [5]. It can be tentatively proposed that the inclusion of the two RCTs, which are less apt to be influenced by patient selection bias, may partially contribute to the significant reduction in mortality in the forest plot owing to the increased statistical power of the pooled studies. In addition, the majority of ECMO in the referenced studies was of the veno-venous type, possibly due to the increased likelihood of normal cardiac function in respiratory failure, which enables the more frequent use of VV-ECMO (or its exclusive use [30]) and could avoid the complications of VA-ECMO. However, in consideration of the numerous possible confounding factors and heterogeneities that may have influenced the mortality results, this hypothesis needs to be examined through more meticulous reasoning that reveals which factors contributed to the positive results for the respiratory failure indication.
   In reality, the number of ECMO time-to-event studies tends to be small compared to those reporting relative risks, and relevant mortality studies on ECMO were not always explicitly designed to fit one subcategory of the respiratory failure classification, such as 'ARDS' or 'acute respiratory failure', strictly and mutually exclusively. Thus, the scope of this study on respiratory failure comprises mortality from respiratory failure due to either 'oxygenation failure' or 'ventilatory failure.' Meanwhile, technically speaking, respiratory failure type III occurs during perioperative periods, which can be related to cardiopulmonary ECMO indications such as "bridge to lung transplantation" [3, 4], while respiratory failure type IV results from shock, which can be related to "myocardial infarction-associated cardiogenic shock" [3, 4]. Nonetheless, for a more focused investigation, this study is confined to the mortality of hypoxemic (type I: oxygenation failure) and hypercapnic (type II: ventilatory failure) respiratory failure.
   Although we are aware that other ECMO meta-analyses conducted database searches on PubMed, EMBASE, Cochrane, and so forth, we searched the PubMed database only [38], for the following reasons. During the pilot study, we found that this study required quite an inclusive keyword search, as manifested by the total number of articles we worked with. In addition, unlike meta-analyses of relative risks and mean differences, the full text was laboriously required to confidently decide to exclude an article, because the survival analysis is usually not the main topic of the referenced study but typically comprises just one line of hazard ratio information in a results table or a single Kaplan-Meier survival curve figure. Nonetheless, we acknowledge that the risk of missing appropriate articles by not searching multiple databases could have lowered the reliability of our study [39].
   Whenever HRs and their variances were not reported explicitly, we estimated them from the information reported in the studies. Therefore, the significance of the results in the forest plot may have been diminished by our estimates of the HRs and variances. In further research investigating the mortality associated with ECMO use, explicit reporting of numerical hazard ratios should be encouraged to facilitate later meta-analysis.
   Based on the time-to-event data in respiratory failure, ECMO comprising both VV and VA types, as well as the VV type alone, has been shown to provide advantages over alternative therapy. Although this study mainly addressed VV-ECMO alone in respiratory failure, future investigation of the efficacy of VA-ECMO alone in respiratory failure may be more informative, as it is a more common modality of ECMO overall yet carries greater complications [5]. The accumulation of ECMO time-to-event studies in respiratory failure will enable more focused mortality assessments, for example, exclusively on ARDS.

It is acknowledged that ECMO technology has changed immensely since 1975, such that mortality may be correlated with the year, as exemplified by the improvement in mortality between the periods 1995-2000 and 2001-2004 [32]. For the referenced studies, the meta-regression analysis of the midpoint of the study period versus the hazard ratio (Figure 5) shows no significant association (p-value = 0.8011) and essentially no correlation (r = 0.0635) within the scope of this study.
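For illustration only, the sketch below shows the kind of meta-regression described above: weighted least squares of each study's log hazard ratio on the midpoint year of its study period, with inverse-variance weights. All numbers are placeholders, not the data behind Figure 5.

    import numpy as np
    import statsmodels.api as sm

    year_mid = np.array([1999, 2003, 2007, 2011, 2014], dtype=float)
    ln_hr    = np.array([-0.10, -0.35, 0.05, -0.55, -0.20])
    var_lnhr = np.array([0.09, 0.06, 0.12, 0.05, 0.08])

    X = sm.add_constant(year_mid)                       # intercept + year
    fit = sm.WLS(ln_hr, X, weights=1.0 / var_lnhr).fit()
    print("slope p-value:", round(fit.pvalues[1], 4))   # significance of the year effect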


My Thoughts about Relevant Books, Films, and Media

Artificial Intelligence

As framed in Ethem Alpaydin's "Machine Learning," machine learning enables systems to adapt and learn from data in dynamic environments, whereas artificial intelligence encompasses the broader capacity of systems to perform tasks requiring human-like intelligence, including but not limited to learning.

  -   A Perspective from 'AI Assistants' by Roberto Pieraccini

  -   A Perspective on the Evolution of 'Recommendation Engines' by Michael Schrage

  -   A Perspective from 'The Technological Singularity' by Murray Shanahan

  -   My Reflections on 'Computational Thinking' and the AI Revolution

  -   A.I. vs. Doctors in ElectroCardioGram (ECG)

  -   A.I. Engine

  -   In-Database Machine Learning




'AI Ethics' by Mark Coeckelbergh

  -   Exploring AI raises profound questions about our knowledge, society, and ethics, across several key domains:

↓ This content is not sourced from the book "AI Ethics." ↓


  -   Perspectives on Privacy Protection for Data Subjects (primarily derived from the Book: Data Science by Kelleher et al.)

  1. Collection Limitation: Personal data collection should be restricted and conducted lawfully and fairly. Where possible, it should be done with the data subject's knowledge or consent.
  2. Data Quality: Data must be pertinent to its intended use and maintained accurately, completely, and up-to-date as necessary.
  3. Purpose Specification: The reasons for collecting personal data should be clearly defined at the time of collection. Use of the data should be confined to these specified purposes or those compatible with them, with any change of purpose explicitly stated.
  4. Use Limitation: Personal data should not be used or disclosed for purposes other than those specified, except with the subject's consent or under the authority of law.
  5. Security Safeguards: Reasonable security measures must be in place to protect personal data from risks like loss, unauthorized access, or misuse.
  6. Openness: There should be a policy of transparency regarding practices and policies related to personal data. Information about data collection and usage, as well as details about the data controller, should be easily accessible.
  7. Individual Participation: Individuals should have the right to confirm if a data controller has their personal data, access their data in a timely and reasonable manner, and challenge or appeal any refusal to grant access. They should also be able to contest the accuracy of their data and have it corrected or amended as needed.
  8. Accountability: Data controllers must be accountable for adhering to these principles, ensuring compliance with the appropriate measures.

  -   Computational Approaches to Preserve Privacy (Data Science by Kelleher et al.)

  -   A Perspective from 'AI Assistants' by Roberto Pieraccini on the Impact of GDPR and Federated Learning

  -   A Perspective from 'Deep Learning' by John D. Kelleher on Privacy and Ethics in Algorithmic Decision-Making




'Virtual Reality' by Samuel Greengard

- An Overview of Extended Reality (XR)

- Challenges and Solutions in Extended Reality (XR)

↓ In resonance with the themes explored in Samuel Greengard's book 'Virtual Reality,' this discussion presents my independent insights and perspective. ↓


- Exploring the Synergy of 3D Glasses, XR, and Hinduism in 'Avatar'

- 'Ready Player One' and the Inspiration Behind VR Innovation

- The Matrix: VR and the Realm of Simulated Reality

- Exploring AR and MR Technologies in 'Minority Report'

- Tron: The 1982 Odyssey into Digital Universes and the Dawn of Virtual Gaming

- The Convergence of VR and Reality in 'Tron: Legacy'

- From BOTW to TOTK: The Impact of 'The Legend of Zelda' on VR Gaming

- My Reflections on 'Spatial Computing': Shaping the Future of Healthcare and Mixed Reality




'Intellectual Property Strategy' by John Palfrey

Regardless of the industry, there's a need for a more flexible and expansive approach to intellectual property than previous generations adopted. Intellectual property laws are undergoing rapid transformations globally, affecting copyrights, patents, and trademarks alike. The most significant shifts are evident in the strategic thinking of business leaders regarding intellectual property, showcasing a dramatic evolution in just the last ten to twenty years.

  -   A Paradigm Shift in Collaborative Development (in the Web 2.0 Era)

↓ In alignment with the concepts explored in 'Intellectual Property Strategy', the following discussion offers my own independent insights and a perspective that resonates with the themes of the book. ↓


  -   Navigating the Digital Evolution From Web 1.0 to 4.0

  -   IP Strategy for the Symbiotic Web Era (Web 4.0): A Personal Perspective

  -   The Impact of Creative Priorities on Artistic Work and IP Strategies in the Digital Age: A Personal Perspective

  -   Balancing Open Innovation and Strategic Protection: A Personal Perspective




'Cloud Computing' by Nayan B. Ruparelia

  • NIST's Definition: Cloud computing, as defined by NIST (National Institute of Standards and Technology), is a model that provides widespread, easy, and immediate access to a collective pool of configurable computing resources, enabling them to be quickly allocated and released with minimal effort from management or interaction with the service provider. This model is designed to ensure high availability and comprises five key characteristics: broad network access, on-demand self-service, pooled resources with virtualization, rapid scalability, and services measured and metered for use. It is structured around three core service models — Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) — and is deployed through four models: public, private, community, and hybrid clouds.
  • (1) Virtualization to (2) Cloud: Cloud computing and virtualization serve as cornerstone technologies in modern IT infrastructures, with (1) virtualization enabling multiple virtual environments to run on a single physical hardware system through server and application virtualization. VMware exemplifies server virtualization by dividing a physical server into multiple virtual servers, allowing for efficient resource distribution and coexistence of various operating systems on a single server, while application virtualization simplifies deployment by enabling centralized access for multiple users. In contrast, (2) cloud computing expands on virtualization's resource optimization, providing scalable, flexible, and metered computing services over the internet, such as servers, storage, and software. It introduces key features like on-demand self-service, broad network access, and rapid elasticity, distinguishing itself from virtualization by offering a comprehensive service model that includes infrastructure, platform, and software as services, thus facilitating a broader range of IT solutions beyond mere resource efficiency.
  • Unveiling Shadow IT: Shadow IT refers to the use of IT systems, applications, or services without the explicit approval of an organization's central IT department. This practice is particularly prevalent in cloud computing, where the ease of accessing and deploying cloud services enables individuals or departments to bypass traditional IT controls. While shadow IT can foster innovation by allowing users to quickly meet their needs, it also poses significant risks, including security vulnerabilities and compliance issues, due to the lack of oversight and integration with the organization's IT infrastructure. In the context of cloud computing, the unchecked use of shadow IT amplifies these challenges, potentially leading to data breaches and operational inefficiencies as organizations struggle to manage a sprawling, unsecured digital environment.

↓ The information provided does not originate from the book "Cloud Computing," but it has been supplemented with relevant information. ↓


  -   Privacy Enhanced Through the Power of On-Device AI in Mobile Devices






'The Internet of Things' by Samuel Greengard

  • Understanding IoT: The Internet of Things (IoT) is a network where devices, from smartphones to sensors, connect and communicate through technologies like Wi-Fi and Bluetooth. It's a complex system of interlinked objects exchanging data and making decisions, often without human intervention, powered by advancements in artificial intelligence. This interconnectedness allows for an unprecedented level of automation and smart functionality in everyday objects, transforming them into active participants in data gathering and analysis.

Contact

Email: Support [AT] nGene.org

Call sign: K3CWKP (FCC) or DS1UHK (Emergency Radio Communication Support Corps)


Acknowledgment

Special thanks to my beloved mom who always trusts me. Were it not for her, it would be impossible for me to implement this software.


Back to Top