The Indian And A Cause To Reason
12/29/2025 · 4 min read


There are situations we encounter that outright call for attention. Certainly, some situations call only for a passing brush. Others raise eyebrows and offer an opportunity to learn and improve. While seeking the optimal path, and later seeking to understand the Indians better, I had the opportunity to analyse, as a formality, what Thomas Kwa et al. sought to explain: the gap between human actors and AI systems. Along that trajectory, they traced the highs and lows of what AI and AI systems are capable of, especially measured against human actors' abilities in hindsight. Their main targets included uncovering what AI can do, given knowledge of what humans are capable of when performing specific tasks.
Their solution was the 50% task-completion time horizon: the length of tasks, measured by how long they take humans, that an AI system can complete with at least a 50% success rate. With this in mind, we can postulate that knowing how quickly AI systems can complete tasks that humans typically do, as uncovered by Thomas and the group of researchers, will be of help, in my opinion, in exploring the regulation of AI as a medical device (AIaMD). The knowledge Thomas Kwa et al. present thus gives room for considering the regulation of AIaMD in its entirety, and more narrowly with respect to software/hardware design and usage. In that regard, we can say that room has been created for regulation of AIaMD that not only considers pretraining data but also incorporates real-time or real-world scenarios over live interfaces, where continuous surveillance of AI tools and auditing of patient responses can be considered concurrently to enhance recommendations and patient outcomes. This, we can claim, further supports the regulatory sandbox approach to supervision, where regulators adopt tools flexible enough to let innovative products or services be tested under minimal regulatory oversight. That presents an agile approach to the adoption and assimilation of innovation, especially in healthcare. It will also allow regulators some latitude to ponder the cause-and-effect trajectories of innovation in AIaMD. This will therefore have a clear-cut impact, most specifically on the timing of regulation: rather than a rush to fence off the potential hazards of AIaMD, transient regulation gives innovation room without draining the promising progress AIaMD can bring to healthcare. It thereby maintains ethical standards, prevents unfairness and bias, and protects patient privacy, while allowing periodic auditing and risk assessment of AI tools.
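To make the 50% time-horizon metric above more concrete, here is a minimal sketch (not Kwa et al.'s actual code) of how such a horizon could be estimated: fit a logistic curve of AI success against the logarithm of the time humans need for each task, then read off the task length at which predicted success falls to 50%. The task data below are invented for illustration.

```python
# Minimal sketch of a 50% task-completion time horizon estimate.
# Assumption: we have, per task, the time a human needs (minutes) and
# whether the AI system completed that task. These numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

human_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])
ai_succeeded = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0])

# Model P(AI success) as a logistic function of log human time.
X = np.log(human_minutes).reshape(-1, 1)
model = LogisticRegression().fit(X, ai_succeeded)

# The 50% horizon is where the logit crosses zero: b0 + b1 * log(t) = 0.
b0, b1 = model.intercept_[0], model.coef_[0][0]
horizon_minutes = np.exp(-b0 / b1)
print(f"Estimated 50% time horizon: {horizon_minutes:.1f} human-minutes")
```

On this toy data the horizon lands somewhere in the tens of human-minutes; the point is the reading rather than the number: the longer the tasks an AI system can carry out reliably, the stronger the case for the live surveillance and periodic auditing described above.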
BOGC Foundation's twinned offerings, IDD and Ensurance, seek to help children and young adults transition, bearing in mind the role of parents and guardians. The former, IDD, presents an avenue for disease diagnosis and drug recommendation. Health data, it is established, is generally considered sensitive and vital, and hence requires proper risk management, regulation and protection. Beyond its core uses, the IDD app will aid in four areas worth emphasizing: awareness, mock precedent analysis, championing ethical standards, and serving as a policy guide.
On awareness: through the IDD ecosystem, we seek to create awareness of the potential bias AIaMD presents and the solutions we have instituted to that extent, along with the effects of consent on patient health data usage, the legal help available for breaches of health information, and the like. On mock precedent analysis: there is potential for mock precedent analysis of the effects of AI decisions. Currently, there is little to no precedent on who faces what charge(s): should the app developer, the clinician, the foundation model developer, or some other specific individual bear the accountability burden? IDD seems a potent lead. Ethical standards, when known, should be made known: existing standards for the governance of AIaMD, if readily available, should be made public knowledge. That notwithstanding, there is evidence that propagating them will require robust, well-researched technical skills and guidance. Our current research and future trajectories are a promising way to begin. The IDD ecosystem can also serve as a policy guide to governments, especially on the African continent. It presents an opportunity to pilot policies before implementation and appraise them after, as well as to scale promising programs, projects or policies.
It is interesting to note that the idea of regulation is hugely debated, especially over whether a regional, national or subregional body suffices. Such a stance is difficult to address without substantial authority or solid backing. Hence in 2024, at Oxford, I sought to enlighten on the essence of tailored regulation, agreeing with a few researchers (cited below) that the unique characteristics of LLMs and other AI systems call for a specialized approach to regulating the AIaMD landscape. Currently, most countries rely on regional bodies for regulatory oversight. But AIaMD developers, governments, NGOs and other policymakers should be reminded of Italy's ban on ChatGPT in 2023. There is therefore precedent for looking beyond the insights of regional regulatory bodies to national and subregional ones, and a need for these 'subs' to regulate the AIaMD landscape, which lays emphasis on why Africa's stance should be different. Then, I cited as evidence Obermeyer et al. 2019, Seyyed-Kalantari et al. 2020 and Ghassemi 2021. The first showed how an algorithm used in the US to help refer patients who need extra or specialist care discriminated against Black patients. The second, in a Canada-based analysis of fairness in algorithms, in this case a deep learning algorithm, found bias in diagnosis: the highest rates of underdiagnosis were in young female patients and Black patients on public health insurance for low-income people and households. The diagnoses in question were the detection of fractures, lung lesions, pneumonia and other findings in chest X-ray images. Finally, Ghassemi argued that the most common cause of unfairness in medical AI is bias in the data used to train the machine learning models. The period exposed, or should I say enlightened me on, how the European Commission, the US FDA and, for instance, Singapore's Health Sciences Authority are seeking to keep pace with AI's potential to enhance healthcare.
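To illustrate the kind of subgroup audit that underlies findings like Seyyed-Kalantari et al.'s, and that periodic auditing under a regulatory sandbox could make routine, here is a minimal sketch with made-up records and illustrative group labels: it computes the underdiagnosis rate, the share of truly positive cases the model missed, per patient group.

```python
# Minimal sketch of a per-group underdiagnosis audit. Records, group labels
# and column names are illustrative, not taken from any of the cited studies.
import pandas as pd

records = pd.DataFrame({
    "group": ["young_female", "young_female", "young_female",
              "black_public_insurance", "black_public_insurance",
              "other", "other", "other"],
    # Only ground-truth positive cases are included here, so every miss
    # counts as an underdiagnosis.
    "ai_flagged_finding": [0, 1, 0, 0, 1, 1, 1, 1],
})

records["missed"] = 1 - records["ai_flagged_finding"]
underdiagnosis_rate = records.groupby("group")["missed"].mean()
print(underdiagnosis_rate)
```

A regulator or developer running this kind of check on live data at regular intervals would see, group by group, whether the gap the cited studies describe is opening up in their own deployment.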
