By Dr Moataz Afifi
Vice President of the State Lawsuits Authority
At present, artificial intelligence (AI) has evolved into a decisive force in fields that shape daily life, influencing medical treatment, military planning, and employment decisions.
Its growing autonomy compels legislators to rethink traditional legal principles, particularly those governing liability.
In this respect, a key question arises: can a unified legal framework regulate AI across all sectors, or does the diversity of risks, professional roles, and degrees of human oversight make such unity impossible?
The argument advanced here is that sectoral differences are too significant to permit a single legislative model.
The foundations of liability – fault, causation, and exemption – vary greatly depending on the context in which AI is used, making tailored legal rules indispensable.
This becomes clear when examining two contrasting sectors: autonomous transportation and medical practice.
In smart transportation, the introduction of autonomous vehicles has created profound legal uncertainty.
Existing laws struggle to define who counts as the “driver” when a vehicle is controlled largely or entirely by an algorithm.
When accidents occur, responsibility is difficult to assign among the human passenger, the operating company, the vehicle owner, or the software developer.
This uncertainty stems from the shifting balance between human control and algorithmic decision-making.
At higher autonomy levels, where human intervention is absent, the user cannot meaningfully influence the system’s choices; responsibility is therefore more appropriately placed on designers and developers.
Conversely, at lower autonomy levels, where human supervision still plays a role, the operator may remain liable unless it is proven that the algorithm acted fully independently.
These complexities highlight the necessity of a legislative framework capable of addressing the unique technical and ethical challenges of autonomous vehicles.
In the medical field, however, the legal landscape differs fundamentally. Although artificial intelligence is increasingly relied upon to analyse data and support diagnosis, the physician remains the principal decision-maker and retains full professional and legal responsibility.
Medical ethics require physicians to exercise due diligence, verify information, and assess the patient’s physical and psychological condition – duties that cannot be delegated to an algorithm.
AI systems, regardless of their accuracy, cannot yet replicate the comprehensive clinical judgment required in medical practice. Therefore, if a physician depends entirely on an AI tool without verifying its output, any resulting error is attributed to the physician rather than the software.
Unlike in autonomous transportation, liability does not shift to the designer; it remains with the physician, who chooses to employ and rely on the system.
Thus, the algorithm’s shortcomings do not constitute an external cause that exempts the physician from responsibility.
The comparison between these two fields demonstrates that the legal treatment of artificial intelligence must be sector-specific.
The role of human oversight, the nature of professional duties, and the potential risks of automated decision-making differ too widely to allow for a unified regulatory approach.
Effective legislation must reflect these distinctions to ensure accountability and promote the safe integration of AI into critical areas of modern life.
