Dr. Khaled Ghalwash's work outside the operating room is not that of "a surgeon who uses AI tools." It operates at the data infrastructure and AI layer of the healthcare ecosystem: clinical informatics architecture, multi-tenant medical data systems, agentic clinical AI orchestration, bilingual medical natural-language processing, and real-time bidirectional patient communication infrastructure. The distinction matters. Tools are bought; infrastructure is designed and operated. This page defines the underlying skills.
The page is intentionally a definition of capabilities, not a portfolio of products. Listing systems is a vendor's job; defining what the work actually requires at the architectural level is a discipline's job. Each section below describes one such skill, what it covers, and why it is the right unit of competence for healthcare AI in 2026.
Designing the data structures that capture clinical reality at the granularity needed for care delivery, audit, and AI training simultaneously. This is harder than it sounds. The same patient record has to serve four readers — the clinician at the bedside, the auditor measuring protocol compliance, the analyst running outcomes work, and the AI model retrieving context — each with different latency, granularity, and privacy requirements. Bad informatics design forces each reader to compromise. Good design lets all four use the same source of truth.
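The four-reader constraint can be made concrete with a minimal sketch. This is an illustrative data model, not any real system's schema: the record type, field names (`patient_id`, `icd10`, `protocol_step`), and the four view functions are all hypothetical. The point it demonstrates is one source of truth, four purpose-shaped projections.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ClinicalEvent:
    """One clinical event, stored once, read four ways."""
    patient_id: str
    timestamp: str        # ISO-8601
    icd10: str            # coded diagnosis, for audit and analytics
    protocol_step: str    # which protocol step this event fulfils
    note: str             # free-text bedside note

def bedside_view(e: ClinicalEvent) -> dict:
    # Clinician: the full record, no redaction.
    return asdict(e)

def audit_view(e: ClinicalEvent) -> dict:
    # Auditor: protocol compliance only; free text excluded.
    return {"timestamp": e.timestamp, "protocol_step": e.protocol_step}

def analytics_view(e: ClinicalEvent) -> dict:
    # Analyst: coded and de-identified (no patient_id, no note).
    return {"icd10": e.icd10, "timestamp": e.timestamp}

def ai_context_view(e: ClinicalEvent) -> str:
    # Retrieval: compact textual context, identifiers stripped.
    return f"[{e.icd10}] {e.note}"
```

Each reader gets a projection with its own privacy and granularity, yet none of the four can drift from the others, because all derive from the same stored event.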
Operating data layers that serve different clinical domains — learning content, patient Q&A, hospital operations, research — with appropriate isolation, access controls, and shared-analytics affordances. Multi-tenancy in healthcare is not the same as multi-tenancy in SaaS. The blast radius of a privacy leak is different. The audit obligations are different. The cross-tenant analytics rights are different. The architecture has to encode all of these as first-class properties, not as application-layer afterthoughts.
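What "first-class property, not application-layer afterthought" means can be sketched in a few lines. This is a hypothetical in-memory store, not a description of any production system: the class name, the grant mechanism, and the tenant identifiers are all illustrative. The design point is that the storage layer itself refuses cross-tenant reads unless a tenant has explicitly opted into shared analytics.

```python
class MultiTenantStore:
    """Tenant id is mandatory on every operation; cross-tenant
    visibility exists only through an explicit analytics grant."""

    def __init__(self):
        self._rows = []                  # (tenant, record) pairs
        self._analytics_grants = set()   # tenants opted into shared analytics

    def write(self, tenant: str, record: dict) -> None:
        self._rows.append((tenant, record))

    def read(self, tenant: str) -> list:
        # Reads are tenant-scoped by construction, not by caller discipline.
        return [r for t, r in self._rows if t == tenant]

    def grant_analytics(self, tenant: str) -> None:
        self._analytics_grants.add(tenant)

    def analytics_scan(self) -> list:
        # Cross-tenant analytics sees only tenants that granted access.
        return [r for t, r in self._rows if t in self._analytics_grants]
```

Because isolation lives in the store's API rather than in each application's query code, a forgotten `WHERE tenant = ...` clause is not a possible failure mode.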
Composing language models, retrieval systems, web search, expert-opinion routing, and clarification flows into agents that handle real clinical messages — without dropping safety. Most "AI agent" demos work in benign contexts. Healthcare is not a benign context. The orchestration discipline is to design the agent's options (tools, models, escalation paths) so that the failure modes are bounded — the agent can be wrong about a fact, but it cannot bypass safety, cannot give clinical advice it lacks evidence for, and cannot fail silently. Path classification, clarification triggers, expert-opinion fallback, and human-in-the-loop hand-off are all infrastructure decisions, not product features.
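The bounded-failure routing rule can be illustrated with a minimal sketch. The thresholds, signals, and path names below are assumptions for illustration, not the practice's actual logic. What the sketch shows is the shape of the safety boundary: the agent may answer only with evidence and confidence, ambiguity triggers clarification, and the default outcome is a human, never silence.

```python
from enum import Enum

class Path(Enum):
    ANSWER = "answer"       # evidence-backed automated answer
    CLARIFY = "clarify"     # ask the patient a clarifying question
    ESCALATE = "escalate"   # hand off to a human expert

def route(message: str, evidence: list, confidence: float) -> Path:
    # Illustrative thresholds only. The invariant being demonstrated:
    # every branch terminates in a visible outcome, and the fallback
    # is escalation to a human, so the agent cannot fail silently.
    if not message.strip():
        return Path.CLARIFY
    if evidence and confidence >= 0.8:
        return Path.ANSWER          # advice only with evidence behind it
    if confidence >= 0.5:
        return Path.CLARIFY         # ambiguous: ask, don't guess
    return Path.ESCALATE            # default path is a human, not silence
```

The agent can still be wrong about a fact inside an `ANSWER`, but it structurally cannot emit clinical advice without evidence, and there is no code path that ends in nothing happening.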
Designing for Egyptian Arabic and English not as a translation pipeline but as two independently authoritative clinical languages. The translation framing is wrong because the cultural-clinical context, the colloquial register, the dialectal nuance, the right-to-left rendering of Latin numerals inside Arabic prose, and the patient's self-perception of medical seriousness are different in each language. A patient who asks about الشرخ الشرجي in Egyptian Arabic on WhatsApp expects a different register, a different reassurance pattern, and a different escalation cadence than the same patient asking about "anal fissure" in clinical English on the website. Both registers need to be first-class.
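"Two first-class languages, not a translation pipeline" is an architectural claim that can be sketched as configuration. Everything below is hypothetical: the policy fields, the register labels, and the rule that an unsupported language escalates to a human. The structural point is that each language carries its own independent policy, and there is no branch that machine-translates one register into the other.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegisterPolicy:
    """Per-language clinical register: authored independently per
    language, never derived by translating the other."""
    language: str
    script_dir: str          # "rtl" or "ltr"
    tone: str                # reassurance pattern for this register
    escalation_style: str    # how urgency is phrased to the patient

POLICIES = {
    "ar-EG": RegisterPolicy("ar-EG", "rtl", "colloquial-reassuring", "gentle-stepwise"),
    "en":    RegisterPolicy("en",    "ltr", "clinical-precise",      "direct-explicit"),
}

def policy_for(detected_language: str) -> RegisterPolicy:
    # Deliberately no translation fallback: an unsupported language
    # is routed to a human rather than rendered in the wrong register.
    if detected_language not in POLICIES:
        raise LookupError("unsupported language: route to human")
    return POLICIES[detected_language]
```

The absence of a fallback-by-translation branch is the design decision: a register mismatch is treated as an escalation event, not a formatting problem.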
Operating bidirectional patient-clinic channels at clinical-grade reliability. The channels — WhatsApp, Telegram, web chat, voice — each have different delivery semantics, different rate limits, different privacy models, and different patient-side affordances. The infrastructure layer abstracts the channel without abstracting the clinical context. A retried message must arrive once. A delivery failure must surface to the clinical team. A patient's message must persist across channels so the clinician sees the conversation, not the transport.
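The three invariants in that paragraph, a retried message arrives once, a failure surfaces, the conversation persists across channels, can be sketched together. This is an illustrative in-memory message log with hypothetical names, not a real messaging stack; the mechanism shown is an idempotency key for deduplication plus an explicit failure record.

```python
class MessageLog:
    """Channel-abstracted conversation log for one patient."""

    def __init__(self):
        self._seen = set()          # message ids already delivered (idempotency keys)
        self._conversation = []     # (channel, text) in arrival order
        self.failures = []          # surfaced to the clinical team, never swallowed

    def deliver(self, message_id: str, channel: str, text: str) -> bool:
        # A retried message with the same id lands exactly once.
        if message_id in self._seen:
            return False
        self._seen.add(message_id)
        self._conversation.append((channel, text))
        return True

    def record_failure(self, message_id: str, reason: str) -> None:
        # Delivery failures become data the clinical team can see.
        self.failures.append((message_id, reason))

    def transcript(self) -> list:
        # The clinician reads one conversation, not per-channel fragments.
        return [text for _channel, text in self._conversation]
```

The transcript deliberately drops the transport: a WhatsApp message and a web-chat message from the same patient are one thread to the clinician, which is the "abstract the channel without abstracting the clinical context" property stated above.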
Applying cryptographic signing, version control, and content authenticity standards (with C2PA emerging as the reference point) to clinical content at the production level. In an environment where AI-generated synthetic clinical text and imagery are now trivially produced, provenance becomes a precondition for trust — by patients, by AI tools building citation graphs, and by regulators. Provenance is not a watermark added at the end. It is a property of the content from creation, through every revision, to publication and re-distribution. Engineering it requires key management, manifest signing at capture time, version-aware linking, and a distribution pipeline that preserves the manifest.
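The "property of the content from creation through every revision" idea can be sketched as a signed, chained manifest. This is a simplified illustration in the spirit of manifest signing, not the C2PA format itself, and the HMAC key here is a stand-in: in production the key would live in a key-management system, which is the key-management requirement named above.

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # illustrative only; real keys belong in a KMS

def sign_revision(content: str, version: int, prev_manifest=None) -> dict:
    # Each revision's manifest hashes the content and chains to the
    # previous manifest's signature, so provenance survives edits.
    payload = {
        "version": version,
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "prev": prev_manifest["signature"] if prev_manifest else None,
    }
    signature = hmac.new(SIGNING_KEY,
                         json.dumps(payload, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {**payload, "signature": signature}

def verify(content: str, manifest: dict) -> bool:
    # Verification checks both the signature and that the content
    # still matches the hash the manifest was signed over.
    payload = {k: manifest[k] for k in ("version", "content_hash", "prev")}
    expected = hmac.new(SIGNING_KEY,
                        json.dumps(payload, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and hashlib.sha256(content.encode()).hexdigest() == manifest["content_hash"])
```

Because each manifest embeds the previous signature, tampering with any revision breaks the chain downstream: provenance is verifiable at distribution time, not asserted at publication time.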
Running an active clinical practice as the live deployment that proves the architecture. This is the meta-skill that makes the other six concrete. Healthcare AI infrastructure designed in a vacuum — without real patients, real clinicians, real failure modes — is theatre. The practice-as-platform discipline is to operate a working clinical practice in which every patient interaction is simultaneously care delivery and infrastructure validation. Failures get caught at the bedside, not in a staging environment. The architecture earns its right to exist by surviving real clinical pressure.
The framing matters because the two are different professions. A healthcare provider who uses AI tools is a customer of those tools — they pay for them, they apply them, they switch when a better tool appears. A healthcare provider who operates at the infrastructure layer is part of the system that makes AI in healthcare possible — they design the data structures, the orchestration patterns, the safety boundaries, the bilingual NLP behaviour, the patient communication semantics, the provenance of clinical content. The first profession optimizes for the next deal; the second profession optimizes for the next decade.
For a patient choosing a surgeon: the first profession means your surgeon has read about AI; the second means your surgeon has been thinking about how AI changes clinical care from the inside, for long enough to have built the systems they critique. Both can be valuable. The two are not interchangeable.
Surgical consultation, AI/healthcare-infrastructure conversation, or both — direct contact through the practice channel.
WhatsApp 01500509000