While the petroleum industry, with its workloads of seismic imaging and reservoir modeling, provided the historical motivation for supercomputing in the Arab World, today's portfolio is far broader.
As applications grow more ambitious and architectures more austere, algorithms and software must bridge the widening gap. Hands-on opportunities to resolve this application-architecture tension prove to be a lure for students. We relate case studies that grew up around a university-operated supercomputer, but they can largely be replicated with much less investment, because the main challenges today are not in coordinating tens of thousands of nodes across a low-latency, high-bandwidth network. Rather, the challenges lie in extracting performance from within increasingly heterogeneous nodes.
There are manifold ways to improve a scientific computation: (1) increase its accuracy (the computational resolution of an underlying continuum), (2) increase its fidelity (the inclusion of a system's full features in a computational model), (3) tighten its uncertainty (bound the error of a model's output in terms of errors in its inputs), (4) improve its robustness and reliability, (5) reduce its complexity (computational cost, in terms of storage and operations) required to achieve a sought accuracy, fidelity, and confidence, and (6) tune its performance. Modelers generally customize the first three to their application and are happy to hand off the last three, as a productive separation of concerns. KAUST's ECRC focuses its resources on robustness, complexity reduction, and architectural tuning for widely used computational kernels in simulation and data analytics. We give examples from computations for sustainability applications.
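As a small illustration of complexity reduction, item (5), consider compressing a numerically low-rank matrix block with a truncated singular value decomposition, trading a controlled loss of accuracy for reduced storage. The kernel function, tolerance, and problem size below are illustrative assumptions for this sketch, not a description of any particular ECRC code.

```python
import numpy as np

def compress(block, tol=1e-8):
    """Return low-rank factors U, V with spectral-norm error <= tol * ||block||."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    # Keep only singular values above the relative tolerance.
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k], Vt[:k, :]

# An interaction matrix between two well-separated point clusters; such
# blocks arise as off-diagonal blocks in kernels from integral equations
# and covariance models, and are numerically low rank.
n = 256
x = np.linspace(0.0, 1.0, n)
y = x + 2.0  # separation makes the kernel smooth over the block
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

Uk, Vk = compress(A, tol=1e-8)
rel_err = np.linalg.norm(A - Uk @ Vk) / np.linalg.norm(A)
dense_entries = A.size                 # n^2 storage for the dense block
lowrank_entries = Uk.size + Vk.size    # 2*n*k storage for the factors
print(f"rank {Uk.shape[1]} of {n}: {lowrank_entries} vs {dense_entries} "
      f"entries, relative error {rel_err:.1e}")
```

The same trade, applied recursively block by block, underlies hierarchical low-rank methods, which reduce the storage and operation counts of dense kernels from quadratic or cubic in the problem size to near-linear for a prescribed accuracy.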