This project aims to turn the rCUDA technology for remote GPU virtualization into a fully finished commercial product and transfer the developments made to the industry. To achieve this objective, it is necessary to: a) complete the support within rCUDA for new application areas such as Deep Learning, among others; b) develop an ecosystem around the rCUDA technology to schedule the shared use of virtualized GPUs; and c) prepare all the libraries developed during the project so that they are compatible with the latest versions of the commercial libraries with which they interact (CUDA, InfiniBand, SLURM, etc.) and optimize their operation, so that our developments are better accepted by industry.

More info here


Within this consortium, GAP participates in two core areas: improving the performance of the nodes of the system and improving the interconnection networks. In the former, we work on hardware design (e.g. core microarchitecture, cache hierarchy, main memory and heterogeneous systems) and on developing system software aware of the underlying hardware features (e.g. scheduling and thread-to-core allocation strategies). In the latter, GAP works on improving interconnection network performance and on addressing the power problem (e.g. switching off specific links).

More info here


SELENE aims to propose a new family of safety-critical computing platforms built upon open-source components such as RISC-V cores, GNU/Linux, and the Jailhouse hypervisor. SELENE will develop an advanced computing platform that is able to:

*adapt the system to the specific requirements of different application domains, to changing environmental conditions, and to the internal conditions of the system itself

*allow the integration of applications of different criticalities and performance demands in the same platform, guaranteeing functional and temporal isolation properties

*achieve flexible diverse redundancy by exploiting the inherent redundant capabilities of the multicore

*execute compute-intensive applications efficiently by means of specific accelerators.

More info at SELENE


Project Leader José Flich

RECIPE (REliable power and time-ConstraInts-aware Predictive management of heterogeneous Exascale systems) provides a hierarchical runtime resource management infrastructure to optimise energy efficiency and minimise the occurrence of thermal hotspots, while enforcing the time constraints imposed by the applications, and ensuring reliability for both time-critical and throughput-oriented computation.

More info at RECIPE


Project Leader José Flich

The aim of DeepHealth is to offer a unified framework, completely adapted to exploit the underlying heterogeneous HPC and Big Data architectures, and assembled with state-of-the-art techniques in Deep Learning and Computer Vision.

More info at DEEPHEALTH


Project Leader José Flich

The eFlows4HPC project aims to deliver a workflow software stack and an additional set of services that enable the integration of HPC simulation and modelling with big data analytics and machine learning in scientific and industrial applications. The software stack will make it possible to develop innovative adaptive workflows that use computing resources efficiently while also taking advantage of innovative storage solutions. To widen access to HPC for newcomers, the project will provide HPC Workflows as a Service (HPCWaaS), an environment for sharing, reusing, deploying and executing existing workflows on HPC systems.

More info at EFLOWS4HPC


Project Leader Enrique S. Quintana-Ortí

The Approximate Computing for Power and Energy Optimisation ETN will train 15 Early-Stage Researchers (ESRs) to tackle the energy-efficiency challenges of future embedded and high-performance computing by using disruptive methodologies. APROPOS aims at decreasing energy consumption in both distributed computing and communications for cloud-based cyber-physical systems. We propose adaptive Approximate Computing to optimize energy-accuracy trade-offs.

More info at APROPOS


Project Leader Alberto González Salvador
COMunicación y compuTACión inTeligentes y Sociales (Intelligent and Social Communication and Computation). Advances in the field of distributed computing, and in the hardware and software available for it, have made it possible to develop increasingly powerful information processing and exchange systems whose interaction with the environment takes place through ever larger sets of transducers. These transducers in turn provide a constantly increasing volume of signals and data, enabling more precise knowledge of the social and physical environments in which living beings, particularly humans, develop. At the same time, applications built around personal computing and communication devices have risen sharply, driven by their massive adoption and by the advancement of communications: human-machine interaction, control systems, location and tracking systems, telepresence, automatic classification, high-speed communications, diagnostic aid systems, telemedicine, and so on.

In this framework, intelligent and social computing and communication are defined as the hybridization of both disciplines to face challenges of clear socio-economic interest. This hybridization takes advantage of the basic science of communications and computing, and of the ubiquity, versatility, scalability, energy efficiency and cooperative processing of networks of heterogeneous computing and data-acquisition devices. Its physical, computational, signal-processing, technological, energy-consumption and communication aspects are all considered, particularly in distributed, collaborative scenarios with massive and heterogeneous data.

In this way, the research group behind this proposal addresses the design, development and implementation of products, systems, programs and algorithms for signal and communications processing that make use of latest-generation architectures, advanced computing and efficient communications, within the framework of intelligent computing and communication aimed at tackling social challenges.


Project Leader Enrique S. Quintana-Ortí

The TACCERE project will contribute, as a general objective, to the design and development of algorithmic techniques, programming interfaces and tools, computational kernels, algorithm libraries and runtime frameworks that reduce energy consumption, increase resilience to errors, and improve productivity in the development of applications that deal with vast amounts of data and exhibit irregular parallel patterns. As a generic computing target, the project will consider a heterogeneous parallel architecture, given the energy-efficiency advantage of this type of system, together with a hybrid MPI+X programming model, where X can be any of the current multi-threaded programming models such as OpenMP, OpenACC, OpenCL, CUDA, etc.