Publications

Authors: Andreas Tsagkaropoulos, Yiannis Verginadis, Maxime Compastié, Dimitris Apostolou and Gregoris Mentzas

Source: https://www.mdpi.com/2079-9292/10/6/737

Abstract: The emergence of fog and edge computing has complemented cloud computing in the design of pervasive, computing-intensive applications. The proximity of fog resources to data sources has contributed to minimizing network operating expenditure and has permitted latency-aware processing. Furthermore, novel approaches such as serverless computing change the structure of applications and challenge the monopoly of traditional Virtual Machine (VM)-based applications. However, the efforts directed to the modeling of cloud applications have not yet evolved to exploit these breakthroughs and handle the whole application lifecycle efficiently. In this work, we present a set of Topology and Orchestration Specification for Cloud Applications (TOSCA) extensions to model applications relying on any combination of the aforementioned technologies. Our approach features a design-time “type-level” flavor and a run-time “instance-level” flavor. The introduction of semantic enhancements and the use of two TOSCA flavors enable the optimization of a candidate topology before its deployment. The optimization modeling is achieved using a set of constraints, requirements, and criteria independent from the underlying hosting infrastructure (i.e., clouds, multi-clouds, edge devices). Furthermore, we discuss the advantages of such an approach in comparison to other notable cloud application deployment approaches and provide directions for future research.
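
The split between the design-time “type-level” and run-time “instance-level” flavors can be pictured with a small, hypothetical Python sketch: a component states requirement ranges at design time, and a concrete hosting offer satisfying those constraints is bound at run time. The component names, offers, and selection rule below are invented for illustration and are not the TOSCA extensions defined in the paper.

```python
# Minimal, hypothetical sketch of the "type-level" vs. "instance-level" idea:
# a component states requirement ranges at design time, and a concrete hosting
# offer is selected at run time. Names and offers are illustrative only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ComponentType:          # design-time ("type-level") description
    name: str
    min_cores: int
    min_ram_gb: int


@dataclass
class HostingOffer:           # a concrete cloud/edge offer
    provider: str
    cores: int
    ram_gb: int
    price_per_hour: float


@dataclass
class ComponentInstance:      # run-time ("instance-level") binding
    component: str
    offer: HostingOffer


def instantiate(ctype: ComponentType,
                offers: List[HostingOffer]) -> Optional[ComponentInstance]:
    """Pick the cheapest offer satisfying the type-level requirements."""
    feasible = [o for o in offers
                if o.cores >= ctype.min_cores and o.ram_gb >= ctype.min_ram_gb]
    if not feasible:
        return None
    best = min(feasible, key=lambda o: o.price_per_hour)
    return ComponentInstance(component=ctype.name, offer=best)


if __name__ == "__main__":
    preprocessor = ComponentType("preprocessor", min_cores=2, min_ram_gb=4)
    offers = [
        HostingOffer("edge-device", cores=2, ram_gb=2, price_per_hour=0.00),
        HostingOffer("cloud-small", cores=2, ram_gb=4, price_per_hour=0.05),
        HostingOffer("cloud-large", cores=8, ram_gb=32, price_per_hour=0.40),
    ]
    print(instantiate(preprocessor, offers))
```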

Authors: Elisabeth Roggenhofer, Sandrine Muller, Emiliano Santarnecchi, Lester Melie-Garcia, Roland Wiest, Ferath Kherif, Bogdan Draganski

Source: https://www.mdpi.com/2079-9292/10/6/737

Abstract: Mesial temporal lobe epilepsy (TLE) is one of the most widespread neurological network disorders. Computational anatomy MRI studies demonstrate a robust pattern of cortical volume loss. Most statistical analyses provide information about the localization of significant focal differences in a segregationist way. Multivariate Bayesian modeling provides a framework allowing inferences about inter-regional dependencies. We adopt this approach to answer the following questions: Which structures within a pattern of dynamic epilepsy-associated brain anatomy reorganization best predict TLE pathology? Do these structures differ between TLE subtypes?

Authors: Alberto Redolfi, Silvia De Francesco, Fulvia Palesi, Samantha Galluzzi, Cristina Muscio, Gloria Castellazzi, Pietro Tiraboschi, Giovanni Savini, Anna Nigri, Gabriella Bottini, Maria Grazia Bruzzone, Matteo Cotta Ramusino, Stefania Ferraro, Claudia A. M. Gandini Wheeler-Kingshott, Fabrizio Tagliavini, Giovanni B. Frisoni, Philippe Ryvlin, Jean-François Demonet, Ferath Kherif, Stefano F. Cappa and Egidio D’Angelo

Source: https://www.mdpi.com/2079-9292/10/6/737

Abstract: With the shift of research focus to personalized medicine in Alzheimer’s Dementia (AD), there is an urgent need for tools that are capable of quantifying a patient’s risk using diagnostic biomarkers. The Medical Informatics Platform (MIP) is a distributed e-infrastructure federating large amounts of data coupled with machine-learning (ML) algorithms and statistical models to define the biological signature of the disease. The present study assessed (i) the accuracy of two ML algorithms, i.e., supervised Gradient Boosting (GB) and semi-unsupervised 3C strategy (Categorize, Cluster, Classify—CCC) implemented in the MIP and (ii) their contribution over the standard diagnostic workup.
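
For readers unfamiliar with the first of the two algorithms, the sketch below shows a plain supervised Gradient Boosting classifier on synthetic, biomarker-like features; the features, data, and labels are invented and do not represent the MIP pipeline or its 3C strategy.

```python
# Illustrative only: a supervised Gradient Boosting classifier on synthetic,
# biomarker-like features. Feature names and data are invented; this is not
# the MIP pipeline or its 3C (Categorize, Cluster, Classify) strategy.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Hypothetical features: e.g. hippocampal volume, cortical thickness, a CSF marker.
X = rng.normal(size=(n, 3))
# Synthetic labels loosely driven by the first two features.
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```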

Authors: Gretel Sanabria-Diaz, Lester Melie-Garcia, Bogdan Draganski, Jean-Francois Demonet & Ferath Kherif

Source: https://www.mdpi.com/2079-9292/10/6/737

Abstract: The Apolipoprotein E isoform E4 (ApoE4) is consistently associated with an elevated risk of developing late-onset Alzheimer’s Disease (AD); however, less is known about the potential genetic modulation of brain network organization during prodromal stages like Mild Cognitive Impairment (MCI). To investigate this issue during this critical stage, we used a dataset with a cross-sectional sample of 253 MCI patients divided into ApoE4-positive (‘Carriers’) and ApoE4-negative (‘non-Carriers’). We estimated the cortical thickness (CT) from high-resolution T1-weighted structural magnetic resonance images to calculate the correlation among anatomical regions across subjects and build the CT covariance networks (CT-Nets). The topological properties of CT-Nets were described through the graph theory approach. Specifically, our results showed a significant decrease in characteristic path length, clustering index, local efficiency, global connectivity, and modularity, and increased global efficiency for Carriers compared to non-Carriers. Overall, we found that ApoE4 in MCI shaped the topological organization of CT-Nets. Our results suggest that, in the MCI stage, the ApoE4-related disruption of the CT correlation between regions may reflect adaptive mechanisms that sustain information transmission across distant brain regions, maintaining cognitive and behavioral abilities before the occurrence of the most severe symptoms.
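
The graph-theoretical measures named above (characteristic path length, clustering index, local and global efficiency, modularity) can be computed from a thresholded correlation matrix, for example with networkx; the sketch below uses a random matrix purely for illustration, whereas a real CT-Net would be built from cortical-thickness correlations across subjects.

```python
# Illustrative computation of the graph measures mentioned in the abstract
# from a thresholded correlation ("covariance network") matrix.
# The matrix here is random; it is not derived from any imaging data.
import numpy as np
import networkx as nx
from networkx.algorithms import community as nx_comm

rng = np.random.default_rng(1)
n_regions = 20
data = rng.normal(size=(50, n_regions))        # 50 "subjects" x 20 "regions"
corr = np.corrcoef(data, rowvar=False)

# Binarize: keep edges whose absolute correlation exceeds a chosen threshold.
threshold = 0.2
adj = (np.abs(corr) > threshold).astype(int)
np.fill_diagonal(adj, 0)
G = nx.from_numpy_array(adj)

print("clustering index:  ", nx.average_clustering(G))
print("global efficiency: ", nx.global_efficiency(G))
print("local efficiency:  ", nx.local_efficiency(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
communities = nx_comm.greedy_modularity_communities(G)
print("modularity:        ", nx_comm.modularity(G, communities))
```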

Authors: Ladina Weitnauer, Stefan Frisch, Lester Melie-Garcia, Martin Preisig, Matthias L. Schroeter, Ines Sajfutdinow, Ferath Kherif, Bogdan Draganski

Source: https://www.mdpi.com/2079-9292/10/6/737

Abstract: There is ongoing debate about the role of cortical and subcortical brain areas in force modulation. In a whole-brain approach, we sought to investigate the anatomical basis of grip force whilst acknowledging interindividual differences in connectivity patterns. We tested if brain lesion mapping in patients with unilateral motor deficits can inform whole-brain structural connectivity analysis in healthy controls to uncover the networks underlying grip force.

Authors: Andreas Tsagkaropoulos, Yiannis Verginadis, Nikos Papageorgiou, Fotis Paraskevopoulos, Dimitris Apostolou & Gregoris Mentzas

Source: https://www.mdpi.com/2079-9292/10/6/737

Abstract: While a multitude of cloud vendors exist today offering flexible application hosting services, the application adaptation capabilities provided in terms of autoscaling are rather limited. In most cases, a static adaptation action is used having a fixed scaling response. In the cases that a dynamic adaptation action is provided, this is based on a single scaling variable. We propose Severity, a novel algorithmic approach aiding the adaptation of cloud applications. Based on the input of the DevOps, our approach detects situations, calculates their Severity and proposes adaptations which can lead to better application performance. Severity can be calculated for any number of application QoS attributes and any type of such attributes, whether bounded or unbounded. Evaluation with four distinct workload types and a variety of monitoring attributes shows that QoS for particular application categories is improved. The feasibility of our approach is demonstrated with a prototype implementation of an application adaptation manager, for which the source code is provided.
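
As a rough illustration only (the paper defines its own Severity formula, which is not reproduced here), the sketch below shows one way to map any number of bounded or unbounded QoS attributes onto a single severity score in [0, 1].

```python
# Rough illustration of aggregating several QoS attributes into one severity
# score in [0, 1]. This is NOT necessarily the formula defined in the paper;
# it only shows how bounded and unbounded attributes can be treated uniformly.
import math
from typing import List, Optional


def normalise(value: float, threshold: float,
              upper_bound: Optional[float] = None) -> float:
    """Map a metric's excess over its threshold to [0, 1].

    Bounded attributes (e.g. CPU load in %) use the known upper bound;
    unbounded ones (e.g. response time) are squashed with 1 - 1/(1 + excess).
    """
    excess = max(0.0, value - threshold)
    if upper_bound is not None:
        span = max(upper_bound - threshold, 1e-9)
        return min(1.0, excess / span)
    return 1.0 - 1.0 / (1.0 + excess)


def severity(normalised_values: List[float]) -> float:
    """Euclidean aggregation, scaled so that the result stays in [0, 1]."""
    n = len(normalised_values)
    return math.sqrt(sum(v * v for v in normalised_values)) / math.sqrt(n)


if __name__ == "__main__":
    cpu = normalise(value=92.0, threshold=80.0, upper_bound=100.0)   # bounded
    latency = normalise(value=2.4, threshold=1.0)                    # unbounded
    print(f"severity = {severity([cpu, latency]):.2f}")
```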

Authors: Yiannis Verginadis, Kyriakos Kritikos, Ioannis Patiniotakis

Source: https://www.mdpi.com/2079-9292/10/6/737

Abstract: Multi-cloud management prevents vendor lock-in as well as improves the provisioning of cloud applications. However, the optimal deployment of such applications is still impossible, not only due to the dynamicity of the cloud and hybrid environments that host these applications but also due to the use of potentially unsuitable forms or configurations of application components. As such, the real cloud application optimisation can only be achieved by considering all possible component forms and selecting the best one, based on both application requirements and the current context. This gives rise to the era of polymorphic applications which can change form at runtime based on their context. A major pre-requisite for the management of such applications is their proper modelling. Therefore, this paper presents extensions on two well-integrated cloud modelling solutions to support the complete specification of polymorphic applications and it presents an illustrative example of their use.

Authors: Jean-Didier Totow Tom-Ata and Dimosthenis Kyriazis

Source: https://www.mdpi.com/2079-9292/10/6/737

Abstract: Application performance is strongly linked with the total load, the application deployment architecture, and the amount of resources allocated by the cloud or edge computing environment. Considering that the majority of applications tend to be data-intensive, the load becomes quite dynamic and depends on data aspects, such as the locations of the data sources, their distribution, and the data processing performed within an application that consists of micro-services. In this paper we introduce an analysis and prediction model that takes into account the characteristics of an application in terms of data aspects and the attributes of edge computing resources, such as utilization and concurrency, in order to propose optimized resource allocation during runtime.

Authors: Alessandra Bagnato, Etienne Brosse and Kaïs Chaabouni

Source: https://www.mdpi.com/2079-9292/10/6/737

Abstract: The MORPHEMIC H2020 project covers several features, from modelling cross-cloud applications to continuous and autonomous optimization and deployment, and provides access to several cloud capabilities. This demo paper describes the MORPHEMIC CAMEL Designer tool, which is responsible for Cloud Application Modelling and Execution Language (CAMEL) design within the modelling environment Modelio. CAMEL Designer is an open-source module for graphically creating, editing, and exporting CAMEL models in XMI format.

Authors: Marta Różańska, Geir Horn

Source: https://dl.acm.org/doi/abs/10.1145/3492323.3495587

Abstract: Managing Cloud applications with variable resource requirements over time is a tedious task that could benefit from autonomic application management. The management platform will then need to know what the application owner considers a good deployment for the current execution context, which is normally captured by a utility function. However, it is often difficult to define such a function directly from first principles in a way that would perfectly capture the application owner’s preferences. This paper proposes a methodology for defining the utility function only from the monitoring measurements taken to assess the state and context of the running application.

Authors: Marta Różańska, Paweł Skrzypek, Katarzyna Materka, Geir Horn

Source: https://link.springer.com/chapter/10.1007/978-3-030-99619-2_53

Abstract: Existing autonomous Cloud application management platforms continuously monitor the load and the environment of the application and react when optimization is needed. This paper introduces the concepts of polymorphic architecture optimization and proactive adaptation of Cloud applications, which constitute a significant improvement over standard reactive optimization. Polymorphic architecture optimization considers changing the technical form of the components, while proactive adaptation uses the predicted future workload and context. Based on this, we propose an architecture for a Cross-Cloud application management platform that supports complex optimization of Cloud applications in terms of architecture, resources, and available offers.

Authors: Geir Horn, Rudolf Schlatte & Einar Broch Johnsen 

Source: https://link.springer.com/chapter/10.1007/978-3-030-99619-2_14

Abstract: Cloud applications are distributed in nature, and it is challenging to orchestrate an application across different Cloud providers and for the different capabilities along the Cloud continuum, from the centralized data centers to the edge of the network. Furthermore, optimal dynamic reconfiguration of an application often takes more time than is available at runtime. The approach presented in this paper uses a concurrent simulation model of the application that is continuously updated with real-time monitoring data, optimizing and validating deployment reconfiguration decisions prior to enacting them for the running applications. This enables proactive decisions to be taken for a future time point, thereby allowing ample time for the reconfiguration actions, as well as realistic Bayesian estimation of the application’s time-variate operational parameters for the optimization process.
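
The Bayesian estimation of time-variate operational parameters mentioned above can be pictured with a minimal conjugate update, in which monitoring samples of, say, a service time sharpen a prior belief before it is fed to a simulation model; this is an illustrative sketch, not the authors' simulation or estimation machinery.

```python
# Minimal illustration of Bayesian updating of a time-variate operational
# parameter (e.g. mean service time) from monitoring samples, as could feed a
# simulation model. Normal-normal conjugate update with known observation
# variance; the prior, noise level, and samples are invented.
import numpy as np

rng = np.random.default_rng(2)

prior_mean, prior_var = 0.20, 0.05 ** 2      # prior belief: ~200 ms service time
obs_var = 0.03 ** 2                          # assumed monitoring noise

samples = rng.normal(0.26, 0.03, size=25)    # latest monitoring window

n = len(samples)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + samples.sum() / obs_var)

print(f"posterior mean service time: {post_mean:.3f} s "
      f"(sd {np.sqrt(post_var):.3f} s)")
```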

Authors: Marta Różańska, Kyriakos Kritikos, Jan Marchel, Damian Folga, Geir Horn

Source: https://link.springer.com/chapter/10.1007/978-3-031-28694-0_58

Abstract: Cloud computing promises unlimited elasticity and allows Cloud applications to scale or change their configuration in response to demand. To automate application management, one must capture the goals and preferences of the application owner, and the most flexible way to represent these is as a utility function. However, it is often difficult for the application owner to define such a mathematical function. Therefore, we propose the Utility Function Creator, a software component that can create the utility function for Cloud application optimization based on a set of predefined utility function policies as well as on template utility function shapes.
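
The idea of assembling a utility function from template shapes can be illustrated as follows; the particular shapes, metric names, and weights are made up and are not the templates provided by the Utility Function Creator.

```python
# Illustrative only: combining parameterised "template" shapes into a utility
# function. The shapes, metric names, and weights are invented and are not the
# templates of the Utility Function Creator described in the paper.
import math
from typing import Callable, Dict


def sigmoid_preference(target: float, steepness: float) -> Callable[[float], float]:
    """Utility rises towards 1 as the metric approaches and exceeds the target."""
    return lambda x: 1.0 / (1.0 + math.exp(-steepness * (x - target)))


def inverse_cost(budget: float) -> Callable[[float], float]:
    """Utility of 1 at zero cost, dropping linearly to 0 at the budget."""
    return lambda cost: max(0.0, 1.0 - cost / budget)


def combined_utility(metrics: Dict[str, float]) -> float:
    throughput_u = sigmoid_preference(target=100.0, steepness=0.1)
    cost_u = inverse_cost(budget=50.0)
    # Weighted sum of the per-metric utilities (weights sum to 1).
    return 0.7 * throughput_u(metrics["throughput"]) + 0.3 * cost_u(metrics["cost"])


if __name__ == "__main__":
    print(combined_utility({"throughput": 120.0, "cost": 20.0}))
```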

Authors: Marta Różańska, Geir Horn

Source: https://ieeexplore.ieee.org/abstract/document/10061767

Abstract: Applications running in the Cloud can adapt to varying demands by autonomic management of their resource configurations. The reconfiguration can be done as a reaction to a changed situation, or proactively to ensure good performance of the application at some future point. However, it is difficult to predict the future behaviour of the application as it depends both on the changing context and on the reconfiguration actions. This paper describes an approach for proactive autonomic Cloud application management and introduces a distinction between ‘independent metrics’, ‘performance metrics’ influenced by the reconfigurations, and ‘performance indicators’ related to the application’s utility and reconfigurations. It is shown how performance metrics and performance indicators can be learned as regression functions and used in proactive autonomic Cloud application optimization. Finally, the results of an evaluation by simulation show that the proposed approach is more accurate than reactive application control and gives better results in terms of the application’s utility.
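
The learning step described above, fitting a performance metric as a regression function of the independent metrics and the configuration and then querying it for a predicted future context, can be sketched with synthetic data as follows; the metric names and the SLO value are invented.

```python
# Synthetic sketch of learning a performance metric as a regression function of
# an independent metric (request rate) and the configuration (replica count),
# then querying it for a predicted future context. Data and SLO are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 300

request_rate = rng.uniform(10, 500, size=n)          # independent metric
replicas = rng.integers(1, 9, size=n)                # reconfiguration variable
# Synthetic "performance metric": latency grows with the load per replica.
latency = 0.05 + 0.002 * request_rate / replicas + rng.normal(0, 0.01, n)

X = np.column_stack([request_rate, replicas, request_rate / replicas])
model = LinearRegression().fit(X, latency)

# Proactive query: predicted future load of 400 req/s, candidate replica counts.
future_rate = 400.0
candidates = np.arange(2, 9)
Xq = np.column_stack([np.full_like(candidates, future_rate, dtype=float),
                      candidates,
                      future_rate / candidates])
predicted_latency = model.predict(Xq)

slo = 0.20   # hypothetical latency SLO in seconds
feasible = candidates[predicted_latency <= slo]
choice = feasible.min() if feasible.size else candidates.max()
print(f"smallest replica count predicted to meet the SLO: {choice}")
```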

Authors: Yiannis Verginadis

Source: https://ieeexplore.ieee.org/abstract/document/10061767

Abstract: Nowadays, we consider optimizing data-intensive applications imperative for the digital enterprise to exploit the vast amounts of available data and maximize its business value. This fact necessitated the broad adoption of multicloud and fog deployment models towards enhanced use of distributed hosting resources that may reach the edge of the network. However, this poses significant research challenges concerning how one can automatically discover the best initial deployment of such an application and then continuously adapt it according to the defined Service Level Objectives (SLOs), even in extreme scenarios of workload fluctuations. Among the key tools for managing such multicloud applications are advanced distributed monitoring mechanisms. In this work, we consider some of their fundamental components, which refer to the means for efficiently measuring and propagating information on the application components and their hosting. Specifically, we analyze the 20 most well-known monitoring tools and compare them against several criteria. This comparison allows us to discuss their fit for the distributed complex event processing frameworks of the future that can efficiently monitor applications and trigger reconfigurations across the Cloud Computing Continuum.

Authors: Amina Moussaoui, Alessandra Bagnato, Etienne Brosse, Józefina Krasnodębska and Paweł Skrzypek

Source: https://ceur-ws.org/Vol-3144/RP-paper13.pdf

Abstract: This paper describes the MORPHEMIC project and gives some details on its graphical user interface. The MORPHEMIC project proposes a unique way of adapting and optimizing cloud computing applications by introducing the novel concepts of polymorphic architecture and proactive adaptation. It allows application adaptation based on forecasted usage, number of users, and other business metrics, so that applications are automatically reconfigured according to predicted future needs. The architecture of the applications can also be changed to better utilize the available hardware.

The MORPHEMIC User Interface comprises the tools through which users access all MORPHEMIC Platform features for cross-cloud applications modelled in the CAMEL language, covering cloud deployment, optimization, and monitoring. It acts as the central MORPHEMIC element: the Web User Interface Client carries out application deployment and polymorphic adaptation, and supports managing heterogeneous resources such as cloud offers and hardware accelerators, managing CAMEL models, and optimizing, deploying, and monitoring the cross-cloud application. The developed User Interface takes into consideration the quality of the user experience and the requirements of the use-case studies, which represent the first users of the MORPHEMIC Platform.

Authors: Marta Różańska, Katarzyna Karnas, Geir Horn

Source: https://ieeexplore.ieee.org/abstract/document/9825959/

Abstract: Autonomic Cloud application optimization is necessary to maintain maximal application utility while ensuring efficient use of Cloud resources across providers and infrastructures. This requires continuous monitoring of the application to detect the need for application reconfiguration. The vector of monitored parameters is a multivariate time series and, therefore, one can predict the future metric values and optimize Cloud applications proactively, ensuring that new resources are available when needed. However, the predictions are uncertain, and the optimization must be resilient to the inherent prediction errors. This paper presents a new supervised learning solver to find the Cloud application configuration with the highest utility value for predicted runtime conditions. The solver is trained on uncertain measurements, and it is evaluated for a real-world data-intensive Cloud application in terms of the quality of the returned solutions for various loss functions used during training.
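
The influence of the training loss when the targets are uncertain measurements can be illustrated with a toy comparison of a squared-error and a Huber loss on the same noisy data; this is not the solver from the paper, only a sketch of why the choice of loss function matters during training.

```python
# Toy illustration of how the training loss matters when the targets are noisy
# measurements with outliers: squared error vs. Huber loss on the same data.
# All data are synthetic; this is not the solver described in the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 500
X = rng.uniform(0, 1, size=(n, 2))
y_clean = 0.6 * X[:, 0] + 0.4 * np.sin(4 * X[:, 1])

outliers = rng.random(n) < 0.1                       # ~10% corrupted measurements
noise = rng.normal(0, 0.05, n)
noise[outliers] += rng.normal(0, 1.0, outliers.sum())
y = y_clean + noise

X_tr, X_te, y_tr, y_te, yc_tr, yc_te = train_test_split(
    X, y, y_clean, test_size=0.3, random_state=0)

for loss in ("squared_error", "huber"):
    model = GradientBoostingRegressor(loss=loss, random_state=0).fit(X_tr, y_tr)
    err = mean_absolute_error(yc_te, model.predict(X_te))  # error vs. clean target
    print(f"{loss:>13}: MAE against the noise-free target = {err:.3f}")
```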