We are researchers with a solid
background in both foundational and emerging areas
of science and technology.
Why the name 'Morphostate'?
The notion of morph (from Greek μορφή, "form") extends beyond static geometry to encompass system dynamics. In complex systems, form and state are inseparably linked: the visible appearance (morph) emerges from the underlying conditions and interactions that define a system’s state. Changes in state manifest as transformations of form, a process often described as “morphing”, or as dynamic transitions.
A morphostate is the state of form a system occupies at a given time, expressing a temporal equilibrium between internal emergent processes and external constraints. Closely related to both self‑organization and morphogenesis, it represents form shaped by endogenous dynamics yet conditioned by environmental flows, boundary conditions, and historical trajectories. In this way, the morphostate serves as a conceptual bridge, linking a system’s internal emergent processes with the evolution of its form while acknowledging the role of external reality in shaping both state and structure.
Morphostate reflects the intention to understand, model, and shape the form and function of complex systems by numerically simulating their self-organizing, morphogenetic dynamics and applying AI-based tools to guide their evolution toward desired structures and behaviors—an essential challenge in contemporary research and industrial applications that demand adaptive, optimized structures and functions.
Our work
Our work centers on the conception and implementation of innovative ideas and solutions to complex, interdisciplinary problems in research, as well as in applied and industrial science.
Examples include applied scientific domains such as medical diagnostics, environmental modeling, and materials science, and industrial areas such as pharmaceutical development, manufacturing optimization, and automated quality control.
In recent years, we have focused extensively on AI algorithms, with particular emphasis on the trustworthiness, reproducibility, and explainability of results.
We also investigate the self-organization of complex systems, employing not only traditional neural networks but also methods such as Reservoir Computing.
Our areas
We work on foundational and applied problems in machine learning, evolutionary AI, reservoir computing, and the prediction and control of chaotic dynamical systems.
This includes the modeling of highly nonlinear phenomena such as weather and climate dynamics, turbulence, and other large-scale complex processes.
Beyond computational intelligence, our activities extend into quantum computing, synthetic biology, and advanced clean-energy technologies, including fusion energy and green hydrogen. We also pursue research in neurotechnology, advanced robotics, and automation, with an emphasis on developing systems that are adaptive, resilient, and explainable.
Across all these areas, we aim to develop rigorous methodologies, interpretable models, and innovative solutions that address interdisciplinary challenges and contribute to technological and scientific progress.
Our vision
We provide consultancy services and assistance with both theoretical and practical aspects of research and
development projects, and we are prepared to take responsibility for entire work packages or participate as cooperative partners in larger initiatives.
Our Consulting Services
Expertise in Science and Technology
Data Analytics Consulting
Utilize advanced data analytics to uncover insights and drive informed decision-making.
Technology Integration
Seamlessly integrate emerging technologies into your existing systems for enhanced performance.
Key features
Our services at Morphostate are characterized by precise analysis, innovative solutions, and tailor-made strategies that ensure the success of our customers.
Data-driven insights
Individual strategies
Innovative technologies
Our representatives
Our team consists of experienced scientists and engineers who are passionate about developing innovative solutions.
Dr. Anastasia-Maria Leventi-Peetz
Selected Publications
Modeling Biological Multifunctionality with Echo State Networks
A three-dimensional multicomponent reaction-diffusion model has been developed, combining excitable-system dynamics with diffusion processes and sharing conceptual features with the FitzHugh-Nagumo model. Designed to capture the spatiotemporal behavior of biological systems, particularly electrophysiological processes, the model was solved numerically to generate time-series data. These data were subsequently used to train and evaluate an Echo State Network (ESN), which successfully reproduced the system’s dynamic behavior. The results demonstrate that simulating biological dynamics using data-driven, multifunctional ESN models is both feasible and effective.
Scope and Sense of Explainability for AI-Systems
Certain aspects of the explainability of AI systems are critically discussed, with particular focus on the feasibility of making every AI system explainable. Emphasis is given to the difficulties surrounding highly complex and efficient AI systems that deliver decisions whose explanation defies classical logical schemes of cause and effect. AI systems have demonstrably delivered unintelligible solutions which in retrospect were characterized as ingenious (for example, move 37 in game 2 of AlphaGo). Arguments are elaborated supporting the notion that if AI solutions were discarded in advance for not being thoroughly comprehensible, a great deal of the potential of intelligent systems would be wasted.
Deep Learning Reproducibility and Explainable AI (XAI)
The nondeterminism of Deep Learning (DL) training algorithms and its influence on the explainability of neural network (NN) models are investigated in this work with the help of image classification examples. To discuss the issue, two convolutional neural networks (CNNs) were trained and their results compared. The comparison serves to explore the feasibility of creating deterministic, robust DL models and deterministic explainable artificial intelligence (XAI) in practice. The successes and limitations of all efforts carried out here are described in detail, and the source code of the attained deterministic models is listed in this work. Reproducibility is indexed as a development-phase component of the Model Governance Framework proposed by the EU within its excellence-in-AI approach. Furthermore, reproducibility is a prerequisite for establishing causality in the interpretation of model results and for building trust amid the rapidly expanding application of AI systems. Problems that must be solved on the way to reproducibility, and ways to deal with some of them, are examined in this work.
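A minimal sketch of the seeding discipline that such reproducibility work starts from (our own illustrative toy, not the paper's CNN setup; real DL training involves further sources of nondeterminism, such as parallel GPU kernels, that seeding alone does not remove):

```python
# Fixing all random seeds makes a stochastic "training run" bit-for-bit repeatable.
import random
import numpy as np

def train_toy_model(seed):
    """Stand-in for a training run: random init plus noisy 'gradient' updates."""
    random.seed(seed)
    np.random.seed(seed)
    w = np.random.normal(size=4)                                   # random init
    for _ in range(100):
        w -= 0.01 * (w + np.random.normal(scale=0.1, size=4))      # noisy update
    return w

run1 = train_toy_model(seed=123)
run2 = train_toy_model(seed=123)
print(np.array_equal(run1, run2))   # True: identical seeds give identical models
```

Deterministic runs like these are the precondition for comparing model explanations at all: if two runs from the same configuration differ, one cannot tell whether diverging explanations reflect the data or mere training noise.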
Rashomon Effect and Consistency in Explainable Artificial Intelligence (XAI)
The Rashomon Effect describes the following phenomenon: for a given dataset there may exist many models with equally good performance but with different solution strategies. The Rashomon Effect has implications for Explainable Machine Learning, especially for the comparability of explanations. We provide a unified view on three different comparison scenarios and conduct a quantitative evaluation across different datasets, models, attribution methods, and metrics. We find that hyperparameter-tuning plays a role and that metric selection matters. Our results provide empirical support for previously anecdotal evidence and exhibit challenges for both scientists and practitioners.
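The core of the phenomenon can be shown with a deliberately minimal toy example (our own sketch, not the paper's experimental setup): when two features carry identical information, two classifiers can reach exactly the same accuracy while attributing their decisions to entirely different features.

```python
# Rashomon Effect in miniature: equal performance, different solution strategies.
import numpy as np

rng = np.random.default_rng(42)
x1 = rng.normal(size=500)
X = np.column_stack([x1, x1])        # feature 2 is an exact copy of feature 1
y = (x1 > 0).astype(int)             # label depends on the shared signal

w_a = np.array([1.0, 0.0])           # model A relies only on feature 1
w_b = np.array([0.0, 1.0])           # model B relies only on feature 2

acc_a = float(np.mean((X @ w_a > 0) == y))
acc_b = float(np.mean((X @ w_b > 0) == y))
print(acc_a, acc_b)                  # identical accuracy...

# ...yet weight-based attributions disagree completely.
print("attribution A:", w_a, "attribution B:", w_b)
```

Any explanation method that ranks features by model weights would tell contradictory stories about these two equally good models, which is precisely why comparability of explanations requires care.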
Contact
Take the opportunity to speak to the experts at Morphostate. Our innovative science and technology solutions are exactly what you need to take your projects to the next level.
Address
Dr. Anastasia-Maria Leventi-Peetz
Galgenpfad 14
53343 Wachtberg
Mail: info@morphostate.com


