DFG projects
Agile quality assurance of cultural object metadata in the context of data integration processes
The quality of metadata related to cultural objects plays a crucial role in their accessibility and further utilization. This applies in principle to any data offering, and it is particularly relevant for collaborative platforms like the German Digital Library (DDB) and the Graphic Portal, as well as for the growing data offerings of NFDI consortia such as NFDI4Culture and Text+. Data to be integrated often exhibit a different, and sometimes lower, quality than the target systems require. Prior to integration into a target system, the data must be analyzed and, if necessary, adjusted in a typically resource-intensive process conducted in dialogue between data providers and data recipients. When software for quality assurance is used, it must be adapted by software developers whenever the quality requirements change. Domain experts have had little opportunity to independently conduct quality assurance using existing tools, let alone adapt existing quality requirements.
The aim of the project is to empower domain experts with limited technological knowledge to define and carry out domain-specific metadata quality assurance independently. For this purpose, a process and a web-based software will be developed that allow domain experts to define quality requirements semiautomatically prior to the actual quality assurance. From these requirements, the input needed by various quality assurance techniques, such as Schematron, or by existing software and frameworks, such as the Metadata Quality Assessment Framework, will then be generated automatically. These existing components will be integrated into the quality assurance process and can be used without the technical expertise previously required. Quality analysis and potential quality improvements will be specified separately so that they can be implemented independently of each other.
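To illustrate the kind of generation step described above, here is a minimal sketch that renders one declarative quality requirement as an ISO Schematron rule. The requirement format and the helper function are hypothetical illustrations, not the project's actual specification; only the Schematron syntax and the LIDO namespace are standard.

```python
# Minimal sketch: turn a declarative quality requirement into a Schematron rule.
# The requirement dictionary and the chosen LIDO path are illustrative only.

REQUIREMENT = {
    "context": "lido:lido",  # element the rule applies to
    "test": "lido:descriptiveMetadata/lido:objectIdentificationWrap"
            "/lido:titleWrap/lido:titleSet",
    "message": "Every LIDO record must contain at least one title.",
}

def to_schematron(req: dict) -> str:
    """Render one requirement as an ISO Schematron pattern."""
    return f"""<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <ns prefix="lido" uri="http://www.lido-schema.org"/>
  <pattern>
    <rule context="{req['context']}">
      <assert test="{req['test']}">{req['message']}</assert>
    </rule>
  </pattern>
</schema>"""

print(to_schematron(REQUIREMENT))
```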
The software developed in the project will form the core of a standalone agile quality assurance process that can be integrated into existing data integration processes but can also be used independently. This process will be more automated and easily adaptable than existing quality assurance mechanisms. Concrete use cases include the quality assurance of LIDO data for integration into the DDB and the Graphic Portal, as well as TEI Header data in the TextGrid Repository. The evaluation of the approach is embedded in the NFDI consortia NFDI4Culture and Text+. Since the approach is generic and independent of specific data formats and technologies, it can be applied to quality assurance for metadata in other application areas as well.
Participating: Prof. Dr. Gabriele Taentzer
Cooperation partner: Péter Király, PhD, Gesellschaft für wissenschaftliche Datenverarbeitung Göttingen (GWDG), Regine Stein, Niedersächsische Staats- und Universitätsbibliothek Göttingen (SUB)
Funded by: DFG
Funding since: 2023

Moduli spaces for complex nilmanifolds
Nilmanifolds are an important source of examples of exotic complex or Hermitian structures because the properties of left-invariant structures are often already determined at the level of the associated Lie algebra and are thus accessible to methods of Lie theory. The classification of such structures on Lie algebras has received much attention in the past, but the additional difficulties that arise in the geometric setting have usually been neglected. In this project, we aim to classify left-invariant complex structures on certain compact nilmanifolds. The goal is to identify cases where a moduli space exists as a complex space and to study its properties.
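For orientation, the standard setup, stated as a textbook definition rather than a project result: a nilmanifold is a compact quotient of a simply connected nilpotent Lie group by a lattice, and a left-invariant complex structure is determined by an endomorphism of the Lie algebra.

```latex
% Nilmanifold: M = \Gamma \backslash G, with G simply connected nilpotent
% and \Gamma a lattice. A left-invariant almost complex structure is an
% endomorphism of the Lie algebra
\[
  J \colon \mathfrak{g} \to \mathfrak{g}, \qquad J^2 = -\operatorname{id},
\]
% which is integrable if and only if the Nijenhuis tensor vanishes:
\[
  N_J(x,y) = [Jx, Jy] - [x,y] - J[Jx, y] - J[x, Jy] = 0
  \qquad \text{for all } x, y \in \mathfrak{g}.
\]
```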
Participating: Prof. Dr. Sönke Rollenske
Funded by: DFG (Sachbeihilfen)
Funding since: 2022

Efficient Algorithms for Group Centrality (EAGR)
Group centrality problems are used to identify important groups of actors in social networks. The computation of optimal solutions for group centrality problems is algorithmically difficult. Therefore, these problems have been solved heuristically so far. In EAGR, we will investigate whether there are efficient algorithms that solve these problems optimally on typical input networks. In order to develop the improved algorithms, we will analyze how the structure of the input network influences the difficulty of group centrality problems: Are there network properties that can be exploited algorithmically, or do group centrality problems remain difficult even on very restricted networks? On the practical side, we will investigate whether an existing software framework for solving hard subgraph problems can be extended to efficiently solve the algorithmically more challenging group centrality problems.
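As a concrete illustration of the gap between exact and heuristic solutions, here is a minimal sketch for group degree centrality, one of the simplest group centrality measures; the toy graph and function names are illustrative, not from the project.

```python
from itertools import combinations

# Group degree centrality of a vertex set S: the number of vertices outside S
# that have at least one neighbor in S.
def group_degree(adj: dict, group: set) -> int:
    covered = set().union(*(adj[v] for v in group)) - group
    return len(covered)

def optimal_group(adj: dict, k: int) -> set:
    # Exhaustive search over all size-k groups: C(n, k) candidates, which is
    # what makes the exact problem infeasible on large networks.
    return max((set(c) for c in combinations(adj, k)),
               key=lambda s: group_degree(adj, s))

def greedy_group(adj: dict, k: int) -> set:
    # Standard greedy heuristic: repeatedly add the vertex with the largest
    # marginal gain in group degree centrality.
    group = set()
    for _ in range(k):
        best = max(adj.keys() - group,
                   key=lambda v: group_degree(adj, group | {v}))
        group.add(best)
    return group

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4}}
print(optimal_group(adj, 2), greedy_group(adj, 2))
```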
Applicant: Prof. Dr. Christian Komusiewicz
Funded by: DFG (Sachbeihilfen)
Funding since: 2021

Model-driven optimization in software engineering
Many problems in software engineering can be viewed as optimization problems, such as software modularization, software testing, and planning new releases. In search-based software engineering (SBSE), metaheuristic techniques are used to solve optimization problems in software engineering. Evolutionary algorithms are a widely used approach for iteratively exploring a search space. Problem domains in software engineering are typically encoded as vectors or trees because evolutionary operators can then be specified easily. When the quality of optimization results is not as high as expected, one explanation may be that domain-specific knowledge is not adequately captured during the exploratory search. Model-Driven Engineering (MDE) provides concepts, methods, and techniques for uniformly processing domain-specific models. The use of MDE in search-based software engineering is called Model-Driven Optimization (MDO) and has been demonstrated on well-known optimization problems in the literature. MDO is promising because domain-specific knowledge can be systematically incorporated into SBSE. To strengthen the vision of MDO, this project aims to consolidate MDO, i.e., to develop a scientific foundation for the results achieved so far and to gain a deeper understanding of when and how MDO should be used to solve optimization problems in software engineering. This project vision can be divided into the following objectives: First, develop a formal framework for MDO that defines a unified approach for specifying optimization problems and evolutionary algorithms using domain-specific knowledge. This framework will be used to clarify concepts and to reason about the quality of evolutionary algorithms, enabling developers to make informed decisions.
Second, conduct an empirical evaluation of MDO to examine its practical relevance. Two current topics in SBSE, mutation testing and genetic improvement of programs, have been identified for this evaluation. As a prerequisite, an integrated tool environment for MDO will be developed that takes into account the practical concepts and results of the formal framework.
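A minimal sketch of the MDO idea, using software modularization as the example problem; the model representation, fitness function, and mutation operator are illustrative assumptions, not the framework to be developed.

```python
import random

# Model-driven view of software modularization: the "model" keeps classes,
# their dependencies, and a class-to-module assignment, instead of a bare
# integer vector. Mutation then acts at the model level.

random.seed(0)

DEPENDS = {("A", "B"), ("B", "C"), ("C", "A"), ("D", "E")}
CLASSES = ["A", "B", "C", "D", "E"]

def cohesion(assign: dict) -> int:
    # Fitness: number of dependencies that stay inside one module.
    return sum(assign[s] == assign[t] for s, t in DEPENDS)

def mutate(assign: dict) -> dict:
    # Domain-specific mutation: move one class into the module of a class it
    # depends on -- knowledge a plain bit-flip on a vector would not have.
    child = dict(assign)
    s, t = random.choice(sorted(DEPENDS))
    child[s] = child[t]
    return child

# (1+1) evolutionary loop on the model representation.
current = {c: i for i, c in enumerate(CLASSES)}  # every class in its own module
for _ in range(100):
    candidate = mutate(current)
    if cohesion(candidate) >= cohesion(current):
        current = candidate
print(current, cohesion(current))
```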
Applicant: Prof. Dr. Gabriele Taentzer
Funded by: DFG (Sachbeihilfen)
Funding since: 2021

High-order adaptive quarklet frame methods for elliptic operator equations
We are concerned with the design, convergence analysis, and efficient realization of a new class of adaptive high-order numerical methods for partial differential equations. In the project, we consider basis-oriented methods that work with a wavelet version of hp finite element systems, so-called quarklet systems. These piecewise polynomial, oscillating functions have the spectral approximation properties of an hp-FE system and are also frames in a number of function spaces, such as Sobolev spaces of positive and negative order. This enables, for example, anisotropic tensor product approximation techniques. In this project, we will apply the approximation and stability properties of quarklet systems to the design of adaptive discretization methods that converge at sub-exponential rates in many cases. To this end, we will use a combination of new multi-scale regularity estimators, associated refinement strategies, and adaptive spatial decompositions. The resulting adaptive quarklet methods will be applied to the numerical solution of elliptic boundary value problems as well as parabolic evolution equations in a least-squares formulation as a spatio-temporal system of first-order differential equations. We expect that the convergence analysis of adaptive quarklet methods will allow conclusions to be drawn about the convergence theory of adaptive hp finite element methods.
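As one standard instance of the least-squares formulation mentioned above, the heat equation can be rewritten as a first-order spatio-temporal system; this is a textbook reformulation, with function spaces and norms deliberately left unspecified, not the project's specific setting.

```latex
\[
  \partial_t u - \nabla \cdot v = f, \qquad v - \nabla u = 0
  \qquad \text{in } \Omega \times (0,T),
\]
% with the least-squares functional to be minimized over suitable spaces:
\[
  J(u,v) = \| \partial_t u - \nabla \cdot v - f \|^2 + \| v - \nabla u \|^2 .
\]
```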
Applicant: Prof. Dr. Stephan Dahlke, Prof. Dr. Thorsten Raasch
Funded by: DFG (Sachbeihilfen)
Funding since: 2020

Geometric realization of GKM fiber bundles, nonrigidity of GKM manifolds, and cohomotopy in dimension 6
The famous Delzant theorem states that the momentum map defines a bijection between the class of toric symplectic manifolds modulo equivariant symplectomorphisms and Delzant polytopes modulo translations. This project aims to investigate a possible bijection analogous to this in the context of GKM (Goresky-Kottwitz-MacPherson) theory, which associates a labeled graph to a manifold with a torus action of GKM type. The project is divided into three main parts:
First, we will contribute to the GKM realization problem, which asks whether every GKM graph can be realized by a GKM manifold. On the one hand, we will realize various classes of GKM fiber bundles geometrically as equivariant fiber bundles and investigate their invariant (stable) complex, symplectic, and Kähler structures. On the other hand, we will construct the first examples of GKM graphs that cannot be realized geometrically. Second, we will explore the problem of (equivariant) cohomological rigidity for GKM manifolds. We plan to construct exotic GKM manifolds, i.e., examples of simply connected, integral GKM manifolds of dimension 8 and higher demonstrating that the GKM graph of a GKM manifold does not determine its homotopy type. Finally, the existence of certain exotic GKM manifolds is related to the existence of special complex vector bundles of rank 2 with structure group SU(2) over toric 6-manifolds. To investigate this relationship, we will compute the fourth cohomotopy group of a 6-manifold, connecting the seemingly unrelated areas of GKM theory and homotopy theory.
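For illustration, the most basic instance of the labeled graphs in question, a standard example independent of this project: the GKM graph of CP^2 with its standard T^2-action.

```latex
% CP^2 with the standard T^2-action has three fixed points (the coordinate
% points) joined pairwise by invariant 2-spheres, so its GKM graph is a
% triangle. With \alpha_1, \alpha_2 a basis of the weight lattice of T^2,
% the three edge labels (defined up to sign) are
\[
  \alpha_1, \qquad \alpha_2, \qquad \alpha_1 - \alpha_2 .
\]
```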
In summary, this project aims to explore the relationships and structures within GKM theory and its connections to manifold realization, cohomological rigidity, and homotopy theory, potentially shedding light on previously unexplored aspects of these mathematical subjects.
Applicant: Prof. Dr. Oliver Goertsches, Dr. Panagiotis Konstantis
Funded by: DFG (Sachbeihilfen)
Funding since: 2020

Kernel-based multilevel methods for high-dimensional approximation problems on sparse grids: derivation, analysis, and application in uncertainty quantification
This project aims to develop, analyze, and implement various kernel-based multilevel methods for solving high-dimensional approximation problems. Kernel-based methods offer a significant advantage over other techniques, as they do not require assumptions about the data structure and can work with arbitrarily distributed data. The project will particularly focus on tensor products of low-dimensional point clouds, with thinning of these products leading to generalized sparse grids. Additionally, these methods can easily generate high-order approximation spaces through suitable kernel selection. Multilevel methods provide the additional benefit of enabling adaptive versions, data compression versions, and efficient implementations, especially for large datasets. High-dimensional approximation problems frequently arise in parameter-dependent partial differential equations used in modeling complex systems. These parameters are typically modeled as random variables, making the solutions of the partial differential equations functions over a probability space. This leads to a model in which the parameters come from a high-dimensional space. Relevant characteristics of the model can often be represented as functionals on the space of parametrized solutions. Approximating such characteristics requires a high-dimensional reconstruction method that takes pairs of parameters and numerically computed solutions of the partial differential equation as input. The methods developed in this project will be designed for general function spaces, with a specific focus on tensor products of intervals and low-dimensional spheres as potential parameter spaces. Furthermore, an a priori error theory for such high-dimensional methods will be derived, explicitly incorporating all relevant discretization parameters. The applicability of these methods for numerically determining a Karhunen-Loève expansion and for addressing questions in the areas of Design of Experiments and Reduced Order Modeling will be explored. Finally, the developed methods will be applied to concrete examples in Uncertainty Quantification to assess their effectiveness in practical scenarios.
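A minimal one-dimensional sketch of the multilevel scheme (interpolate the current residual on successively finer point sets with successively narrower kernels); the target function, point sets, scale schedule, and regularization are illustrative choices, not the project's method.

```python
import numpy as np

def kernel(x, y, eps):
    # Gaussian kernel matrix between point sets x and y, shape parameter eps.
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

def multilevel_fit(f, levels):
    corrections = []                          # (centers, coeffs, eps) per level
    def residual(x):
        r = f(x)
        for centers, coeffs, eps in corrections:
            r = r - kernel(x, centers, eps) @ coeffs
        return r
    for level in range(levels):
        centers = np.linspace(0.0, 1.0, 5 * 2 ** level)  # finer set per level
        eps = 4.0 * 2 ** level                           # narrower kernel per level
        A = kernel(centers, centers, eps) + 1e-10 * np.eye(len(centers))  # nugget
        corrections.append((centers, np.linalg.solve(A, residual(centers)), eps))
    return lambda x: f(x) - residual(x)       # sum of all level corrections

f = lambda x: np.sin(2 * np.pi * x)
approx = multilevel_fit(f, levels=3)
grid = np.linspace(0.0, 1.0, 200)
print("max error:", np.max(np.abs(f(grid) - approx(grid))))
```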
Applicant: Prof. Dr. Christian Rieger, Prof. Dr. Holger Wendland
Funded by: DFG (Sachbeihilfen)
Funding since: 2020

Left coideal subalgebras of Nichols algebras
Hopf algebras play an important role in the study of many highly symmetric noncommutative algebras. In turn, Nichols algebras are of enormous importance for the structure theory of Hopf algebras. Some classes of Nichols algebras and their relation to Lie theory are now well understood, and this results from a good understanding of certain types of their coideal subalgebras. To better understand other classes of Nichols algebras, including those associated with symmetric groups, which have been studied for 20 years, new nontrivial classes of coideal subalgebras will be examined in order to obtain fundamental information that is still unknown (often even the dimension). We will investigate the cardinality of the set of left coideal subalgebras of a Nichols algebra and the structure of the partial order on these sets.
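For reference, the basic notions involved, stated as standard definitions rather than project results: a left coideal subalgebra, and the simplest example of a Nichols algebra over a field k of characteristic zero.

```latex
% A left coideal subalgebra of a Hopf algebra H is a subalgebra C with
\[
  \Delta(C) \subseteq H \otimes C .
\]
% Simplest Nichols algebra: for V = kv one-dimensional with braiding
% c(v \otimes v) = q\, v \otimes v,
\[
  \mathcal{B}(V) \cong
  \begin{cases}
    k[v]/(v^N) & \text{if } q \text{ is a primitive } N\text{-th root of unity},\ N \ge 2,\\
    k[v] & \text{otherwise.}
  \end{cases}
\]
```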
Applicant: Prof. Dr. István Heckenberger
Funded by: DFG (Sachbeihilfen)
Funding since: 2019

Operational Parameterization for Heuristics (OPERAH)
Many practically motivated computational problems, such as data clustering and transportation optimization, are NP-hard and are therefore generally considered computationally intractable. One theory-driven approach to this algorithmic challenge is parameterized complexity theory, which attempts to obtain efficient algorithms by exploiting the structure of typical input data. This structure is described by problem-specific parameters that depend on the input data; parameterized algorithms are fast when these parameters take on small values. Some parameterized algorithms can serve as the basis for state-of-the-art algorithms for NP-hard problems. In practice, however, where heuristics are predominantly used, parameterized algorithms are rarely employed. Two important heuristic techniques are local search and greedy algorithms. The project "Operational Parameterization for Heuristics (OPERAH)" aims to investigate the potential benefits of incorporating parameterized algorithms into local search procedures and greedy heuristics. The central focus of the research is an operational parameter. In contrast to the standard approach in parameterized complexity theory, this parameter is not determined by the structure of the input but is chosen by the user. The operational parameter represents a trade-off between longer runtime and improved solution quality: a higher parameter value yields a better solution but also increases the runtime. By selecting the parameter value, a user can control the algorithm's runtime and obtain a solution of the desired quality. This type of parameterization aims to mitigate a drawback of parameterized algorithms, namely that realistic input instances tend to yield excessively large parameter values. The project thus seeks to increase the practical relevance of parameterized complexity theory. Specifically, the goal is to develop efficient algorithms for parameterized local search and for a technique known as "turbocharging" for greedy heuristics, for several practical optimization problems; alternatively, the project will demonstrate that such algorithms are beyond the current state of the art. The developed algorithms will be implemented and experimentally compared to state-of-the-art heuristics. The trade-off between runtime and solution quality described above will be a key aspect of this investigation.
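A minimal sketch of parameterized local search with an operational parameter k, using vertex cover as the example problem; the instance and the exhaustive k-swap neighborhood are illustrative, not the project's algorithms.

```python
from itertools import combinations

# Vertex cover instance; the operational parameter k bounds the swap size of
# the local search. Larger k means a larger neighborhood (roughly n^k
# candidates, hence more runtime) but a better chance of escaping local optima.
EDGES = {(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)}
VERTICES = {1, 2, 3, 4, 5}

def is_cover(s: frozenset) -> bool:
    return all(u in s or v in s for u, v in EDGES)

def improving_neighbor(cover: frozenset, k: int):
    # Exhaustively try removing up to k vertices and adding strictly fewer back.
    for out_size in range(1, k + 1):
        for removed in combinations(cover, out_size):
            for in_size in range(out_size):
                for added in combinations(VERTICES - cover, in_size):
                    candidate = cover - set(removed) | set(added)
                    if is_cover(candidate):
                        return candidate
    return None

def local_search(cover: frozenset, k: int) -> frozenset:
    while (better := improving_neighbor(cover, k)) is not None:
        cover = better
    return cover

start = frozenset(VERTICES)            # trivial cover: all vertices
for k in (1, 2, 3):                    # user-chosen operational parameter
    print(k, sorted(local_search(start, k)))
```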
Applicant: Prof. Dr. Christian Komusiewicz
Funded by: DFG (Sachbeihilfen)
Funding since: 2019

Triple Graph Grammars (TGG) 3.0: A Framework for Reliable Continuous Model Integration
Model-Driven Engineering (MDE) is an established approach for managing the increasing complexity of technical products. Models help capture the essence of product development. As engineering projects become more complex and development teams increasingly work in distributed environments, support for collaborative modeling processes on networks of models becomes ever more important. Collaborative modeling in model networks is not yet mature enough to automatically detect and resolve inconsistencies and conflicts between model changes. Current MDE methods either allow only synchronous modeling activities, provide a "team" variant with pessimistic locking at the level of model elements, or offer limited options for concurrent editing of model pairs. Bidirectional transformations (BX) promise to greatly simplify the development of model synchronization tasks. While BX approaches for basic model synchronization on model pairs are well established, they still have significant shortcomings in practical use, as model networks are often changed concurrently and inconsistencies in models cannot always be resolved immediately. Existing approaches do not always scale sufficiently in practice or guarantee the correctness and completeness of computed model synchronizations. To strengthen the MDE vision for modeling in large projects, we will develop a framework for reliable and continuous model integration, providing a conceptual and technological basis for collaborative modeling processes across multiple application domains. This framework will support the continuous integration of simultaneous changes to models in a network while tolerating temporary model inconsistencies. Since Triple Graph Grammars (TGGs), a rule-based BX approach, have proven practical and have a comprehensive formal foundation, we will develop the framework in the context of TGGs. Building on improved model synchronization methods and tools for TGGs that we developed in the first funding phase, our framework will support the development of networks of cooperating model integrators. Each model integrator is responsible for performing reliable continuous model integration for one model pair. To this end, each model integrator uses a Monitor-Analyze-Plan-Execute cycle controlled by a Knowledge component (MAPE-K cycle), a concept borrowed from Self-X systems. For large-scale model integration, we consider networks of concurrently active model integrators. Our framework will be evaluated on Arcadia, a current methodology for model-based development in industry.
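A minimal runnable sketch of one model integrator's MAPE-K cycle over a model pair, with plain dictionaries standing in for models and a single creation repair standing in for TGG rule application; every name here is an illustrative assumption, not the framework's API.

```python
# Sketch of one model integrator's MAPE-K cycle. Dictionaries stand in for the
# source/target models; the knowledge component holds the correspondence links
# that a TGG-based integrator would maintain.

knowledge = {"correspondence": {}}  # K: source key -> target key

def monitor(source):
    # Monitor: collect source elements with no recorded correspondence yet.
    return [key for key in source if key not in knowledge["correspondence"]]

def analyze(changes, target):
    # Analyze: temporary inconsistencies are tolerated; only genuine gaps
    # (elements missing from the target model) are scheduled for repair.
    return [key for key in changes if key not in target]

def plan(inconsistencies):
    # Plan: one TGG-rule-like repair per inconsistency -- create the counterpart.
    return [("create-target", key) for key in inconsistencies]

def execute(repairs, source, target):
    # Execute: apply the planned repairs and record the new correspondences.
    for _, key in repairs:
        target[key] = source[key]
        knowledge["correspondence"][key] = key

source_model = {"Class:Order": {}, "Class:Item": {}}
target_model = {}
execute(plan(analyze(monitor(source_model), target_model)),
        source_model, target_model)
print(target_model, knowledge["correspondence"])
```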
Applicant: Prof. Dr. Gabriele Taentzer, Prof. Dr. Andreas Schürr
Funded by: DFG (Sachbeihilfen)
Funding since: 2016