#csms

arXiv.org
Symbolic and User-friendly Geometric Algebra Routines (SUGAR) for Computations in Matlab
Geometric algebra (GA) is a mathematical tool for geometric computing, providing a framework that allows a unified and compact approach to geometric relations which in other mathematical systems are typically described using different, more complicated elements. This has led to an increasing adoption of GA in applied mathematics and engineering problems. However, the scarcity of symbolic implementations of GA and its inherent complexity, which requires a specific mathematical background, make it challenging and less intuitive for engineers to work with, preventing wider adoption among applied professionals. To address this challenge, this paper introduces SUGAR (Symbolic and User-friendly Geometric Algebra Routines), an open-source toolbox designed for Matlab and licensed under the MIT License. SUGAR translates GA concepts into Matlab and provides a collection of user-friendly functions tailored for GA computations, supporting both numeric and symbolic computations in high-dimensional GAs. Tailored for applied mathematics and engineering applications, SUGAR has been engineered to represent geometric elements and transformations within two- and three-dimensional projective and conformal geometric algebras, aligning with established computational methodologies in the literature. Furthermore, SUGAR efficiently handles functions of multivectors, such as exponential, logarithmic, sine, and cosine functions, enhancing its applicability across various engineering domains, including robotics, control systems, and power electronics. Finally, this work includes four distinct validation examples demonstrating SUGAR's capabilities across the above-mentioned fields and its practical utility in addressing real-world applied mathematics and engineering problems.
#csms #csro #cssy
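As a rough illustration of the rotor-based computations the SUGAR abstract describes, the sketch below implements the Euclidean geometric algebra G(3,0) in plain Python (not SUGAR's Matlab interface, whose function names are not shown here): basis blades are indexed by bitmasks, the geometric product is computed from a reordering sign, and a vector is rotated with the sandwich product R v ~R. It is a conceptual toy, not SUGAR's API.

```python
# Minimal sketch of Euclidean GA G(3,0) in plain Python; NOT SUGAR's API.
import math

# Basis blades are indexed by bitmasks over (e1, e2, e3):
# 0: 1, 1: e1, 2: e2, 3: e12, 4: e3, 5: e13, 6: e23, 7: e123
BLADES = ["1", "e1", "e2", "e12", "e3", "e13", "e23", "e123"]

def reorder_sign(a: int, b: int) -> int:
    """Sign from reordering the product of basis blades a and b (Euclidean metric)."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(x, y):
    """Geometric product of two multivectors stored as 8-element coefficient lists."""
    out = [0.0] * 8
    for i, xi in enumerate(x):
        if xi == 0.0:
            continue
        for j, yj in enumerate(y):
            if yj == 0.0:
                continue
            out[i ^ j] += reorder_sign(i, j) * xi * yj
    return out

def reverse(x):
    """Reverse ~x: flips the sign of grade-2 and grade-3 blades."""
    sign = [1, 1, 1, -1, 1, -1, -1, -1]
    return [s * c for s, c in zip(sign, x)]

def rotor(bivector_index: int, theta: float):
    """Rotor R = cos(theta/2) - sin(theta/2) * B for a unit basis bivector B."""
    r = [0.0] * 8
    r[0] = math.cos(theta / 2)
    r[bivector_index] = -math.sin(theta / 2)
    return r

# Rotate e1 by 90 degrees in the e12 plane; the result should be e2.
v = [0.0] * 8
v[1] = 1.0                      # v = e1
R = rotor(3, math.pi / 2)       # B = e12
rotated = gp(gp(R, v), reverse(R))
print({BLADES[i]: round(c, 6) for i, c in enumerate(rotated) if abs(c) > 1e-9})
# -> {'e2': 1.0}
```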
arXiv.org
MAGNET: an open-source library for mesh agglomeration by Graph Neural Networks
We introduce MAGNET, an open-source Python library for mesh agglomeration in both two and three dimensions, based on Graph Neural Networks (GNNs). MAGNET serves as a comprehensive solution for training a variety of GNN models, integrating deep learning and other advanced algorithms such as METIS and k-means to facilitate mesh agglomeration and the computation of quality metrics. The library is introduced through its code structure and primary features. The GNN framework adopts a graph bisection methodology that exploits connectivity and geometric mesh information via SAGE convolutional layers, in line with the methodology proposed by Antonietti et al. (2024). Additionally, MAGNET incorporates reinforcement learning to enhance the accuracy and robustness of the model for predicting coarse partitions within a multilevel framework. A detailed tutorial guides the user through the process of mesh agglomeration and the training of a GNN bisection model. We present several examples of mesh agglomeration conducted by MAGNET, demonstrating the library's applicability across various scenarios. Furthermore, the performance of the newly introduced models is compared with that of METIS and k-means, showing that the proposed GNN models are competitive in terms of partition quality and computational efficiency. Finally, we demonstrate the versatility of MAGNET's interface through its integration with Lymph, an open-source library implementing discontinuous Galerkin methods on polytopal grids for the numerical discretization of multiphysics differential problems.
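To make the agglomeration task concrete, here is a minimal Python sketch of the k-means baseline the abstract compares against: fine mesh elements, represented by their centroids, are clustered, and each cluster becomes one coarse agglomerated element. It is not MAGNET's API, it omits the GNN bisection entirely, and the element coordinates and parameters are made up for illustration.

```python
# K-means agglomeration baseline on element centroids; NOT MAGNET's API.
import numpy as np

def kmeans_agglomerate(centroids: np.ndarray, n_agglomerates: int,
                       n_iter: int = 50, seed: int = 0) -> np.ndarray:
    """Assign each fine element (given by its centroid) to a coarse agglomerate."""
    rng = np.random.default_rng(seed)
    # Initialize cluster centers by sampling element centroids.
    centers = centroids[rng.choice(len(centroids), n_agglomerates, replace=False)]
    for _ in range(n_iter):
        # Assign every element to its nearest center.
        dists = np.linalg.norm(centroids[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean centroid of its agglomerate.
        for k in range(n_agglomerates):
            if np.any(labels == k):
                centers[k] = centroids[labels == k].mean(axis=0)
    return labels

# Toy example: 200 random element centroids in the unit square, 8 agglomerates.
rng = np.random.default_rng(1)
element_centroids = rng.random((200, 2))
labels = kmeans_agglomerate(element_centroids, n_agglomerates=8)
print("elements per agglomerate:", np.bincount(labels))
```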
arXiv.org
BurTorch: Revisiting Training from First Principles by Coupling Autodiff, Math Optimization, and Systems
In this work, we introduce BurTorch, a compact, high-performance framework designed to optimize Deep Learning (DL) training on single-node workstations through an exceptionally efficient CPU-based backpropagation (Rumelhart et al., 1986; Linnainmaa, 1970) implementation. Although modern DL frameworks rely on compiler-like optimizations internally, BurTorch takes a different path. It adopts a minimalist design and demonstrates that, in these circumstances, classical compiled programming languages can play a significant role in DL research. By eliminating the overhead of large frameworks and making efficient implementation choices, BurTorch achieves orders-of-magnitude improvements in performance and memory efficiency when computing $\nabla f(x)$ on a CPU. BurTorch features a compact codebase designed to achieve two key goals simultaneously. First, it provides a user experience similar to script-based programming environments. Second, it dramatically minimizes runtime overheads. In large DL frameworks, the primary source of memory overhead for relatively small computation graphs $f(x)$ is their feature-heavy implementations. We benchmarked BurTorch against widely used DL frameworks in their execution modes: JAX (Bradbury et al., 2018), PyTorch (Paszke et al., 2019), TensorFlow (Abadi et al., 2016); and several standalone libraries: Autograd (Maclaurin et al., 2015), Micrograd (Karpathy, 2020), Apple MLX (Hannun et al., 2023). For small compute graphs, BurTorch outperforms best-practice solutions by up to $\times 2000$ in runtime and reduces memory consumption by up to $\times 3500$. For a miniaturized GPT-3 model (Brown et al., 2020), BurTorch achieves up to a $\times 20$ speedup and reduces memory consumption by up to $\times 80$ compared to PyTorch.
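Since the abstract benchmarks against Micrograd-style scalar autodiff, a compact sketch of that baseline idea may help: a dynamically built computation graph whose backward pass applies the chain rule in reverse topological order. This is plain Python for illustration, not BurTorch's compiled implementation, and the example function f(x, w) is made up.

```python
# Minimal reverse-mode autodiff in the spirit of Micrograd; NOT BurTorch's code.
import math

class Value:
    """A scalar node in a dynamically built computation graph."""
    def __init__(self, data, parents=(), backward=lambda: None):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = backward

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def backward():  # d(out)/d(self) = 1, d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def backward():  # product rule
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def backward():  # d tanh(x)/dx = 1 - tanh(x)^2
            self.grad += (1.0 - t * t) * out.grad
        out._backward = backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for p in node._parents:
                    visit(p)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            node._backward()

# f(x, w) = tanh(w * x + w); gradients via backpropagation.
x, w = Value(0.5), Value(-1.5)
f = (w * x + w).tanh()
f.backward()
print(f.data, x.grad, w.grad)
```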
arXiv.org
High-dimensional multidisciplinary design optimization for aircraft eco-design
The objective of this Ph.D. thesis is to propose an efficient approach for optimizing a multidisciplinary black-box model when the optimization problem is constrained and involves a large number of mixed-integer design variables (typically around 100). The targeted optimization approach, called EGO (Efficient Global Optimization), is based on the sequential enrichment of an adaptive surrogate model; in this context, Gaussian process (GP) surrogate models are among the most widely used in engineering problems to approximate time-consuming high-fidelity models. EGO is a heuristic Bayesian optimization (BO) method that performs well in terms of solution quality. However, like any other global optimization method, EGO suffers from the curse of dimensionality: its performance is satisfactory on lower-dimensional problems but deteriorates as the dimensionality of the optimization search space increases. For realistic aircraft design problems, the number of design variables can exceed 100, so directly solving such problems with EGO is ruled out. This is especially true when the problems involve both continuous and categorical variables, which further enlarges the search space. In this thesis, effective parameterization tools are investigated, including techniques such as partial least squares regression, to significantly reduce the number of design variables. Additionally, Bayesian optimization is adapted to handle discrete variables and high-dimensional spaces in order to reduce the number of evaluations when optimizing innovative aircraft concepts, such as the "DRAGON" hybrid airplane, to reduce their climate impact.
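For readers unfamiliar with EGO, the sketch below shows the basic loop on a toy 1-D problem: fit a Gaussian process surrogate, pick the next evaluation point by maximizing Expected Improvement, evaluate the black box, and repeat. It uses scikit-learn and SciPy; the objective and settings are invented for illustration, and it deliberately ignores the thesis's actual contributions (constraints, mixed continuous/categorical variables, and dimension reduction via partial least squares).

```python
# Toy 1-D EGO loop with a GP surrogate and Expected Improvement; illustrative only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):
    """Expensive black-box objective (toy stand-in)."""
    return np.sin(3 * x) + 0.3 * x**2

def expected_improvement(X_cand, gp, y_best):
    """EI for minimization: (y_best - mu) * Phi(z) + sigma * phi(z)."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4, 1))     # small initial design of experiments
y = f(X).ravel()

for it in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                         # adaptive surrogate of the black box
    X_cand = np.linspace(-2, 2, 400).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, y.min())
    x_next = X_cand[np.argmax(ei)]       # enrich the design where EI is largest
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next[0]))

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```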