We consider the problem of solving dual monotone inclusions involving sums of composite parallel-sum type operators. A feature of this work is that it explicitly exploits the properties of the cocoercive operators appearing in the model. Several splitting algorithms recently proposed in the literature are recovered as special cases.

This paper introduces a weak Galerkin (WG) finite element method for the Stokes equations in the primal velocity-pressure formulation. The WG method is equipped with stable finite elements consisting of the usual polynomials of degree k≥1 for the velocity and polynomials of degree k−1 for the pressure, both of which are discontinuous. The velocity element is enhanced by polynomials of degree k−1 on the interfaces of the finite element partition. Since all finite element functions are discontinuous, the usual gradient and divergence operators are implemented as distributions in properly defined spaces. Optimal-order error estimates are established for the corresponding numerical approximations in various norms. It must be emphasized that the WG finite element method is designed on finite element partitions consisting of shape-regular polygons or polyhedra of arbitrary shape.
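As a hedged illustration of how a gradient can act as a distribution on discontinuous functions, the weak gradient $\nabla_w v$ of a WG function $v=\{v_0,v_b\}$ (interior value $v_0$, interface value $v_b$) is commonly defined elementwise by duality; the notation below follows standard WG usage and is assumed rather than quoted from this paper:

```latex
(\nabla_w v,\, \boldsymbol{q})_T
  = -(v_0,\, \nabla\cdot\boldsymbol{q})_T
    + \langle v_b,\, \boldsymbol{q}\cdot\mathbf{n}\rangle_{\partial T}
  \qquad \text{for all polynomial test fields } \boldsymbol{q} \text{ on } T,
```

where $T$ is an element of the partition and $\mathbf{n}$ its outward unit normal; integration by parts shows this reduces to the classical gradient whenever $v$ is smooth.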

An efficient implementation of an adaptive finite element method on distributed-memory systems requires an efficient linear solver. Most solver methods that scale to large numbers of processors make use of some geometric information about the mesh. This information has to be provided to the solver in an efficient and solver-specific way. We introduce data structures and numerical algorithms that fulfill this task and, in addition, allow for a user-friendly implementation of a large class of linear solvers. The concepts and algorithms are demonstrated for global matrix solvers and domain decomposition methods on various problems in fluid dynamics, continuum mechanics, and materials science. Weak and strong scaling is shown for up to 16,384 processors.

The problem of constructing a normalized hierarchical basis for adaptively refined spline spaces is addressed. Multilevel representations are defined in terms of a hierarchy of basis functions, reflecting different levels of refinement. When the hierarchical model is constructed from an underlying sequence of bases $\{\Gamma^{\ell}\}_{\ell=0,\ldots,N-1}$ with properties analogous to classical tensor-product B-splines, we can define a set of locally supported basis functions that form a partition of unity and possess the property of coefficient preservation, i.e., they preserve the coefficients of functions represented with respect to one of the bases $\Gamma^{\ell}$. Our construction relies on a certain truncation procedure, which eliminates the contributions of functions from finer levels in the hierarchy to coarser-level ones. Consequently, the support of the original basis functions defined on coarse grids may be reduced according to finer levels in the hierarchy. This truncation mechanism not only decreases the overlap of basis supports, but also guarantees strong stability of the construction. In addition to presenting the theory for the general framework, we apply it to hierarchically refined tensor-product spline spaces, under certain reasonable assumptions on the given knot configuration.
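A sketch of the truncation step in standard truncated-hierarchical-basis notation (our notation, assumed rather than quoted): if $\tau=\sum_{\beta\in\Gamma^{\ell+1}} c_{\beta}(\tau)\,\beta$ is the representation of a coarse-level function with respect to the finer basis, and $\Omega^{\ell+1}$ denotes the region selected for refinement at level $\ell+1$, then the truncation discards the fine-level contributions fully supported inside that region:

```latex
\operatorname{trunc}^{\ell+1}\tau
  \;=\; \sum_{\substack{\beta\in\Gamma^{\ell+1} \\ \operatorname{supp}\beta \,\not\subseteq\, \Omega^{\ell+1}}} c_{\beta}(\tau)\,\beta .
```

This is how the support of a coarse basis function shrinks wherever finer levels take over, which is the mechanism behind both the reduced overlap and the coefficient-preservation property described above.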

In this paper we are concerned with plane wave discretizations of the nonhomogeneous Helmholtz equation and the time-harmonic Maxwell equations. To this end, we design a plane wave method combined with local spectral elements for the discretization of such nonhomogeneous equations. The method consists of two steps: we first solve a series of nonhomogeneous local problems on auxiliary smooth subdomains by the spectral element method, and then apply the plane wave method to the discretization of the resulting (locally homogeneous) residual problem on the global solution domain. We derive error estimates for the approximate solutions generated by this method. Numerical results show that the resulting approximate solutions possess high accuracy.

Many reduced-order models are neither robust with respect to parameter changes nor cost-effective enough for handling the nonlinear dependence of complex dynamical systems. In this study, we put forth a robust machine learning framework for projection-based reduced-order modeling of such nonlinear and nonstationary systems. As a demonstration, we focus on a nonlinear advection-diffusion system given by the viscous Burgers equation, which, due to its quadratic nonlinearity, is a prototypical setting for more realistic fluid dynamics applications. In our proposed methodology, the effects of the truncated modes are modeled using a single-layer feed-forward neural network. The network is trained using both the Bayesian regularization and extreme learning machine approaches, the latter of which is found to be more computationally efficient. Significant emphasis is placed on the selection of basis functions, using both Fourier bases and proper orthogonal decomposition. It is shown that the proposed model yields significant improvements in accuracy over the standard Galerkin projection methodology with negligibly small computational overhead, and provides reliable predictions with respect to parameter changes.
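As a minimal sketch of the projection step only (not the authors' closure network), the POD basis can be obtained from the SVD of a snapshot matrix; the snapshot data below is a random placeholder standing in for states of the viscous Burgers equation:

```python
import numpy as np

# Hypothetical snapshot matrix: columns are solution states at
# different times (random stand-in, not actual Burgers data).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 50))

# POD modes via the thin SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 10                      # number of retained modes (truncation rank)
pod_basis = U[:, :r]        # columns = POD modes, orthonormal

# Galerkin projection of a state onto the reduced space and back;
# the discarded (truncated) modes are what a closure model corrects for.
a = pod_basis.T @ snapshots[:, 0]       # reduced coordinates
reconstruction = pod_basis @ a          # rank-r reconstruction
```

In the paper's setting, the reduced coordinates evolve under the Galerkin-projected dynamics and the neural network models the influence of the discarded modes.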

Gabor analysis is one of the most common instances of time-frequency signal analysis. Choosing a suitable window for the Gabor transform of a signal is often a challenge for practical applications, in particular in audio signal processing. Many time-frequency (TF) patterns of different shapes may be present in a signal, and they cannot all be sparsely represented in the same spectrogram. We propose several algorithms that provide optimal windows for a user-selected TF pattern with respect to different concentration criteria. We base our optimization algorithms on ℓp-norms as measures of TF spreading. For a given number of sampling points in the TF plane, we also propose optimal lattices to be used with the obtained windows. We illustrate the potential of the method on selected numerical examples.
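One way such an ℓp spreading measure can be realized (an illustrative interpretation, not the paper's exact criterion) is to normalize the spectrogram energy and sum the p-th powers of its magnitudes for p < 2, so that more concentrated patterns score lower:

```python
import numpy as np

def lp_concentration(spec, p=0.5):
    """ℓ^p (p < 2) sparsity measure of a magnitude spectrogram after
    normalizing to unit ℓ² energy; smaller values indicate a more
    concentrated (sparser) time-frequency representation."""
    mag = np.abs(spec)
    mag = mag / np.linalg.norm(mag)      # unit energy
    return np.sum(mag ** p)

# A single-peak pattern scores lower than a flat (maximally spread) one.
sparse = np.zeros((16, 16)); sparse[8, 8] = 1.0
flat = np.ones((16, 16))
```

Window optimization would then minimize such a measure over a parametrized window family for the selected TF pattern.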

We develop two linear, second-order, energy stable schemes for solving the governing system of partial differential equations of a hydrodynamic phase field model of binary fluid mixtures. We first apply the Fourier pseudo-spectral approximation to the partial differential equations in space to obtain a semi-discrete, time-dependent, differential-algebraic equation (DAE) system, which preserves the energy dissipation law at the semi-discrete level. Then, we discretize the DAE system in time by the Crank-Nicolson (CN) method and the second-order backward differentiation/extrapolation (BDF/EP) method, respectively, to obtain two fully discrete systems. We show that the CN method preserves the energy dissipation law, while the BDF/EP method does not preserve it exactly but respects the energy dissipation property of the hydrodynamic model. The two new fully discrete schemes are linear, unconditionally stable, second-order accurate in time and of high order in space, and uniquely solvable as linear systems. Numerical examples are presented to show the convergence properties as well as the efficiency and accuracy of the new schemes in simulating the mixing dynamics of binary polymeric solutions.
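The energy-dissipation mechanism of a Crank-Nicolson step can be seen on a toy problem (a sketch, not the paper's hydrodynamic model): for the linear gradient flow u' = -Au with A symmetric positive definite, the energy E(u) = ½ uᵀAu decays, and each step costs one linear solve:

```python
import numpy as np

# Hypothetical SPD operator standing in for the discretized system.
n, dt = 20, 0.1
A = np.diag(np.arange(1.0, n + 1))
I = np.eye(n)
u = np.ones(n)

energies = [0.5 * u @ A @ u]
for _ in range(50):
    # Crank-Nicolson: (I + dt/2 A) u^{n+1} = (I - dt/2 A) u^n
    u = np.linalg.solve(I + 0.5 * dt * A, (I - 0.5 * dt * A) @ u)
    energies.append(0.5 * u @ A @ u)
```

The per-mode amplification factor (1 − dtλ/2)/(1 + dtλ/2) has modulus below one for every λ > 0 and every dt, which is the unconditional stability claimed for the schemes above.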

The singular value decomposition (SVD) plays a crucial role in model order reduction. It is often utilized in the offline stage to compute basis functions that project the high-dimensional nonlinear problem onto a low-dimensional model which can then be evaluated cheaply. It constitutes a building block for many techniques, such as proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). The aim of this work is to provide an efficient computation of low-rank POD and/or DMD modes via randomized matrix decompositions. This is made possible by the randomized singular value decomposition (rSVD), a fast and accurate alternative to the SVD. Although this computation belongs to the offline stage, it may be extremely expensive; the use of compressed techniques therefore drastically reduces its cost. Numerical examples show the effectiveness of the method for both POD and DMD.
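A minimal sketch of the rSVD idea (standard randomized range-finder construction, not necessarily the paper's exact variant): project the matrix onto a random low-dimensional range, then take an exact SVD of the small projected matrix:

```python
import numpy as np

def rsvd(A, k, oversample=10):
    """Randomized SVD sketch: sample the range of A with a Gaussian
    test matrix, orthonormalize, and recover the top-k singular
    triplets from the small projected matrix Q^T A."""
    rng = np.random.default_rng(1)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# Exactly rank-5 test matrix: rSVD recovers it to round-off.
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = rsvd(A, 5)
err = np.linalg.norm(A - U @ (s[:, None] * Vt))
```

Only the sketch `A @ Omega` and the small factorization touch the full matrix, which is the source of the cost reduction in the offline stage.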

Lévy flight models whose jumps have infinite moments are used mathematically to describe superdiffusion in complex systems. Exponentially tempering the Lévy measure of Lévy flights leads to tempered stable Lévy processes, which combine both the α-stable and Gaussian trends; very large jumps become unlikely, and all moments exist. The probability density functions of tempered stable Lévy processes solve the tempered fractional diffusion equation. This paper focuses on designing high-order difference schemes for the tempered fractional diffusion equation on a bounded domain. The high-order spatial difference approximations, called tempered weighted and shifted Grünwald difference (tempered-WSGD) operators, are obtained by using the properties of the tempered fractional calculus and by weighting and shifting their first-order Grünwald-type difference approximations. The Crank-Nicolson discretization is used in the time direction. The stability and convergence of the presented numerical schemes are established, and numerical experiments are performed to confirm the theoretical results and demonstrate the effectiveness of the schemes.
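The first-order building blocks mentioned above can be sketched as follows (the weights are the standard Grünwald coefficients; the exponential damping factor for the tempered case is an assumption based on common tempered Grünwald constructions, not quoted from this paper):

```python
import numpy as np

def grunwald_weights(alpha, n):
    """First-order Grünwald coefficients g_k = (-1)^k * C(alpha, k),
    generated by the standard recurrence g_0 = 1,
    g_k = (1 - (alpha+1)/k) * g_{k-1}."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

# Hedged sketch: for the tempered derivative with tempering parameter
# lam and mesh size h, weight k is commonly damped by exp(-k*lam*h)
# (assumption); the tempered-WSGD operators of the paper additionally
# weight and shift these first-order approximations.
alpha, lam, h = 0.8, 1.5, 0.01
g = grunwald_weights(alpha, 6)
tempered = g * np.exp(-lam * h * np.arange(7))
```

The recurrence avoids evaluating binomial coefficients with non-integer arguments directly and is numerically stable for the required weight counts.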

Randomized algorithms provide a powerful tool for scientific computing. Compared with standard deterministic algorithms, randomized algorithms are often faster and more robust. The main purpose of this paper is to design adaptive randomized algorithms for computing approximate tensor decompositions. We give an adaptive randomized algorithm for computing a low multilinear rank approximation of tensors with unknown multilinear rank and analyze its probabilistic error bound under certain assumptions. Finally, we design an adaptive randomized algorithm for computing tensor train approximations. Based on bounds for the singular values of sub-Gaussian matrices with independent columns or independent rows, we analyze these randomized algorithms. We illustrate our adaptive randomized algorithms via several numerical examples.
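The adaptivity referred to above (handling an unknown rank) can be sketched in the matrix case; in a tensor setting a step of this kind would be applied to unfoldings. This is a generic adaptive randomized range finder in the spirit of Halko et al., not the paper's specific algorithm:

```python
import numpy as np

def adaptive_range(A, tol, block=5, max_rank=None):
    """Grow a randomized range basis block by block until the
    residual ||A - Q Q^T A|| drops below tol, so no rank needs
    to be known in advance."""
    rng = np.random.default_rng(3)
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    Q = np.empty((m, 0))
    while Q.shape[1] < max_rank:
        Y = A @ rng.standard_normal((n, block))   # new random samples
        Y -= Q @ (Q.T @ Y)                        # orthogonalize vs basis
        Q = np.linalg.qr(np.hstack([Q, Y]))[0]
        if np.linalg.norm(A - Q @ (Q.T @ A)) < tol:
            break
    return Q

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 7)) @ rng.standard_normal((7, 40))  # rank 7
Q = adaptive_range(A, tol=1e-8)
```

For large problems the dense residual norm here would be replaced by the cheap probabilistic error estimators that the sub-Gaussian singular-value bounds justify.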

We investigate the recovery of vectors from magnitudes of frame coefficients when the frames have a low redundancy, meaning a small number of frame vectors compared to the dimension of the Hilbert space. We first show that for complex vectors in d dimensions, 4d−4 suitably chosen frame vectors are sufficient to uniquely determine each signal, up to an overall unimodular constant, from the magnitudes of its frame coefficients. Then we discuss the effect of noise and show that 8d−4 frame vectors provide a stable recovery if part of the frame coefficients is bounded away from zero. In this regime, perturbing the magnitudes of the frame coefficients by noise that is sufficiently small results in a recovery error that is at most proportional to the noise level.

In this article we investigate model order reduction of large-scale systems using time-limited balanced truncation, which restricts the well-known balanced truncation framework to prescribed finite time intervals. The main emphasis is on the efficient numerical realization of this model reduction approach for large system dimensions. We discuss numerical methods for dealing with the resulting matrix exponential functions and Lyapunov equations, which are solved for low-rank approximations. Our main tools for this purpose are rational Krylov subspace methods. We also discuss the eigenvalue decay and numerical rank of the solutions of the Lyapunov equations. These results, along with numerical experiments, show that, depending on the final time horizon, the numerical rank of the Lyapunov solutions in time-limited balanced truncation can be smaller than in standard balanced truncation. In numerical experiments we test the approaches for computing low-rank factors of the involved Lyapunov solutions and illustrate that time-limited balanced truncation can generate reduced-order models with higher accuracy in the considered time region.
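A small dense sketch of the two ingredients named above (matrix exponentials and Lyapunov equations): the time-limited controllability Gramian P_T = ∫₀ᵀ e^{At} B Bᵀ e^{Aᵀt} dt satisfies the Lyapunov equation A P_T + P_T Aᵀ = e^{AT} B Bᵀ e^{AᵀT} − B Bᵀ. The example matrices are hypothetical; the paper targets large-scale systems where low-rank rational Krylov solvers replace the dense solve:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(5)
n = 6
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # hypothetical stable A
B = rng.standard_normal((n, 2))
T = 2.0

# Right-hand side involves the matrix exponential at the time horizon.
E = expm(A * T)
rhs = E @ B @ B.T @ E.T - B @ B.T

# Dense Lyapunov solve: A P + P A^T = rhs.
P = solve_continuous_lyapunov(A, rhs)
residual = np.linalg.norm(A @ P + P @ A.T - rhs)
```

Setting T → ∞ for a stable A recovers the standard (infinite-horizon) Gramian equation, since the exponential term vanishes.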

We study the decomposition of functions in the Hardy space $H^2(\mathbb{D})$ into linear combinations of the basic functions (modified Blaschke products) in the system
$$ B_n(z)= \frac{\sqrt{1-|a_n|^2}}{1-\overline{a}_{n}z}\prod_{k=1}^{n-1}\frac{z-a_k}{1-\overline{a}_{k}z}, \quad n=1,2,\ldots, $$
where the points $a_n$ in the unit disc $\mathbb{D}$ are adaptively chosen in relation to the function to be decomposed. The chosen points $a_n$ do not necessarily satisfy the hyperbolic non-separability condition
$$ \sum_{k=1}^\infty (1-|a_k|)=\infty $$
usually assumed in traditional studies of the system. Under the proposed procedure, functions are decomposed into their intrinsic components of successively increasing non-negative analytic instantaneous frequencies, while fast convergence is achieved. The algorithm may be regarded as a variation and realization of the greedy algorithm.
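A direct evaluation of the system is straightforward (a sketch for illustration; the function and variable names are ours):

```python
import numpy as np

def modified_blaschke(z, a, n):
    """Evaluate B_n(z) for a (1-indexed) parameter sequence a:
    B_n(z) = sqrt(1-|a_n|^2)/(1 - conj(a_n) z)
             * prod_{k=1}^{n-1} (z - a_k)/(1 - conj(a_k) z)."""
    an = a[n - 1]
    val = np.sqrt(1.0 - abs(an) ** 2) / (1.0 - np.conj(an) * z)
    for k in range(n - 1):
        val *= (z - a[k]) / (1.0 - np.conj(a[k]) * z)
    return val

# With all a_k = 0 the system reduces to the monomials: B_n(z) = z^{n-1}.
z = 0.3 + 0.4j
val = modified_blaschke(z, [0.0] * 5, 4)
```

The adaptive choice of the points $a_n$ (maximizing the energy captured at each step) is what distinguishes the decomposition from a fixed orthonormal expansion.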

This paper introduces a proximity operator framework for studying the L1/TV image denoising model, which minimizes the sum of a data fidelity term measured in the ℓ1-norm and a total-variation regularization term. Both terms in the model are non-differentiable, which causes algorithmic difficulties for its numerical treatment. To overcome these difficulties, we formulate the total variation as a composition of a convex function (the ℓ1-norm or the ℓ2-norm) and the first-order difference operator, and then express the solution of the model in terms of the proximity operator of the composition. By developing a “chain rule” for the proximity operator of the composition, we identify the solution as a fixed point of a nonlinear mapping expressed in terms of the proximity operator of the ℓ1-norm or the ℓ2-norm, each of which is explicitly given. This formulation naturally leads to fixed-point algorithms for the numerical treatment of the model. We propose an alternative model by replacing the non-differentiable convex function in the formulation of the total variation with its differentiable Moreau envelope, and develop corresponding fixed-point algorithms for solving the new model. When partial information about the underlying image is available, we modify the model by adding an indicator function to the minimization functional and derive corresponding fixed-point algorithms. Numerical experiments are conducted to test the approximation accuracy and computational efficiency of the proposed algorithms, and we provide a comparison of our approach with two state-of-the-art algorithms from the literature. The numerical results confirm that our algorithms perform favorably in terms of PSNR values and CPU time.
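The explicitly available proximity operator of the ℓ1-norm mentioned above is the familiar soft-thresholding map, sketched here for illustration:

```python
import numpy as np

def prox_l1(x, t):
    """Proximity operator of t*||.||_1: componentwise soft
    thresholding, argmin_y  t*||y||_1 + (1/2)*||y - x||^2."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

shrunk = prox_l1(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), 1.0)
```

Fixed-point algorithms of the kind described iterate such explicit proximity maps composed with the difference operator and its adjoint.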

The focus of this paper is on the optimal error bounds of two finite difference schemes for solving the d-dimensional (d = 2, 3) nonlinear Klein-Gordon-Schrödinger (KGS) equations. The proposed finite difference schemes not only conserve the mass and energy at the discrete level but are also efficient in practical computations, because only two linear systems need to be solved at each time step. Besides the standard energy method, an induction argument as well as a 'lifting' technique are introduced to rigorously establish optimal $H^2$-error estimates without any restrictions on the grid ratios, whereas previous works either are not fully rigorous or require certain restrictions on the grid ratios. The convergence rates of the proposed schemes are proved to be of order $O(h^2+\tau^2)$ in the discrete $H^2$-norm, with mesh size h and time step τ. The analysis can be directly extended to other linear finite difference schemes for solving the KGS equations in high dimensions. Numerical results are reported to confirm the theoretical analysis for the proposed finite difference schemes.

This paper addresses the numerical computation of generalized Mittag–Leffler functions with two parameters, with applications in fractional calculus. The inversion of their Laplace transform is an effective tool in this direction; however, the choice of the integration contour is crucial. Here, parabolic contours are investigated and combined with quadrature rules for the numerical integration. An in-depth error analysis is carried out to select suitable contour parameters, depending on the parameters of the Mittag–Leffler function, in order to achieve any fixed accuracy. We present numerical experiments to validate the theoretical results and discuss some computational issues.
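The contour-integral approach can be sketched as follows: evaluate the standard integral representation of $E_{\alpha,\beta}(z)$ by trapezoidal quadrature on a parabolic contour. The contour parameters below are illustrative fixed defaults, not the tuned, accuracy-driven choices analyzed in the paper:

```python
import numpy as np

def mittag_leffler(alpha, beta, z, mu=4.0, L=6.0, N=601):
    """Sketch: E_{alpha,beta}(z) via the integral
        (1/(2*pi*i)) * \int e^s * s^(alpha-beta) / (s^alpha - z) ds
    on the parabolic contour s(u) = mu*(1 + i*u)^2, u in [-L, L],
    discretized with N trapezoidal nodes.  Valid when the
    singularities with s^alpha = z lie left of the contour vertex mu."""
    u = np.linspace(-L, L, N)
    s = mu * (1.0 + 1j * u) ** 2
    ds = 2j * mu * (1.0 + 1j * u)            # ds/du
    f = np.exp(s) * s ** (alpha - beta) / (s ** alpha - z) * ds
    h = u[1] - u[0]
    # Endpoint contributions are negligible because e^s decays fast.
    return (h * f.sum() / (2j * np.pi)).real

# Sanity check against a known special case: E_{1,1}(z) = exp(z).
val = mittag_leffler(1.0, 1.0, 1.0)
```

The rapid decay of $e^s$ along the parabola makes the trapezoidal rule converge very quickly, which is why the careful contour-parameter selection studied in the paper pays off.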