contour extends up to the line of sight of 45° due to the presence of the compact disk and the massive disk, respectively. On the other hand, the density structure is flatter in Model 2. Thus, the NIR continuum emission is emitted through the high-temperature region (mostly the red region in the temperature plot) and encounters more material along the line of sight to an observer in Models 1 and 3. Consequently, the emission from the region close to the midplane of the disk will not contribute to the NIR continuum. Figure 13 presents an example of the radial contribution to the column density along a line of sight from the star. At viewing angles <45°, most of the warm material is located at 10−40 AU from the star along the line of sight, while most of the cold material is located at >50 AU. The inner disk and high

The existence of periodic and almost periodic solutions of differential equations has an important theoretical and practical significance and is a problem of great interest. The existence of such solutions for ordinary as well as abstract differential equations has been intensively studied [–]. Such dynamics can be found in celestial mechanics, electronic circuits, problems of ecology, and many other physical and biological systems. The parameters of such nonautonomous models are usually assumed to be periodic with respect to time due to a periodically time-fluctuating environment. For example, in epidemiology, the periodic aspect comes from periodic seasonal effects. Even if the parameters of the system are periodic in time, the overall time dependence may not be periodic; i.e., if the quotient of the periods of these functions is not rational, the overall time dependence will not be periodic but almost periodic in the sense of Bohr.
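
A standard textbook example (not drawn from the cited works) makes the quotient-of-periods remark concrete:

```latex
f(t) = \sin t + \sin\!\left(\sqrt{2}\, t\right)
```

Here $\sin t$ has period $2\pi$ and $\sin(\sqrt{2}\,t)$ has period $\sqrt{2}\,\pi$; their quotient is $\sqrt{2}$, which is irrational, so $f$ is not periodic, yet it is almost periodic in the sense of Bohr.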

Several architectures have been proposed for reducing the memory footprint required by acoustic models. Vector Quantization (VQ) was introduced 25 years ago [1, 2], initially in the field of information encoding for network traffic reduction. VQ is a very low-level approach. Our focus in this paper is on modifications to the modelling scheme to achieve memory footprint reduction. Moreover, VQ could be combined with the proposed modelling approach without any problem. In [3] a subspace distribution clustering method was proposed. It consists of splitting the acoustic space into streams where the distributions may be efficiently clustered and tied. This method has been developed within several contexts, demonstrating a very good tradeoff between storage cost and model accuracy. Most recent ASR systems rely on Gaussian or state sharing, where parameter tying reduces computational time and the memory footprint, whilst providing an efficient way of estimating large context-dependent models [4–6]. In [7] a method of full Gaussian tying was proposed. It introduced semi-continuous HMMs for LVCSR tasks. In this architecture, all Gaussian components are grouped in a common codebook, state-dependent models being obtained by Maximum Likelihood Estimation (MLE) based selection and weighting of the dictionary
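
The VQ idea can be sketched with toy data (the feature vectors, codebook size, and deterministic initialisation below are all illustrative, not from [1, 2]): a k-means codebook is learned, and each vector is then stored as the integer index of its nearest centroid, so only the small codebook plus the indices need to be kept.

```python
# Toy vector quantization: learn a small k-means codebook, then replace each
# feature vector by the index of its nearest centroid.
def kmeans(vectors, k, iters=20):
    centroids = list(vectors[:k])          # deterministic init, for illustration
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:                  # assign each vector to nearest centroid
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, centroids[i])))
            clusters[j].append(v)
        for i, c in enumerate(clusters):   # recompute centroids as cluster means
            if c:
                centroids[i] = tuple(sum(x) / len(c) for x in zip(*c))
    return centroids

def quantize(vectors, codebook):
    return [min(range(len(codebook)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
            for v in vectors]

# hypothetical 2-D "acoustic" features forming two clusters
data = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (5.2, 4.9), (0.2, 0.0), (4.8, 5.0)]
codebook = kmeans(data, k=2)
indices = quantize(data, codebook)         # only these ints need storing
```

The memory saving comes from replacing each d-dimensional float vector by one small integer; the price is quantization error, which the subspace and tying methods cited above trade off in more refined ways.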

This paper presents a high-order approximation scheme based on compact integrated radial basis function (RBF) stencils and second-order Adams-Bashforth/Crank-Nicolson algorithms for solving time-dependent problems. We employ compact integrated-RBF stencils, where the RBF approximations are locally constructed through integration and expressed in terms of nodal values of the function and its derivatives, to discretise the spatial derivatives in the governing equations. We adopt the Adams-Bashforth and Crank-Nicolson algorithms, which are second-order accurate, to discretise the temporal derivatives. Numerical investigations in several analytic test problems show that the proposed scheme is stable and high-order accurate.
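
As a hedged illustration of the temporal part only (a model scalar ODE, not the paper's RBF-discretised PDEs), Crank-Nicolson applied to y' = -y exhibits the expected second-order error decay: halving the step size divides the error by about four.

```python
# Crank-Nicolson (trapezoidal) time stepping for the model problem
# y' = lam * y:  (y_{n+1} - y_n) / h = lam * (y_{n+1} + y_n) / 2
import math

def crank_nicolson(lam, y0, h, steps):
    y = y0
    for _ in range(steps):
        y = y * (1 + h * lam / 2) / (1 - h * lam / 2)
    return y

T, lam = 1.0, -1.0
exact = math.exp(lam * T)
err_h  = abs(crank_nicolson(lam, 1.0, 0.01, 100) - exact)
err_h2 = abs(crank_nicolson(lam, 1.0, 0.005, 200) - exact)
ratio = err_h / err_h2   # about 4 for a second-order scheme
```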

This section reviews the theoretical framework of a DisCoCat. Its structure is as follows: in Section 3.1, we review the distributional semantic models. We show how the motivating ideas of these models are formalized in terms of vector representations and describe some theoretical and experimental parameters of the model and some of the major applications thereof. In Section 3.2, we review the grammatical model that was first used as a basis for compositional distributional models, namely the pregroup grammars of Lambek. We review the theory of pregroup algebras and exemplify its applications to reasoning about grammatical structures in natural language. In Section 3.3, we show how a functorial passage can be developed between a pregroup grammar, seen as a compact closed category, and the category of finite-dimensional vector spaces and linear maps. We describe how this passage allows one to assign compositional vector semantics to words and sentences of language. This passage is similar to the one used in TQFT, where the grammatical part is replaced by the category of manifolds and cobordisms. Section 3.4 describes the theory of Frobenius algebras and bialgebras over compact closed categories. In Section 3.5, we show how these algebras can model the meanings of relative and quantified clauses and sentences. In Section 3.6, we go through the graphical calculus of compact closed categories and of Frobenius algebras and bialgebras over them. We exemplify how these are used in linguistics, where they depict flows of information between the words of a sentence.

Predicting the future value of a given variable is based on numerous statistical and machine learning models. These models range from those predicting a single variable to those predicting multiple variables, and from simple linear models to complex neural networks. These models and methods start with a fundamental mathematical relation between single or multiple independent variables and the dependent variable. Linear regression is a machine learning algorithm based on supervised learning. It performs a regression task. Regression models a target prediction value based on independent variables. It is mostly used for finding out the relationship between variables and for forecasting. Different regression models differ based on the kind of relationship between the dependent and independent variables they consider, and the number of independent variables being used. Linear regression models are more suitable for parameters which are linearly dependent on each other. The current research aims at finding the time to failure as accurately as possible; hence, the time to failure is initially computed from all the sensor values for all engines, and in the current problem the time to failure is treated as the dependent variable.
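
A minimal sketch of the idea (synthetic numbers, not the engine dataset): ordinary least squares fitting time to failure against a single sensor reading, using the closed-form slope and intercept formulas.

```python
# Simple linear regression: ttf ~ slope * sensor + intercept, fitted by
# ordinary least squares on one independent variable.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# synthetic readings: time to failure falls linearly as the sensor value rises
sensor = [1.0, 2.0, 3.0, 4.0, 5.0]
ttf    = [90.0, 70.0, 50.0, 30.0, 10.0]   # exactly ttf = 110 - 20 * sensor
slope, intercept = fit_line(sensor, ttf)
predict = lambda x: slope * x + intercept
```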

axisymmetry condition is assumed in the finite element analysis. Parametric studies have been carried out to find out how wide the model should be for the outer boundary to have a negligible influence on the results. An extent of 3 m from the symmetry axis in the horizontal direction was found to be sufficient. The depth of the model is selected to be the same as the length of the pile (i.e., 6 m) in order to avoid modeling the pile tip, which can cause numerical instabilities. Roller boundaries are applied to all sides in order to allow the soil to move freely due to cavity expansion. Drained conditions and zero initial pore pressures are assumed above the water table. Also, a drainage boundary is considered at the ground level, and dynamic effects are ignored in the numerical model. A finite element mesh with 4048 15-noded triangular elements, resulting in 33021 nodes, is used in the analyses. Each element has pore water pressure degrees of freedom at corner nodes. Mesh sensitivity studies have been carried out to ensure that the mesh is dense enough to produce accurate results for both constitutive models. Towards the cavity wall, much finer elements are used in order to provide better resolution in this zone with expected high strain gradients. The problem is modeled using large-strain analyses with updated pore pressures, taking advantage of the updated Lagrangian formulation in PLAXIS.

This paper deals with a volume-flexible manufacturing inventory model for deteriorating items with multiple items and machine breakdown. The demand rate is taken as selling-price dependent. Usually, a reduced price encourages a customer to buy more. The production rate is taken as a decision variable. Shortages are allowed with partial backlogging. This model has been solved numerically with sensitivity analysis. From the analysis of this model, it has been observed that (i) the cost of idle time of management units is inversely proportional to the production rate and the profit; (ii) the mean time between successive breakdowns is inversely proportional to the production rate and the profit; (iii) the mean time to repair has the reverse effect on the production rate and the profit. This model is much more realistic and

conjugate of the Shatsky oceanic plateau [Liu et al., 2010]. We could not match observations of differential vertical motions and flooding without the introduction of a shallow-dipping slab, with a dip of about 10°. As North America moves westward in the mantle reference frame from the Late Cretaceous to the present, and the Farallon slab sinks into the lower mantle, the dynamic topography low migrates eastward in the plate frame, resulting in overall Cenozoic subsidence of eastern North America and dynamic uplift in the west (Fig. 6.5A). The Cenozoic subsidence does not create land subsidence, as it is contemporaneous with an overall sea-level fall. The proposed dynamic subsidence may help explain discrepancies between most global sea-level curves and those derived exclusively from the New Jersey coastal margin [Spasojevic et al., 2008]. The timing of the Laramide orogeny is currently debated, but it was probably initiated in the Late Cretaceous. The timing of the termination of the Laramide orogeny is also debated, with the proposed end varying from 35 to 50 Ma [Bird, 1998] to recent times [Bird, 1998; Liu et al., 2010]. We find that there is a continuous dynamic contribution to western US uplift since the end of the Late Cretaceous (Fig. 6.5A), resulting from the movement of the Farallon slab away from this region, similar to the findings of adjoint geodynamic models [Spasojevic et al., 2009].

the use of continuous-time long-range dependent processes has become a common feature of many applications, especially in econometrics and finance (see Baillie and King 1996; Comte and Renault 1996, 1998). This is probably due to the following two reasons. The first is that the class of continuous-time stochastic processes most commonly employed in finance can be extended to encompass long-range dependent models, which have already been used to model real financial data (see Comte and Renault 1998, p. 311). Existing studies show not only that this extension is possible, but also that it is the natural one in order to get variations (of prices or rates) which have an instantaneous variance of order less than two (but not necessarily integer). The usual short-range dependence case (diffusion processes) corresponds to order one. This property is fundamental in modern continuous-time finance theory (see Merton 1990, Chapter 1, for example) and corresponds to some kind of 'instantaneous unpredictability' of asset prices in the sense of Sims (1984). The second reason is more statistical. Since existing studies (see Ding, Granger and Engle 1993; Ding and Granger 1996) already suggest that some financial series (the Standard & Poor's (S&P) 500 stock market daily closing price index, for example) display some kind of LRD property, existing results for short-range dependent processes are therefore not applicable to the LRD case.

The DDMSVAR software has been demonstrated to work well, even though I must recognize that it is far from being fully optimized: there is too much looping in the code for an interpreted, although very efficient, language such as Ox. Future versions will be more efficient. The Gibbs sampling approach has many advantages but also a big disadvantage: the former are that (i) it allows prior information to be exploited, (ii) it avoids the computational problems pointed out by Hamilton (1994) that can arise with maximum likelihood estimation, (iii) it does not rely on asymptotic inference (see note 1), (iv) the inference on the state variables is not conditional on the set of estimated parameters. The big disadvantage is a long computation time: the 21000 Gibbs sampler iterations generated for the last section's results took more than 13 hours.

The aim of the tick library is to provide for the Python community a large set of tools for statistical learning, previously not available in any framework. Though tick focuses on time-dependent modeling, it actually introduces a set of tools that go way beyond this particular set of models, thanks to a highly modular optimization toolbox. It benefits from thorough documentation (including tutorials with many examples), and a strongly tested Python API that brings to the scientific community cutting-edge algorithms with a high level of customization. Optimization algorithms such as SVRG (Johnson and Zhang, 2013) or SDCA (Shalev-Shwartz and Zhang, 2013) are among the several optimization algorithms available in tick that can be applied (in a modular way) to a large variety of models. An emphasis is placed on time-dependent models: from the Cox regression model (Andersen et al., 2012), a very popular model in survival analysis, to Hawkes processes, used in a wide range of applications such as geophysics (Ogata, 1988), finance (Bacry et al., 2015) and more recently social networks (Zhou et al., 2013; Xu et al., 2016). To the best of our knowledge, tick is the most comprehensive library that deals with Hawkes processes, since it brings parametric and nonparametric estimators of these models to a new accessibility level.
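
To make the Hawkes side concrete (a generic textbook form, not tick's actual API, and with made-up parameter values), the conditional intensity of a univariate Hawkes process with an exponential kernel is lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)):

```python
# Conditional intensity of a univariate Hawkes process with exponential
# kernel: each past event temporarily raises the event rate above mu.
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

events = [1.0, 2.0, 2.5]                      # illustrative event times
base = hawkes_intensity(0.5, events, mu=0.3, alpha=0.8, beta=1.5)   # no events yet
excited = hawkes_intensity(2.6, events, mu=0.3, alpha=0.8, beta=1.5)
```

The self-excitation visible here (the intensity just after a burst of events far exceeds the baseline) is what makes Hawkes processes suitable for earthquakes, order flow, and social-network cascades.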

Comparisons of baseline variables between groups were analyzed using chi-squared tests, Fisher's exact tests, one-way ANOVAs, and Kruskal-Wallis tests, as appropriate. We assessed the association between macrolide and fluoroquinolone exposure and cardiac events using an extended Cox proportional hazards model with time-varying covariates for exposure to these antibiotics. Antibiotic exposure was modelled as present from the starting date of the antibiotic prescription until the end of admission, as previously mentioned. This ensured that cardiac events occurring before an antibiotic was started could not be attributed to that antibiotic. Crude models included all six antibiotics (azithromycin, clarithromycin, erythromycin, ciprofloxacin, levofloxacin, and moxifloxacin) as time-dependent covariates and the different cardiac events (any cardiac event, heart failure, and arrhythmia) as outcomes. The calculated hazard ratios for time-dependent exposure to macrolides or quinolones are in comparison to patients who did not receive macrolide or fluoroquinolone antibiotics at any time during admission, i.e., who received beta-lactam monotherapy. Hospital discharge, transfer, or death during admission led to right censoring in the analysis.
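
The exposure modelling described above can be sketched as the counting-process data layout that time-varying Cox models consume (a hypothetical day-resolution helper, not the authors' code): each admission is split at the antibiotic start date, so the exposure covariate is 0 before the prescription and 1 afterwards.

```python
# Split one admission into (start, stop, exposed) intervals for a Cox model
# with a time-varying antibiotic-exposure covariate. Events that occur in the
# first interval cannot be attributed to the antibiotic, matching the text.
def split_exposure(admission_end, abx_start):
    """Return (start, stop, exposed) rows for one patient (times in days)."""
    if abx_start is None or abx_start >= admission_end:
        return [(0, admission_end, 0)]     # never exposed during admission
    if abx_start == 0:
        return [(0, admission_end, 1)]     # exposed from admission onwards
    return [(0, abx_start, 0), (abx_start, admission_end, 1)]

rows = split_exposure(admission_end=10, abx_start=3)
```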

The Boundary Element Method (BEM) can be used to predict the scattering of sound in rooms. It reduces the problem of modelling the volume of air to one involving only the surfaces; hence the number of unknowns scales more favourably with problem size and frequency than it does for volumetric methods such as FEM and FDTD. The time-domain BEM predicts the transient scattering of sound, and is usually solved in an iterative manner by marching on in time from known initial conditions.

A review of the literature was undertaken in the domain of energy forecasting and optimization. The meta-analysis carried out by Suganthi and Samuel [6] highlights the energy planning models used, which include time series, regression, and econometric models. Furthermore, autoregressive integrated moving average (ARIMA) models have also been used for load forecasting where variations are hourly, daily, and monthly. In addition, soft computing tools have been used, namely particle swarm optimization (PSO), grey prediction, unit root tests and co-integration models, and support vector regression. Hybrid models, such as artificial neural networks and neuro-fuzzy models, have also been used.
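
As a hedged sketch of the autoregressive core of such forecasters (a synthetic, exactly AR(1) series rather than real load data): fit the AR(1) coefficient by least squares through the origin, then roll the model forward for multi-step forecasts.

```python
# AR(1) load forecasting sketch: y_t = phi * y_{t-1} + noise.
def fit_ar1(series):
    x = series[:-1]                        # lagged values
    y = series[1:]                         # current values
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def forecast(last, phi, steps):
    out = []
    for _ in range(steps):                 # iterate the fitted recursion
        last = phi * last
        out.append(last)
    return out

series = [8.0, 4.0, 2.0, 1.0, 0.5]         # exact AR(1) with phi = 0.5
phi = fit_ar1(series)
preds = forecast(series[-1], phi, steps=3)
```

A full ARIMA model adds differencing (the "I") for trends and a moving-average part for correlated noise; the recursion above is only the AR building block.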

also constant; hence the model is anisotropic throughout the evolution of the universe except at m = 1, i.e., the model does not approach isotropy. In Figure 2, the plot of energy density versus time is given, which indicates that the model starts with infinite density and, as time increases, the energy density tends to a finite value. Hence, after some finite time, the model approaches a steady state. In Figure 3, the plot of the deceleration parameter versus time is given, from which we conclude that the model is decelerating at an initial phase and changes from decelerating to accelerating. Hence the model is consistent with the recent cosmological observations (Perlmutter et al. [1]-[3], Riess et al. [4] [5], Schmidt et al. [101], Garnavich et al. [102]). Thus, our DE model is consistent with the results of recent observations.

Similar to our findings, some other researchers reported that the sorption/desorption of phenanthrene and heavy metals (such as lead, arsenic, and cadmium) does not follow a pseudo-first-order model over the entire range of contact time [11, 39]. Over a long contact time, the contaminants may first physically adsorb onto the surface of the soil particles and organic matter, but then some chemical bonds may form between the contaminants and soil organic matter over time. For desorption, contaminants on the surface of the particles may be released faster, while a longer time is needed to remove the contaminants held in stronger and/or deeper chemical bonds within the soil minerals and organic matter. This may affect the kinetic modeling of multiple contaminants, such as PAHs, over the entire contact time. That is why some previous researchers reported two-phase desorption kinetics [37, 40]. The majority of Pb probably exists in available forms (exchangeable and carbonate fractions) [41]. For the first hours, the Pb desorption gradient is sharper, reflecting the high stability of the Pb-EDTA complex (log k = 17.9) and greater Pb availability. We observed that although the Ni-EDTA complex had a higher stability than Zn-EDTA,
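
For reference, the pseudo-first-order model the data deviate from has the form q(t) = q_e * (1 - exp(-k1 * t)); the capacity and rate below are hypothetical, purely to show why a single k1 cannot also capture a slow, chemically bound second phase.

```python
# Pseudo-first-order kinetics: fast early uptake, then the curve flattens.
import math

def pseudo_first_order(t, qe, k1):
    return qe * (1.0 - math.exp(-k1 * t))

qe, k1 = 50.0, 0.2                        # hypothetical capacity (mg/g), rate (1/h)
early = pseudo_first_order(1.0, qe, k1) - pseudo_first_order(0.0, qe, k1)
late  = pseudo_first_order(25.0, qe, k1) - pseudo_first_order(24.0, qe, k1)
```

With one rate constant, the hourly change collapses almost to zero at long times, so any persistent slow release (the chemically bound fraction described above) shows up as a systematic misfit, motivating two-phase models.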

Certain methods for Markov and semi-Markov modeling have been reformulated and adapted for the time-dependent availability evaluation of the components in subsystems of nuclear power plants. Some more mathematics on semi-Markov processes has been presented. A subsequent development of the models proposed will be focused on effective ways to assemble the (sub)system transition matrices and to evaluate its availability along an operational cycle. In fact, we have continued our research of [6], [7] and [8], giving more details on the mathematical nature of semi-Markov systems. We have also discussed possibilities to consider systems with a larger variety of possible states. Certainly, the availability evaluation for such systems would imply rather sophisticated and complex mathematical (stochastic) models. The semi-Markov models provide the important advantage that they take into account the sojourn times in the possible states, while the renewal models assume that any failing component is immediately replaced by a new one. It still remains to go further with the application of semi-Markov models and interval reliability in the stochastic evaluation of the availability of subsystems and systems.
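
A hedged sketch of the simplest special case (a plain two-state Markov model with exponential sojourn times, not the paper's semi-Markov formulation, and with made-up rates): the time-dependent availability of one repairable component with failure rate lam and repair rate mu is A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu)*t).

```python
# Time-dependent availability of a two-state (up/down) repairable component,
# starting in the up state: decays from 1 to the steady-state mu/(lam+mu).
import math

def availability(t, lam, mu):
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

lam, mu = 0.01, 0.5                        # hypothetical per-hour rates
a0 = availability(0.0, lam, mu)            # 1.0: assumed up at t = 0
a_inf = availability(1e6, lam, mu)         # steady-state mu / (lam + mu)
```

Semi-Markov models generalise this by allowing non-exponential sojourn-time distributions in each state, which is exactly the advantage over renewal models noted above.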

Though some SFVE methods for the steady Stokes equations, time-dependent Navier-Stokes equations, time-dependent parabolized Navier-Stokes equations, and time-dependent incompressible Boussinesq equations (see [–]) have been set up, these four types of equations are thoroughly different from the D nonlinear incompressible viscoelastic flow equation here, which includes the unsymmetrical stress matrix complexly coupled with the flow velocity. Thus, the study of the existence and convergence of the SFVE solutions of Problem I is confronted with more difficulties, needs more technique, and poses greater challenges than the existing methods aforesaid. However, Problem I holds certain particular applications. Hence, in this article, we first review the weak and the time semi-discretized solutions of the D nonlinear incompressible viscoelastic flow equation in Section . We then build the SFVE method with a non-dimensional real and two Gaussian quadratures of the D nonlinear incompressible viscoelastic flow equation and analyze the existence, stability, and error estimates of the SFVE solutions by means of the SMFE method in Section . Next, we employ some numerical experiments to validate the preceding theoretical conclusions in Section . Finally, we draw some conclusions in Section .

We have developed a methodology based on analyzing the force of mortality of one life. Our paper sheds new light on theoretical properties of copulas. Besides, it answers the important question of which copula models may be suitable for modelling the dependence that arises in practice in the insurance of couples. Some of the copulas discussed in this contribution have not been widely studied before, and this has led to some interesting findings. Our approach helps the actuary to choose an appropriate copula and provides a framework for the calculation of provisions of contracts on two lives.
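
As an illustration of the mechanism (a standard Clayton copula with hypothetical marginals and parameter, not the paper's fitted model): a copula C couples the two spouses' marginal survival probabilities into a joint probability, C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), which exceeds the independence product u*v when theta > 0 (positively dependent lifetimes).

```python
# Clayton copula: joint probability from two marginal probabilities, with
# lower-tail dependence (joint early deaths are more likely than under
# independence) for theta > 0.
def clayton(u, v, theta):
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

u, v = 0.9, 0.8                            # marginal survival probabilities
joint = clayton(u, v, theta=2.0)           # joint survival under dependence
independent = u * v                        # joint survival if independent
```

The gap between `joint` and `independent` is exactly what matters when reserving for joint-life contracts: assuming independence understates the probability that both lives survive (or both fail) together.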
