Stochastic Models
In the study of stochastic models, our work focuses on setting up a unified theoretical framework through the RG-factorizations of, for example, Markov processes, Markov reward processes, Markov decision processes, stochastic games, evolutionary games and so forth. See Li's 2010 book: Constructive Computation in Stochastic Models with Applications: The RG-Factorizations, Springer.
The censoring technique
The censored Markov chain, also called the watched Markov chain, was first considered by Lévy [1951, 1952, 1958]. Since then, censored Markov chains have been very useful in the study of Markov chains. Kemeny, Snell and Knapp [1976] applied the censoring technique to show that each recurrent Markov chain has a positive regular measure, unique up to multiplication by a scalar. Freedman [1983] used the censoring technique to approximate the limiting behavior of countable Markov chains.
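As a brief illustration of the technique (a sketch in standard notation, not taken from the cited works): partition the transition matrix P of a discrete-time Markov chain according to a subset E of the state space and its complement E^c,

\[
P=\begin{pmatrix} P_{E,E} & P_{E,E^{c}} \\ P_{E^{c},E} & P_{E^{c},E^{c}} \end{pmatrix}.
\]

The chain censored on E, i.e. observed only at the epochs when it visits E, is again a Markov chain, with transition matrix

\[
P^{(E)}=P_{E,E}+P_{E,E^{c}}\sum_{n=0}^{\infty}\bigl(P_{E^{c},E^{c}}\bigr)^{n}P_{E^{c},E},
\]

where the series is the minimal nonnegative inverse of I-P_{E^{c},E^{c}}.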
RG-factorizations
Using the censoring technique, we systematically developed two types of RG-factorizations. These RG-factorizations are very useful in the study of QBD processes, Markov processes, Markov renewal processes, Markov reward processes, Markov decision processes, stochastic game theory and their practical applications. Our recent work indicates that the two types of RG-factorizations play different, important roles in the study of stochastic models.
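To indicate the general shape (a sketch in a common notation, which may differ in detail from the book's): for an irreducible discrete-time block-structured Markov chain whose transition matrix P is partitioned by levels, the UL-type RG-factorization reads

\[
I-P=(I-R_{U})(I-\Phi_{D})(I-G_{L}),
\]

where R_{U} is strictly upper block-triangular and collects the R-measures, G_{L} is strictly lower block-triangular and collects the G-measures, and \Phi_{D} is block-diagonal. The LU-type factorization is the analogous product with the roles of the upper- and lower-triangular factors interchanged. Such factorizations allow equations like \pi(I-P)=0 to be solved level by level.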
QBD processes
QBD processes with either finitely many or infinitely many levels have provided a useful mathematical tool in the study of stochastic models, such as queueing systems, manufacturing systems and computer networks. Chapter 3 of Neuts [1981] gave a complete picture of level-independent QBD processes. Li's 2010 book provides a detailed analysis of level-dependent QBD processes and various useful generalizations.
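As a minimal sketch of the level-independent case (standard matrix-geometric results, in generic notation): a continuous-time QBD process has a block-tridiagonal generator

\[
Q=\begin{pmatrix}
B_{1} & B_{0} &        &        \\
B_{2} & A_{1} & A_{0}  &        \\
      & A_{2} & A_{1}  & A_{0}  \\
      &       & \ddots & \ddots
\end{pmatrix},
\]

and, when positive recurrent, its stationary distribution is matrix-geometric: \pi_{k}=\pi_{1}R^{k-1} for k\ge 1, where R is the minimal nonnegative solution of A_{0}+RA_{1}+R^{2}A_{2}=0. In the level-dependent case treated in Li's book, the single matrix R is replaced by a sequence of level-dependent R-measures.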
Block-structured Markov renewal processes
The block-structured Markov renewal process is a generalization of the Markov renewal process. It is analyzed in detail in Li and Zhao [2004] and in Chapter 7 of Li's 2010 book, Constructive Computation in Stochastic Models with Applications: The RG-Factorizations.
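To recall the underlying object (generic notation): a Markov renewal process \{(X_{n},T_{n})\} is specified by a semi-Markov kernel with entries

\[
Q_{i,j}(t)=\Pr\{X_{n+1}=j,\ T_{n+1}-T_{n}\le t\mid X_{n}=i\}.
\]

In the block-structured case the state space is arranged into levels, so that the kernel Q(t) is partitioned into blocks indexed by pairs of levels, and the RG-factorizations are carried out block by block.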
Markov reward processes
Markov reward processes can accurately model practical systems that evolve stochastically over time. A Markov reward process consists of two elements: a Markov environment and an associated reward structure. Chapter 11 of Li's book (Constructive Computation in Stochastic Models with Applications: The RG-Factorizations) provides an excellent survey of this research direction.
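As a minimal illustration (generic notation, not the book's): take a positive recurrent continuous-time Markov chain \{X(t)\} with stationary distribution \pi as the environment, and a reward rate f(i) earned per unit time in state i. The accumulated reward over [0,T] is

\[
\Phi(T)=\int_{0}^{T}f\bigl(X(t)\bigr)\,dt,
\]

and the long-run average reward is \lim_{T\to\infty}\Phi(T)/T=\sum_{i}\pi_{i}f(i) almost surely; transient quantities such as the mean and distribution of \Phi(T) require a more detailed analysis.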