20th European Young Statisticians Meeting
From Monday, August 14, 2017 (9:00 AM) to Friday, August 18, 2017 (1:00 PM)
Monday, August 14, 2017
9:00 AM - 9:30 AM  Registration
Room: Ångströmslaboratoriet
9:30 AM - 10:30 AM  Introduction
Room: Ångströmslaboratoriet
10:30 AM - 11:00 AM  Coffee
Room: Ångströmslaboratoriet
11:00 AM - 12:00 PM  Invited speaker - Sequential Monte Carlo: basic principles and algorithmic inference
Jimmy Olsson (Royal Institute of Technology)
Room: Ångströmslaboratoriet
Sequential Monte Carlo methods form a class of genetic-type algorithms sampling, on the fly and in a very general context, sequences of probability measures. Today these methods constitute a standard device in the statistician's toolbox and are successfully applied within a wide range of scientific and engineering disciplines. This talk is split into two parts: the first provides an introduction to the SMC methodology, and the second discusses some novel results concerning stochastic stability and variance estimation in SMC.
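The abstract outlines the basic SMC recipe: propagate particles through the model dynamics, weight them by the observation likelihood, and resample. As a purely editorial illustration (not code from the talk; the toy linear-Gaussian state-space model and all parameter values are invented for this sketch), a minimal bootstrap particle filter:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(y, n_particles=500, sigma_x=1.0, sigma_y=1.0):
    """Bootstrap particle filter for a toy random-walk model:
    X_t = X_{t-1} + N(0, sigma_x^2),  Y_t = X_t + N(0, sigma_y^2)."""
    T = len(y)
    x = rng.normal(0.0, sigma_x, n_particles)   # initial particle cloud
    means = np.empty(T)
    for t in range(T):
        # Propagate through the transition kernel ("mutation").
        x = x + rng.normal(0.0, sigma_x, n_particles)
        # Weight by the observation likelihood (log-scale for stability).
        logw = -0.5 * ((y[t] - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)                # filtering mean estimate
        # Multinomial resampling ("selection").
        x = rng.choice(x, size=n_particles, p=w)
    return means

# Simulate a trajectory and run the filter on its noisy observations.
T = 50
x_true = np.cumsum(rng.normal(0, 1.0, T))
y = x_true + rng.normal(0, 1.0, T)
est = bootstrap_filter(y)
print(np.mean(np.abs(est - x_true)))  # average tracking error
```

The resampling step is what distinguishes SMC from plain importance sampling: it prunes low-weight particles so the cloud keeps tracking the sequence of target measures.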
12:00 PM - 1:00 PM  Lunch
Room: Ångströmslaboratoriet
1:00 PM - 2:00 PM  Invited speaker - Sequential Monte Carlo: basic principles and algorithmic inference (continued)
Jimmy Olsson (Royal Institute of Technology)
Room: Ångströmslaboratoriet
2:00 PM - 2:30 PM  Can humans be replaced by computers in taxa recognition?
Johanna Ärje (University of Jyväskylä, Department of Mathematics and Statistics)
Room: Ångströmslaboratoriet
Biomonitoring of waterbodies is vital as the number of anthropogenic stressors on aquatic ecosystems keeps growing. However, the continuous decrease in funding makes it impossible to meet monitoring goals or sustain traditional manual sample processing. We review what kinds of statistical tools can be used to enhance the cost efficiency of biomonitoring: we explore automated identification of freshwater macroinvertebrates, which are used as one indicator group in biomonitoring of aquatic ecosystems. We present the first classification results of a new imaging system producing multiple images per specimen, and compare these results with those of human experts. On a data set of 29 taxonomical groups, automated classification produces a higher average accuracy than human experts.
2:30 PM - 3:00 PM  Nonparametric estimation of gradual change points in the jump behaviour of an Ito semimartingale
Michael Hoffmann (Ruhr-Universität Bochum)
Room: Ångströmslaboratoriet
In applications the properties of a stochastic feature often change gradually rather than abruptly; that is, after remaining constant for some time they slowly start to vary. The goal of this talk is to introduce an estimator for the location of a gradual change point in the jump characteristic of a discretely observed Ito semimartingale. To this end we propose a measure of time variation for the jump behaviour of the process, and consistency of the desired estimator is a consequence of weak convergence of a suitable empirical process in some function space. Finally, we discuss simulation results which verify that the new estimator has advantages compared to the classical argmax-estimator.
3:00 PM - 3:30 PM  Coffee
Room: Ångströmslaboratoriet
3:30 PM - 4:00 PM  Matrix Independent Component Analysis
Joni Virta (University of Turku)
Room: Ångströmslaboratoriet
Independent component analysis (ICA) is a popular means of dimension reduction for vector-valued random variables. In this short note we review its extension to arbitrary tensor-valued random variables by considering the special case of two dimensions where the tensors are simply matrices.
4:00 PM - 4:30 PM  AIC post-selection inference in linear regression
Ali Charkhi (KU Leuven)
Room: Ångströmslaboratoriet
Post-selection inference has been considered a crucial topic in data analysis. In this article, we develop a new method to obtain correct inference after model selection with Akaike's information criterion (Akaike, 1973) in linear regression models. Confidence intervals are calculated by incorporating the randomness of the model selection into the distribution of the parameter estimators, which act as pivotal quantities. Simulation results show the accuracy of the proposed method.
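For readers unfamiliar with the selection step the abstract builds on, the following sketch shows plain best-subset selection by AIC on simulated data. It illustrates only the selection itself, not the paper's corrected post-selection confidence intervals; the data and all names are invented for this example:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, p = 200, 4
X = rng.normal(size=(n, p))
beta = np.array([2.0, 0.0, 1.0, 0.0])   # true sparse coefficients
y = X @ beta + rng.normal(size=n)

def aic(X_sub, y):
    """AIC of a Gaussian linear model fitted by least squares."""
    n = len(y)
    beta_hat, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    rss = np.sum((y - X_sub @ beta_hat) ** 2)
    return n * np.log(rss / n) + 2 * X_sub.shape[1]

# Search all non-empty predictor subsets and keep the AIC minimiser.
best = min(
    (s for r in range(1, p + 1) for s in combinations(range(p), r)),
    key=lambda s: aic(X[:, list(s)], y),
)
print(best)   # indices of the selected predictors
```

The paper's point is that, after such a data-driven search, naive confidence intervals for the coefficients of `best` are no longer valid, which is what its corrected procedure repairs.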
4:30 PM - 5:00 PM  Multilevel Functional Principal Component Analysis for Unbalanced Data
Zuzana Rošťáková (Institute of Measurement Science, Slovak Academy of Sciences)
Room: Ångströmslaboratoriet
Functional principal component analysis (FPCA) is the key technique for dimensionality reduction and detection of the main directions of variability present in functional data. However, it is not the most suitable tool when the analyzed dataset contains repeated or multiple observations, because information about the repeatability of measurements is not taken into account. Multilevel functional principal component analysis (MFPCA) is a modified version of FPCA developed for data observed at multiple visits. The original MFPCA method was designed for balanced data only, where the same number of measurements is available for each subject. In this article we propose a modified MFPCA algorithm that can be applied to unbalanced functional data, that is, where a different number of observations may be present for each subject. The modified algorithm is validated and tested on real-world sleep data.
Tuesday, August 15, 2017
9:00 AM - 10:00 AM  Invited Speaker - Non-limiting spatial extremes
Jenny Wadsworth (Lancaster University)
Room: Ångströmslaboratoriet
Many questions concerning environmental risk can be phrased as spatial extreme value problems. Classical extreme value theory provides limiting models for maxima or threshold exceedances of a wide class of underlying spatial processes. These models can then be fitted to suitably defined extremes of spatial datasets and used, for example, to estimate the probability of events more extreme than we have observed to date. However, a major practical problem is that frequently the data do not appear to follow these limiting models at observable levels, and assuming otherwise leads to bias in estimation of rare event probabilities. To deal with this we require models that allow flexibility in both what the limit should be, and in the mode of convergence towards it. I will present a construction for such a model and discuss its application to some wave height data from the North Sea.
10:00 AM - 10:30 AM  Coffee
Room: Ångströmslaboratoriet
10:30 AM - 11:00 AM  Delete or Merge Regressors algorithm
Agnieszka Prochenka (Warsaw University)
Room: Ångströmslaboratoriet
This paper addresses the problem of linear and logistic model selection in the presence of both continuous and categorical predictors. Two types of algorithms dealing with this problem can be found in the literature. The first, the well-known group lasso (\cite{group}), selects a subset of continuous and a subset of categorical predictors; hence it either deletes an entire factor or keeps it. The second, CAS-ANOVA (\cite{cas}), selects a subset of continuous predictors and partitions of factors, thereby merging levels within factors. Both algorithms are based on lasso regularization. This article describes a new algorithm called DMR (Delete or Merge Regressors). Like CAS-ANOVA, it selects a subset of continuous predictors and partitions of factors. However, instead of using regularization, it is based on a stepwise procedure in which each step either deletes one continuous variable or merges two levels of a factor. The order in which consecutive hypotheses are accepted is based on sorting t-statistics for linear regression and likelihood ratio test statistics for logistic regression. The final model is chosen according to an information criterion. Some preliminary results for DMR are described in \cite{pro}. The DMR algorithm works only for data sets with $p < n$ (the number of columns in the model matrix is smaller than the number of observations). The paper introduces a modification of DMR, called DMRnet, that also works for data sets with $p \gg n$: DMRnet uses regularization in a screening step and then applies DMR after reducing the model matrix to $p < n$. The theoretical results are proofs that DMR for linear and logistic regression is a consistent model selection method even when $p$ tends to infinity with $n$; furthermore, upper bounds on the selection error are given. The practical results are based on an analysis of real data sets and simulation setups.
It is shown that DMRnet chooses smaller models with prediction error no higher than that of competing methods. Furthermore, in simulations it most often gives the highest rate of true model selection.
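As a rough editorial sketch of the stepwise idea only (continuous predictors; the factor-merging half of DMR and the logistic case are omitted, and the data are simulated purely for illustration):

```python
import numpy as np

def ols_t_stats(X, y):
    """OLS fit; returns coefficient estimates and their t-statistics."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)
    se = np.sqrt(s2 * np.diag(XtX_inv))
    return beta, beta / se

def backward_by_t(X, y):
    """At each step delete the predictor with the smallest |t|, then pick
    the model on this path that minimises AIC (a DMR-style sketch for
    continuous variables; level merging is not implemented here)."""
    n = len(y)
    active = list(range(X.shape[1]))
    path, aics = [], []
    while active:
        Xa = X[:, active]
        beta, t = ols_t_stats(Xa, y)
        rss = np.sum((y - Xa @ beta) ** 2)
        path.append(list(active))
        aics.append(n * np.log(rss / n) + 2 * len(active))
        active.pop(int(np.argmin(np.abs(t))))   # drop the weakest predictor
    return path[int(np.argmin(aics))]

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 5))
y = X @ np.array([1.5, 0.0, 0.0, 2.0, 0.0]) + rng.normal(size=300)
print(backward_by_t(X, y))   # should retain predictors 0 and 3
```

Note that, unlike the lasso-based competitors discussed above, this path is built by hypothesis ordering rather than regularization, which is the distinguishing feature of DMR.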
11:00 AM - 11:30 AM  The Elicitation Problem
Tobias Fissler (University of Bern)
Room: Ångströmslaboratoriet
Competing point forecasts for functionals such as the mean, a quantile, or a certain risk measure are commonly compared in terms of loss functions. These should be incentive compatible, i.e., the expected score should be minimized by the correctly specified functional of interest. A functional is called *elicitable* if it possesses such an incentive compatible loss function. With the squared loss and the absolute loss, the mean and the median possess incentive compatible loss functions, which means they are elicitable. In contrast, the variance and Expected Shortfall are not elicitable. Besides investigating the elicitability of a functional, it is important to determine the whole class of incentive compatible loss functions, as well as to give recommendations on which loss function to use in practice, taking into account secondary quality criteria of loss functions such as order-sensitivity, convexity, or homogeneity.
11:30 AM - 12:00 PM  Testing independence for multivariate time series by the auto-distance correlation matrix
Maria Pitsillou (Department of Mathematics & Statistics, Cyprus)
Room: Ångströmslaboratoriet
We introduce the notions of multivariate auto-distance covariance and correlation functions for time series analysis. These concepts have recently been discussed in the context of both independent and dependent data, but we extend them in a different direction by putting forward their matrix versions, which allow us to identify possible interrelationships among the components of a multivariate time series. Interpretation and consistent estimators of these new concepts are discussed. Additionally, we develop a test of the i.i.d. hypothesis for multivariate time series data. The resulting test statistic performs better than the standard multivariate Ljung-Box test statistic. All of the above methodology is included in the R package dCovTS, which is briefly introduced in this talk.
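A small self-contained sketch of the univariate sample distance correlation underlying these matrix extensions may help; this is the standard double-centered V-statistic of Székely et al., written for illustration and unrelated to the dCovTS implementation:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples (V-statistic form)."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    def centered(z):
        d = np.abs(z[:, None] - z[None, :])      # pairwise distance matrix
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()                       # squared distance covariance
    dvarx, dvary = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvarx * dvary))

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y_dep = x ** 2                    # nonlinearly dependent on x
y_ind = rng.normal(size=500)      # independent of x
print(distance_correlation(x, y_dep))   # clearly positive
print(distance_correlation(x, y_ind))   # close to zero
```

Unlike Pearson correlation, which is zero for `x` versus `x ** 2`, distance correlation vanishes only under independence, which is why its auto-version is a natural building block for an i.i.d. test.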
12:00 PM - 1:00 PM  Lunch
Room: Ångströmslaboratoriet
1:00 PM - 1:30 PM  Some recent characterization based goodness of fit tests
Bojana Milošević (Faculty of Mathematics)
Room: Ångströmslaboratoriet
In this paper some recent advances in goodness of fit testing are presented. Special attention is given to goodness of fit tests based on equidistribution and independence characterizations. New concepts are described through some modern exponentiality tests, and their natural generalizations are also proposed. All tests are compared in the Bahadur sense.
1:30 PM - 2:00 PM  Methods for bandwidth detection in kernel conditional density estimations
Katerina Konecna (Masaryk University)
Room: Ångströmslaboratoriet
This contribution focuses on kernel conditional density estimation (KCDE). The estimate depends on smoothing parameters (bandwidths) which significantly influence the final density estimate; for this reason a data-driven method for bandwidth estimation is needed. In this contribution, the cross-validation method, the iterative method and the maximum likelihood approach are applied to bandwidth selection for the estimator. An application to a real data set is included and the proposed methods are compared.
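As an illustration of one such data-driven selector, here is a sketch of leave-one-out likelihood cross-validation for the two bandwidths of a Nadaraya-Watson-type conditional density estimate. The toy data, the coarse bandwidth grid, and all names are invented for the example; this is not the talk's exact procedure:

```python
import numpy as np

def gauss(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def loo_log_likelihood(x, y, hx, hy):
    """Leave-one-out log-likelihood of the kernel conditional density
    estimate f(y|x) = sum_j K_hy(y - y_j) K_hx(x - x_j) / sum_j K_hx(x - x_j);
    higher is better."""
    Kx = gauss((x[:, None] - x[None, :]) / hx) / hx
    Ky = gauss((y[:, None] - y[None, :]) / hy) / hy
    np.fill_diagonal(Kx, 0.0)                 # leave each point out
    num = (Kx * Ky).sum(axis=1)
    den = Kx.sum(axis=1)
    return np.sum(np.log(num / den + 1e-300))

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 300)
y = np.sin(x) + rng.normal(0, 0.3, 300)

grid = [0.1, 0.2, 0.4, 0.8]
hx, hy = max(((a, b) for a in grid for b in grid),
             key=lambda h: loo_log_likelihood(x, y, *h))
print(hx, hy)   # selected bandwidth pair
```

In practice the grid search would be replaced by a proper optimiser, and the competing selectors mentioned in the abstract would be compared on the same criterion.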
2:00 PM - 2:30 PM  Controlled branching processes in Biology: a model for cell proliferation
Carmen Minuesa Abril (University of Extremadura)
Room: Ångströmslaboratoriet
Branching processes are relevant models in the development of theoretical approaches to problems in applied fields such as growth and extinction of populations, biology, epidemiology, cell proliferation kinetics, genetics, and algorithms and data structures. The most basic model, the so-called Bienaymé-Galton-Watson process, consists of individuals that reproduce independently of one another following the same probability distribution, known as the offspring distribution. A natural generalization is to incorporate a random control function which determines the number of progenitors in each generation; the resulting process is called a controlled branching process. In this talk, we deal with a problem arising in cell biology. More specifically, we focus our attention on experimental data generated by time-lapse video recording of cultured in vitro oligodendrocyte cells. In A.Y. Yakovlev et al. (2008) (Branching Processes as Models of Progenitor Cell Populations and Estimation of the Offspring Distributions, *Journal of the American Statistical Association*, 103(484):1357--1366), a two-type age-dependent branching process with emigration is considered to describe the kinetics of cell populations. The two types of cells considered are referred to as type $T_1$ (immediate precursors of oligodendrocytes) and type $T_2$ (terminally differentiated oligodendrocytes). The reproduction process of these cells is as follows: when stimulated to divide under in vitro conditions, the progenitor cells are capable of producing either their direct progeny (two daughter cells of the same type) or a single, terminally differentiated nondividing oligodendrocyte. Moreover, censoring effects caused by the migration of progenitor cells out of the microscopic field of observation are modelled as a process of emigration of the type $T_1$ cells.
In this work, we propose a two-type controlled branching process to describe the embedded discrete branching structure of the aforementioned age-dependent branching process. We address the estimation of the offspring distribution of the cell population from a Bayesian outlook by making use of disparities. The importance of this problem lies in the fact that the behaviour of these populations is strongly related to the main parameters of the offspring distribution, and in practice these values are unknown, so their estimation is necessary. The proposed methodology, introduced in M. González et al. (2017) (Robust estimation in controlled branching processes: Bayesian estimators via disparities. *Work in progress*), is illustrated with an application to the real data set given in A.Y. Yakovlev et al. (2008).
2:30 PM - 3:00 PM  Parameter Estimation for Discretely Observed Infinite-Server Queues with Markov-Modulated Input
Birgit Sollie (Vrije Universiteit Amsterdam)
Room: Ångströmslaboratoriet
The Markov-modulated infinite-server queue is a queueing system with infinitely many servers, where the arrivals follow a Markov-modulated Poisson process (MMPP), i.e. a Poisson process with a rate modulating between several values. The modulation is driven by an underlying and unobserved continuous-time Markov chain $\{X_t\}_{t\geq 0}$. The inhomogeneous rate of the Poisson process, $\lambda(t)$, stochastically alternates between $d$ different rates, $\lambda_1,\dots,\lambda_d$, in such a way that $\lambda(t) = \lambda_i$ if $X_t = i$, $i=1,\dots,d$. We are interested in estimating the parameters of the arrival process for this queueing system based on observations of the queue length at discrete times only. We assume exponentially distributed service times with rate $\mu$, where $\mu$ is time-independent and known. Estimation of the parameters of the arrival process has not yet been studied for this particular queueing system. Two types of missing data are intrinsic to the model, which complicates the estimation problem. First, the underlying continuous-time Markov chain in the Markov-modulated arrival process is not observed. Second, the queue length is only observed at a finite number of discrete time points; as a result, it is not possible to distinguish the number of arrivals from the number of departures between two consecutive observations. In this talk we derive an explicit algorithm for finding maximum likelihood estimates of the parameters of the arrival process, making use of the EM algorithm. Our approach extends the one used in Okamura et al. (2009), where the parameters of an MMPP are estimated based on observations of the process at discrete times. However, in contrast to our setting, Okamura et al. (2009) do not consider departures and therefore do not deal with the second type of missing data. We illustrate the accuracy of the proposed estimation algorithm with a simulation study. Reference: Okamura H., Dohi T., Trivedi K.S. (2009). Markovian Arrival Process Parameter Estimation With Group Data. IEEE/ACM Transactions on Networking, Vol. 17, No. 4, pp. 1326--1339.
3:00 PM - 3:30 PM  Coffee
Room: Ångströmslaboratoriet
3:30 PM - 4:00 PM  Fréchet means and Procrustes analysis in Wasserstein space
Yoav Zemel (Ecole polytechnique fédérale de Lausanne)
Room: Ångströmslaboratoriet
We consider three interlinked problems in stochastic geometry: (1) constructing optimal multicouplings of random vectors; (2) determining the Fréchet mean of probability measures in Wasserstein space; and (3) registering collections of randomly deformed spatial point processes. We demonstrate how these problems are canonically interpreted through the prism of the theory of optimal transportation of measure on $\mathbb R^d$. We provide explicit solutions in the one dimensional case, consistently solve the registration problem and establish convergence rates and a (tangent space) central limit theorem for Cox processes. When $d>1$, the solutions are no longer explicit and we propose a steepest descent algorithm for deducing the Fréchet mean in problem (2). Supplemented by uniform convergence results for the optimal maps, this furnishes a solution to the multicoupling problem (1). The latter is then utilised, as in the case $d=1$, in order to construct consistent estimators for the registration problem (3). While the consistency results parallel their one-dimensional counterparts, their derivation requires more sophisticated techniques from convex analysis. This is joint work with Victor M. Panaretos.
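The one-dimensional case mentioned in the abstract is indeed explicit: the 2-Wasserstein Fréchet mean of distributions on the line is obtained by averaging their quantile functions pointwise. A minimal sketch, with sample data invented for the example:

```python
import numpy as np

def frechet_mean_1d(samples, q_grid=np.linspace(0.01, 0.99, 99)):
    """2-Wasserstein Frechet mean of 1-D distributions, each given by a
    sample: average the empirical quantile functions pointwise."""
    quantiles = [np.quantile(s, q_grid) for s in samples]
    return q_grid, np.mean(quantiles, axis=0)

rng = np.random.default_rng(4)
# Three Gaussian samples with different means; the Wasserstein Frechet
# mean of N(m_i, 1) is N(mean(m_i), 1), so here it should be near N(0, 1).
samples = [rng.normal(m, 1.0, 5000) for m in (-2.0, 0.0, 2.0)]
q, bary = frechet_mean_1d(samples)
print(bary[49])   # median of the barycenter, close to 0
```

For $d>1$ no such closed form exists, which is exactly where the steepest descent algorithm of the talk comes in.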
4:00 PM - 4:30 PM  Modeling of vertical and horizontal variation in multivariate functional data
Niels Olsen (Københavns Universitet)
Room: Ångströmslaboratoriet
We present a model for multivariate functional data that simultaneously models vertical and horizontal variation. Horizontal variation is modeled using warping functions represented by latent Gaussian variables. Vertical variation is modeled using Gaussian processes with a generally applicable low-parametric covariance structure. We devise a method for maximum likelihood estimation using a Laplace approximation and apply it to three different data sets.
4:30 PM - 5:00 PM  Best Unbiased Estimators for Doubly Multivariate Data
Arkadiusz Kozioł (Faculty of Mathematics, Computer Science and Econometrics, University of Zielona Góra, Szafrana 4a, 65-516 Zielona Góra, Poland)
Room: Ångströmslaboratoriet
The article addresses the best unbiased estimators of the block compound symmetric covariance structure for m-variate observations with equal mean vector over each level of a factor or each time point (a model with structured mean vector). Under multivariate normality, the free-coordinate approach is used to obtain unbiased linear and quadratic estimates of the model parameters. Optimality of these estimators follows from the sufficiency and completeness of their distributions. Additionally, strong consistency is proven. The properties of the estimators in the proposed model are compared with those in the model with unstructured mean vector (where the mean vector changes over levels of the factor or time points).
Wednesday, August 16, 2017
9:00 AM - 9:30 AM  Inference on covariance matrices and operators using concentration inequalities
Adam Kashlak (Cambridge Centre for Analysis, University of Cambridge)
Room: Museum Gustavianum
In the modern era of high and infinite dimensional data, classical statistical methodology is often rendered inefficient and ineffective when confronted with such big data problems as arise in genomics, medical imaging, speech analysis, and many other areas of research. Many problems manifest when the practitioner is required to take into account the covariance structure of the data during his or her analysis, which takes on the form of either a high dimensional low rank matrix or a finite dimensional representation of an infinite dimensional operator acting on some underlying function space. Thus, we propose using tools from the concentration of measure literature to construct rigorous descriptive and inferential statistical methodology for covariance matrices and operators. A variety of concentration inequalities are considered, which allow for the construction of nonasymptotic dimension-free confidence sets for the unknown matrices and operators. Given such confidence sets a wide range of estimation and inferential procedures can be and are subsequently developed.
9:30 AM
Predict extreme influenza epidemics
-
Maud Thomas
(
Université Pierre et Marie Curie
)
Predict extreme influenza epidemics
Maud Thomas
(
Université Pierre et Marie Curie
)
9:30 AM - 10:00 AM
Room: Museum Gustavianum
Influenza viruses are responsible for annual epidemics, causing more than 500,000 deaths per year worldwide. A crucial question for resource planning in public health is to predict the morbidity burden of extreme epidemics. We say that an epidemic is extreme whenever the influenza incidence rate exceeds a high threshold for at least one week. Our objective is to predict whether an extreme epidemic will occur in the near future, say the next couple of weeks. The weekly numbers of influenza-like illness (ILI) incidence rates in France are available from the Sentinel network for the period 1991-2017. ILI incidence rates exhibit two different regimes, an epidemic regime during winter and a non-epidemic regime during the rest of the year. To identify epidemic periods, we use a two-state autoregressive hidden Markov model. A main goal of Extreme Value Theory is to assess, from a series of observations, the probability of events that are more extreme than those previously recorded. Because of the autoregressive structure of the data, we choose to fit one of the multivariate generalized Pareto distribution models proposed in Rootzén et al. (2016a) [Multivariate peaks over threshold models. arXiv:1603.06619v2]; see also Rootzén et al. (2016b) [Peaks over thresholds modeling with multivariate generalized Pareto distributions. arXiv:1612.01773v1]. For these models, explicit densities are given, and formulas for conditional probabilities can then be deduced, from which we can predict if an epidemic will be extreme, given the first weeks of observation.
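A purely illustrative sketch of the peaks-over-threshold idea underlying the abstract, in the univariate case only (the multivariate generalized Pareto models of Rootzén et al. are not reproduced, and the synthetic rates below are a stand-in for the Sentinel data):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# Hypothetical weekly ILI incidence rates (synthetic, not the real data).
rates = rng.gamma(shape=2.0, scale=100.0, size=1300)

u = np.quantile(rates, 0.95)       # high threshold defining "extreme"
exceed = rates[rates > u] - u      # peaks over the threshold

# Fit a univariate generalized Pareto to the exceedances (location fixed at 0).
xi, _, sigma = genpareto.fit(exceed, floc=0)

# POT decomposition: P(X > x) = P(X > u) * (1 + xi*(x - u)/sigma)^(-1/xi).
p_u = np.mean(rates > u)

def tail_prob(x):
    """Estimated probability that the weekly rate exceeds level x > u."""
    return p_u * genpareto.sf(x - u, xi, loc=0, scale=sigma)
```

In the talk's setting the exceedances are multivariate (several consecutive weeks) and the fitted model yields conditional probabilities of an extreme epidemic given the first observed weeks; the univariate fit above only shows the threshold-exceedance mechanics.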
10:00 AM
Guided tour of the museum
Guided tour of the museum
10:00 AM - 10:30 AM
Room: Museum Gustavianum
10:30 AM
Coffee
Coffee
10:30 AM - 11:00 AM
Room: Museum Gustavianum
11:00 AM
Invited speaker - Formal languages for stochastic modelling
-
Jane Hillston
(
University of Edinburgh
)
Invited speaker - Formal languages for stochastic modelling
Jane Hillston
(
University of Edinburgh
)
11:00 AM - 12:00 PM
Room: Museum Gustavianum
12:00 PM
Lunch
Lunch
12:00 PM - 1:00 PM
Room: Museum Gustavianum
1:00 PM
Invited Speaker - Embedding machine learning in stochastic process algebra
-
Jane Hillston
(
University of Edinburgh
)
Invited Speaker - Embedding machine learning in stochastic process algebra
Jane Hillston
(
University of Edinburgh
)
1:00 PM - 2:00 PM
Room: Museum Gustavianum
2:00 PM
Efficient estimation for diffusions
-
Nina Munkholt Jakobsen
(
University of Copenhagen
)
Efficient estimation for diffusions
Nina Munkholt Jakobsen
(
University of Copenhagen
)
2:00 PM - 2:30 PM
Room: Museum Gustavianum
This talk concerns estimation of the diffusion parameter of a diffusion process observed over a fixed time interval. We present conditions on approximate martingale estimating functions under which estimators are consistent, rate optimal, and efficient under high frequency (in-fill) asymptotics. Here, limit distributions of the estimators are non-standard in the sense that they are generally normal variance-mixture distributions. In particular, the mixing distribution depends on the full sample path of the diffusion process over the observation time interval. Making use of stable convergence in distribution, we also present the more easily applicable result that estimators normalized by a suitable data-dependent transformation converge in distribution to a standard normal distribution. The theory is illustrated by a simulation study. The work presented in this talk is published in: Jakobsen, N. M. and Sørensen, M. (2017). *Efficient estimation for diffusions sampled at high frequency over a fixed time interval.* Bernoulli, 23(3):1874-1910.
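The simplest rate-optimal estimator of a constant diffusion parameter under high frequency (in-fill) asymptotics is the realized quadratic variation; this toy sketch (with an arbitrarily chosen Ornstein-Uhlenbeck drift) only illustrates that baseline, not the approximate martingale estimating functions of the paper:

```python
import numpy as np

def realized_variance(x, T):
    """Estimate sigma^2 for dX_t = b(X_t) dt + sigma dW_t observed at n+1
    equidistant points on [0, T], via the realized quadratic variation.
    The drift contributes only O(1/n) and vanishes under in-fill asymptotics."""
    return np.sum(np.diff(x) ** 2) / T

# Simulate an Ornstein-Uhlenbeck path by Euler-Maruyama (illustrative drift).
rng = np.random.default_rng(0)
n, T, sigma = 10_000, 1.0, 0.7
dt = T / n
x = np.zeros(n + 1)
for i in range(n):
    x[i + 1] = x[i] - 2.0 * x[i] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
```

For state-dependent diffusion coefficients the limit distribution becomes a normal variance-mixture depending on the whole sample path, which is where the stable-convergence results of the paper come in.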
2:30 PM
Estimates for distributions of Hölder semi-norms of random processes from spaces F_ψ(Ω)
-
Dmytro Zatula
(
Taras Shevchenko National University of Kyiv
)
Estimates for distributions of Hölder semi-norms of random processes from spaces F_ψ(Ω)
Dmytro Zatula
(
Taras Shevchenko National University of Kyiv
)
2:30 PM - 3:00 PM
Room: Ångströmslaboratoriet
In the following we deal with estimates for distributions of Hölder semi-norms of sample functions of random processes from spaces $\mathbb{F}_\psi(\Omega)$, defined on a compact metric space and on an infinite interval $[0,\infty)$, i.e. probabilities $$\mathsf{P}\left\{\sup\limits_{\substack{0<\rho(t,s)\le\varepsilon \\ t,s\in\mathbb{T}}} \frac{|X(t)-X(s)|}{f(\rho(t,s))}>x\right\}.$$ Such estimates, and assumptions under which semi-norms of sample functions of random processes from spaces $\mathbb{F}_\psi(\Omega)$ defined on a compact space satisfy the Hölder condition, were obtained by Kozachenko and Zatula (2015). Similar results were provided for Gaussian processes defined on a compact space by Dudley (1973). Kozachenko (1985) generalized Dudley's results to random processes belonging to Orlicz spaces; see also Buldygin and Kozachenko (2000). Marcus and Rosen (2008) obtained $L^p$ moduli of continuity for a wide class of continuous Gaussian processes. Kozachenko et al. (2011) studied the Lipschitz continuity of generalized sub-Gaussian processes and provided estimates for the distribution of Lipschitz norms of such processes. However, none of these problems had yet been considered for processes defined on an infinite interval.
Thursday, August 17, 2017
9:00 AM
Finite Mixture of C-vines for Complex Dependence
-
O. Ozan Evkaya
(
Atılım University
)
Finite Mixture of C-vines for Complex Dependence
O. Ozan Evkaya
(
Atılım University
)
9:00 AM - 9:30 AM
Room: Ångströmslaboratoriet
Recently, there has been increasing interest in combining copulas with finite mixture models. Such a framework is useful for flexibly revealing the hidden dependence patterns of random variables in statistical modeling. Incorporating vine copulas into a finite mixture model is likewise beneficial for capturing hidden structure in a multivariate data set. In this respect, the main goal of this study is to extend the work of Kim et al. (2013) to different scenarios. To this end, a finite mixture of C-vines is proposed for multivariate data with different dependence structures. The performance of the proposed model has been tested on various simulated data sets with different tail dependence properties.
9:30 AM
Joint Bayesian nonparametric reconstruction of dynamical equations
-
Christos Merkatas
(
Department of Mathematics, University of the Aegean, Greece
)
Joint Bayesian nonparametric reconstruction of dynamical equations
Christos Merkatas
(
Department of Mathematics, University of the Aegean, Greece
)
9:30 AM - 10:00 AM
Room: Ångströmslaboratoriet
We propose a Bayesian nonparametric mixture model for the joint full reconstruction of $m$ dynamical equations, given $m$ observed chaotic time series corrupted by dynamical noise. The method of reconstruction is based on the Pairwise Dependent Geometric Stick Breaking Process (PDGSBP) mixture priors first proposed by Hatjispyros et al. (2017). We assume that each set of dynamical equations has a deterministic part with a known functional form, i.e. $$ x_{ji} = g_{j}(\vartheta_j, x_{j,i-1},\ldots,x_{j,i-l_j}) + \epsilon_{x_{ji}},\,\,\, 1\leq j \leq m,\,\, 1\leq i \leq n_{j}, $$ under the assumption that the noise processes $(\epsilon_{x_{ji}})$ are independent and identically distributed for all $j$ and $i$ from some unknown zero mean process $f_j(\cdot)$. Additionally, we assume a priori knowledge that the processes $(\epsilon_{x_{ji}})$ for $j=1,\ldots,m$ have common characteristics, e.g. common variances or similar tail behavior. For a full reconstruction, we would like to jointly estimate the quantities $$ (\vartheta_{j})\in\Theta\subseteq{\cal R}^{k_j},\quad (x_{j,0},\ldots, x_{j,l_j-1})\in{\cal X}_j\subseteq{\cal R}^{l_j}, $$ and perform density estimation for the $m$ noise components $(f_j)$. Our contention is that whenever there is at least one sufficiently large data set, using carefully selected informative borrowing-of-strength prior specifications we are able to reconstruct those dynamical processes responsible for generating time series with small sample sizes, namely sample sizes that are inadequate for an independent reconstruction. We illustrate the joint estimation process for the case $m=2$, when the two time series come from a quadratic and a cubic stochastic process of lag one and the noise processes are zero mean normal mixtures with common components.
10:00 AM
Viterbi process for pairwise Markov models
-
Joonas Sova
(
University of Tartu
)
Viterbi process for pairwise Markov models
Joonas Sova
(
University of Tartu
)
10:00 AM - 10:30 AM
Room: Ångströmslaboratoriet
My talk is based on ongoing joint work with my supervisor Jüri Lember. We consider a Markov chain $Z = \{Z_k\}_{k \geq 1}$ with product state space $\mathcal{X}\times \mathcal{Y}$, where $\mathcal{Y}$ is a finite set (state space) and $\mathcal{X}$ is an arbitrary separable metric space (observation space). Thus, the process $Z$ decomposes as $Z=(X,Y)$, where $X=\{X_k \}_{k\geq 1}$ and $Y=\{Y_k \}_{k\geq 1}$ are random processes taking values in $\mathcal{X}$ and $\mathcal{Y}$, respectively. Following \cite{pairwise,pairwise2,pairwise3}, we call the process $Z$ a \textit{pairwise Markov model}. The process $X$ is identified as an observation process and the process $Y$, sometimes called the \textit{regime}, models the observations-driving hidden state sequence. Therefore our general model contains many well-known stochastic models as special cases: hidden Markov models, Markov switching models, hidden Markov models with dependent noise and many more. The \textit{segmentation} or \textit{path estimation} problem consists of estimating the realization of $(Y_1,\ldots,Y_n)$ given a realization $x_{1:n}$ of $(X_1,\ldots,X_n)$. A standard estimate is any path $v_{1:n}\in \mathcal{Y}^n$ having maximum posterior probability: $$v_{1:n}=\mathop{\mathrm{argmax}}_{y_{1:n}}P(Y_{1:n}=y_{1:n}|X_{1:n}=x_{1:n}).$$ Any such path is called a \textit{Viterbi path} and we are interested in the behaviour of $v_{1:n}$ as $n$ grows. The study of asymptotics of Viterbi paths is complicated by the fact that adding one more observation, $x_{n+1}$, can change the whole path, and so it is not clear whether there exists a limiting infinite Viterbi path. We show that under some conditions the infinite Viterbi path indeed exists for almost every realization $x_{1:\infty}$ of $X$, thereby defining an infinite Viterbi decoding of $X$, called the \textit{Viterbi process}. This is done through the construction of \textit{barriers}.
A barrier is a fixed-size block in the observations $x_{1:n}$ that fixes the Viterbi path up to itself: for every continuation of $x_{1:n}$, the Viterbi path up to the barrier remains unchanged. Therefore, if almost every realization of the $X$-process contains infinitely many barriers, then the Viterbi process exists. Having infinitely many barriers is not necessary for the existence of an infinite Viterbi path, but the barrier construction has several advantages. One of them is that it allows one to construct the infinite path \textit{piecewise}, meaning that to determine the first $k$ elements $v_{1:k}$ of the infinite path it suffices to observe $x_{1:n}$ for $n$ big enough. The barrier construction has another great advantage: the process $(Z,V)=\{(Z_k,V_k)\}_{k \geq 1}$, where $V= \{V_k\}_{k \geq 1}$ denotes the Viterbi process, is under certain conditions regenerative. This can be proven by, roughly speaking, applying the Markov splitting method to construct regeneration times for $Z$ that coincide with the occurrences of barriers. Regenerativity of $(Z,V)$ makes it easy to prove limit theorems for understanding the asymptotic behaviour of inferences based on Viterbi paths. In fact, in the special case of hidden Markov models this regenerative property was already known to hold and has found several applications \cite{AV,AVacta,Vsmoothing,Vrisk,iowa}.
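For concreteness, the finite-horizon Viterbi path $v_{1:n}$ that the infinite construction extends is computed by standard dynamic programming; a minimal sketch for the hidden-Markov special case (the function name and interface are our own, not from the talk):

```python
import numpy as np

def viterbi(log_A, log_pi, log_emit):
    """Most probable hidden path for an HMM, the simplest pairwise Markov model.

    log_A    : (K, K) log transition matrix, log_A[i, j] = log P(Y_{k+1}=j | Y_k=i)
    log_pi   : (K,)   log initial distribution of Y_1
    log_emit : (n, K) log_emit[t, j] = log p(x_{t+1} | Y_{t+1} = j)
    Returns the argmax path as an integer array of length n.
    """
    n, K = log_emit.shape
    delta = log_pi + log_emit[0]            # best log-score ending in each state
    back = np.zeros((n, K), dtype=int)      # backpointers for path recovery
    for t in range(1, n):
        scores = delta[:, None] + log_A     # scores[i, j]: come from i, go to j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + log_emit[t]
    path = np.empty(n, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(n - 1, 0, -1):           # backtrack through the pointers
        path[t - 1] = back[t, path[t]]
    return path
```

The instability discussed in the abstract is visible here: appending one more row to `log_emit` can, in principle, change every backpointer decision, which is exactly what barriers rule out.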
10:30 AM
Coffee
Coffee
10:30 AM - 11:00 AM
Room: Ångströmslaboratoriet
11:00 AM
Invited Speaker - Independent component analysis using third and fourth cumulants
-
Hannu Oja
(
University of Turku
)
Invited Speaker - Independent component analysis using third and fourth cumulants
Hannu Oja
(
University of Turku
)
11:00 AM - 12:00 PM
Room: Ångströmslaboratoriet
In independent component analysis it is assumed that the observed random variables are linear combinations of latent, mutually independent random variables called the independent components. It is then often thought that only the non-Gaussian independent components are of interest and the Gaussian components simply present noise. The idea is then to make inference on the unknown number of non-Gaussian components and to estimate the transformations back to the non-Gaussian components. In this talk we show how the classical skewness and kurtosis measures, namely third and fourth cumulants, can be used in the estimation. First, univariate cumulants are used as projection indices in search for independent components (projection pursuit, fastICA). Second, multivariate fourth cumulant matrices are jointly used to solve the problem (FOBI, JADE). The properties of the estimates are considered through corresponding optimization problems, estimating equations, algorithms and asymptotic statistical properties. The theory is illustrated with several examples.
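A minimal sketch of the FOBI procedure mentioned in the abstract, assuming the standard whitening-then-fourth-moment-eigendecomposition formulation (the numerical details below are our own illustrative choices):

```python
import numpy as np

def fobi(X):
    """FOBI estimate of the unmixing matrix W, so that X_centered @ W.T
    approximately recovers the independent components.
    X : (n, p) data matrix, rows are observations."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    # Whitening matrix cov^{-1/2} via a symmetric eigendecomposition.
    vals, vecs = np.linalg.eigh(cov)
    W_white = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = Xc @ W_white.T
    # Fourth-moment scatter matrix E[||z||^2 z z^T]; its eigenvectors
    # separate components with distinct kurtoses.
    r2 = np.sum(Z ** 2, axis=1)
    B = (Z * r2[:, None]).T @ Z / len(Z)
    _, U = np.linalg.eigh(B)
    return U.T @ W_white
```

FOBI requires the components to have distinct kurtoses; JADE relaxes this by jointly diagonalizing several cumulant matrices, which this sketch does not attempt.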
12:00 PM
Lunch
Lunch
12:00 PM - 1:00 PM
Room: Ångströmslaboratoriet
1:00 PM
Invited Speaker - Independent component analysis using third and fourth cumulants
-
Hannu Oja
(
University of Turku
)
Invited Speaker - Independent component analysis using third and fourth cumulants
Hannu Oja
(
University of Turku
)
1:00 PM - 2:00 PM
Room: Ångströmslaboratoriet
In independent component analysis it is assumed that the observed random variables are linear combinations of latent, mutually independent random variables called the independent components. It is then often thought that only the non-Gaussian independent components are of interest and the Gaussian components simply present noise. The idea is then to make inference on the unknown number of non-Gaussian components and to estimate the transformations back to the non-Gaussian components. In this talk we show how the classical skewness and kurtosis measures, namely third and fourth cumulants, can be used in the estimation. First, univariate cumulants are used as projection indices in search for independent components (projection pursuit, fastICA). Second, multivariate fourth cumulant matrices are jointly used to solve the problem (FOBI, JADE). The properties of the estimates are considered through corresponding optimization problems, estimating equations, algorithms and asymptotic statistical properties. The theory is illustrated with several examples.
2:00 PM
E-optimal approximate block designs for treatment-control comparisons
-
Samuel Rosa
(
Comenius University in Bratislava
)
E-optimal approximate block designs for treatment-control comparisons
Samuel Rosa
(
Comenius University in Bratislava
)
2:00 PM - 2:30 PM
Room: Ångströmslaboratoriet
We study $E$-optimal block designs for comparing a set of test treatments with a control treatment. We provide the complete class of all $E$-optimal approximate block designs and we show that these designs are characterized by simple linear constraints. Employing the provided characterization, we obtain a class of $E$-optimal exact block designs with unequal block sizes for comparing test treatments with a control.
2:30 PM
Information criteria for structured sparse variable selection
-
Bastien Marquis
(
Université Libre de Bruxelles
)
Information criteria for structured sparse variable selection
Bastien Marquis
(
Université Libre de Bruxelles
)
2:30 PM - 3:00 PM
Room: Ångströmslaboratoriet
In contrast to the low dimensional case, variable selection under the assumption of sparsity in high dimensional models is strongly influenced by the effects of false positives. These effects are tempered by combining variable selection with a shrinkage estimator, as in the lasso, where selection is realized by minimizing the sum of squared residuals regularized by an $\ell_1$ norm of the selected variables. Optimal variable selection is then equivalent to finding the best balance between closeness of fit and regularity, i.e., to optimizing the regularization parameter with respect to an information criterion such as Mallows's Cp or AIC. For use in this optimization procedure, the lasso regularization is found to be too tolerant towards false positives, leading to considerable overestimation of the model size. Using an $\ell_0$ regularization instead requires careful consideration of the false positives, as they have a major impact on the optimal regularization parameter. As the framework of the classical linear model has been analysed in previous work, the current paper concentrates on structured models and, more specifically, on grouped variables. Although the structure imposed on the selected models can be understood to somewhat reduce the effect of false positives, we observe qualitatively similar behaviour to that in the unstructured linear model.
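The lasso-plus-Cp trade-off described above can be sketched as follows; this toy implementation (ISTA for the lasso, a Cp-type criterion with known noise variance) is our own illustration of the unstructured baseline, not the paper's structured-sparsity method:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Lasso via proximal gradient (ISTA):
    minimize ||y - Xb||^2 / (2n) + lam * ||b||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        g = X.T @ (X @ b - y) / n          # gradient of the smooth part
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

def mallows_cp(X, y, b, sigma2):
    """Cp-type criterion: RSS + 2*sigma2*k, k = number of selected variables."""
    rss = np.sum((y - X @ b) ** 2)
    return rss + 2.0 * sigma2 * np.count_nonzero(b)

def select_lambda(X, y, grid, sigma2):
    """Pick the penalty level on a grid by minimizing the Cp-type criterion."""
    fits = [lasso_ista(X, y, lam) for lam in grid]
    scores = [mallows_cp(X, y, b, sigma2) for b in fits]
    i = int(np.argmin(scores))
    return grid[i], fits[i]
```

The overselection phenomenon in the abstract appears here as the criterion tolerating many small spurious coefficients; counting nonzeros as the model size $k$ is what an $\ell_0$-oriented criterion scrutinizes more carefully.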
3:00 PM
Coffee
Coffee
3:00 PM - 3:30 PM
Room: Ångströmslaboratoriet
3:30 PM
Mallows' Model Based on Lee Distance
-
Nikolay Nikolov
(
Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. G.Bontchev str., block 8, 1113 Sofia, Bulgaria
)
Mallows' Model Based on Lee Distance
Nikolay Nikolov
(
Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. G.Bontchev str., block 8, 1113 Sofia, Bulgaria
)
3:30 PM - 4:00 PM
Room: Ångströmslaboratoriet
In this paper, the Mallows model based on Lee distance is considered and compared to models induced by other metrics on the permutation group. As an illustration, the complete rankings from the American Psychological Association election data are analyzed.
4:00 PM
Confidence regions in Cox proportional hazards model with measurement errors
-
Oksana Chernova
(
Taras Shevchenko National University of Kyiv
)
Confidence regions in Cox proportional hazards model with measurement errors
Oksana Chernova
(
Taras Shevchenko National University of Kyiv
)
4:00 PM - 4:30 PM
Room: Ångströmslaboratoriet
The Cox proportional hazards model with measurement errors in covariates is considered; it is a ubiquitous technique in biomedical data analysis. In Kukush et al. (2011) [Journal of Statistical Research **45**, 77-94] and Chimisov and Kukush (2014) [Modern Stochastics: Theory and Applications **1**, 13-32], asymptotic properties of a simultaneous estimator $(\lambda_n;\beta_n)$ for the baseline hazard rate $\lambda(\cdot)$ and the regression parameter $\beta$ were studied, where the parameter set $\Theta=\Theta_{\lambda}\times \Theta_{\beta}$ was assumed bounded. In Kukush and Chernova (2017) [Theory of Probability and Mathematical Statistics **96**, 100-109] we dealt with the simultaneous estimator $(\lambda_n;\beta_n)$ in the case where $\Theta_{\lambda}$ is unbounded from above and not separated away from $0$. The estimator was constructed in two steps: first we derived a strongly consistent estimator, and then modified it to provide asymptotic normality. In this talk, we construct the confidence interval for an integral functional of $\lambda(\cdot)$ and the confidence region for $\beta$. We reach our goal in each of three cases: (a) the measurement error is bounded, (b) it is normally distributed, or (c) it is a shifted Poisson random variable. The censoring variable is assumed to have a continuous pdf. In future research we intend to elaborate a method for heavy tailed error distributions.
4:30 PM
Stability of the Spectral EnKF under nested covariance estimators
-
Marie Turčičová
(
Charles University, Prague
)
Stability of the Spectral EnKF under nested covariance estimators
Marie Turčičová
(
Charles University, Prague
)
4:30 PM - 5:00 PM
Room: Ångströmslaboratoriet
For the traditional Ensemble Kalman Filter (EnKF), it is known that the filter error does not grow faster than exponentially for a fixed ensemble size. The question posed in this contribution is whether the upper bound on the filter error can be improved by using an improved covariance estimator that comes from the right parameter subspace and has smaller asymptotic variance. Its effect on the Spectral EnKF is explored by simulation.
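For reference, the covariance estimator under discussion enters the filter through the analysis step; below is the textbook perturbed-observations EnKF update with the plain sample covariance (the spectral variant and the nested estimators studied in the talk are not reproduced):

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """One stochastic EnKF analysis step.

    ensemble : (N, d) forecast ensemble
    H        : (m, d) linear observation operator
    y        : (m,)   observation
    R        : (m, m) observation error covariance
    """
    N = ensemble.shape[0]
    Xf = ensemble - ensemble.mean(axis=0)
    Pf = Xf.T @ Xf / (N - 1)                 # sample forecast covariance:
                                             # the estimator the talk improves on
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T
```

Replacing `Pf` with an estimator restricted to the right parameter subspace (as in the nested estimators of the talk) changes the gain `K` and hence the error bound being investigated.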
Friday, August 18, 2017
9:00 AM
Invited Speaker - Random Networks
-
Svante Janson
(
Uppsala University
)
Invited Speaker - Random Networks
Svante Janson
(
Uppsala University
)
9:00 AM - 10:00 AM
Room: Ångströmslaboratoriet
10:00 AM
Coffee
Coffee
10:00 AM - 10:30 AM
Room: Ångströmslaboratoriet
10:30 AM
Theoretical and simulation results on heavy-tailed fractional Pearson diffusions
-
Ivan Papić
(
Department of Mathematics, J.J. Strossmayer University of Osijek
)
Theoretical and simulation results on heavy-tailed fractional Pearson diffusions
Ivan Papić
(
Department of Mathematics, J.J. Strossmayer University of Osijek
)
10:30 AM - 11:00 AM
Room: Ångströmslaboratoriet
We define heavy-tailed fractional reciprocal gamma and Fisher-Snedecor diffusions by a non-Markovian time change in the corresponding Pearson diffusions. We illustrate known theoretical results regarding these fractional diffusions via simulations.
11:00 AM
Copula based BINAR models with applications
-
Andrius Buteikis
(
Faculty of Mathematics and Informatics, Vilnius University
)
Copula based BINAR models with applications
Andrius Buteikis
(
Faculty of Mathematics and Informatics, Vilnius University
)
11:00 AM - 11:30 AM
Room: Ångströmslaboratoriet
In this paper we study the problem of modelling integer-valued vector observations. We consider BINAR(1) models defined via copula-joint innovations. We review different parameter estimation methods and analyse the estimation of the copula dependence parameter. We also examine the case where seasonality is present in integer-valued data and suggest a method for deseasonalizing it. Finally, an empirical application is carried out.
11:30 AM
Simulating and Forecasting Human Population with General Branching Process
-
Plamen Trayanov
(
Sofia University "St. Kliment Ohridski"
)
Simulating and Forecasting Human Population with General Branching Process
Plamen Trayanov
(
Sofia University "St. Kliment Ohridski"
)
11:30 AM - 12:00 PM
Room: Ångströmslaboratoriet
Branching process theory is widely used to describe population dynamics in which particles live and produce other particles throughout their lives, according to given stochastic birth and death laws. The theory of General Branching Processes (GBP) provides a continuous time model in which every woman has a random life length and gives birth to children at random intervals of time. The flexibility of the GBP makes it very useful for modelling and forecasting human populations. This paper continues previous developments in the theory, necessary to model the specifics of human populations, and presents their application to forecasting the population age structure of Bulgaria. It also introduces confidence intervals for the forecasts, calculated by GBP simulations, which reflect both the stochastic nature of the birth and death laws and the branching process itself. The simulations are also used to determine the main sources of risk to the forecast.
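A deliberately crude single-sex branching simulation illustrates how GBP forecasts and simulation-based confidence intervals can be produced; every demographic rate below is an invented placeholder (exponential lifetimes, a flat fertility window), not the Bulgarian data or the paper's model:

```python
import numpy as np

def simulate_gbp(n0, horizon, rng, mean_life=80.0, birth_rate=0.06,
                 fertile=(15.0, 45.0)):
    """Count women alive at time `horizon` (years), starting from `n0`
    newborns at time 0. Lifetimes are exponential(mean_life); daughters
    arrive by a Poisson process of rate `birth_rate` while the mother's
    age lies in the `fertile` window. All rates are placeholders."""
    alive = 0
    stack = [0.0] * n0                       # birth times yet to process
    while stack:
        born = stack.pop()
        life = rng.exponential(mean_life)
        if born + life > horizon:
            alive += 1
        lo = born + fertile[0]               # fertility window, clipped to
        hi = min(born + fertile[1], born + life, horizon)  # life and horizon
        if hi > lo:
            for _ in range(rng.poisson(birth_rate * (hi - lo))):
                stack.append(rng.uniform(lo, hi))
    return alive
```

Repeating `simulate_gbp` many times with independent seeds yields an empirical distribution of population counts, from which quantile-based confidence intervals of the forecast, of the kind the paper computes, can be read off.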
12:00 PM
Lunch
Lunch
12:00 PM - 1:00 PM
Room: Ångströmslaboratoriet