Tags:
Statistics,
Stochastic analysis, finance, insurance and risk
Publications
-
Huseby, Arne Bang
(2023).
Environmental contours and time dependence.
In Brito, Mario P.; Aven, Terje; Baraldi, Piero; Cepin, Marko & Zio, Enrico (Ed.),
ESREL 2023 - Proceedings of the 33rd European Safety and Reliability Conference : The Future of Safety in the Reconnected World,
3–7 September 2023, University of Southampton, United Kingdom.
Research Publishing Services.
ISBN 978-981-18-8071-1.
p. 1295–1302.
doi:
10.3850/978-981-18-8071-1_P048-cd.
Summary:
Environmental contours are widely used as a basis for e.g., ship design. Such contours are typically used in early design, when the strength and failure properties of the object under consideration are not yet known. An environmental contour describes the tail properties of some relevant environmental variables and is used as input to the design process. A methodology for constructing environmental contours based on the Rosenblatt transformation has been used extensively. More recently, alternative approaches in which environmental contours are constructed using Monte Carlo simulation have been developed. Typically, the strength of a structural design is chosen so that the expected return period of a failure event exceeds the desired lifetime of the structure. If time dependence in the environmental variables is neglected, the expected return period is simply the inverse of the failure probability. In a more realistic model, however, such dependence should be included. In this paper we describe a method for constructing an environmental contour where time dependence is taken into account. The method is illustrated with a simple numerical example.
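The return-period point above can be illustrated with a small simulation. The sketch below is illustrative only and not the paper's method: an AR(1) process stands in for a time-dependent environmental variable, and the threshold, autocorrelation and sample size are all assumptions. It compares the naive return period 1/p with the mean time between the starts of exceedance episodes:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 0.8          # assumed autocorrelation of the environmental variable
n = 200_000
# AR(1) surrogate with standard normal stationary distribution
eps = rng.normal(size=n) * np.sqrt(1.0 - phi**2)
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

u = 2.0                        # assumed failure threshold
exceed = x > u
p_hat = exceed.mean()          # marginal failure probability

# naive return period: inverse failure probability (independence assumption)
rp_naive = 1.0 / p_hat
# episode-based return period: mean time between starts of exceedance episodes
starts = np.flatnonzero(exceed[1:] & ~exceed[:-1]) + 1
rp_cluster = n / len(starts)
```

Because consecutive exceedances cluster under positive dependence, `rp_cluster` exceeds `rp_naive`, which illustrates why the inverse-probability formula can be misleading once time dependence is present.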
-
Dahl, Kristina Rognlien; Huseby, Arne Bang & Havgar, Marius
(2022).
Optimal Reinsurance Contracts under Conditional Value-at-risk.
In Leva, Maria Chiara; Patelli, Edoardo; Podofillini, Luca & Wilson, Simon (Ed.),
Proceedings of the 32nd European Safety and Reliability Conference (ESREL 2022).
Research Publishing Services.
ISBN 978-981-18-5183-4.
Summary:
An insurance contract implies that risk is ceded from ordinary policy holders to companies. However, companies do the same thing between themselves, and this is known as reinsurance. The problem of determining reinsurance contracts which are optimal with respect to some reasonable criterion has been studied extensively within actuarial science. Different contract types are considered, such as stop-loss contracts, where the reinsurance company covers risk above a certain level, and insurance layer contracts, where the reinsurance company covers risk within an interval. The contracts are then optimized with respect to some risk measure, such as value-at-risk or conditional value-at-risk. In the present paper we consider the problem of minimizing conditional value-at-risk in the case of multiple stop-loss contracts. Such contracts are known to be optimal in the univariate case, and the optimal contract is easily determined. We show that the same holds in the multivariate case, both with dependent and independent risks. The results are illustrated with some numerical examples.
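As a minimal illustration of the quantities involved (not the authors' optimization), the sketch below estimates the conditional value-at-risk of the total retained risk under two stop-loss contracts; the lognormal common-shock model, the retention level `a` and the level `alpha` are all assumptions made for the example:

```python
import numpy as np

def cvar(sample, alpha):
    """Empirical conditional value-at-risk: mean loss beyond the alpha-quantile."""
    q = np.quantile(sample, alpha)
    return sample[sample >= q].mean()

rng = np.random.default_rng(1)
n = 100_000
# two dependent lognormal risks driven by a common shock z
z = rng.normal(size=n)
x1 = np.exp(0.7 * z + 0.7 * rng.normal(size=n))
x2 = np.exp(0.7 * z + 0.7 * rng.normal(size=n))

a = 3.0                                            # hypothetical retention level
retained = np.minimum(x1, a) + np.minimum(x2, a)   # cedent keeps min(X, a)
total = x1 + x2                                    # no reinsurance

alpha = 0.95
cvar_total = cvar(total, alpha)
cvar_retained = cvar(retained, alpha)
```

The retained total is capped at 2a, so its conditional value-at-risk sits well below that of the unreinsured portfolio.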
-
Huseby, Arne Bang
(2022).
Optimizing Multiple Reinsurance Contracts.
In Leva, Maria Chiara; Patelli, Edoardo; Podofillini, Luca & Wilson, Simon (Ed.),
Proceedings of the 32nd European Safety and Reliability Conference (ESREL 2022).
Research Publishing Services.
ISBN 978-981-18-5183-4.
Summary:
An insurance contract implies that risk is ceded from ordinary policy holders to companies. Companies do the same thing between themselves, and this is known as reinsurance. The problem of determining reinsurance contracts which are optimal with respect to some reasonable criterion has been studied extensively. Different contract types are considered, such as stop-loss contracts, where the reinsurance company covers risk above a certain level, and insurance layer contracts, where the reinsurance company covers risk within an interval. The contracts are then optimized with respect to some risk measure, such as value-at-risk or conditional tail expectation. In the present paper we investigate this problem further and show that the optimal solution depends on the tail hazard rates of the risk distributions. If the tail hazard rates are decreasing, which is the case for heavy-tailed distributions like the lognormal and Pareto distributions, the optimal solution is balanced. That is, reinsurance contracts for identically distributed risks should be identical insurance layer contracts. However, if the tail hazard rate is increasing, which is the case for light-tailed distributions like truncated normal distributions, the optimal solution is typically not balanced. Even for identically distributed risks, some contracts should be insurance layer contracts, while others should be stop-loss contracts. In the limiting case, where the hazard rate is constant, i.e., when the risks are exponentially distributed, we show that a balanced solution is optimal. We also present an efficient importance sampling method for estimating optimal contracts.
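The hazard-rate dichotomy the result rests on is easy to check numerically. A small sketch with closed-form hazard rates, using a classical Pareto (heavy-tailed, hazard α/x decreasing) and a standard normal (light-tailed, increasing hazard); the parameter values are assumptions:

```python
import math

def pareto_hazard(x, alpha=2.0):
    # classical Pareto on [1, inf): f(x)/S(x) = alpha / x, decreasing in x
    return alpha / x

def normal_hazard(x):
    # standard normal hazard phi(x) / (1 - Phi(x)), increasing in x
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    surv = 0.5 * math.erfc(x / math.sqrt(2.0))
    return phi / surv

xs = [1.0, 2.0, 4.0, 8.0]
pareto_rates = [pareto_hazard(x) for x in xs]   # strictly decreasing
normal_rates = [normal_hazard(x) for x in xs]   # strictly increasing
```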
-
Midtfjord, Alise Danielle; De Bin, Riccardo & Huseby, Arne Bang
(2022).
A boosting model for survival analysis with dependent censoring.
In Torelli, Nicola; BELLIO, RUGGERO & MUGGEO, VITO (Ed.),
Proceedings of the 36th International Workshop on Statistical Modelling.
EUT Edizioni Università di Trieste.
ISBN 978-88-5511-309-0.
-
Huseby, Arne Bang & Innholt, Madeleine G
(2021).
Importance Measures in Repairable Multistate Systems With Aging.
In Castanier, Bruno; Cepin, Marko; Bigaud, David & Berenguer, Christophe (Ed.),
Proceedings of the 31st European Safety and Reliability Conference.
Research Publishing Services.
ISBN 978-981-18-2016-8.
p. 652–659.
doi:
10.3850/978-981-18-2016-8_165-cd.
Summary:
Within the field of reliability, multistate systems represent a natural extension of the classical binary approach. For an extensive introduction to this topic, see Natvig (2011b). Repairable multistate systems quickly become too complex for exact analytical calculations. Fortunately, however, such systems can be studied efficiently using discrete event simulations. See Huseby and Natvig (2012). In the binary case importance is usually measured using the approach by Birnbaum (1969). Several authors have extended the notion of importance measures to multistate systems. See e.g., Zio et al. (2007) and Huseby et al. (2020). In the latter paper the component state processes were modelled as homogeneous semi-Markov processes. Such processes typically reach stationary states very quickly. Thus, most properties of the system can be analysed using asymptotic distributions, which typically are determined by mean waiting times and the transition matrix of the embedded Markov chain. In the present paper we follow the approach suggested by Huseby et al. (2020). Here, however, we focus on the non-homogeneous case. This is relevant in systems subject to e.g., seasonal variations or aging. In order to model this we use an approach similar to Lindqvist et al. (2003). When the component processes are not homogeneous, the analysis should cover the entire time frame, not just the asymptotic properties. This makes comparison of importance more complicated. Several numerical examples are included in order to illustrate the methodology.
-
Midtfjord, Alise Danielle & Huseby, Arne Bang
(2021).
A Machine Learning Approach to Assess Runway Conditions Using Weather Data.
In Castanier, Bruno; Cepin, Marko; Bigaud, David & Berenguer, Christophe (Ed.),
Proceedings of the 31st European Safety and Reliability Conference.
Research Publishing Services.
ISBN 978-981-18-2016-8.
p. 833–840.
doi:
10.3850/978-981-18-2016-8_474-cd.
Summary:
Contamination of runway surfaces with snow, ice, or slush causes potential economic and safety threats for the aviation industry during the winter season. The presence of these materials reduces the available tire-pavement friction needed for retardation and directional control. To activate appropriate safety procedures, pilots need accurate and timely information on the actual runway surface conditions. Previous research on how available runway friction is affected by weather conditions and runway contamination has mainly been limited to engineering- or physics-based models. The complexity of the physical relationships controlling the surface friction, and their dependency on each other, makes this a difficult task. Machine learning methods have on several occasions been shown to model complex physical phenomena with good accuracy when domain knowledge is included. In this paper, we build a model using the state-of-the-art boosting algorithm XGBoost to predict runway conditions using weather data and runway reports. The model is trained to predict the runway surface conditions represented by the tire-pavement friction coefficient. Our model is compared to a system currently in use at several Norwegian airports, a scenario-based model created on the basis of meteorological and runway knowledge. The machine learning model is tested and compared using cross-validation, and the results show the strong ability of machine learning to find and use patterns to model physical phenomena.
-
Midtfjord, Alise Danielle; De Bin, Riccardo & Huseby, Arne Bang
(2021).
A Machine Learning Approach to Safer Airplane Landings: Predicting Runway Conditions using Weather and Flight Data.
arXiv.org.
ISSN 2331-8422.
Summary:
The presence of snow and ice on runway surfaces reduces the available tire-pavement friction needed for retardation and directional control and causes potential economic and safety threats for the aviation industry. To activate appropriate safety procedures, pilots need accurate and timely information on the actual runway surface conditions. In this study, XGBoost is used to create a combined runway assessment system, which includes a classification model to identify slippery conditions and a regression model to predict the level of slipperiness. The models are trained on weather data and runway reports. The runway surface conditions are represented by the tire-pavement friction coefficient, which is estimated from flight sensor data from landing aircraft. The XGBoost models are combined with SHAP approximations to provide a reliable decision support system for airport operators and pilots, which can contribute to safer and more economical operation of airport runways. To evaluate the performance of the prediction models, they are compared to several state-of-the-art runway assessment methods. The XGBoost models identify slippery runway conditions with a ROC AUC of 0.95, predict the friction coefficient with an MAE of 0.0254, and outperform all the previous methods. The results show the strong ability of machine learning methods to model complex physical phenomena with good accuracy.
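The ROC AUC quoted above equals the Mann-Whitney statistic, i.e. the probability that a randomly chosen slippery landing receives a higher score than a randomly chosen non-slippery one. The sketch below computes it directly on synthetic scores; the data are invented for illustration and unrelated to the actual flight data:

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC AUC via the Mann-Whitney statistic (ties counted as half)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(2)
# synthetic scores: slippery landings (label 1) tend to score higher
y = np.concatenate([np.ones(500, dtype=int), np.zeros(500, dtype=int)])
s = np.concatenate([rng.normal(1.5, 1.0, 500), rng.normal(0.0, 1.0, 500)])
auc = roc_auc(y, s)
```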
-
Huseby, Arne; Kalinowska, Martyna & Abrahamsen, Tobias
(2020).
Birnbaum criticality and importance measures for multistate systems with repairable components.
Probability in the engineering and informational sciences (Print).
ISSN 0269-9648.
36(1),
p. 66–86.
doi:
10.1017/S0269964820000340.
Summary:
We suggest four new measures of importance for repairable multistate systems based on the classical Birnbaum measure. Periodic component life cycles and general semi-Markov processes are considered. Similar to the Birnbaum measure, the proposed measures are generic in the sense that they only depend on the probabilistic properties of the components and the system structure. The multistate system model encodes physical properties of the components and the system directly into the structure function. As a result, calculating importance is easy, especially in the asymptotic case. Moreover, the proposed measures are composite measures, combining importance for all component states into a unified quantity. This simplifies ranking of the components with respect to importance. The proposed measures can be characterized with respect to two features: forward-looking versus backward-looking and with respect to how criticality is measured. Forward-looking importance measures focus on the next component states, while backward-looking importance measures focus on the previous component states. Two approaches to measuring criticality are considered: probability of criticality versus expected impact. Examples show that the different importance measures may result in unequal rankings.
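For context, the classical binary Birnbaum measure that the paper generalizes is I_B(i) = h(1_i, p) − h(0_i, p), where h is the system reliability function. A brute-force sketch on a hypothetical three-component series-parallel system (the structure and the component reliabilities are assumptions; this is the binary baseline, not the paper's multistate measures):

```python
from itertools import product

def structure(x):
    # hypothetical system: component 0 in series with parallel(1, 2)
    return x[0] * (1 - (1 - x[1]) * (1 - x[2]))

def reliability(p):
    """System reliability h(p) by summing over all component state vectors."""
    r = 0.0
    for x in product([0, 1], repeat=len(p)):
        w = 1.0
        for xi, pi in zip(x, p):
            w *= pi if xi else 1.0 - pi
        r += w * structure(x)
    return r

def birnbaum(p, i):
    """I_B(i) = h(p | p_i = 1) - h(p | p_i = 0)."""
    hi, lo = list(p), list(p)
    hi[i], lo[i] = 1.0, 0.0
    return reliability(hi) - reliability(lo)

p = [0.9, 0.8, 0.7]
ib = [birnbaum(p, i) for i in range(3)]   # the series component dominates
```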
-
Huseby, Arne & Christensen, Daniel
(2020).
Optimal reinsurance contracts in the multivariate case.
In Baraldi, Piero; Di Maio, Francesco P. & Zio, Enrico (Ed.),
e-proceedings of the 30th European Safety and Reliability Conference and 15th Probabilistic Safety Assessment and Management Conference (ESREL2020 PSAM15).
Research Publishing Services.
ISBN 978-981-14-8593-0.
p. 465–472.
Summary:
An insurance contract implies that risk is ceded from ordinary policy holders to companies. Companies do the same thing between themselves as well. The rationale could be the same; i.e., that a financially weaker agent is passing risk to a stronger one. In reality even the largest companies do this to diversify risk, and financially the cedent may be as strong as the reinsurer. The problem of determining reinsurance contracts which are optimal with respect to some reasonable criterion has been studied extensively within actuarial science. Different contract types are considered, such as stop-loss contracts, where the reinsurance company covers risk above a certain level, and insurance layer contracts, where the reinsurance company covers risk within an interval. The contracts are then optimized with respect to some risk measure, such as value-at-risk or conditional tail expectation. In the present paper we consider the problem of minimizing value-at-risk in the case of multiple insurance layer contracts. Such contracts are known to be optimal in the univariate case, and the optimal contract is easily determined. In the multivariate case, however, finding an optimal set of contracts is not easy. By considering solutions where the risk is balanced between the contracts, we show how to find a solution using an efficient iterative Monte Carlo method. We also consider more general unbalanced solutions, for which a slightly more complex optimization method must be applied. The methods are illustrated by numerical examples.
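The building blocks of this optimization, the layer payoff and the empirical value-at-risk of the retained total, can be sketched as follows. This evaluates one balanced candidate on synthetic lognormal risks; the layer limits `a`, `b` and the level are assumptions, and no iteration toward the optimum is attempted:

```python
import numpy as np

def layer(x, a, b):
    """Insurance layer [a, b]: the reinsurer pays the part of x between a and b."""
    return np.clip(x - a, 0.0, b - a)

rng = np.random.default_rng(3)
n = 200_000
x1 = rng.lognormal(0.0, 1.0, n)
x2 = rng.lognormal(0.0, 1.0, n)

# balanced candidate: identical layers on identically distributed risks
a, b = 2.0, 6.0
retained = (x1 - layer(x1, a, b)) + (x2 - layer(x2, a, b))

level = 0.99
var_layer = np.quantile(retained, level)   # VaR of the retained total
var_none = np.quantile(x1 + x2, level)     # VaR with no reinsurance
```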
-
Dahl, Kristina Rognlien & Huseby, Arne
(2020).
Environmental contours and optimal design.
In Baraldi, Piero; Di Maio, Francesco P. & Zio, Enrico (Ed.),
e-proceedings of the 30th European Safety and Reliability Conference and 15th Probabilistic Safety Assessment and Management Conference (ESREL2020 PSAM15).
Research Publishing Services.
ISBN 978-981-14-8593-0.
p. 3233–3240.
-
Midtfjord, Alise Danielle & Huseby, Arne
(2020).
Estimating Runway Friction Using Flight Data.
In Baraldi, Piero; Di Maio, Francesco P. & Zio, Enrico (Ed.),
e-proceedings of the 30th European Safety and Reliability Conference and 15th Probabilistic Safety Assessment and Management Conference (ESREL2020 PSAM15).
Research Publishing Services.
ISBN 978-981-14-8593-0.
p. 9–16.
Summary:
During the winter season, contamination of runway surfaces with snow, ice, or slush causes potential economic and safety threats for the aviation industry. The presence of these materials reduces the available tire-pavement friction needed for retardation and directional control. Therefore, pilots operating on contaminated runways need accurate and timely information on the actual runway surface conditions. Avinor, the company that operates most civil airports in Norway, has developed an integrated runway information system, called IRIS, currently used at 16 Norwegian airports. The system uses a scenario approach to identify slippery conditions. In order to validate the scenario model, it is necessary to estimate runway friction. The present paper outlines how this can be done using flight data from the Quick Access Recorder (QAR) of Boeing 737-600/700/800 NG airplanes. Data such as longitudinal acceleration, airspeed, ground speed, flap settings, engine speed and brake pressures are sampled at least once per second during landings. The paper discusses some of the challenges with this approach. In particular, issues related to calibration of data are considered, and two different regression methods are compared.
-
Wang, Yinzhi; Hobæk Haff, Ingrid & Huseby, Arne
(2020).
Modelling extreme claims via composite models and threshold selection methods.
Insurance, Mathematics & Economics.
ISSN 0167-6687.
91,
p. 257–268.
doi:
10.1016/j.insmatheco.2020.02.009.
Summary:
The existence of large and extreme claims of a non-life insurance portfolio influences the ability of (re)insurers to estimate the reserve. The excess over-threshold method provides a way to capture and model the typical behaviour of insurance claim data. This paper discusses several composite models with commonly used bulk distributions, combined with a 2-parameter Pareto distribution above the threshold. We have explored how several threshold selection methods perform when estimating the reserve, as well as the effect of the choice of bulk distribution, with varying sample size and tail properties. To investigate this, a simulation study has been performed. Our study shows that when data are sufficient, the empirical rule has the overall best performance in terms of the quality of the reserve estimate. The second best are either the square root rule or the exponentiality test. The latter works better when the right tail of the data is extreme. As the sample size becomes small, the best performance is obtained with simultaneous estimation. Further, the influence of the choice of bulk distribution seems to be rather large, especially when the distribution is heavy-tailed. Moreover, it shows that the empirical estimate of p_b, the probability that a claim is below the threshold, is more robust than the theoretical one.
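A composite claim model of the kind discussed can be sampled as in the sketch below: a lognormal bulk below the threshold and a 2-parameter Pareto above it. All parameter values, including the threshold and p_b, are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_composite(n, threshold=10.0, p_b=0.95, mu=1.0, sigma=1.2, alpha=1.8):
    """Claims: lognormal bulk below `threshold` (prob. p_b), Pareto tail above."""
    below = rng.random(n) < p_b
    out = np.empty(n)
    # bulk: lognormal conditioned to lie below the threshold (rejection sampling)
    k = int(below.sum())
    bulk = rng.lognormal(mu, sigma, 3 * k + 10)
    bulk = bulk[bulk < threshold][:k]
    while len(bulk) < k:
        extra = rng.lognormal(mu, sigma, k)
        bulk = np.concatenate([bulk, extra[extra < threshold]])[:k]
    out[below] = bulk
    # tail: Pareto above the threshold via inverse-CDF sampling
    u = 1.0 - rng.random(n - k)            # u in (0, 1]
    out[~below] = threshold * u ** (-1.0 / alpha)
    return out

claims = sample_composite(50_000)
```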
-
Vanem, Erik & Huseby, Arne
(2019).
Environmental Contours for Safe Design of Ships and Other Marine Structures,
Book of Proceedings. 2nd International Conference on Structural Integrity for Offshore Energy Industry.
ASRANET.
ISBN 978-1-9996144-3-0.
p. 70–77.
-
Huseby, Arne; Vanem, Erik & Barbosa, Maria Hjelset
(2019).
Environmental contours for mixtures of distributions.
In Beer, Michael & Zio, Enrico (Ed.),
Proceedings of the 29th European Safety and Reliability Conference (ESREL), 22–26 September 2019, Hannover, Germany.
Research Publishing Services.
ISBN 978-981-11-2724-3.
p. 839–846.
doi:
10.3850/978-981-11-2724-3_0719-cd.
Summary:
Environmental contours are widely used as a basis for e.g., ship design, especially in early design phases. The traditional approach to such contours is based on the well-known Rosenblatt transformation. In the present paper we describe a numerical method that makes it possible to apply the inverse Rosenblatt transformation to mixtures of distributions. Due to the effects of this transformation, the probabilistic properties of the resulting environmental contour can be distorted. Based on a precise definition of the concept of exceedance probability, valid for all types of environmental contours, we show how to evaluate a given contour and adjust it so that it gets the desired properties. The methods are illustrated by a numerical example.
-
Agrell, Christian; Eldevik, Simen; Hafver, Andreas; Pedersen, Frank Børre; Stensrud, Erik & Huseby, Arne
(2018).
Pitfalls of machine learning for tail events in high risk environments.
In Haugen, Stein; Barros, Anne; van Gulijk, Coen; Kongsvik, Trond & Vinnem, Jan Erik (Ed.),
Safety and Reliability – Safe Societies in a Changing World. Proceedings of ESREL 2018, June 17-21, 2018, Trondheim, Norway.
CRC Press.
ISBN 9781351174657.
p. 3043–3051.
Summary:
Most of today's Machine Learning (ML) methods and implementations are based on correlations, in the sense of a statistical relationship between a set of inputs and the output(s) under investigation. The relationship might be obscure to the human mind, but through the use of ML, mathematics and statistics make it seemingly apparent. However, basing safety critical decisions on such methods suffers from the same pitfalls as basing them on any other correlation metric that disregards causality. Causality is key to ensuring that applied mitigation tactics will actually affect the outcome in the desired way. This paper reviews the current situation and challenges of applying ML in high risk environments. It further outlines how phenomenological knowledge, together with an uncertainty-based risk perspective, can be incorporated to alleviate the missing causality considerations in current practice.
-
Vanem, Erik & Huseby, Arne
(2018).
Seasonal and Omni-Seasonal Environmental Contours for Extreme Sea States,
Proceedings of the 7th International Maritime Conference on Design for Safety, DfS 2018.
Osaka University.
ISBN 978-4-908678-12-7.
p. 148–159.
-
Skutlaberg, Kristina; Huseby, Arne & Natvig, Bent
(2018).
Partial monitoring of multistate systems.
Reliability Engineering & System Safety.
ISSN 0951-8320.
180,
p. 434–452.
doi:
10.1016/j.ress.2018.08.006.
Summary:
For large multicomponent systems it is typically too costly to monitor the entire system constantly. In the present paper we consider a case where a component is unobserved in a time interval [0, T]. The time T is a stochastic variable with a distribution which depends on the structure of the system and the lifetime distribution of the other components. Different systems will result in different distributions of T. The main focus is on how the unobserved period of time affects what we learn about the unobserved component during this period. We analyse this by considering one single component in three different cases. In the first case we consider both T as well as the state of the unobserved component at time T as given. In the second case we allow the state of the unobserved component at time T to be stochastic, while in the third case both T and the state are treated as stochastic variables. In all cases we study the problem using preposterior analysis. That is, we investigate how much information we can expect to get by the end of the time interval [0, T]. The methodology is also illustrated on a more complex example.
-
Huseby, Arne & Rabbe, Marit
(2018).
Optimizing warnings for slippery runways based on weather data.
In Haugen, Stein; Barros, Anne; van Gulijk, Coen; Kongsvik, Trond & Vinnem, Jan Erik (Ed.),
Safety and Reliability – Safe Societies in a Changing World. Proceedings of ESREL 2018, June 17-21, 2018, Trondheim, Norway.
CRC Press.
ISBN 9781351174657.
p. 2789–2796.
doi:
10.1201/9781351174664-350.
Summary:
Slippery runways represent a significant risk to aircraft, especially during the winter season. In order to apply the appropriate braking action, pilots need reliable information about the runway conditions. Unfortunately, the accuracy of runway reports can sometimes be unsatisfactory. In order to obtain more precise and up-to-date information about the current conditions, a warning system based on various types of weather data was suggested by Huseby and Rabbe (2012); see also Huseby and Rabbe (2008) and Huseby et al. (2010). The system is based on a set of scenarios known to cause slippery conditions. By monitoring meteorological parameters like air and ground temperature, humidity, visibility and precipitation, and comparing these to the given scenarios, the system can issue warnings to the ground personnel. This system is currently being used at 16 Norwegian airports. In the present paper this warning system is reviewed. Ideally, the warning system should issue warnings whenever the estimated runway conditions are medium or worse. At the same time, the system should not issue warnings when the runway conditions are good. Thus, there are two types of errors we need to take into consideration. Type 1 errors occur when the system does not issue a warning even though the conditions are medium or worse, while Type 2 errors occur if a warning is issued when the conditions are good. When designing the system, we need to find the optimal balance between these types of errors, taking into account that a Type 1 error is, to a certain degree, considered to be worse than a Type 2 error. The paper describes how the system can be optimized using a combination of weather data and flight data.
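The balance between the two error types amounts to a one-dimensional threshold optimization, which can be sketched on synthetic data. The score distributions and the 5:1 cost ratio below are invented for illustration; the actual IRIS-style system is scenario-based rather than score-based:

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic warning scores: higher score = more evidence of slippery conditions
slippery = rng.normal(1.0, 1.0, 5000)    # conditions medium or worse
good = rng.normal(-1.0, 1.0, 5000)       # good conditions

w1, w2 = 5.0, 1.0    # a Type 1 error is assumed 5 times worse than a Type 2
thresholds = np.linspace(-3.0, 3.0, 601)
# expected cost: w1 * P(no warning | slippery) + w2 * P(warning | good)
costs = [w1 * (slippery < t).mean() + w2 * (good >= t).mean() for t in thresholds]
t_opt = thresholds[int(np.argmin(costs))]
```

Because missed warnings are penalized more heavily, the optimal threshold lands below the symmetric midpoint of the two score distributions, i.e. the system warns more often.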
-
Dahl, Kristina Rognlien & Huseby, Arne
(2018).
Buffered environmental contours.
In Haugen, Stein; Barros, Anne; van Gulijk, Coen; Kongsvik, Trond & Vinnem, Jan Erik (Ed.),
Safety and Reliability – Safe Societies in a Changing World. Proceedings of ESREL 2018, June 17-21, 2018, Trondheim, Norway.
CRC Press.
ISBN 9781351174657.
p. 2285–2292.
Summary:
The main idea of this paper is to use the notion of buffered failure probability from probabilistic structural design, first introduced by Rockafellar and Royset (2010), to introduce buffered environmental contours. Classical environmental contours are used in structural design in order to obtain upper bounds on the failure probabilities of a large class of designs. The purpose of buffered failure probabilities is the same. However, in contrast to classical environmental contours, this new concept does not just take into account failure vs. functioning, but also to which extent the system is failing. For example, this is relevant when considering the risk of flooding: we are not just interested in knowing whether a river has flooded. The damage caused by the flooding depends greatly on how far the water has risen above the standard level.
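The buffered failure probability can be estimated from a sample via the minimization representation bPOE_t(X) = min_{a≥0} E[(a(X − t) + 1)^+] (a form due to Mafusalov and Uryasev). The Gaussian sample, threshold and search grid below are assumptions made for illustration:

```python
import numpy as np

def bpoe(sample, t):
    """Buffered probability of exceedance, estimated on a sample:
    bPOE_t(X) = min_{a >= 0} E[(a * (X - t) + 1)^+]."""
    grid = np.linspace(0.0, 20.0, 1001)    # assumed search grid for a
    vals = [np.maximum(a * (sample - t) + 1.0, 0.0).mean() for a in grid]
    return min(vals)

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 100_000)
t = 2.0
p_fail = (x > t).mean()     # ordinary failure probability
p_buff = bpoe(x, t)         # buffered failure probability, always >= p_fail
```

The buffered probability dominates the ordinary one because it also counts outcomes close to, but below, the failure threshold, which is exactly the conservatism the paper exploits.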
-
Vanem, Erik & Huseby, Arne
(2018).
Combined Long-Term and Short-Term Description of Extreme Ocean Wave Conditions by 3-Dimensional Environmental Contours,
Proceedings of the Twenty-eighth (2018) International Ocean and Polar Engineering Conference, ISOPE 2018.
International Society of Offshore & Polar Engineers.
ISBN 978-1-880653-87-6.
p. 470–477.
-
Huseby, Arne; Vanem, Erik & Eskeland, Karoline
(2017).
Evaluating properties of environmental contours.
In Cepin, Marko & Bris, Radim (Ed.),
Safety & Reliability, Theory and Applications.
CRC Press.
ISBN 978-1138629370.
p. 2101–2109.
doi:
10.1201/9781315210469-265.
Summary:
Environmental contours are widely used as a basis for e.g., ship design. The traditional approach to environmental contours is based on the well-known Rosenblatt transformation. However, due to the effects of this transformation, the probabilistic properties of the resulting environmental contour can be difficult to interpret. An alternative approach to environmental contours uses Monte Carlo simulations on the joint environmental model, and thus obtains a contour without the need for the Rosenblatt transformation. This contour has well-defined probabilistic properties, but may sometimes be overly conservative in certain areas. In this paper we give a precise definition of the concept of exceedance probability which is valid for all types of environmental contours. Moreover, we show how to estimate the exceedance probability of a given environmental contour, and use this to compare different approaches to contour construction. The methods are illustrated by numerical examples based on real-life data.
-
Huseby, Arne
(2017).
Optimizing energy production systems under uncertainty.
In Walls, Lesley; Revie, Matthew & Bedford, Tim (Ed.),
Risk, Reliability and Safety: Innovating Theory and Practice : Proceedings of ESREL 2016 (Glasgow, Scotland, 25-29 September 2016).
CRC Press.
ISBN 9781138029972.
p. 1619–1626.
doi:
10.1201/9781315374987-243.
Summary:
Electricity infrastructure has become a critical element of modern industrial society. In order to model and analyse this infrastructure, identify weaknesses, and optimize performance, one needs to take into account its distributed nature. Rather than forming a single system, energy production and distribution systems consist of many more or less autonomous subsystems working together and trading with each other. Analytical models could perhaps be used to describe a single subsystem. However, the complexity related to the interactions between the subsystems soon becomes unmanageable. Even establishing a simulation model for such phenomena is a non-trivial task, especially if the model is required to be easily scalable. In this paper we consider the problem of optimizing a simplified energy system with respect to supply stability. This is done using both deterministic methods and Monte Carlo methods. The system is broken into smaller units. These units may trade energy between them in order to maintain a stable supply covering the demand. An important element in the model is the ability to store energy within a unit. For some units, e.g., hydroelectric power plants, the energy can easily be stored in the form of a water reservoir. For other units, like wind power plants, storing energy is usually not feasible. By using an object oriented software framework, we can compare different production units, and study how these can interact in order to facilitate a stable total production.
-
Lindqvist, Bo Henry; Samaniego, Francisco J. & Huseby, Arne
(2016).
On the equivalence of systems of different sizes, with applications to system comparisons.
Advances in Applied Probability.
ISSN 0001-8678.
48(2),
p. 332–348.
doi:
10.1017/apr.2016.3.
-
Huseby, Arne & Thomsen, Jan
(2015).
Quantifying operational risk exposure by combining incident data and subjective risk assessments.
In Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico & Kröder, Wolfgang (Ed.),
Safety and Reliability of Complex Engineered Systems.
CRC Press.
ISBN 9781138028791.
p. 443–451.
doi:
10.1201/b19094-62.
Summary:
Quantifying operational risk exposure typically involves gathering information from several sources, including historical data as well as subjective assessments. Using historical data one can estimate both an incident frequency distribution and an incident consequence distribution. Based on these two distributions a simulation model can be established. However, by limiting the focus to data related to incidents which may reappear in the future, one is often left with a relatively short incident history. In order to improve the risk quantification, it is often necessary to include subjective risk assessments as well. In the present paper we propose three models for how to combine these two sources of information. In the first model we assume that the two sources are completely disjoint, while in the second model the two sources are assumed to overlap completely. The third model represents an intermediate situation where the two sources are partially overlapping. This third model contains the first two models as limiting cases. The models are illustrated and compared in an extensive numerical example.
-
Huseby, Arne; Vanem, Erik & Natvig, Bent
(2015).
Alternative environmental contours for structural reliability analysis.
Structural Safety.
ISSN 0167-4730.
54,
p. 32–45.
doi:
10.1016/j.strusafe.2014.12.003.
Summary:
This paper presents alternative methods for constructing environmental contours for probabilistic structural reliability analysis of structures exposed to environmental forces such as wind and waves. For such structures, it is important to determine the environmental loads to apply in structural reliability calculations and structural design. The environmental contour concept is an effective, risk-based approach to establishing such design conditions. Traditionally, such contours are established by way of a Rosenblatt transformation from the environmental parameter space to a standard normal space, which introduces uncertainties and may lead to biased results. The proposed alternative approach, however, eliminates the need for such transformations and establishes environmental contours based on direct Monte Carlo sampling from the joint distribution of the relevant environmental parameters. In this paper, three alternative implementations of the proposed generic approach are outlined.
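The direct-sampling idea can be sketched as follows: draw a large sample from a joint environmental model, take the (1 − p_e)-quantile of the projection onto each direction as a supporting half-plane, and intersect neighbouring supporting lines to obtain contour vertices. The two-variable model, target probability and discretization below are assumptions, not the distributions used in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
pe = 1e-2    # target exceedance probability (illustrative)

# stand-in joint environmental model for two correlated variables
z1, z2 = rng.normal(size=n), rng.normal(size=n)
hs = np.exp(0.5 + 0.3 * z1)        # "significant wave height"-like variable
tp = 6.0 + 1.5 * z1 + 1.0 * z2     # "spectral period"-like variable
x = np.column_stack([hs, tp])

# one supporting half-plane per direction: C(theta) is the (1 - pe)-quantile
# of the projection of the sample onto the unit vector u(theta)
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
u = np.column_stack([np.cos(angles), np.sin(angles)])
c = np.array([np.quantile(x @ ui, 1.0 - pe) for ui in u])

# contour vertices: intersections of neighbouring supporting lines
verts = np.array([np.linalg.solve(np.array([u[i], u[(i + 1) % 360]]),
                                  [c[i], c[(i + 1) % 360]])
                  for i in range(360)])
```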
-
Huseby, Arne; Vanem, Erik & Natvig, Bent
(2015).
A new Monte Carlo method for environmental contour estimation.
In Nowakowski, Tomasz; Mlynczak, Marek; Jodejko-Pietruczuk, Anna & Werbinska-Wojciechowska, Sylwia (Ed.),
Safety and Reliability : Methodology and Applications: Proceedings of the European Safety and Reliability Conference, ESREL 2014, Poland, 14-18 September 2014.
Taylor & Francis.
ISBN 978-1-138-02681-0.
p. 2091–2098.
doi:
10.1201/b17399-286.
Show summary
Environmental contour estimation is an efficient and widely used method for identifying extreme conditions as a basis for e.g., ship design. Monte Carlo simulation is a flexible method for estimating such contours. A main challenge with this approach, however, is that extreme conditions typically correspond to events with low probabilities. Thus, in order to obtain satisfactory estimates, large numbers of simulations are needed. While these simulations can be carried out very fast, the analysis of the resulting data can be very time-consuming. In the present paper we propose a new Monte Carlo method where only the extreme simulation results are stored and analyzed. This method utilizes the fact that an unbiased estimate of an environmental contour does not depend on the exact values of the non-extreme results. It is sufficient to know the number of such results. Probabilistic structural reliability analysis is performed to ensure that mechanical structures can withstand certain design loads. Obtaining precise environmental contours has become an important part of this analysis. The proposed method improves precision and speeds up calculations.
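The storage-saving idea described in this abstract can be sketched roughly as follows. Everything below — the bivariate lognormal environmental model, the parameter values, and the trick of keeping only the k largest projections per direction — is an invented illustration, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint environmental model (invented for this sketch):
# correlated significant wave height / wave period from a bivariate lognormal.
n = 100_000
z = rng.multivariate_normal([0.7, 2.0], [[0.12, 0.05], [0.05, 0.08]], size=n)
samples = np.exp(z)                          # shape (n, 2)

p_e = 1e-3                                    # target exceedance probability
k = int(np.ceil(p_e * n)) + 1                 # number of extreme results to keep

angles = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
c = np.empty(len(angles))
for i, a in enumerate(angles):
    u = np.array([np.cos(a), np.sin(a)])
    proj = samples @ u
    # Only the k largest projections are needed to estimate the
    # (1 - p_e) quantile, so the non-extreme results need not be stored.
    extremes = np.partition(proj, n - k)[n - k:]
    c[i] = extremes.min()                     # approx. (1 - p_e) quantile of u.x

# The contour is the boundary of the intersection of the half-planes
# {x : u.x <= c(u)} over all directions u.
```

For each direction u, the (1 − p_e) quantile of the projected sample defines a supporting half-plane; since that quantile depends only on the k largest projections, a streaming implementation can discard the non-extreme simulation results and keep only their count.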
-
Huseby, Arne & Breivik, Olav Nikolai
(2014).
Optimal load sharing in a binary multicomponent system where the components have constant or increasing failure rates.
In Steenbergen, Raphaël; van Gelder, P.H.A.J.M.; Miraglia, S. & Vrouwenvelder, A.C.W.M. (Ed.),
Safety, reliability and risk analysis : beyond the horizon : proceedings of the European Safety and Reliability Conference, ESREL 2013, Amsterdam, the Netherlands, 29 September-2 October 2013.
CRC Press.
ISBN 978-1-138-00123-7.
p. 2865–2872.
doi:
10.1201/b15938-433.
Show summary
In the present paper we consider a system consisting of n components. The system is exposed to the load of supplying a certain amount of utility, e.g., electrical power. The load on the system is distributed among the components. For simplicity we assume that the system demand is constant over time. When functioning, each component is capable of handling a certain amount of load, which we refer to as the component's load capacity. The load capacity of a component is assumed to be constant throughout its lifetime. The main objective of the present paper is to develop methods for optimal load sharing among the components subject to the constraints imposed by the load capacities and the demand on the system. A load sharing strategy is optimal if it maximizes the expected lifetime of the system. In the paper we show how to solve the problem in the case where the components have constant failure rates. We also consider the case where the components have increasing failure rates, and show how to solve this in some special cases.
-
Huseby, Arne; Vanem, Erik & Natvig, Bent
(2013).
A new approach to environmental contours for ocean engineering applications based on direct Monte Carlo simulations.
Ocean Engineering.
ISSN 0029-8018.
60,
p. 124–135.
doi:
10.1016/j.oceaneng.2012.12.034.
-
Huseby, Arne; Vanem, Erik & Natvig, Bent
(2013).
A New Method for Environmental Contours in Marine Structural Design.
In ASME (Eds.),
Proceedings of ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering (OMAE 2013). Volume 2A: Structures, Safety and Reliability.
The American Society of Mechanical Engineers (ASME).
ISBN 978-0-7918-5532-4.
doi:
10.1115/omae2013-10053.
-
Huseby, Arne & Natvig, Bent
(2013).
Discrete event simulation methods applied to advanced importance measures of repairable components in multistate network flow systems.
Reliability Engineering & System Safety.
ISSN 0951-8320.
119,
p. 186–198.
doi:
10.1016/j.ress.2013.05.025.
Show summary
Discrete event models are frequently used in simulation studies to model and analyze pure jump processes. A discrete event model can be viewed as a system consisting of a collection of stochastic processes, where the states of the individual processes change as results of various kinds of events occurring at random points of time. We always assume that each event only affects one of the processes. Between these events the states of the processes are considered to be constant. In the present paper we use discrete event simulation in order to analyze a multistate network flow system of repairable components. In order to study how the different components contribute to the system, it is necessary to describe the often complicated interaction between component processes and processes at the system level. While analytical considerations may throw some light on this, a simulation study often allows the analyst to explore more details. By producing stable curve estimates for the development of the various processes, one gets a much better insight into how such systems develop over time. These methods are particularly useful in the study of advanced importance measures of repairable components. Such measures can be very complicated, and thus impossible to calculate analytically. By using discrete event simulations, however, this can be done in a very natural and intuitive way. In particular, significant differences between the Barlow-Proschan measure and the Natvig measure in multistate network flow systems can be explored.
-
Vanem, Erik; Huseby, Arne & Natvig, Bent
(2012).
A Stochastic Model in Space and Time for Monthly Maximum Significant Wave Height,
Geostatistics Oslo 2012.
Springer.
ISBN 978-94-007-4152-2.
p. 505–517.
doi:
10.1007/978-94-007-4153-9_41.
-
Vanem, Erik; Huseby, Arne & Natvig, Bent
(2012).
Modelling ocean wave climate with a Bayesian hierarchical space-time model and a log-transform of the data.
Ocean Dynamics.
ISSN 1616-7341.
62(3),
p. 355–375.
doi:
10.1007/s10236-011-0505-5.
-
Huseby, Arne & Moratchevski, Nikita
(2012).
Sequential optimization of oil production under uncertainty.
In Bérenguer, Christophe; Grall, Antoine & Guedes Soares, Carlos (Ed.),
Advances in Safety, Reliability and Risk Management - proceedings of the European Safety and Reliability Conference, ESREL 2011.
CRC Press.
ISBN 978-0-415-68379-1.
p. 233–239.
doi:
10.1201/b11433-36.
Show summary
In the present paper we study how to optimize oil production with respect to revenue in a situation where the production rate is uncertain. The oil production in a given period is described in terms of a difference equation, where this equation contains several uncertain parameters. The uncertainty about these parameters is expressed in terms of a suitable prior distribution. As the production develops, more information about the production parameters is gained. Hence, the uncertainty distributions need to be updated. However, the information comes in the form of inequalities and equalities, which makes it very difficult to obtain exact analytical expressions for the posteriors. Still it is possible to estimate the distributions using a combination of rejection sampling and the well-known Metropolis-Hastings algorithm. Armed with these techniques it is possible to solve the optimization problem using stochastic programming. The methods will be demonstrated on a few examples.
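The rejection-sampling step described in this abstract can be illustrated with a toy model. The exponential-decline equation, the lognormal prior, and all numbers below are assumptions made for the sketch; the paper additionally combines this with the Metropolis-Hastings algorithm, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy model (not the paper's): exponential decline with uncertain
# decline parameter q, i.e. production rate r0 * exp(-q * s).
def cum_production(q, t=12.0, r0=100.0):
    return r0 / q * (1.0 - np.exp(-q * t))

# The "data" is inequality information: observed cumulative production
# over the first year lay in [lo, hi], so the posterior has no closed form.
lo, hi = 900.0, 1100.0
prior = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=100_000)

# Rejection sampling: keep only the prior draws consistent with the data.
vals = cum_production(prior)
posterior = prior[(vals >= lo) & (vals <= hi)]

post_mean = float(posterior.mean())          # posterior summary
```

Every retained draw satisfies the observed inequalities by construction, so the accepted sample is (up to Monte Carlo error) a sample from the constrained posterior.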
-
Huseby, Arne & Rabbe, Marit
(2012).
A scenario based model for assessing runway conditions using weather data.
In PSAM&ESREL (Eds.),
11th International Probabilistic Safety Assessment and Management Conference and the Annual European Safety and Reliability Conference 2012, 25-29 June 2012, Helsinki, Finland.
Curran Associates, Inc.
ISBN 978-1-62276-436-5.
p. 5092–5101.
Show summary
Slippery runways represent a significant risk to aircraft, especially during the winter season. Recent accidents, such as the Southwest Airlines jet skidding off a runway at Chicago Midway Airport in December 2005, as well as the similar accident with the Delta Connection flight at the Cleveland Hopkins International Airport in Ohio in February 2007, show that this is indeed a serious problem. In order to apply the appropriate braking action, the pilots need reliable information about the runway conditions. Unfortunately the accuracy of runway reports can sometimes be unsatisfactory. In order to obtain more precise information about the current conditions, we suggest using an assessment model based on various types of weather data. The model defines a set of scenarios known to cause slippery conditions. By monitoring meteorological parameters like air and ground temperature, humidity, visibility and precipitation, the model enables us to detect scenarios and issue warnings to the ground personnel. An information system using this model is currently implemented at 14 Norwegian airports. In the present paper we present this model. Moreover, we show how the model can be evaluated using data from a large number of flights. A key concept in this evaluation is the notion of friction limited landings. Unless the pilot challenges the runway friction during the landing, the flight data cannot be used to measure the available friction. Thus, typically only a small subset of the available flight data is actually used in the evaluation.
-
Huseby, Arne & Sødal, Karen Marie
(2012).
Sequential optimization of oil production from multiple reservoirs under uncertainty.
In PSAM&ESREL (Eds.),
11th International Probabilistic Safety Assessment and Management Conference and the Annual European Safety and Reliability Conference 2012, 25-29 June 2012, Helsinki, Finland.
Curran Associates, Inc.
ISBN 978-1-62276-436-5.
p. 633–642.
Show summary
In the present paper we study how to optimize oil production from several reservoirs sharing a common processing facility under uncertainty. The potential oil production from a reservoir in a given period is described in terms of a difference equation containing several uncertain parameters. Using Lagrange optimization a step by step production strategy can be determined. This strategy distributes the available processing capacity so that the expected total production from each period is maximized. This is done by ensuring that all the reservoirs have the same probability of producing according to the plan. The resulting strategy, however, is typically not optimal globally. In order to improve the global performance of the strategy, a different objective function is introduced. As the production develops, more information about the production parameters is gained. Hence, the uncertainty distributions need to be updated. This is done using a combination of rejection sampling and the Metropolis-Hastings algorithm. This updating is taken into account in the optimization procedure. The methods are illustrated by considering a specific example.
-
Huseby, Arne & Breivik, Olav Nikolai
(2011).
Optimal load sharing in a binary multicomponent system,
Proceedings of the 19th Advances in Risk and Reliability Technology Symposium.
University of Nottingham.
ISBN 9780904947656.
p. 381–391.
Show summary
In the present paper we consider a system consisting of n components that is exposed to the load of supplying a certain amount of utility, e.g., electrical power. The load on the system is distributed among the components. When functioning, each component is capable of handling a certain amount of load. The load capacity of a component is assumed to be constant throughout its lifetime. The main objective of the present paper is to develop methods for optimal load sharing among the components subject to the constraints imposed by the load capacities and the demand on the system. In the paper we show how to solve the problem in several special cases, and outline a greedy algorithm for handling the general case.
-
Huseby, Arne
(2011).
Oriented matroid systems.
Discrete Applied Mathematics.
ISSN 0166-218X.
159(1),
p. 31–45.
doi:
10.1016/j.dam.2010.09.008.
Show summary
The domination invariant has played an important part in reliability theory. While most of the work in this field has been restricted to various types of network system models, many of the results can be generalized to much wider families of systems associated with matroids. Previous papers have explored the relation between undirected network systems and matroids. In this paper the main focus is on directed network systems and their relation to oriented matroids. An oriented matroid is a special type of matroid where the circuits are signed sets. Using these signed sets one can e.g., obtain a set theoretic representation of the direction of the edges of a directed network system. Classical results for directed network systems include the fact that the signed domination is either +1 or −1 if the network is acyclic, and zero otherwise. It turns out that these results can be generalized to systems derived from oriented matroids. Several classes of systems for which the generalized results hold will be discussed. These include oriented versions of k-out-of-n systems and a certain class of systems associated with matrices.
-
Huseby, Arne; Natvig, Bent; Gåsemyr, Jørund Inge; Skutlaberg, Kristina & Isaksen, Stefan
(2010).
Advanced discrete event simulation methods with application to importance measure estimation in reliability.
In Goti, Aitor (Eds.),
Discrete Event Simulation.
IntechOpen.
ISBN 978-953-307-115-2.
p. 205–222.
Show summary
In the present chapter we use discrete event simulation in order to analyze a binary monotone system of repairable components. Asymptotic statistical properties of such a system, e.g., the asymptotic system availability and component criticality, can easily be estimated by running a single discrete event simulation on the system over a sufficiently long time horizon, or by working directly on the stationary component availabilities. Sometimes, however, one needs to estimate how the statistical properties of the system evolve over time. In such cases it is necessary to run many simulations to obtain a stable curve estimate. At the same time one needs to store much more information from each simulation. A crude approach to this problem is to sample the system state at fixed points of time, and then use the mean values of the states at these points as estimates of the curve. Using a sufficiently high sampling rate a satisfactory estimate of the curve can be obtained. Still, all information about the process between the sampling points is thrown away. To handle this issue, we propose an alternative sampling procedure where we utilize process data between the sampling points as well. This simulation method is particularly useful when estimating various kinds of component importance measures for repairable systems. Such measures can often be expressed as weighted integrals of the time-dependent Birnbaum measure of importance. By using the proposed simulation methods, stable estimates of the Birnbaum measure as a function of time are obtained. Combined with the appropriate weight function the importance measures of interest can be estimated.
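The crude fixed-point sampling approach that this chapter takes as its starting point can be sketched for a single repairable component with exponential up- and downtimes. The model, parameter values, and function name below are invented for the illustration and are not taken from the chapter:

```python
import random

# Minimal sketch: estimate the time-dependent availability A(t) of one
# repairable component by simulating many histories and sampling the
# component state at fixed grid points (the "crude" approach).
def availability_curve(mtbf, mttr, horizon, grid, runs, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(grid)
    for _ in range(runs):
        t, up = 0.0, True
        events = []                      # (start time, state) for one history
        while t < horizon:
            events.append((t, up))
            t += rng.expovariate(1.0 / (mtbf if up else mttr))
            up = not up
        # state at grid point g = state of the last event starting before g
        j = 0
        for i, g in enumerate(grid):     # grid assumed sorted ascending
            while j + 1 < len(events) and events[j + 1][0] <= g:
                j += 1
            counts[i] += events[j][1]    # True counts as 1
    return [c / runs for c in counts]

grid = [1.0, 5.0, 20.0, 50.0]
curve = availability_curve(mtbf=10.0, mttr=2.0, horizon=60.0, grid=grid, runs=2000)
```

With exponential up- and downtimes the curve should start near 1 and settle toward the stationary availability mtbf / (mtbf + mttr); the chapter's proposal is to improve on this estimator by also using process data between the sampling points.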
-
Huseby, Arne & Natvig, Bent
(2010).
Advanced discrete simulation methods applied to repairable multistate systems.
In Bris, Radim; Martorell, Sebastián & Guedes Soares, C. (Ed.),
Reliability, Risk and Safety. Theory and Applications.
CRC Press.
ISBN 978-0-415-55509-8.
p. 659–666.
-
Huseby, Arne & Haavardsson, Nils Fridthjov
(2010).
Multi-reservoir production optimization under uncertainty.
In Bris, Radim; Martorell, Sebastián & Guedes Soares, C. (Ed.),
Reliability, Risk and Safety. Theory and Applications.
CRC Press.
ISBN 978-0-415-55509-8.
p. 407–413.
Show summary
Oil and gas production from several reservoirs is often processed at a single processing facility. Due to limitations in the processing capacity, this implies that the production rates from the individual reservoirs have to be reduced. That is, for each reservoir the production rate is scaled down by a suitable choke factor between zero and one, chosen so that the total production does not exceed the processing capacity. Recent studies of production optimization include (Horne, 2002), (Merabet & Bellah, 2002) and (Neiro & Pinto, 2004). (Huseby & Haavardsson, 2009) introduced the concept of a production strategy, a vector valued function defined for all points of time t ≥ 0 representing the choke factors applied to the reservoirs at time t. As long as the total potential production rate is greater than the processing capacity, the choke factors should be chosen so that the processing capacity is fully utilized. When the production reaches a state where this is not possible, the production should be left unchoked. A production strategy satisfying these constraints is said to be admissible. (Huseby & Haavardsson, 2009) developed a general framework for optimizing production strategies with respect to various types of objective functions. The same problem was considered in (Haavardsson et al., 2008), where a parametric class of production strategies was introduced. (Haavardsson et al., 2008) also showed how to find an optimal strategy within this class given that all reservoir parameters were known. In the present paper we consider the problem of optimizing the production strategy when the reservoir parameters are uncertain. The uncertainty is represented using the stochastic models introduced in (Haavardsson & Huseby, 2007). In real life, reservoir uncertainty typically changes over time. Thus, a production strategy considered to be optimal initially may turn out to be far from optimal as more knowledge about the reservoirs is gained.
Ideally, one would prefer production strategies which could be updated dynamically as new information is obtained. However, optimizing such dynamic production strategies is a very difficult problem. In the present paper we instead focus on finding production strategies which are robust with respect to variations in the reservoir properties.
-
Huseby, Arne; Klein-Paste, Alex & Bugge, Hans Jørgen
(2010).
Assessing airport runway conditions—A Bayesian approach.
In Ale, Ben; Papazoglou, Ioannis & Zio, Enrico (Ed.),
Reliability, risk and safety : back to the future.
CRC Press.
ISBN 978-0-415-60427-7.
p. 2024–2032.
-
Natvig, Bent; Huseby, Arne & Reistadbakk, Mads
(2010).
Measures of component importance in repairable multistate systems-a numerical study.
In Ale, Ben; Papazoglou, Ioannis & Zio, Enrico (Ed.),
Reliability, risk and safety : back to the future.
CRC Press.
ISBN 978-0-415-60427-7.
p. 677–685.
-
Haavardsson, Nils Fridthjov; Huseby, Arne; Pedersen, Frank; Lyngroth, Steinar; Xu, Jingzhen & Aasheim, Tore
(2010).
Hydrocarbon Production Optimization in Fields With Different Ownership and Commercial Interests.
SPE Reservoir Evaluation and Engineering.
ISSN 1094-6470.
13(1),
p. 95–104.
doi:
10.2118/121399-PA.
Show summary
A main field and satellite fields consist of several separate reservoirs with gas cap and/or oil rim. A processing facility on the main field receives and processes the oil, gas, and water from all the reservoirs. This facility is typically capable of processing only a limited amount of oil, gas, and water per unit of time. In order to satisfy these processing limitations, the production needs to be choked. The available capacity is shared among several field owners with different commercial interests. In this paper, we focus on how total oil and gas production from all the fields could be optimized. The satellite-field owners negotiate processing capacities on the main-field facility. This introduces additional processing-capacity constraints (booking constraints) for the owners of the main field. If the total wealth created by all owners represents the economic interests of the community, it is of interest to investigate whether the total wealth may be increased by lifting the booking constraints. If all reservoirs may be produced more optimally by removing the booking constraints, all owners may benefit from this when appropriate commercial arrangements are in place. We will compare two production strategies. The first production strategy optimizes locally, at distinct time intervals. At given intervals, the production is prioritized so that the maximum amount of oil is produced. In the second production strategy, a fixed weight is assigned to each reservoir. The purpose of the weights is to be able to prioritize some reservoirs before others. The weights are optimized from a life-cycle perspective. As an illustration, a case study based on real data is presented. For the examples considered, it is beneficial to lift the booking constraints because all of the reservoirs combined can be produced more efficiently when this is done.
-
Natvig, Bent; Eide, Kristina A.; Gåsemyr, Jørund Inge; Huseby, Arne & Isaksen, Stefan
(2009).
The Natvig measures of component importance in repairable systems applied to an offshore oil and gas production system.
In Martorell, Sebastián; Soares, C. Guedes & Barnett, Julie (Ed.),
Safety, Reliability and Risk Analysis: Theory, Methods and Applications.
CRC Press.
ISBN 978-0-415-48516-6.
p. 2029–2035.
Show summary
In the present paper the Natvig measures of component importance for repairable systems, and their extended versions, are applied to an offshore oil and gas production system. According to the extended version of the Natvig measure, a component is important if both by failing it strongly reduces the expected system uptime and by being repaired it strongly reduces the expected system downtime. The results include a study of how different distributions affect the ranking of the components. All numerical results are computed using discrete event simulation. In a companion paper (Huseby, Eide, Isaksen, Natvig, and Gåsemyr 2008) the advanced simulation methods needed in these calculations are described.
-
Huseby, Arne; Eide, Kristina A.; Isaksen, Stefan; Natvig, Bent & Gåsemyr, Jørund Inge
(2009).
Advanced discrete event simulation methods with application to importance measure estimation.
In Martorell, Sebastián; Soares, C. Guedes & Barnett, Julie (Ed.),
Safety, Reliability and Risk Analysis: Theory, Methods and Applications.
CRC Press.
ISBN 978-0-415-48516-6.
p. 1747–1753.
Show summary
In the present paper we use discrete event simulation in order to analyze a binary monotone system of repairable components. Asymptotic statistical properties of such a system, e.g., the asymptotic system availability and component criticality, can easily be estimated by running a single discrete event simulation on the system over a sufficiently long time horizon, or by working directly on the stationary availabilities. Sometimes, however, one needs to estimate how the statistical properties of the system evolve over time. In such cases it is necessary to run many simulations to obtain a stable curve estimate. At the same time one needs to store much more information from each simulation. A crude approach to this problem is to sample the system state at fixed points of time, and then use the mean values of the states at these points as estimates of the curve. Using a sufficiently high sampling rate a satisfactory estimate of the curve can be obtained. Still, all information about the process between the sampling points is thrown away. To handle this issue, we propose an alternative sampling procedure where we utilize process data between the sampling points as well. This simulation method is particularly useful when estimating various kinds of component importance measures for repairable systems. As explained in (Natvig and Gåsemyr 2008), such measures can often be expressed as weighted integrals of the time-dependent Birnbaum measure of importance. By using the proposed simulation methods, stable estimates of the Birnbaum measure as a function of time are obtained and combined with the appropriate weight function, thus producing the importance measures of interest.
-
Huseby, Arne & Rabbe, Marit
(2009).
Predicting airport runway conditions based on weather data.
In Martorell, Sebastián; Soares, C. Guedes & Barnett, Julie (Ed.),
Safety, Reliability and Risk Analysis: Theory, Methods and Applications.
CRC Press.
ISBN 978-0-415-48516-6.
p. 2199–2206.
Show summary
Slippery runways represent a significant risk to aircraft, especially during the winter season. Thus, having reliable methods for identifying such conditions is very important. However, measuring the runway friction with a satisfactory precision is very difficult. While many different measurement devices have been developed, it is hard to find equipment that produces stable and consistent results. Furthermore, in order to measure friction, the runway needs to be shut down. Thus, in order to avoid severe delays to the traffic, such measurements cannot be carried out too frequently. In the present paper we present the results of a large-scale study of runway conditions carried out during two winter seasons at two Norwegian airports. The main goal was to develop methodology for identifying slippery runway conditions using weather data in addition to runway reports. Throughout the two seasons various kinds of weather data were collected, e.g., air and ground temperature, humidity, precipitation, visibility and wind. Using these data a scenario based weather model for slippery conditions was developed. The model was validated using flight data from a large number of flights. Using these data we computed several indicators reflecting how the aircraft were affected by the runway conditions. By comparing indicator values from landings where the weather model predicted slippery conditions with the corresponding values from landings on dry runways, we were able to verify that the weather model selected the correct landings.
-
Huseby, Arne & Haavardsson, Nils Fridthjov
(2009).
Multi-reservoir production optimization.
European Journal of Operational Research.
ISSN 0377-2217.
199(1),
p. 236–251.
doi:
10.1016/j.ejor.2008.11.023.
Show summary
When a large oil or gas field is produced, several reservoirs often share the same processing facility. This facility is typically capable of processing only a limited amount of commodities per unit of time. In order to satisfy these processing limitations, the production needs to be choked, i.e., scaled down by a suitable choke factor. A production strategy is defined as a vector valued function defined for all points of time representing the choke factors applied to the reservoirs at any given time. In the present paper we consider the problem of optimizing such production strategies with respect to various types of objective functions. A general framework for handling this problem is developed. A crucial assumption in our approach is that the potential production rate from a reservoir can be expressed as a function of the remaining recoverable volume. The solution to the optimization problem depends on certain key properties, e.g., convexity or concavity, of the objective function and of the potential production rate functions. Using these properties several important special cases can be solved. An admissible production strategy is a strategy where the total processing capacity is fully utilized throughout a plateau phase. This phase lasts until the total potential production rate falls below the processing capacity, and after this all the reservoirs are produced without any choking. Under mild restrictions on the objective function the performance of an admissible strategy is uniquely characterized by the state of the reservoirs at the end of the plateau phase. Thus, finding an optimal admissible production strategy is essentially equivalent to finding the optimal state at the end of the plateau phase. Given the optimal state, a backtracking algorithm can then be used to derive an optimal production strategy. We will illustrate this on a specific example.
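A minimal sketch of an admissible production strategy in the sense of this abstract, assuming — purely for illustration, not as the paper's model — that each reservoir's potential production rate is proportional to its remaining recoverable volume and that a common choke factor is applied to all reservoirs:

```python
import numpy as np

# Invented illustration: potential rate of reservoir i is k_i * v_i
# (proportional to remaining recoverable volume v_i), and a common choke
# factor scales all rates so that total production never exceeds the
# processing capacity K -- an "admissible" strategy in a discretized form.
def simulate(volumes, rate_per_volume, K, dt=1.0, steps=200):
    v = np.array(volumes, dtype=float)
    k = np.array(rate_per_volume, dtype=float)
    total_per_step = []
    for _ in range(steps):
        potential = k * v                    # potential rates f_i(v_i)
        total = potential.sum()
        choke = min(1.0, K / total) if total > 0 else 0.0
        q = choke * potential * dt           # actual production this step
        v -= q
        total_per_step.append(q.sum())
    return np.array(total_per_step)

prod = simulate([1000.0, 600.0], [0.05, 0.08], K=60.0)
```

While the total potential rate exceeds K the facility runs at full capacity (the plateau phase); once it falls below K the choke factor becomes one and production declines unchoked, as the abstract describes.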
-
Natvig, Bent; Eide, Kristina A.; Gåsemyr, Jørund Inge; Huseby, Arne & Isaksen, Stefan
(2009).
Simulation based analysis and an application to an offshore oil and gas production system of the Natvig measures of component importance in repairable systems.
Reliability Engineering & System Safety.
ISSN 0951-8320.
94(10),
p. 1629–1638.
doi:
10.1016/j.ress.2009.04.002.
-
Huseby, Arne
(2008).
Signed Domination of Oriented Matroid Systems.
In Bedford, Tim; Quigley, John; Walls, Lesley; Alkali, Babakalli; Daneshkhah, Alireza & Hardman, Gavin (Ed.),
Advances in Mathematical Modeling for Reliability.
IOS Press.
ISBN 978-1-58603-865-6.
p. 177–184.
-
Huseby, Arne
(2007).
Modeling aircraft movements using stochastic hybrid systems.
In Aven, Terje & Vinnem, Jan Erik (Ed.),
Risk, Reliability and Societal Safety, vol.2 Thematic Topics.
Taylor & Francis.
ISBN 978-0-415-44784-3.
p. 1183–1190.
Show summary
Modern simulation techniques and computer power make it possible to simulate systems with continuous state spaces. In this paper we consider the problem of simulating aircraft movements. Such simulation studies are of interest in order to evaluate the risk of collision or near collision events in areas with heavy air traffic. Application areas include systems for automatic air traffic control (ATC) and airport traffic planning.
Risk elements in this area include arrivals of aircraft into the area of interest, weather conditions, chosen runway and landing direction, as well as arbitrary deviations from the normal flight trajectories. A realistic simulation model, incorporating all or most of these elements, requires a combination of different tools. A popular framework for such modeling is stochastic hybrid systems.
A risk event occurs when two (or more) aircraft are too close to each other in the air. When such an event occurs, the aircraft will attempt to change their trajectories in order to avoid fatal accidents. Fortunately, risk events are fairly rare. Thus, in order to obtain stable results for the probability of a collision or near collision, it is necessary to run the model for a substantial amount of time.
In the paper we propose a methodology for simulating aircraft movements based on ordinary differential equations and counting processes. The models and methods are illustrated with some simulation examples.
-
View all works in Cristin
-
Huseby, Arne Bang
(2023).
Domination and multistate systems.
Show summary
Domination theory has been studied extensively in the context of binary monotone systems, where the structure function is a sum of products of the component states, and with coefficients given by the signed domination function. Using e.g., matroid theory, many useful properties of the signed domination function can be derived. In this paper we show how some of these results can be extended to multistate systems. The signed domination function is an invariant determined by the poset generated by the minimal path vectors. Two systems with isomorphic posets also have the same signed domination function. We derive an explicit formula for the domination using Möbius functions. Furthermore, we obtain a recursive formula for the signed domination function, referred to as the signed domination theorem. Both these results are generalizations of the corresponding results for binary monotone systems.
-
Dahl, Kristina Rognlien & Huseby, Arne
(2020).
Environmental contours and optimal design.
-
Huseby, Arne; Vanem, Erik & Barbosa, Maria Hjelset
(2019).
Evaluating probabilistic properties of environmental contours for mixtures of distributions.
Show summary
Environmental contours are widely used as a basis for e.g., ship design, especially in early design phases. The traditional approach to such contours is based on the well-known Rosenblatt transformation. In the present paper we present a numerical method making it possible to apply the inverse Rosenblatt transformation to mixtures of distributions. Due to the effects of this transformation the probabilistic properties of the resulting environmental contour can be distorted. Based on a precise definition of the concept of exceedance probability, valid for all types of environmental contours, we show how to evaluate a given contour and adjust it so that it gets the desired properties. The methods are illustrated by a numerical example.
-
Huseby, Arne
(2019).
When does a convex environmental contour exist?
Show summary
Environmental contours are widely used as a basis for e.g., ship design, especially in early design phases. The traditional approach to such contours is based on the well-known Rosenblatt transformation. Unfortunately, due to the non-linearity of this transformation, the resulting contour may not have the desired probabilistic interpretation. Recent methods for constructing such contours include Monte Carlo simulation, which in principle produces convex contours with a constant exceedance probability in all tail directions. Due to numerical instabilities, however, such contours typically contain small irregularities or loops. The presence of such loops is closely related to the mathematical conditions for the existence of a convex contour. In this presentation we give a precise mathematical existence condition as well as a smoothing method which can be used to eliminate possible loops.
-
Huseby, Arne; Kalinowska, Martyna & Abrahamsen, Tobias
(2019).
Multistate systems and importance measures.
Show summary
Within the field of reliability, multistate systems represent a natural extension of the classical binary approach. Various classes of multistate systems have been studied extensively with emphasis on extending the properties of binary systems. In this brief note we instead extend the theory by focusing on the physical properties of such systems. We argue that this approach will make it easier to adopt the methodology, as the analyst does not need to encode various physical quantities using an abstract integer scale. Within this framework we introduce and compare two measures of importance defined for repairable multistate systems. Both measures can be viewed as direct generalisations of the classical Birnbaum measure.
-
Vanem, Erik & Huseby, Arne
(2019).
Environmental contours for safe design of ships and other marine structures.
-
Huseby, Arne & Rabbe, Marit
(2018).
Optimal warning scenarios for slippery runways.
Show summary
Slippery runways represent a significant risk to aircraft, especially during the winter season. In order to apply the appropriate braking action, the pilots need reliable information about the runway conditions. Unfortunately, the accuracy of runway reports can sometimes be unsatisfactory. In order to obtain more precise and up-to-date information about the current conditions, a warning system based on various types of weather data was suggested by Huseby & Rabbe (2012). See also Huseby & Rabbe (2008) and Huseby et al. (2010). The system is based on a set of scenarios known to cause slippery conditions. By monitoring meteorological parameters like air and ground temperature, humidity, visibility and precipitation, and comparing these to the given scenarios, the system can issue warnings to the ground personnel. This system is currently being used at 16 Norwegian airports. In the present paper this warning system is reviewed. Ideally, the warning system should issue warnings whenever the estimated runway conditions are medium or worse. At the same time the system should not issue warnings when the runway conditions are good. Thus, there are two types of errors we need to take into consideration. Type 1 errors occur when the system does not issue a warning even though the conditions are medium or worse, while type 2 errors occur if a warning is issued when the conditions are good. When designing the system, we need to find the optimal balance between these two types of errors, taking into account that a type 1 error is considered somewhat worse than a type 2 error. The paper describes how the system can be optimized using a combination of weather and flight data.
-
Huseby, Arne; Vanem, Erik & Eskeland, Karoline
(2017).
On the evaluation of probabilistic properties of environmental contours.
Show summary
Environmental contours are widely used as a basis for e.g., ship design. The traditional approach to environmental contours is based on the well-known Rosenblatt transformation. However, due to the effects of this transformation the probabilistic properties of the resulting environmental contour can be difficult to interpret. An alternative approach to environmental contours uses Monte Carlo simulations on the joint environmental model, and thus obtains a contour without the need for the Rosenblatt transformation. This contour has well-defined probabilistic properties, but may sometimes be overly conservative in certain areas. In this paper we give a precise definition of the concept of exceedance probability which is valid for all types of environmental contours. Moreover, we show how to estimate the exceedance probability of a given environmental contour, and use this to compare different approaches to contour construction. The methods are illustrated by numerical examples based on real-life data.
-
Huseby, Arne
(2017).
Constructing equivalent systems of orientable matroid systems.
Show summary
A binary monotone system is an ordered pair (E, φ) where E is the component set, and φ, the structure function of the system, is a binary function defined for all subsets A ⊆ E which is non-decreasing with respect to set inclusion. If all components have the same probability p of functioning, the system reliability can be expressed in terms of the reliability polynomial h(p). Two binary systems are equivalent if their reliability polynomials are equal. An important class of binary monotone systems is the class of undirected network systems. This class belongs to the larger class of matroid systems. A matroid is an ordered pair (F, M) where M is a family of incomparable subsets of F, called circuits, satisfying a set of axioms. A matroid system is a binary monotone system, (E, φ), which can be associated with a matroid (E ∪ x, M) in such a way that the minimal path sets of (E, φ) can be recovered by extracting all the circuits in M containing the element x, and then deleting x from these circuits. A subclass of such systems is the class of orientable matroid systems. If (E, φ) is a 2-terminal undirected network system, it is orientable since directions can be assigned to all the edges. By using the properties of orientable matroid systems we obtain a way of constructing equivalent systems.
-
Huseby, Arne & Breivik, Olav Nikolai
(2013).
Optimal load sharing in a binary multicomponent system where the components have constant or increasing failure rates.
Show summary
In the present paper we consider a system consisting of n components. The system is exposed to a certain load by supplying a certain amount of utility, e.g., electrical power. The load on the system is distributed among the components. For simplicity we assume that the system demand is constant over time. When functioning, each component is capable of handling a certain amount of load, which we refer to as the component’s load capacity. The load capacity of a component is assumed to be constant throughout its lifetime. The main objective of the present paper is developing methods for optimal load sharing among the components subject to the constraints imposed by the load capacities and the demand on the system. A load sharing strategy is optimal if it maximizes the expected lifetime of the system. In the paper we show how to solve the problem in the case where the components have constant failure rates. We also consider the case where the components have increasing failure rates, and show how to solve this in some special cases.
-
Huseby, Arne
(2011).
A Framework for multi-reservoir production optimization.
-
Huseby, Arne
(2011).
Innslag om sannsynlighetsberegning og tilfeldigheter i NRK-programmet Helgeland.
[Radio].
NRK.
-
Huseby, Arne & Moratchevski, Nikita
(2011).
Sequential optimization of oil production under uncertainty.
-
Vanem, Erik; Huseby, Arne & Natvig, Bent
(2011).
Stochastic Modeling of Wave Climate Using a Bayesian Hierarchical Space-Time Model with a Log-Transform.
-
Huseby, Arne & Breivik, Olav Nikolai
(2011).
Optimal load sharing in a binary multicomponent system.
-
Huseby, Arne
(2010).
Assessing airport runway conditions—A Bayesian approach.
-
Natvig, Bent & Huseby, Arne
(2010).
Measures of component importance in repairable multistate systems - a numerical study.
-
Huseby, Arne & Haavardsson, Nils Fridthjov
(2009).
Multi-reservoir production optimization under uncertainty.
Show summary
Oil and gas production from several reservoirs is often processed at a single processing facility. Due to limitations in the processing capacity, this implies that the production rates from the individual reservoirs have to be reduced. That is, for each reservoir the production rate is scaled down by a suitable choke factor between zero and one, chosen so that the total production does not exceed the processing capacity. Recent studies of production optimization include (Horne, 2002), (Merabet & Bellah, 2002) and (Neiro & Pinto, 2004). (Huseby & Haavardsson, 2009) introduced the concept of a production strategy, a vector valued function defined for all points of time t ≥ 0 representing the choke factors applied to the reservoirs at time t. As long as the total potential production rate is greater than the processing capacity, the choke factors should be chosen so that the processing capacity is fully utilized. When the production reaches a state where this is not possible, the production should be left unchoked. A production strategy satisfying these constraints is said to be admissible. (Huseby & Haavardsson, 2009) developed a general framework for optimizing production strategies with respect to various types of objective functions. The same problem was considered in (Haavardsson et al., 2008), where a parametric class of production strategies was introduced. (Haavardsson et al., 2008) also showed how to find an optimal strategy within this class given that all reservoir parameters were known.
In the present paper we consider the problem of optimizing the production strategy when the reservoir parameters are uncertain. The uncertainty is represented using the stochastic models introduced in (Haavardsson & Huseby, 2007). In real life, reservoir uncertainty typically changes over time. Thus, a production strategy considered to be optimal initially may turn out to be far from optimal as more knowledge about the reservoirs is gained. Ideally, one would prefer production strategies which could be updated dynamically as new information is obtained. However, optimizing such dynamic production strategies is a very difficult problem. In the present paper we instead focus on finding production strategies which are robust with respect to variations in the reservoir properties.
-
Huseby, Arne & Natvig, Bent
(2009).
Advanced discrete simulation methods applied to repairable multistate systems.
-
Huseby, Arne & Rabbe, Marit
(2008).
Predicting airport runway conditions based on weather data.
Show summary
Slippery runways represent a significant risk to aircraft, especially during the winter season. Thus, having reliable methods for identifying such conditions is very important. However, measuring the runway friction with a satisfactory precision is very difficult. While many different measurement devices have been developed, it is hard to find equipment that produces stable and consistent results. Furthermore, in order to measure friction, the runway needs to be shut down. Thus, in order to avoid severe delays to the traffic, such measurements cannot be carried out too frequently. In the present paper we present the results of a large-scale study of runway conditions carried out during two winter seasons at two Norwegian airports. The main goal was developing methodology for identifying slippery runway conditions using weather data in addition to runway reports. Throughout the two seasons various kinds of weather data were collected, e.g., air and ground temperature, humidity, precipitation, visibility and wind. Using these data a scenario-based weather model for slippery conditions was developed. The model was validated using flight data from a large number of flights. Using these data we computed several indicators reflecting how the aircraft were affected by the runway conditions. By comparing indicator values from landings where the weather model predicted slippery conditions with the corresponding values from landings on dry runways, we were able to verify that the weather model selected the correct landings.
-
Huseby, Arne; Eide, Kristina; Isaksen, Stefan; Natvig, Bent & Gåsemyr, Jørund Inge
(2008).
Advanced discrete event simulation methods with application to importance measure estimation.
Show summary
In the present paper we use discrete event simulation in order to analyze a binary monotone system of repairable components. Asymptotic statistical properties of such a system, e.g., the asymptotic system availability and component criticality, can easily be estimated by running a single discrete event simulation on the system over a sufficiently long time horizon, or by working directly on the stationary availabilities. Sometimes, however, one needs to estimate how the statistical properties of the system evolve over time. In such cases it is necessary to run many simulations to obtain a stable curve estimate. At the same time one needs to store much more information from each simulation. A crude approach to this problem is to sample the system state at fixed points of time, and then use the mean values of the states at these points as estimates of the curve. Using a sufficiently high sampling rate a satisfactory estimate of the curve can be obtained. Still, all information about the process between the sampling points is thrown away. To handle this issue, we propose an alternative sampling procedure where we utilize process data between the sampling points as well. This simulation method is particularly useful when estimating various kinds of component importance measures for repairable systems. As explained in (Natvig and Gåsemyr 2008) such measures can often be expressed as weighted integrals of the time-dependent Birnbaum measure of importance. By using the proposed simulation methods, stable estimates of the Birnbaum measure as a function of time are obtained and combined with the appropriate weight function, thus producing the importance measure of interest.
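The crude fixed-point sampling approach contrasted in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' code: the two-component series system and the exponential failure and repair rates are assumptions chosen only to make the example concrete.

```python
import random

def simulate_component(horizon, fail_rate, repair_rate, rng):
    """One alternating-renewal history for a repairable component:
    starts up (state 1), with exponential up- and down-times.
    Returns the chronological list of (event time, new state)."""
    t, state, events = 0.0, 1, []
    while t < horizon:
        rate = fail_rate if state == 1 else repair_rate
        t += rng.expovariate(rate)
        state = 1 - state
        events.append((t, state))
    return events

def state_at(events, time):
    """Component state at a given time (initially up)."""
    state = 1
    for t, s in events:
        if t > time:
            break
        state = s
    return state

def availability_curve(n_runs, sample_times, fail_rate=1.0,
                       repair_rate=5.0, seed=0):
    """Crude curve estimate: sample the system state of a two-component
    series system at fixed times and average over independent runs."""
    rng = random.Random(seed)
    horizon = max(sample_times)
    totals = [0.0] * len(sample_times)
    for _ in range(n_runs):
        comps = [simulate_component(horizon, fail_rate, repair_rate, rng)
                 for _ in range(2)]
        for i, time in enumerate(sample_times):
            # series system: up iff every component is up
            totals[i] += min(state_at(ev, time) for ev in comps)
    return [tot / n_runs for tot in totals]

times = [0.5 * k for k in range(1, 9)]
curve = availability_curve(2000, times)
```

With these rates each component has stationary availability 5/6, so the series-system curve settles near (5/6)² ≈ 0.69; exactly as the abstract notes, everything the process does between the sampling points is discarded, which is what the paper's refined procedure addresses.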
-
Natvig, Bent; Eide, Kristina; Gåsemyr, Jørund Inge; Huseby, Arne & Isaksen, Stefan
(2008).
The Natvig measures of component importance in repairable systems applied to an offshore oil and gas production system.
Show summary
In the present paper the Natvig measures of component importance for repairable systems, and their extended version, are applied to an offshore oil and gas production system. According to the extended version of the Natvig measure, a component is important if it both strongly reduces the expected system uptime by failing and strongly reduces the expected system downtime by being repaired. The results include a study of how different distributions affect the ranking of the components. All numerical results are computed using discrete event simulation. In a companion paper (Huseby, Eide, Isaksen, Natvig, and Gåsemyr 2008) the advanced simulation methods needed in these calculations are described.
-
Huseby, Arne
(2017).
On orientable matroid systems and reliability equivalence.
Matematisk institutt, UiO.
ISSN 0806-3842.
Full text in Research Archive
-
Lilleborge, Marie; Hauge, Ragnar; Eidsvik, Jo & Huseby, Arne
(2016).
Efficient Information Gathering in Discrete Bayesian Networks.
University of Oslo.
ISSN 1501-7710.
Full text in Research Archive
Show summary
In my doctoral work I have designed guidelines for how the value of information should be evaluated in light of its intended use, and I have developed algorithms that help the computer quickly determine which investigations are best. I have applied these solutions both to planning oil exploration in the North Sea and to computing the optimal screening programme for breast cancer.
When much is unknown, it is difficult to investigate everything, and it is therefore important to understand which investigations are most effective. It is also essential to evaluate the value of learning in light of what the results will be used for. Furthermore, one must be able to compute efficiently which investigations are best.
I have been given a treasure map of the North Sea. The treasure map is a directed graph, that is, it consists of circles with arrows between them. The circles are kitchens where the oil may have been generated, traps from which we can extract it, or places along the way between them. The arrows show possible routes along which the oil may have migrated after it was formed millions of years ago. The treasure map gives an intuitive picture of what we are modelling; we can read off the probability of finding oil at each individual location, and, not least, the interplay between these probabilities.
I have worked on how this intuitive and at the same time extremely flexible model should be used to compute which exploration wells to drill. To this end I have put on general statistician's glasses to understand information gathering and information evaluation in directed graphs more generally, and I have programmed a system for efficient computation in these models. The statistician's glasses have an extra trick: they make it much easier to see the similarity between problems that may otherwise look very different. In this way I could use much of what I learned from the oil exploration project to compute an optimal breast cancer screening programme based on data from the Norwegian Breast Cancer Screening Programme (Mammografiprogrammet).
-
Huseby, Arne & Thomsen, Jan
(2015).
Simulating total operational risk using retrospective information and subjective assessments - a Bayesian approach.
Matematisk institutt, UiO.
ISSN 0806-3842.
Full text in Research Archive
Show summary
Quantifying operational risk exposure typically involves gathering information from several sources, including historical data as well as subjective assessments. Using historical data one can estimate both an incident frequency distribution and an incident consequence distribution. Based on these two distributions a simulation model can be established. However, by limiting the focus to data related to incidents which may reappear in the future, one is often left with a relatively short incident history. In order to improve the risk quantification, it is often necessary to include subjective risk assessments as well. In the present paper we propose a model for how to combine these two sources of information. The model can be used to represent situations ranging from cases where the two sources are disjoint, to cases where they overlap completely, as well as intermediate cases where they are partially overlapping. The model is illustrated by considering a numerical example. In this example we vary the degree of overlap between the sources of information.
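The simulation model built from a frequency and a consequence distribution is, in essence, a compound-distribution Monte Carlo. The following is a minimal sketch under assumed Poisson frequency and lognormal consequence distributions; the paper's actual distributions and its Bayesian combination of data sources are not reproduced here.

```python
import math
import random

def simulate_total_loss(n_sims, freq_mean=3.0, sev_mu=1.0,
                        sev_sigma=0.5, seed=0):
    """Monte Carlo for total loss per period: draw an incident count
    from a Poisson frequency distribution, draw a lognormal consequence
    for each incident, and sum.  All parameter values are illustrative."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        # Poisson draw by CDF inversion (capped as a safety guard)
        u, k, p = rng.random(), 0, math.exp(-freq_mean)
        cdf = p
        while u > cdf and k < 1000:
            k += 1
            p *= freq_mean / k
            cdf += p
        totals.append(sum(rng.lognormvariate(sev_mu, sev_sigma)
                          for _ in range(k)))
    return totals

losses = simulate_total_loss(5000)
```

The sample mean should approach the theoretical value λ·exp(μ + σ²/2) ≈ 9.24 for these assumed parameters; quantiles of `losses` would play the role of the operational risk measures discussed in the abstract.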
-
Natvig, Bent; Huseby, Arne & Reistadbakk, Mads
(2010).
Measures of component importance in repairable multistate systems – a numerical study.
Matematisk institutt, Universitetet i Oslo.
ISSN 0806-3842.
Show summary
In [10] dynamic and stationary measures of importance of a component in a repairable multistate system were introduced. For multistate systems little has been published until now on such measures, even in the nonrepairable case. According to the Barlow-Proschan type measures, a component is important if there is a high probability that a change in the component state causes a change in whether or not the system state is above a given state. On the other hand, the Natvig type measures focus on how a change in the component state affects the expected system uptime and downtime relative to the given system state. In the present paper we first review these measures, which can be estimated using the simulation methods suggested in [4]. Extending the work in [8] from the binary to the multistate case, a numerical study of these measures is then given for two three-component systems and a bridge system, and the measures are also applied to an offshore oil and gas production system. In the multistate case the importance of a component is calculated separately for each component state. Thus it may happen that a component is very important at one state, and less important, or even irrelevant, at another. Unified measures combining the importances for all component states can be obtained by adding up the importance measures for each individual state. According to these unified measures a component can be important relative to a given system state but not to another. It can be seen that if the distributions of the total component times spent in the states other than the complete failure state for the multistate system and the component lifetimes for the binary system are identical, the Barlow-Proschan measure relative to the lowest system state simply reduces to the binary version of the measure. The extended Natvig measure, however, does not have this property. This indicates that the latter measure captures more information about the system.
-
Huseby, Arne
(2008).
Oriented matroid systems.
Department of Mathematics, University of Oslo.
ISSN 0806-3842.
2008(2).
Show summary
The domination invariant has played an important part in reliability theory. While most of the work in this field has been restricted to various types of network system models, many of the results can be generalized to much wider families of systems associated with matroids. A matroid is an ordered pair (F, M), where F is a nonempty finite set and M is a collection of incomparable subsets of F, called circuits, satisfying certain closure properties. For any given matroid (F, M) where F = E ∪ x, we can associate a reliability system with component set E and with minimal path sets P. Previous papers have explored the relation between undirected network systems and matroids. In this paper the main focus is on directed network systems and their relation to oriented matroids. Oriented matroids are a special type of matroids where the circuits are signed sets. Using these signed sets one can, e.g., obtain a set theoretic representation of the direction of the edges of a directed network system. Classical results for directed network systems include the fact that the signed domination is either +1 or -1 if the network is acyclic, and zero otherwise. It turns out that these results can be generalized to systems derived from oriented matroids. Several classes of systems for which the generalized results hold will be discussed. These include oriented versions of k-out-of-n systems and a certain class of systems associated with matrices.
-
Huseby, Arne; Løvseth, Marthe & Wright, Jan
(2008).
Helhetlig risikomodell basert på flygbarhetsindeks.
Avinor.
Show summary
The background for the project is that Avinor is required to meet the requirements of BSL E 3-2 for all airports, and on this basis obtain a technical/operational approval every five years. Where the requirements are not met, compensating measures must be identified and evaluated by means of risk analysis. Under the current approval system, risk-reducing measures are often implemented in a more or less arbitrary order, and partly without focus on the measures that improve safety the most. Avinor therefore wishes to develop a quantitative risk analysis model that can provide a better basis for prioritizing different risk-reducing measures.
The model that has been developed addresses the total risk picture of flight operations connected to an airport. This makes it possible to identify where the contributions to the total risk lie, and which measures give the greatest risk reduction relative to their cost. The risk model accounts for uncertainty in external conditions by specifying, for each airport, probability distributions for the variation in e.g. weather conditions. The risk measures are computed by Monte Carlo simulation. The effect of the uncertainty is reflected in the risk level at each individual airport.
The model produces several different risk measures. By analysing the results of the model computations, the different measures can be weighed against one another to arrive at a recommended order of priority.
The flyability index describes the risk of a single flight at an airport on a scale from 1 to 10. A high index value indicates high risk, while a low value indicates low risk. For a given airport, scores are assigned for a set of risk areas. The scores for the different areas are weighted together according to the relevance of each risk area to the different phases of flight. The weighted value is called the flyability index and characterizes the overall risk level at a given airport.
Using aircraft accident statistics, the risk model converts the flyability index into a probability of a fatal accident (an accident with at least one fatality) per flight. An airport with a high flyability index for a given phase of flight is assigned a correspondingly high accident probability in that phase. The risk model then uses the adjusted accident probabilities in an event tree, so that the overall probability of a fatal accident per flight is finally obtained.
A crucial prerequisite for this conversion to produce credible results is that the model is calibrated. In the present prototype only a preliminary calibration has been carried out, and a more thorough recalibration based on results from a broad range of airports should be performed. Nevertheless, the relative difference between the airports (the ranking) is likely to be approximately correct even though the calibration has not been completed. Accident probabilities are further converted into return periods, i.e. the average "distance" between two accidents. The lower the risk level at a given airport, the longer the return periods. Using information about the number of passengers per flight at a given airport, the consequence of a fatal accident can be computed in terms of the number of fatalities. The consequence computations are based on the probability distribution for the number of passengers, combined with the expected proportion of fatalities in the different phases of flight given an accident in those phases.
In addition to assigning scores based on the current state of an airport, the model also allows scores corresponding to the state after measures have been implemented. The model can thus indicate how much the flyability index is reduced by implementing various measures. Costs can also be attached to each measure, making it possible to assess which measures give the greatest return per krone invested.
-
Huseby, Arne & Haavardsson, Nils Fridthjov
(2008).
A framework for multi-reservoir production optimization.
Universitetet i Oslo.
ISSN 0806-3842.
2008(4).
Show summary
When a large oil or gas field is produced, several reservoirs often share the same processing facility. This facility is typically capable of processing only a limited amount of commodities per unit of time. In order to satisfy these processing limitations, the production needs to be choked, i.e., scaled down by a suitable choke factor. A production strategy is defined as a vector valued function defined for all points of time representing the choke factors applied to the reservoirs at any given time. In the present paper we consider the problem of optimizing such production strategies with respect to various types of objective functions. A general framework for handling this problem is developed. A crucial assumption in our approach is that the potential production rate from a reservoir can be expressed as a function of the remaining producible volume. The solution to the optimization problem depends on certain key properties, e.g., convexity or concavity, of the objective function and of the potential production rate functions. Using these properties several important special cases can be solved. An admissible production strategy is a strategy where the total processing capacity is fully utilized throughout a plateau phase. This phase lasts until the total potential production rate falls below the processing capacity, and after this all the reservoirs are produced without any choking. Under mild restrictions on the objective function the performance of an admissible strategy is uniquely characterized by the state of the reservoirs at the end of the plateau phase. Thus, finding an optimal admissible production strategy is essentially equivalent to finding the optimal state at the end of the plateau phase. Given the optimal state, a backtracking algorithm can then be used to derive an optimal production strategy. We will demonstrate this on a specific example.
-
Haavardsson, Nils Fridthjov; Huseby, Arne & Holden, Lars
(2008).
A Parametric Class Of Production Strategies For Multi-Reservoir Production Optimization.
Universitetet i Oslo.
ISSN 0806-3842.
Show summary
When a large oil or gas field is produced, several reservoirs often share the same processing facility. This facility is typically capable of processing only a limited amount of oil, gas and water per unit of time. In the present paper only single phase production, e.g., oil production, is considered. In order to satisfy the processing limitations, the production needs to be choked. That is, for each reservoir the production is scaled down by suitable choke factors between zero and one, chosen so that the total production does not exceed the processing capacity. Huseby & Haavardsson (2008) introduced the concept of a production strategy, a vector valued function defined for all points of time t ≥ 0 representing the choke factors applied to the reservoirs at time t. As long as the total potential production rate is greater than the processing capacity, the choke factors should be chosen so that the processing capacity is fully utilized. When the production reaches a state where this is not possible, the production should be left unchoked. A production strategy satisfying these constraints is said to be admissible. Huseby & Haavardsson (2008) developed a general framework for optimizing production strategies with respect to various types of objective functions. In the present paper we present a parametric class of admissible production strategies. Using the framework of Huseby & Haavardsson (2008) it can be shown that under mild restrictions on the objective function an optimal strategy can be found within this class. The number of parameters needed to span the class is bounded by the number of reservoirs. Thus, an optimal strategy within this class can be found using a standard numerical optimization algorithm. This makes it possible to handle complex, high-dimensional cases. Furthermore, uncertainty may be included, enabling robustness and sensitivity analysis.
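The notions of choking, admissibility and the plateau phase can be sketched numerically. The following toy model is not the authors' framework: the potential rate of each reservoir is assumed proportional to its remaining producible volume, and a single common choke factor (rather than the paper's parametric per-reservoir factors) keeps total production within the processing capacity.

```python
def simulate_production(volumes, decline, capacity, dt=0.1, t_max=100.0):
    """Discrete-time sketch of choked multi-reservoir production.
    Potential rate of reservoir i is decline[i] * remaining volume.
    While total potential exceeds capacity, a common choke factor
    scales all rates so that capacity is fully utilized (the plateau
    phase); afterwards the reservoirs are produced unchoked."""
    vols = list(volumes)
    t, produced, plateau_end = 0.0, 0.0, 0.0
    while t < t_max:
        potential = [a * v for a, v in zip(decline, vols)]
        total = sum(potential)
        choke = min(1.0, capacity / total) if total > 0 else 0.0
        if choke < 1.0:
            plateau_end = t + dt   # capacity still fully utilized
        for i, q in enumerate(potential):
            dq = choke * q * dt
            vols[i] -= dq
            produced += dq
        t += dt
    return produced, plateau_end, vols

# Two illustrative reservoirs sharing a capacity of 8 units per time.
produced, plateau_end, remaining = simulate_production(
    [100.0, 50.0], [0.1, 0.2], 8.0)
```

During the plateau the total rate equals the capacity exactly; once the total potential rate falls below it, the choke factor becomes one, matching the admissibility constraint described in the abstract.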
-
Haavardsson, Nils Fridthjov; Huseby, Arne; Pedersen, Frank; Lyngroth, Steinar; Xu, Jingzhen & Aasheim, Tore
(2008).
Hydrocarbon production optimization in fields with different ownership and commercial interests.
Universitetet i Oslo.
ISSN 0806-3842.
Show summary
A main field and satellite fields consist of several separate reservoirs with gas cap and/or oil rim. A processing facility on the main field receives and processes the oil, gas and water from all the reservoirs. This facility is typically capable of processing only a limited amount of oil, gas and water per unit of time. In order to satisfy these processing limitations, the production needs to be choked. The available capacity is shared among several field owners with different commercial interests. In this paper we focus on how total oil and gas production from all the fields could be optimized. The satellite field owners negotiate processing capacities on the main field facility. This introduces additional processing capacity constraints (booking constraints) for the owners of the main field. If the total wealth created by all owners represents the economic interests of the community, it is of interest to investigate whether the total wealth may be increased by lifting the booking constraints. If all reservoirs may be produced more optimally by removing the booking constraints, all owners may benefit from this when appropriate commercial arrangements are in place. We will compare two production strategies. The first production strategy optimizes locally, at distinct time intervals. At given intervals the production is prioritized so that the maximum amount of oil is produced. In the second production strategy a fixed weight is assigned to each reservoir. The reservoirs with the highest weights receive the highest priority.
-
Natvig, Bent; Eide, Kristina; Gåsemyr, Jørund Inge; Huseby, Arne & Isaksen, Stefan
(2008).
Simulation based analysis and an application to an offshore oil and gas production system of the Natvig measures of component importance in repairable systems.
Universitetet i Oslo.
ISSN 0806-3842.
In the present paper the Natvig measures of component importance for repairable systems, and their extended versions, are analysed for two three-component systems and a bridge system. The measures are also applied to an offshore oil and gas production system. According to the extended version of the Natvig measure, a component is important if, by failing, it strongly reduces the expected system uptime, and, by being repaired, it strongly reduces the expected system downtime. The results include a study of how different distributions affect the ranking of the components. All numerical results are computed using discrete event simulation. In a companion paper (Huseby, Eide, Isaksen, Natvig and Gåsemyr 2008) the advanced simulation methods needed in these calculations are described.
-
Huseby, Arne; Eide, Kristina; Isaksen, Stefan; Natvig, Bent & Gåsemyr, Jørund Inge
(2008).
Advanced discrete event simulation methods with application to importance measure estimation.
Universitetet i Oslo.
ISSN 0806-3842.
2008(11).
In the present paper we use discrete event simulation in order to analyze a binary monotone system of repairable components. Asymptotic statistical properties of such a system, e.g., the asymptotic system availability and component criticality, can easily be estimated by running a single discrete event simulation of the system over a sufficiently long time horizon, or by working directly on the stationary component availabilities. Sometimes, however, one needs to estimate how the statistical properties of the system evolve over time. In such cases it is necessary to run many simulations to obtain a stable curve estimate, and at the same time much more information must be stored from each simulation. A crude approach to this problem is to sample the system state at fixed points of time, and then use the mean values of the states at these points as estimates of the curve. With a sufficiently high sampling rate, a satisfactory estimate of the curve can be obtained. Still, all information about the process between the sampling points is thrown away. To handle this issue, we propose an alternative sampling procedure that utilizes process data between the sampling points as well. This simulation method is particularly useful when estimating various kinds of component importance measures for repairable systems. As explained in Natvig and Gåsemyr (2009), such measures can often be expressed as weighted integrals of the time-dependent Birnbaum measure of importance. By using the proposed simulation methods, stable estimates of the Birnbaum measure as a function of time are obtained. Combined with the appropriate weight function, the importance measures of interest can be estimated.
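The "crude" fixed-point sampling approach described above can be sketched for the simplest possible case, a single repairable component with exponential up- and downtimes. All rates and the sampling grid are invented, and the paper's refined procedure, which also uses process data between the sampling points, is not reproduced here.

```python
import random

# Minimal sketch of the crude curve estimate: sample the state of one
# repairable component (1 = up, 0 = down) at fixed time points over many
# independent simulation runs, then average across runs at each point.

def simulate_state_at(times, fail_rate=1.0, repair_rate=5.0, rng=random):
    """Return the component state at each sampling time in one run."""
    t, up = 0.0, True
    states, k = [], 0
    while k < len(times):
        dwell = rng.expovariate(fail_rate if up else repair_rate)
        while k < len(times) and times[k] < t + dwell:
            states.append(1 if up else 0)  # state is constant over the dwell
            k += 1
        t += dwell
        up = not up
    return states

random.seed(0)
grid = [0.5 * i for i in range(1, 11)]
runs = [simulate_state_at(grid) for _ in range(2000)]
avail = [sum(r[j] for r in runs) / len(runs) for j in range(len(grid))]
# avail[j] estimates P(component up at time grid[j]); for these rates the
# stationary availability is repair_rate / (fail_rate + repair_rate) = 5/6.
```

As the abstract notes, everything the process does between grid points is discarded here, which is exactly the inefficiency the paper's alternative sampling procedure addresses.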
-
Huseby, Arne & Haavardsson, Nils Fridthjov
(2007).
Multisegment production profile models, a hybrid system approach.
Department of Mathematics, University of Oslo.
ISSN 0806-3842.
2007(2).
When an oil or gas field development project is evaluated, having a satisfactory production model is very important. Since the first attempts in the 1940s, many different models have been developed for this purpose. Such a model typically incorporates knowledge about the geological properties of the reservoir. When such models are used in a total value chain analysis, however, economic and strategic factors also need to be taken into account. In order to do this, flexible modeling tools are needed. In this paper we demonstrate how this can be done using hybrid system models. In such models the production is modeled by ordinary differential equations representing both the reservoir dynamics and strategic control variables. The approach also allows us to break the production model into a sequence of segments. Thus, we can easily represent various discrete events affecting the production in different ways. The modeling framework is very flexible, making it possible to obtain realistic approximations to real-life production profiles. At the same time the calculations can be done very efficiently. The framework can be incorporated into a full scale project uncertainty analysis.
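A minimal sketch of the hybrid-system idea: a cumulative production governed by an ODE with a discrete switch between a plateau segment, where the rate is capped by the facility capacity, and a decline segment. The model and all parameters are invented for illustration and are far simpler than the multisegment models of the paper.

```python
# Illustrative two-segment hybrid production profile:
#   dQ/dt = min(q_max, c * (R - Q))
# i.e. a plateau at the facility capacity q_max, switching to exponential
# decline once the remaining reserves R - Q can no longer sustain it.
# R, c, q_max and the time grid are invented numbers.

def produce(R=100.0, c=0.05, q_max=3.0, dt=0.01, t_end=80.0):
    Q, t, profile = 0.0, 0.0, []
    while t < t_end:
        rate = min(q_max, c * (R - Q))  # discrete switch between segments
        Q += rate * dt                  # explicit Euler step
        t += dt
        profile.append((t, rate))
    return Q, profile

Q_final, profile = produce()
```

The switch happens automatically when c * (R - Q) drops below q_max; further segments (e.g. a build-up phase or a shut-in event) could be added as extra branches in the rate expression, which is the flexibility the abstract points to.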
Published Nov. 30, 2010 11:20 PM. Last modified Jan. 11, 2024 9:47 PM.