Tags:
Stochastic analysis and finance and insurance and risk,
Statistics
Publications
-
Huseby, Arne (2017). Optimizing energy production systems under uncertainty. In Lesley Walls; Matthew Revie & Tim Bedford (eds.), Risk, Reliability and Safety: Innovating Theory and Practice: Proceedings of ESREL 2016 (Glasgow, Scotland, 25–29 September 2016). CRC Press. ISBN 9781138029972. Chapter 13.6, pp. 1619–1626. Full text in Research Archive.
Electricity infrastructure has become a critical element of modern industrial society. In order to model and analyse this infrastructure, identify weaknesses, and optimize performance, one needs to take its distributed nature into account. Rather than forming a single system, energy production and distribution systems consist of many more or less autonomous subsystems working together and trading with each other. Analytical models could perhaps be used to describe a single subsystem; however, the complexity of the interactions between the subsystems soon becomes unmanageable. Even establishing a simulation model for such phenomena is a non-trivial task, especially if the model is required to be easily scalable. In this paper we consider the problem of optimizing a simplified energy system with respect to supply stability, using both deterministic methods and Monte Carlo methods. The system is broken into smaller units, which may trade energy among themselves in order to maintain a stable supply covering the demand. An important element of the model is the ability to store energy within a unit. For some units, e.g., hydroelectric power plants, energy can easily be stored in the form of a water reservoir. For other units, like wind power plants, storing energy is usually not feasible. By using an object-oriented software framework, we can compare different production units and study how they can interact in order to facilitate a stable total production.
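The kind of object-oriented framework described in this abstract can be sketched as follows. Everything here is an illustrative assumption rather than the paper's actual framework: the class names, the steady hydro output, the exponential wind model, and the rule that storage is only used locally within a unit.

```python
import random

class Unit:
    """A production unit in a simplified energy system (illustrative sketch)."""
    def __init__(self, demand, storage_cap=0.0):
        self.demand = demand
        self.storage_cap = storage_cap
        self.stored = 0.0

    def production(self, rng):
        raise NotImplementedError

    def balance(self, rng):
        """Surplus (+) or deficit (-) for one period, after using the
        unit's own storage buffer."""
        net = self.production(rng) - self.demand
        if net > 0:
            absorbed = min(net, self.storage_cap - self.stored)
            self.stored += absorbed
            return net - absorbed        # offer the rest for trading
        draw = min(-net, self.stored)    # cover the deficit from storage
        self.stored -= draw
        return net + draw

class Hydro(Unit):
    """Reservoir-backed unit: steady output plus a storage buffer."""
    def __init__(self, demand, storage_cap, rate):
        super().__init__(demand, storage_cap)
        self.rate = rate

    def production(self, rng):
        return self.rate

class Wind(Unit):
    """Volatile unit with no feasible storage."""
    def __init__(self, demand, mean_rate):
        super().__init__(demand)
        self.mean_rate = mean_rate

    def production(self, rng):
        return rng.expovariate(1.0 / self.mean_rate)

def shortfall_frequency(units, n_periods=20_000, seed=5):
    """Fraction of periods in which the traded surpluses cannot cover
    the total demand of all units."""
    rng = random.Random(seed)
    short = sum(1 for _ in range(n_periods)
                if sum(u.balance(rng) for u in units) < 0)
    return short / n_periods

freq = shortfall_frequency([Hydro(1.0, 5.0, 1.2), Wind(1.0, 1.0)])
```

In this configuration shortfalls are common; giving the wind unit access to the hydro unit's storage, or adding storage capacity, would be the natural next experiments in such a framework.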
-
Huseby, Arne; Vanem, Erik & Eskeland, Karoline (2017). Evaluating properties of environmental contours. In Marko Cepin & Radim Bris (eds.), Safety & Reliability, Theory and Applications. CRC Press. ISBN 978-1138629370. Chapter 264, pp. 2101–2109. Full text in Research Archive.
Environmental contours are widely used as a basis for, e.g., ship design. The traditional approach to environmental contours is based on the well-known Rosenblatt transformation. However, due to the effects of this transformation, the probabilistic properties of the resulting environmental contour can be difficult to interpret. An alternative approach uses Monte Carlo simulations on the joint environmental model, and thus obtains a contour without the need for the Rosenblatt transformation. This contour has well-defined probabilistic properties, but may be overly conservative in certain areas. In this paper we give a precise definition of the concept of exceedance probability which is valid for all types of environmental contours. Moreover, we show how to estimate the exceedance probability of a given environmental contour, and use this to compare different approaches to contour construction. The methods are illustrated by numerical examples based on real-life data.
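A minimal Monte Carlo estimate of an exceedance probability, in the spirit described above, can be sketched as follows. The joint model (a standard bivariate normal) and the circular contour are illustrative assumptions, chosen so that the exact answer, exp(-r²/2), is known and the estimate can be checked.

```python
import math
import random

def exceedance_probability(radius, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the probability that a point sampled from
    the joint environmental model falls outside a given contour.
    Illustration only: the joint model is a standard bivariate normal
    and the contour is a circle, so the exact answer is exp(-r^2 / 2)."""
    rng = random.Random(seed)
    outside = 0
    for _ in range(n_samples):
        x, y = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        if math.hypot(x, y) > radius:
            outside += 1
    return outside / n_samples

# A contour with target exceedance probability 0.01:
r = math.sqrt(2.0 * math.log(100.0))   # exp(-r^2 / 2) = 0.01
p_hat = exceedance_probability(r)
```

For a real contour the same sample-and-count scheme applies; only the "outside the contour" test changes.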
-
Lindqvist, Bo Henry; Samaniego, Francisco J. & Huseby, Arne (2016). On the equivalence of systems of different sizes, with applications to system comparisons. Advances in Applied Probability. ISSN 0001-8678. 48(2), pp. 332–348. doi: 10.1017/apr.2016.3
-
Huseby, Arne & Thomsen, Jan (2015). Quantifying operational risk exposure by combining incident data and subjective risk assessments. In Luca Podofillini; Bruno Sudret; Bozidar Stojadinovic; Enrico Zio & Wolfgang Kröger (eds.), Safety and Reliability of Complex Engineered Systems. CRC Press. ISBN 9781138028791. Chapter 57, pp. 443–451.
Quantifying operational risk exposure typically involves gathering information from several sources, including historical data as well as subjective assessments. Using historical data one can estimate both an incident frequency distribution and an incident consequence distribution, and based on these two distributions a simulation model can be established. However, by limiting the focus to data related to incidents which may reappear in the future, one is often left with a relatively short incident history. In order to improve the risk quantification, it is often necessary to include subjective risk assessments as well. In the present paper we propose three models for combining these two sources of information. In the first model we assume that the two sources are completely disjoint, while in the second model the two sources are assumed to overlap completely. The third model represents an intermediate situation where the two sources are partially overlapping, and it contains the first two models as limiting cases. The models are illustrated and compared in an extensive numerical example.
-
Huseby, Arne; Vanem, Erik & Natvig, Bent (2015). A new Monte Carlo method for environmental contour estimation. In Tomasz Nowakowski; Marek Mlynczak; Anna Jodejko-Pietruczuk & Sylwia Werbinska-Wojciechowska (eds.), Safety and Reliability: Methodology and Applications: Proceedings of the European Safety and Reliability Conference, ESREL 2014, Poland, 14–18 September 2014. Taylor & Francis. ISBN 978-1-138-02681-0. Chapter 270, pp. 2091–2098.
Environmental contour estimation is an efficient and widely used method for identifying extreme conditions as a basis for, e.g., ship design. Monte Carlo simulation is a flexible method for estimating such contours. A main challenge with this approach, however, is that extreme conditions typically correspond to events with low probabilities. Thus, in order to obtain satisfactory estimates, large numbers of simulations are needed. While these simulations can be carried out very fast, the analysis of the resulting data can be very time-consuming. In the present paper we propose a new Monte Carlo method where only the extreme simulation results are stored and analyzed. The method utilizes the fact that an unbiased estimate of an environmental contour does not depend on the exact values of the non-extreme results; it is sufficient to know the number of such results. Probabilistic structural reliability analysis is performed to ensure that mechanical structures can withstand certain design loads, and obtaining precise environmental contours has become an important part of this analysis. The proposed method improves precision and speeds up calculations.
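The storage-saving idea can be illustrated with a sketch that estimates a high quantile of a projected environmental variable while keeping only the largest simulation results in a bounded min-heap. The standard-normal sample stream and the parameter names are illustrative assumptions, not the paper's setup.

```python
import heapq
import math
import random

def extreme_quantile(samples, n, p, keep):
    """Estimate the (1 - p)-quantile from n simulated values, storing
    only the `keep` largest (keep must exceed p * n).  For the discarded
    non-extreme results only their count is retained, illustrating that
    the estimate does not depend on their exact values."""
    heap, n_discarded = [], 0            # min-heap of the largest values seen
    for _ in range(n):
        x = next(samples)
        if len(heap) < keep:
            heapq.heappush(heap, x)
        else:
            n_discarded += 1             # one non-extreme value thrown away
            if x > heap[0]:
                heapq.heapreplace(heap, x)
    k = max(1, math.ceil(p * n))         # rank of the quantile from the top
    return sorted(heap, reverse=True)[k - 1]

random.seed(1)
stream = (random.gauss(0.0, 1.0) for _ in range(10**6))
q = extreme_quantile(stream, n=200_000, p=0.01, keep=5_000)
# q estimates the 0.99-quantile of the standard normal (about 2.33)
```

Memory is bounded by `keep` regardless of `n`, which is the point of analyzing only the extreme results.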
-
Huseby, Arne; Vanem, Erik & Natvig, Bent (2015). Alternative environmental contours for structural reliability analysis. Structural Safety. ISSN 0167-4730. 54, pp. 32–45. doi: 10.1016/j.strusafe.2014.12.003
This paper presents alternative methods for constructing environmental contours for probabilistic structural reliability analysis of structures exposed to environmental forces such as wind and waves. For such structures, it is important to determine the environmental loads to apply in structural reliability calculations and structural design. The environmental contour concept is an effective, risk-based approach to establishing such design conditions. Traditionally, such contours are established by way of a Rosenblatt transformation from the environmental parameter space to a standard normal space, which introduces uncertainties and may lead to biased results. The proposed alternative approach eliminates the need for such transformations and establishes environmental contours based on direct Monte Carlo sampling from the joint distribution of the relevant environmental parameters. In this paper, three alternative implementations of the proposed generic approach are outlined.
-
Klein-Paste, Alex; Bugge, Hans Jørgen & Huseby, Arne (2015). A decision support model to assess the braking performance on snow and ice contaminated runways. Cold Regions Science and Technology. ISSN 0165-232X. 117, pp. 43–51. doi: 10.1016/j.coldregions.2015.06.002
-
Huseby, Arne & Breivik, Olav Nikolai (2014). Optimal load sharing in a binary multicomponent system where the components have constant or increasing failure rates. In Raphaël Steenbergen; P.H.A.J.M. van Gelder; S. Miraglia & A.C.W.M. Vrouwenvelder (eds.), Safety, Reliability and Risk Analysis: Beyond the Horizon: Proceedings of the European Safety and Reliability Conference, ESREL 2013, Amsterdam, the Netherlands, 29 September–2 October 2013. CRC Press. ISBN 978-1-138-00123-7. Chapter 340, pp. 2865–2872.
In the present paper we consider a system consisting of n components. The system is exposed to a certain load: supplying a certain amount of utility, e.g., electrical power. The load on the system is distributed among the components. For simplicity we assume that the system demand is constant over time. When functioning, each component is capable of handling a certain amount of load, which we refer to as the component's load capacity. The load capacity of a component is assumed to be constant throughout its lifetime. The main objective of the present paper is to develop methods for optimal load sharing among the components, subject to the constraints imposed by the load capacities and the demand on the system. A load sharing strategy is optimal if it maximizes the expected lifetime of the system. In the paper we show how to solve the problem in the case where the components have constant failure rates. We also consider the case where the components have increasing failure rates, and show how to solve this in some special cases.
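The expected system lifetime under a given load-sharing rule can be estimated by Monte Carlo simulation, as sketched below. The paper's analytical solution is not reproduced here; instead, the sketch assumes (purely for illustration) that each working component's failure rate is proportional to the load it carries and that the demand is redistributed proportionally to the remaining capacities after each failure.

```python
import random

def mean_system_lifetime(capacities, rate_coeffs, demand,
                         n_runs=20_000, seed=2):
    """Monte Carlo estimate of the expected system lifetime under a
    load-sharing rule.  Illustrative assumptions (not from the paper):
    a working component's failure rate is rate_coeff * load, and demand
    is redistributed proportionally to the remaining capacities after
    each failure.  The system fails when the surviving components can
    no longer cover the demand."""
    rng = random.Random(seed)
    total_time = 0.0
    for _ in range(n_runs):
        alive = list(range(len(capacities)))
        t = 0.0
        while sum(capacities[i] for i in alive) >= demand:
            cap = sum(capacities[i] for i in alive)
            loads = {i: demand * capacities[i] / cap for i in alive}
            rates = {i: rate_coeffs[i] * loads[i] for i in alive}
            total_rate = sum(rates.values())
            t += rng.expovariate(total_rate)      # time to the next failure
            u = rng.random() * total_rate         # pick the failing component
            for i in alive:
                u -= rates[i]
                if u <= 0.0:
                    alive.remove(i)
                    break
            # exponential lifetimes are memoryless, so survivors re-draw
        total_time += t
    return total_time / n_runs

# Two identical components, each able to carry the whole demand alone;
# under these assumptions the exact expected lifetime is 2 / (0.5 * 1.0) = 4.
est = mean_system_lifetime([1.0, 1.0], [0.5, 0.5], demand=1.0)
```

Different load-sharing rules can be compared by swapping the redistribution line and re-running the estimate.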
-
Vanem, Erik; Huseby, Arne & Natvig, Bent (2014). Bayesian hierarchical spatio-temporal modelling of trends and future projections in the ocean wave climate with a CO2 regression component. Environmental and Ecological Statistics. ISSN 1352-8505. 21(2), pp. 189–220. doi: 10.1007/s10651-013-0251-6
-
Huseby, Arne & Natvig, Bent (2013). Discrete event simulation methods applied to advanced importance measures of repairable components in multistate network flow systems. Reliability Engineering & System Safety. ISSN 0951-8320. 119, pp. 186–198. doi: 10.1016/j.ress.2013.05.025
Discrete event models are frequently used in simulation studies to model and analyze pure jump processes. A discrete event model can be viewed as a system consisting of a collection of stochastic processes, where the states of the individual processes change as a result of various kinds of events occurring at random points in time. We assume throughout that each event affects only one of the processes; between events the states of the processes are considered to be constant. In the present paper we use discrete event simulation in order to analyze a multistate network flow system of repairable components. In order to study how the different components contribute to the system, it is necessary to describe the often complicated interaction between component processes and processes at the system level. While analytical considerations may throw some light on this, a simulation study often allows the analyst to explore more details. By producing stable curve estimates for the development of the various processes, one gets much better insight into how such systems develop over time. These methods are particularly useful in the study of advanced importance measures of repairable components. Such measures can be very complicated, and thus impossible to calculate analytically. By using discrete event simulations, however, they can be estimated in a very natural and intuitive way. In particular, significant differences between the Barlow-Proschan measure and the Natvig measure in multistate network flow systems can be explored.
-
Huseby, Arne; Vanem, Erik & Natvig, Bent (2013). A New Method for Environmental Contours in Marine Structural Design. In ASME (ed.), Proceedings of the ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering (OMAE 2013). Volume 2A: Structures, Safety and Reliability. ASME Press. ISBN 978-0-7918-5532-4. Paper No. OMAE2013-10053.
-
Huseby, Arne; Vanem, Erik & Natvig, Bent (2013). A new approach to environmental contours for ocean engineering applications based on direct Monte Carlo simulations. Ocean Engineering. ISSN 0029-8018. 60, pp. 124–135. doi: 10.1016/j.oceaneng.2012.12.034
-
Vanem, Erik; Natvig, Bent; Huseby, Arne & Bitner-Gregersen, E. M. (2013). An Illustration of the Effect of Climate Change on the Ocean Wave Climate - A Stochastic Model. In Bharat Raj Singh (ed.), Climate Change - Realities, Impacts Over Ice Cap, Sea Level and Risks. INTECH. ISBN 978-953-51-0934-1. Chapter 20, pp. 481–508.
-
Huseby, Arne & Moratchevski, Nikita (2012). Sequential optimization of oil production under uncertainty. In Christophe Bérenguer; Antoine Grall & Carlos Guedes Soares (eds.), Advances in Safety, Reliability and Risk Management: Proceedings of the European Safety and Reliability Conference, ESREL 2011. CRC Press. ISBN 978-0-415-68379-1. Chapter 030, pp. 233–239.
In the present paper we study how to optimize oil production with respect to revenue in a situation where the production rate is uncertain. The oil production in a given period is described in terms of a difference equation containing several uncertain parameters. The uncertainty about these parameters is expressed in terms of a suitable prior distribution. As the production develops, more information about the production parameters is gained; hence, the uncertainty distributions need to be updated. However, the information comes in the form of inequalities and equalities, which makes it very difficult to obtain exact analytical expressions for the posteriors. Still, it is possible to estimate the distributions using a combination of rejection sampling and the well-known Metropolis-Hastings algorithm. Armed with these techniques it is possible to solve the optimization problem using stochastic programming. The methods are demonstrated on a few examples.
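The posterior-updating step can be sketched as follows. The model here is an illustrative stand-in, not the paper's: a standard normal prior on a single parameter, with the production history summarized by one inequality constraint. Both samplers target the same truncated posterior, so their estimates should agree.

```python
import math
import random

random.seed(3)

def log_prior(theta):
    return -0.5 * theta * theta      # standard normal prior, up to a constant

def consistent_with_data(theta):
    # Hypothetical observed information: an inequality on the parameter,
    # standing in for the paper's inequality/equality production data.
    return theta >= 0.5

# 1) Rejection sampling: draw from the prior, keep the consistent draws.
rej = []
while len(rej) < 5_000:
    cand = random.gauss(0.0, 1.0)
    if consistent_with_data(cand):
        rej.append(cand)

# 2) Random-walk Metropolis-Hastings targeting the same posterior.
mh, theta = [], 1.0
for _ in range(100_000):
    prop = theta + random.gauss(0.0, 0.5)
    if consistent_with_data(prop) and \
            random.random() < math.exp(min(0.0, log_prior(prop) - log_prior(theta))):
        theta = prop
    mh.append(theta)

mean_rej = sum(rej) / len(rej)
mean_mh = sum(mh) / len(mh)
# Both estimates target the truncated-normal mean, roughly 1.14 here.
```

Rejection sampling becomes inefficient when the constraint set has small prior probability, which is exactly when a Metropolis-Hastings chain started inside the feasible region pays off.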
-
Huseby, Arne & Rabbe, Marit (2012). A scenario based model for assessing runway conditions using weather data. In PSAM & ESREL (eds.), 11th International Probabilistic Safety Assessment and Management Conference and the Annual European Safety and Reliability Conference 2012, 25–29 June 2012, Helsinki, Finland. Curran Associates, Inc. ISBN 978-1-62276-436-5. Paper 21-WE3, pp. 5092–5101.
Slippery runways represent a significant risk to aircraft, especially during the winter season. Recent accidents, such as the Southwest Airlines jet skidding off a runway at Chicago Midway Airport in December 2005, as well as the similar accident with the Delta Connection flight at Cleveland Hopkins International Airport in Ohio in February 2007, show that this is indeed a serious problem. In order to apply the appropriate braking action, pilots need reliable information about the runway conditions. Unfortunately, the accuracy of runway reports can sometimes be unsatisfactory. In order to obtain more precise information about the current conditions, we suggest using an assessment model based on various types of weather data. The model defines a set of scenarios known to cause slippery conditions. By monitoring meteorological parameters like air and ground temperature, humidity, visibility and precipitation, the model enables us to detect scenarios and issue warnings to the ground personnel. An information system using this model is currently implemented at 14 Norwegian airports. In the present paper we present this model. Moreover, we show how the model can be evaluated using data from a large number of flights. A key concept in this evaluation is the notion of friction limited landings. Unless the pilot challenges the runway friction during the landing, the flight data cannot be used to measure the available friction. Thus, typically only a small subset of the available flight data is actually used in the evaluation.
-
Huseby, Arne & Sødal, Karen Marie (2012). Sequential optimization of oil production from multiple reservoirs under uncertainty. In PSAM & ESREL (eds.), 11th International Probabilistic Safety Assessment and Management Conference and the Annual European Safety and Reliability Conference 2012, 25–29 June 2012, Helsinki, Finland. Curran Associates, Inc. ISBN 978-1-62276-436-5. Paper 02-WE2, pp. 633–642.
In the present paper we study how to optimize oil production from several reservoirs sharing a common processing facility under uncertainty. The potential oil production from a reservoir in a given period is described in terms of a difference equation containing several uncertain parameters. Using Lagrange optimization, a step-by-step production strategy can be determined. This strategy distributes the available processing capacity so that the expected total production in each period is maximized, by ensuring that all the reservoirs have the same probability of producing according to the plan. The resulting strategy, however, is typically not globally optimal. In order to improve the global performance of the strategy, a different objective function is introduced. As the production develops, more information about the production parameters is gained; hence, the uncertainty distributions need to be updated. This is done using a combination of rejection sampling and the Metropolis-Hastings algorithm, and the updating is taken into account in the optimization procedure. The methods are illustrated by considering a specific example.
-
Klein-Paste, Alex; Huseby, Arne; Anderson, John; Giesman, Paul; Bugge, Hans Jørgen & Langedahl, Tor Børre (2012). Braking performance of commercial airplanes during operation on winter contaminated runways. Cold Regions Science and Technology. ISSN 0165-232X. 79-80, pp. 29–37. doi: 10.1016/j.coldregions.2012.04.001
-
Vanem, Erik; Huseby, Arne & Natvig, Bent (2012). A Bayesian hierarchical spatio-temporal model for significant wave height in the North Atlantic. Stochastic Environmental Research and Risk Assessment. ISSN 1436-3240. 26(5), pp. 609–632. doi: 10.1007/s00477-011-0522-4
-
Vanem, Erik; Huseby, Arne & Natvig, Bent (2012). A Stochastic Model in Space and Time for Monthly Maximum Significant Wave Height. In Geostatistics Oslo 2012. Springer. ISBN 978-94-007-4152-2. Chapter 41, pp. 505–517.
-
Vanem, Erik; Huseby, Arne & Natvig, Bent (2012). Modelling ocean wave climate with a Bayesian hierarchical space-time model and a log-transform of the data. Ocean Dynamics. ISSN 1616-7341. 62(3), pp. 355–375. doi: 10.1007/s10236-011-0505-5
-
Vanem, Erik; Natvig, Bent & Huseby, Arne (2012). Modelling the Effect of Climate Change on the Ocean Wave Climate Around the World. In The Proceedings of the Tenth (2012) ISOPE Pacific/Asia Offshore Mechanics Symposium, PACOMS-2012. International Society of Offshore & Polar Engineers. ISBN 978-1-880653-93-7. pp. 145–152.
-
Vanem, Erik; Natvig, Bent & Huseby, Arne (2012). Modelling the effect of climate change on the wave climate of the world’s oceans. Ocean Science Journal. ISSN 1738-5261. 47(2), pp. 123–145. doi: 10.1007/s12601-012-0013-7
-
Huseby, Arne (2011). Oriented matroid systems. Discrete Applied Mathematics. ISSN 0166-218X. 159(1), pp. 31–45. doi: 10.1016/j.dam.2010.09.008
The domination invariant has played an important part in reliability theory. While most of the work in this field has been restricted to various types of network system models, many of the results can be generalized to much wider families of systems associated with matroids. Previous papers have explored the relation between undirected network systems and matroids. In this paper the main focus is on directed network systems and their relation to oriented matroids. An oriented matroid is a special type of matroid where the circuits are signed sets. Using these signed sets one can, e.g., obtain a set-theoretic representation of the direction of the edges of a directed network system. Classical results for directed network systems include the fact that the signed domination is either +1 or −1 if the network is acyclic, and zero otherwise. It turns out that these results can be generalized to systems derived from oriented matroids. Several classes of systems for which the generalized results hold are discussed, including oriented versions of k-out-of-n systems and a certain class of systems associated with matrices.
-
Huseby, Arne & Breivik, Olav Nikolai (2011). Optimal load sharing in a binary multicomponent system. In Proceedings of the 19th Advances in Risk and Reliability Technology Symposium. University of Nottingham. ISBN 9780904947656. Chapter 31, pp. 381–391.
In the present paper we consider a system consisting of n components that is exposed to the load of supplying a certain amount of utility, e.g., electrical power. The load on the system is distributed among the components. When functioning, each component is capable of handling a certain amount of load, and the load capacity of a component is assumed to be constant throughout its lifetime. The main objective of the present paper is to develop methods for optimal load sharing among the components, subject to the constraints imposed by the load capacities and the demand on the system. In the paper we show how to solve the problem in several special cases, and outline a greedy algorithm for handling the general case.
-
Natvig, Bent; Huseby, Arne & Reistadbakk, Mads (2011). Measures of component importance in repairable multistate systems: a numerical study. Reliability Engineering & System Safety. ISSN 0951-8320. 96(12), pp. 1680–1690. doi: 10.1016/j.ress.2011.07.006
-
Vanem, Erik; Huseby, Arne & Natvig, Bent (2011). A Bayesian-Hierarchical Space-Time Model for Significant Wave Height Data. In 30th International Conference on Ocean, Offshore and Arctic Engineering (OMAE2011), Vol. II. ASME Press. ISBN 978-0-7918-4434-2. Paper No. OMAE2011-49716, pp. 517–530.
-
Haavardsson, Nils Fridthjov; Huseby, Arne; Pedersen, Frank; Lyngroth, Steinar; Xu, Jingzhen & Aasheim, Tore (2010). Hydrocarbon Production Optimization in Fields With Different Ownership and Commercial Interests. SPE Reservoir Evaluation and Engineering. ISSN 1094-6470. 13(1), pp. 95–104. doi: 10.2118/121399-PA
A main field and satellite fields consist of several separate reservoirs with gas cap and/or oil rim. A processing facility on the main field receives and processes the oil, gas, and water from all the reservoirs. This facility is typically capable of processing only a limited amount of oil, gas, and water per unit of time. In order to satisfy these processing limitations, the production needs to be choked. The available capacity is shared among several field owners with different commercial interests. In this paper, we focus on how total oil and gas production from all the fields could be optimized. The satellite-field owners negotiate processing capacities on the main-field facility. This introduces additional processing-capacity constraints (booking constraints) for the owners of the main field. If the total wealth created by all owners represents the economic interests of the community, it is of interest to investigate whether the total wealth may be increased by lifting the booking constraints. If all reservoirs may be produced more optimally by removing the booking constraints, all owners may benefit from this when appropriate commercial arrangements are in place. We will compare two production strategies. The first production strategy optimizes locally, at distinct time intervals. At given intervals, the production is prioritized so that the maximum amount of oil is produced. In the second production strategy, a fixed weight is assigned to each reservoir. The purpose of the weights is to be able to prioritize some reservoirs before others. The weights are optimized from a life-cycle perspective. As an illustration, a case study based on real data is presented. For the examples considered, it is beneficial to lift the booking constraints because all of the reservoirs combined can be produced more efficiently when this is done.
-
Huseby, Arne & Haavardsson, Nils Fridthjov (2010). Multi-reservoir production optimization under uncertainty. In Radim Bris; Sebastián Martorell & C. Guedes Soares (eds.), Reliability, Risk and Safety: Theory and Applications. CRC Press. ISBN 978-0-415-55509-8. Volume I, pp. 407–413.
Oil and gas production from several reservoirs is often processed at a single processing facility. Due to limitations in the processing capacity, the production rates from the individual reservoirs may have to be reduced. That is, for each reservoir the production rate is scaled down by a suitable choke factor between zero and one, chosen so that the total production does not exceed the processing capacity. Recent studies of production optimization include (Horne, 2002), (Merabet & Bellah, 2002) and (Neiro & Pinto, 2004). (Huseby & Haavardsson, 2009) introduced the concept of a production strategy, a vector-valued function defined for all points in time t ≥ 0 representing the choke factors applied to the reservoirs at time t. As long as the total potential production rate is greater than the processing capacity, the choke factors should be chosen so that the processing capacity is fully utilized. When the production reaches a state where this is not possible, the production should be left unchoked. A production strategy satisfying these constraints is said to be admissible. (Huseby & Haavardsson, 2009) developed a general framework for optimizing production strategies with respect to various types of objective functions. The same problem was considered in (Haavardsson et al., 2008), where a parametric class of production strategies was introduced and it was shown how to find an optimal strategy within this class given that all reservoir parameters are known. In the present paper we consider the problem of optimizing the production strategy when the reservoir parameters are uncertain. The uncertainty is represented using the stochastic models introduced in (Haavardsson & Huseby, 2007). In real life, reservoir uncertainty typically changes over time; thus, a production strategy considered to be optimal initially may turn out to be far from optimal as more knowledge about the reservoirs is gained. Ideally, one would prefer production strategies which could be updated dynamically as new information is obtained. However, optimizing such dynamic production strategies is a very difficult problem. In the present paper we instead focus on finding production strategies which are robust with respect to variations in the reservoir properties.
-
Huseby, Arne; Klein-Paste, Alex & Bugge, Hans Jørgen (2010). Assessing airport runway conditions: a Bayesian approach. In Ben Ale; Ioannis Papazoglou & Enrico Zio (eds.), Reliability, Risk and Safety: Back to the Future. CRC Press. ISBN 978-0-415-60427-7. Chapter 277, pp. 2024–2032.
-
Huseby, Arne & Natvig, Bent (2010). Advanced discrete simulation methods applied to repairable multistate systems. In Radim Bris; Sebastián Martorell & C. Guedes Soares (eds.), Reliability, Risk and Safety: Theory and Applications. CRC Press. ISBN 978-0-415-55509-8. Volume I, pp. 659–666.
-
Huseby, Arne; Natvig, Bent; Gåsemyr, Jørund Inge; Skutlaberg, Kristina & Isaksen, Stefan (2010). Advanced discrete event simulation methods with application to importance measure estimation in reliability. In Aitor Goti (ed.), Discrete Event Simulation. INTECH. ISBN 978-953-307-115-2. Chapter 11, pp. 205–222.
In the present chapter we use discrete event simulation in order to analyze a binary monotone system of repairable components. Asymptotic statistical properties of such a system, e.g., the asymptotic system availability and component criticality, can easily be estimated by running a single discrete event simulation on the system over a sufficiently long time horizon, or by working directly on the stationary component availabilities. Sometimes, however, one needs to estimate how the statistical properties of the system evolve over time. In such cases it is necessary to run many simulations to obtain a stable curve estimate, and at the same time one needs to store much more information from each simulation. A crude approach to this problem is to sample the system state at fixed points in time, and then use the mean values of the states at these points as estimates of the curve. Using a sufficiently high sampling rate, a satisfactory estimate of the curve can be obtained. Still, all information about the process between the sampling points is thrown away. To handle this issue, we propose an alternative sampling procedure where we utilize process data between the sampling points as well. This simulation method is particularly useful when estimating various kinds of component importance measures for repairable systems. Such measures can often be expressed as weighted integrals of the time-dependent Birnbaum measure of importance. By using the proposed simulation methods, stable estimates of the Birnbaum measure as a function of time are obtained; combined with the appropriate weight function, the importance measures of interest can be estimated.
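The interval-based sampling idea can be sketched for a single repairable component: because the state is constant between events, the exact up-time contributed to each grid cell can be accumulated, instead of sampling the state only at the cell boundaries. The exponential up- and down-time distributions below are an illustrative choice, not a requirement of the method.

```python
import random

def availability_curve(mttf, mttr, horizon, n_cells, n_runs=5_000, seed=4):
    """Estimate the availability of a single repairable component on a
    time grid by discrete event simulation.  Rather than sampling the
    state at grid points only, the exact up-time within each grid cell
    is accumulated, exploiting the fact that the state is constant
    between events."""
    rng = random.Random(seed)
    cell = horizon / n_cells
    uptime = [0.0] * n_cells
    for _ in range(n_runs):
        t, up = 0.0, True                        # start in the working state
        while t < horizon:
            dur = rng.expovariate(1.0 / (mttf if up else mttr))
            end = min(t + dur, horizon)
            if up:                               # credit up-time to each overlapped cell
                i = int(t / cell)
                while i < n_cells and i * cell < end:
                    lo = max(t, i * cell)
                    hi = min(end, (i + 1) * cell)
                    uptime[i] += hi - lo
                    i += 1
            t += dur
            up = not up
    return [u / (cell * n_runs) for u in uptime]

curve = availability_curve(mttf=10.0, mttr=2.0, horizon=100.0, n_cells=20)
# Late cells approach the stationary availability mttf / (mttf + mttr) = 10/12.
```

Because each cell averages over an interval rather than a single instant, the curve estimate is stable with far fewer runs than point sampling would need.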
-
Natvig, Bent; Huseby, Arne & Reistadbakk, Mads (2010). Measures of component importance in repairable multistate systems: a numerical study. In Ben Ale; Ioannis Papazoglou & Enrico Zio (eds.), Reliability, Risk and Safety: Back to the Future. CRC Press. ISBN 978-0-415-60427-7. pp. 677–685.
-
Huseby, Arne; Eide, Kristina A.; Isaksen, Stefan; Natvig, Bent & Gåsemyr, Jørund Inge (2009). Advanced discrete event simulation methods with application to importance measure estimation. In Sebastián Martorell; C. Guedes Soares & Julie Barnett (eds.), Safety, Reliability and Risk Analysis: Theory, Methods and Applications. CRC Press. ISBN 978-0-415-48516-6. Volume 3, pp. 1747–1753.
In the present paper we use discrete event simulation in order to analyze a binary monotone system of repairable components. Asymptotic statistical properties of such a system, e.g., the asymptotic system availability and component criticality, can easily be estimated by running a single discrete event simulation on the system over a sufficiently long time horizon, or by working directly on the stationary availabilities. Sometimes, however, one needs to estimate how the statistical properties of the system evolve over time. In such cases it is necessary to run many simulations to obtain a stable curve estimate, and at the same time one needs to store much more information from each simulation. A crude approach to this problem is to sample the system state at fixed points in time, and then use the mean values of the states at these points as estimates of the curve. Using a sufficiently high sampling rate, a satisfactory estimate of the curve can be obtained. Still, all information about the process between the sampling points is thrown away. To handle this issue, we propose an alternative sampling procedure which utilizes process data between the sampling points as well. This simulation method is particularly useful when estimating various kinds of component importance measures for repairable systems. As explained in (Natvig and Gåsemyr 2008), such measures can often be expressed as weighted integrals of the time-dependent Birnbaum measure of importance. By using the proposed simulation methods, stable estimates of the Birnbaum measure as a function of time are obtained and combined with the appropriate weight function, thus producing the importance measure of interest.
-
Huseby, Arne & Haavardsson, Nils Fridthjov (2009). Multi-reservoir production optimization. European Journal of Operational Research. ISSN 0377-2217. 199(1), pp. 236–251. doi: 10.1016/j.ejor.2008.11.023
When a large oil or gas field is produced, several reservoirs often share the same processing facility. This facility is typically capable of processing only a limited amount of commodities per unit of time. In order to satisfy these processing limitations, the production needs to be choked, i.e., scaled down by a suitable choke factor. A production strategy is defined as a vector valued function defined for all points of time representing the choke factors applied to reservoirs at any given time. In the present paper we consider the problem of optimizing such production strategies with respect to various types of objective functions. A general framework for handling this problem is developed. A crucial assumption in our approach is that the potential production rate from a reservoir can be expressed as a function of the remaining recoverable volume. The solution to the optimization problem depends on certain key properties, e.g., convexity or concavity, of the objective function and of the potential production rate functions. Using these properties several important special cases can be solved. An admissible production strategy is a strategy where the total processing capacity is fully utilized throughout a plateau phase. This phase lasts until the total potential production rate falls below the processing capacity, and after this all the reservoirs are produced without any choking. Under mild restrictions on the objective function the performance of an admissible strategy is uniquely characterized by the state of the reservoirs at the end of the plateau phase. Thus, finding an optimal admissible production strategy is essentially equivalent to finding the optimal state at the end of the plateau phase. Given the optimal state a backtracking algorithm can then be used to derive an optimal production strategy. We will illustrate this on a specific example.
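The notion of an admissible strategy can be illustrated with a minimal sketch. Proportional choking is used here purely as an example of one admissible choice during the plateau phase; the paper's optimal strategies are generally not proportional, and the function name is our own:

```python
def choke_factors(potential_rates, capacity):
    """Choke factors for one time step of an admissible strategy (sketch).

    During the plateau phase the processing capacity must be fully
    utilized; here every reservoir is choked by the same factor
    (proportional choking), just one admissible choice among many.
    Once the total potential falls below capacity, no choking applies.
    """
    total = sum(potential_rates)
    if total <= capacity:
        return [1.0] * len(potential_rates)
    return [capacity / total] * len(potential_rates)
```

With rates [10, 5, 5] and capacity 10, every reservoir gets factor 0.5 and the choked total exactly matches the capacity, which is the defining property of admissibility during the plateau.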
-
Huseby, Arne & Rabbe, Marit (2009). Predicting airport runway conditions based on weather data, In Sebastián Martorell; C. Guedes Soares & Julie Barnett (ed.),
Safety, Reliability and Risk Analysis: Theory, Methods and Applications.
CRC Press.
ISBN 978-0-415-48516-6.
volume 3.
s 2199
- 2206
Show summary
Slippery runways represent a significant risk to aircraft, especially during the winter season. Thus, having reliable methods for identifying such conditions is very important. However, measuring the runway friction with a satisfactory precision is very difficult. While many different measurement devices have been developed, it is hard to find equipment that produces stable and consistent results. Furthermore, in order to measure friction, the runway needs to be shut down. Thus, in order to avoid severe delays to the traffic, such measurements cannot be carried out too frequently. In the present paper we present the results of a large-scale study of runway conditions carried out during two winter seasons at two Norwegian airports. The main goal was developing methodology for identifying slippery runway conditions using weather data in addition to runway reports. Throughout the two seasons various kinds of weather data were collected, e.g., air and ground temperature, humidity, precipitation, visibility and wind. Using these data a scenario based weather model for slippery conditions was developed. The model was validated using flight data from a large number of flights. Using these data we computed several indicators reflecting how the aircraft were affected by the runway conditions. By comparing indicator values from landings where the weather model predicted slippery conditions with the corresponding values from landings on dry runways, we were able to verify that the weather model selected the correct landings.
-
Natvig, Bent; Eide, Kristina A.; Gåsemyr, Jørund Inge; Huseby, Arne & Isaksen, Stefan (2009). Simulation based analysis and an application to an offshore oil and gas production system of the Natvig measures of component importance in repairable systems. Reliability Engineering & System Safety.
ISSN 0951-8320.
94(10), s. 1629-1638. doi:
10.1016/j.ress.2009.04.002
-
Natvig, Bent; Eide, Kristina A.; Gåsemyr, Jørund Inge; Huseby, Arne & Isaksen, Stefan (2009). The Natvig measures of component importance in repairable systems applied to an offshore oil and gas production system, In Sebastián Martorell; C. Guedes Soares & Julie Barnett (ed.),
Safety, Reliability and Risk Analysis: Theory, Methods and Applications.
CRC Press.
ISBN 978-0-415-48516-6.
volume 3.
s 2029
- 2035
Show summary
In the present paper the Natvig measures of component importance for repairable systems, and their extended versions, are applied to an offshore oil and gas production system. According to the extended version of the Natvig measure a component is important if both by failing it strongly reduces the expected system uptime and by being repaired it strongly reduces the expected system downtime. The results include a study of how different distributions affect the ranking of the components. All numerical results are computed using discrete event simulation. In a companion paper (Huseby, Eide, Isaksen, Natvig, and Gåsemyr 2008) the advanced simulation methods needed in these calculations are described.
-
Huseby, Arne (2008). Signed Domination of Oriented Matroid Systems, In Tim Bedford; John Quigley; Lesley Walls; Babakalli Alkali; Alireza Daneshkhah & Gavin Hardman (ed.),
Advances in Mathematical Modeling for Reliability.
IOS Press.
ISBN 978-1-58603-865-6.
Chapter 7.
s 177
- 184
-
Haavardsson, Nils Fridthjov & Huseby, Arne (2007). Multisegment production profile models - A tool for enhanced total value chain analysis. Journal of Petroleum Science and Engineering.
ISSN 0920-4105.
58, s. 325-338. doi:
10.1016/j.petrol.2007.02.003
-
Huseby, Arne (2007). Modeling aircraft movements using stochastic hybrid systems, In Terje Aven & Jan Erik Vinnem (ed.),
Risk, Reliability and Societal Safety, vol.2 Thematic Topics.
Taylor & Francis.
ISBN 978-0-415-44784-3.
Monte Carlo methods in system safety and reliability.
s 1183
- 1190
Show summary
Modern simulation techniques and computer power make it possible to simulate systems with continuous state spaces. In this paper we consider the problem of simulating aircraft movements. Such simulation studies are of interest in order to evaluate the risk of collision or near collision events in areas with heavy air traffic. Application areas include systems for automatic air traffic control (ATC) and airport traffic planning. Risk elements in this area include arrivals of aircraft into the area of interest, weather conditions, chosen runway and landing direction as well as arbitrary deviations from the normal flight trajectories. A realistic simulation model, incorporating all or most of these elements, requires a combination of different tools. A popular framework for such modeling is stochastic hybrid systems. A risk event occurs when two (or more) aircraft are too close to each other in the air. When such an event occurs, the aircraft will attempt to change their trajectories in order to avoid fatal accidents. Fortunately, risk events are fairly rare. Thus, in order to obtain stable results for the probability of a collision or near collision it is necessary to run the model for a substantial amount of time. In the paper we propose a methodology for simulating aircraft movements based on ordinary differential equations and counting processes. The models and methods are illustrated with some simulation examples.
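The deterministic core of such a model, aircraft moving according to simple ODEs with a proximity check, can be sketched as follows. Straight-line motion and explicit Euler integration are illustrative simplifications (the paper adds stochastic arrivals and trajectory deviations), and the function name is our own:

```python
import math

def min_separation(p1, v1, p2, v2, horizon, dt=1.0):
    """Minimum separation of two aircraft flying straight lines (sketch).

    Each aircraft follows dx/dt = v, the simplest special case of the
    ODE models, integrated with explicit Euler steps.  A risk event
    would be flagged whenever the separation drops below a safety limit.
    """
    (x1, y1), (x2, y2) = p1, p2
    best = math.hypot(x1 - x2, y1 - y2)
    t = 0.0
    while t < horizon:
        x1 += v1[0] * dt; y1 += v1[1] * dt
        x2 += v2[0] * dt; y2 += v2[1] * dt
        best = min(best, math.hypot(x1 - x2, y1 - y2))
        t += dt
    return best
```

Two head-on aircraft meet at zero separation, while parallel tracks keep their initial offset, which is what a collision-risk monitor would test against a threshold.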
-
Huseby, Arne & Naustdal, Morten (2003). Improved Simulation Methods for System Reliability Evaluation, In Bo Lindqvist & Kjell Doksum (ed.),
Mathematical and Statistical Methods in Reliability.
World Scientific.
ISBN 981-238-321-2.
8.
s 105
- 121
Show summary
In this chapter we show how Monte Carlo methods can be improved significantly by conditioning on a suitable variable. In particular this principle is applied to system reliability evaluation. In relation to this an efficient algorithm for simulating a vector of independent Bernoulli variables given their sum is presented. By using this algorithm one can generate such a vector in O(n) time, where n is the number of variables. Thus, simulating from the conditional distribution can be done just as efficiently as simulating from the unconditional distribution. The special case where the Bernoulli variables are i.i.d. is also considered. For this case the reliability evaluation can be improved even further. In particular, we present a simulation algorithm which enables us to estimate the entire system reliability polynomial expressed as a function of the common component reliability. If the component reliabilities are not too different from each other, a generalized version of the improved conditional method can be used in combination with importance sampling.
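The i.i.d. special case can be sketched directly. Conditionally on the sum being k, every 0/1 vector with exactly k ones is equally likely, so it suffices to draw a uniform k-subset; a partial Fisher-Yates shuffle does this in O(n) time. This is a minimal illustration only, not the chapter's general algorithm for unequal reliabilities, and the function name is our own:

```python
import random

def bernoulli_given_sum(n, k, rng=random):
    """Draw n i.i.d. Bernoulli variables conditioned on their sum being k."""
    idx = list(range(n))
    for i in range(k):
        j = rng.randrange(i, n)       # swap a uniform choice into slot i
        idx[i], idx[j] = idx[j], idx[i]
    x = [0] * n
    for i in idx[:k]:                 # the first k slots form a uniform k-subset
        x[i] = 1
    return x
```

Only k swaps and two O(n) passes are needed, so conditioning costs no more than unconditional sampling.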
View all works in Cristin
-
Skutlaberg, Kristina; Huseby, Arne & Natvig, Bent (2018). Partial monitoring of multistage systems. Statistical research report (Universitetet i Oslo, Matematisk institutt). 1.
Show summary
For large multicomponent systems it is typically too costly to monitor the entire system constantly. In the present paper we consider a case where a component is unobserved in a time interval [0, T]. Here T is a stochastic variable with a distribution which depends on the structure of the system and the lifetime distributions of the other components. Thus, different systems will result in different distributions of T. The main focus of the paper is on how the unobserved period of time affects what we learn about the unobserved component during this period. We analyse this by considering three different cases. In the first case we consider both T and the state of the unobserved component at time T as given. In the second case we allow the state of the unobserved component at time T to be stochastic, while in the third case both T and the state are treated as stochastic variables. In all cases we study the problem using preposterior analysis. That is, we investigate how much information we can expect to get by the end of the time interval [0, T]. The methodology is also illustrated on a more complete example.
-
Huseby, Arne (2017). On orientable matroid systems and reliability equivalence. Statistical research report (Universitetet i Oslo, Matematisk institutt). 1. Full text in Research Archive.
-
Lilleborge, Marie; Hauge, Ragnar; Eidsvik, Jo & Huseby, Arne (2016). Efficient Information Gathering in Discrete Bayesian Networks. Series of dissertations submitted to the Faculty of Mathematics and Natural Sciences, University of Oslo. No. 1796. Full text in Research Archive.
Show summary
Efficient information gathering in Bayesian networks
-
Huseby, Arne & Thomsen, Jan (2015). Simulating total operational risk using retrospective information and subjective assessments - a Bayesian approach. Statistical research report (Universitetet i Oslo, Matematisk institutt). 2. Full text in Research Archive.
Show summary
Quantifying operational risk exposure typically involves gathering information from several sources, including historical data as well as subjective assessments. Using historical data one can estimate both an incident frequency distribution and an incident consequence distribution. Based on these two distributions a simulation model can be established. However, by limiting the focus to data related to incidents which may reappear in the future, one is often left with a relatively short incident history. In order to improve the risk quantification, it is often necessary to include subjective risk assessments as well. In the present paper we propose a model for how to combine these two sources of information. The model can be used to represent situations ranging from cases where the two sources are disjoint or overlap completely, to intermediate cases where the two sources are partially overlapping. The model is illustrated by considering a numerical example. In this example we vary the degree of overlap between the sources of information.
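A simulation model built from a frequency and a consequence distribution can be sketched as a standard frequency/severity Monte Carlo. The Poisson and lognormal choices below are illustrative assumptions for this sketch, not necessarily those of the paper:

```python
import random

def poisson(lam, rng=random):
    # Poisson draw by counting unit-rate exponential arrivals before time lam
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(1.0)
        if t > lam:
            return n
        n += 1

def annual_loss(freq_mean, sev_mu, sev_sigma, rng=random):
    """Total operational loss for one simulated year (illustrative sketch).

    Incident count ~ Poisson(freq_mean); each incident's consequence
    ~ lognormal(sev_mu, sev_sigma).  Both are assumed distributions.
    """
    n = poisson(freq_mean, rng)
    return sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(n))
```

Repeating `annual_loss` many times yields the total-loss distribution, from which quantiles such as a 99% operational-risk capital figure could be read off.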
-
Huseby, Arne & Breivik, Olav Nikolai (2013). Optimal load sharing in a binary multicomponent system where the components have constant or increasing failure rates.
Show summary
In the present paper we consider a system consisting of n components. The system is exposed to a certain load while supplying a certain amount of utility, e.g., electrical power. The load on the system is distributed among the components. For simplicity we assume that the system demand is constant over time. When functioning, each component is capable of handling a certain amount of load, which we refer to as the component's load capacity. The load capacity of a component is assumed to be constant throughout its lifetime. The main objective of the present paper is developing methods for optimal load sharing among the components subject to the constraints imposed by the load capacities and the demand on the system. A load sharing strategy is optimal if it maximizes the expected lifetime of the system. In the paper we show how to solve the problem in the case where the components have constant failure rates. We also consider the case where the components have increasing failure rates, and show how to solve this in some special cases.
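A load-sharing rule of this kind can be explored by simulation. The sketch below makes several assumptions purely for illustration: component i fails at a constant rate equal to a[i] times its carried load, the demand is re-shared proportionally to the survivors' capacities after each failure, and the system fails when the surviving capacity drops below the demand. None of this is the paper's optimal rule:

```python
import random

def system_lifetime(caps, demand, a, rng=random):
    """One simulated system lifetime under proportional load sharing."""
    alive = set(range(len(caps)))
    t = 0.0
    while True:
        cap_left = sum(caps[i] for i in alive)
        if cap_left < demand:
            return t
        # share the demand proportionally to capacity (one simple rule)
        load = {i: demand * caps[i] / cap_left for i in alive}
        rates = {i: a[i] * load[i] for i in alive}
        total = sum(rates.values())
        t += rng.expovariate(total)
        u, cum = rng.random() * total, 0.0
        for i in list(alive):             # failing component ~ its rate share
            cum += rates[i]
            if u <= cum:
                alive.remove(i)
                break
```

Averaging many such lifetimes lets one compare sharing rules numerically; for two identical unit-capacity components with unit demand the expected lifetime is the sum of two Exp(1) sojourns, i.e. 2.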
-
Huseby, Arne (2011). A Framework for multi-reservoir production optimization.
-
Huseby, Arne (2011, 12. november). Innslag om sannsynlighetsberegning og tilfeldigheter i NRK-programmet Helgeland. [Radio].
NRK.
-
Huseby, Arne & Breivik, Olav Nikolai (2011). Optimal load sharing in a binary multicomponent system.
-
Huseby, Arne & Moratchevski, Nikita (2011). Sequential optimization of oil production under uncertainty.
-
Vanem, Erik; Huseby, Arne & Natvig, Bent (2011). Stochastic Modeling of Wave Climate Using a Bayesian Hierarchical Space-Time Model with a Log-Transform.
-
Huseby, Arne (2010). Assessing airport runway conditions—A Bayesian approach.
-
Natvig, Bent & Huseby, Arne (2010). Measures of component importance in repairable multistate systems- a numerical study.
-
Natvig, Bent; Huseby, Arne & Reistadbakk, Mads (2010). Measures of component importance in repairable multistate systems - a numerical study. Statistical research report (Universitetet i Oslo, Matematisk institutt). 4.
Show summary
In [10] dynamic and stationary measures of importance of a component in a repairable multistate system were introduced. For multistate systems little has been published until now on such measures even in the nonrepairable case. According to the Barlow-Proschan type measures a component is important if there is a high probability that a change in the component state causes a change in whether or not the system state is above a given state. On the other hand, the Natvig type measures focus on how a change in the component state affects the expected system uptime and downtime relative to the given system state. In the present paper we first review these measures, which can be estimated using the simulation methods suggested in [4]. Extending the work in [8] from the binary to the multistate case, a numerical study of these measures is then given for two three-component systems and a bridge system, and the measures are also applied to an offshore oil and gas production system. In the multistate case the importance of a component is calculated separately for each component state. Thus it may happen that a component is very important at one state, and less important, or even irrelevant at another. Unified measures combining the importances for all component states can be obtained by adding up the importance measures for each individual state. According to these unified measures a component can be important relative to a given system state but not to another. It can be seen that if the distributions of the total component times spent in the non-complete failure states for the multistate system and the component lifetimes for the binary system are identical, the Barlow-Proschan measure with respect to the lowest system state simply reduces to the binary version of the measure. The extended Natvig measure, however, does not have this property. This indicates that the latter measure captures more information about the system.
-
Huseby, Arne & Haavardsson, Nils Fridthjov (2009). Multi-reservoir production optimization under uncertainty.
Show summary
Oil and gas production from several reservoirs are often processed at a single processing facility. Due to limitations in the processing capacity, this implies that the production rates from the individual reservoirs have to be reduced. That is, for each reservoir the production rate is scaled down by a suitable choke factor between zero and one, chosen so that the total production does not exceed the processing capacity. Recent studies of production optimization include (Horne, 2002), (Merabet & Bellah, 2002) and (Neiro & Pinto, 2004). (Huseby & Haavardsson, 2009) introduced the concept of a production strategy, a vector valued function defined for all points of time t ≥ 0 representing the choke factors applied to the reservoirs at time t. As long as the total potential production rate is greater than the processing capacity, the choke factors should be chosen so that the processing capacity is fully utilized. When the production reaches a state where this is not possible, the production should be left unchoked. A production strategy satisfying these constraints is said to be admissible. (Huseby & Haavardsson, 2009) developed a general framework for optimizing production strategies with respect to various types of objective functions. The same problem was considered in (Haavardsson et al., 2008), where a parametric class of production strategies was introduced. (Haavardsson et al., 2008) also showed how to find an optimal strategy within this class given that all reservoir parameters were known. In the present paper we consider the problem of optimizing the production strategy when the reservoir parameters are uncertain. The uncertainty is represented using the stochastic models introduced in (Haavardsson & Huseby, 2007). In real life, reservoir uncertainty typically changes over time. Thus, a production strategy considered to be optimal initially may turn out to be far from optimal as more knowledge about the reservoirs is gained. Ideally, one would prefer production strategies which could be updated dynamically as new information is obtained. However, optimizing such dynamic production strategies is a very difficult problem. In the present paper we instead focus on finding production strategies which are robust with respect to variations in the reservoir properties.
-
Huseby, Arne & Natvig, Bent (2009). Advanced discrete simulation methods applied to repairable multistate systems.
-
Søderholm, Bjørn; Bugge, Hans Jørgen; Huseby, Arne; Rabbe, Marit; Klein-Paste, Alex; Bergersen, Erling & Skjøndal, Per (2009). Integrert Rullebane InformasjonsSystem (IRIS).
-
Natvig, Bent; Eide, Kristina; Gåsemyr, Jørund Inge; Huseby, Arne & Isaksen, Stefan (2008). The Natvig measures of component importance in repairable systems applied to an offshore oil and gas production system.
Show summary
In the present paper the Natvig measures of component importance for repairable systems, and their extended versions, are applied to an offshore oil and gas production system. According to the extended version of the Natvig measure a component is important if both by failing it strongly reduces the expected system uptime and by being repaired it strongly reduces the expected system downtime. The results include a study of how different distributions affect the ranking of the components. All numerical results are computed using discrete event simulation. In a companion paper (Huseby, Eide, Isaksen, Natvig, and Gåsemyr 2008) the advanced simulation methods needed in these calculations are described.
-
Haavardsson, Nils Fridthjov; Huseby, Arne & Holden, Lars (2008). A Parametric Class Of Production Strategies For Multi-Reservoir Production Optimization. Statistical research report (Universitetet i Oslo, Matematisk institutt). 8.
Show summary
When a large oil or gas field is produced, several reservoirs often share the same processing facility. This facility is typically capable of processing only a limited amount of oil, gas and water per unit of time. In the present paper only single phase production, e.g., oil production, is considered. In order to satisfy the processing limitations, the production needs to be choked. That is, for each reservoir the production is scaled down by suitable choke factors between zero and one, chosen so that the total production does not exceed the processing capacity. Huseby & Haavardsson (2008) introduced the concept of a production strategy, a vector valued function defined for all points of time t ≥ 0 representing the choke factors applied to the reservoirs at time t. As long as the total potential production rate is greater than the processing capacity, the choke factors should be chosen so that the processing capacity is fully utilized. When the production reaches a state where this is not possible, the production should be left unchoked. A production strategy satisfying these constraints is said to be admissible. Huseby & Haavardsson (2008) developed a general framework for optimizing production strategies with respect to various types of objective functions. In the present paper we present a parametric class of admissible production strategies. Using the framework of Huseby & Haavardsson (2008) it can be shown that under mild restrictions on the objective function an optimal strategy can be found within this class. The number of parameters needed to span the class is bounded by the number of reservoirs. Thus, an optimal strategy within this class can be found using a standard numerical optimization algorithm. This makes it possible to handle complex, high-dimensional cases. Furthermore, uncertainty may be included, enabling robustness and sensitivity analysis.
-
Haavardsson, Nils Fridthjov; Huseby, Arne; Pedersen, Frank; Lyngroth, Steinar; Xu, Jingzhen & Aasheim, Tore (2008). Hydrocarbon production optimization in fields with different ownership and commercial interests. Statistical research report (Universitetet i Oslo, Matematisk institutt). 9.
Show summary
A main field and satellite fields consist of several separate reservoirs with gas cap and/or oil rim. A processing facility on the main field receives and processes the oil, gas and water from all the reservoirs. This facility is typically capable of processing only a limited amount of oil, gas and water per unit of time. In order to satisfy these processing limitations, the production needs to be choked. The available capacity is shared among several field owners with different commercial interests. In this paper we focus on how total oil and gas production from all the fields could be optimized. The satellite field owners negotiate processing capacities on the main field facility. This introduces additional processing capacity constraints (booking constraints) for the owners of the main field. If the total wealth created by all owners represents the economic interests of the community, it is of interest to investigate whether the total wealth may be increased by lifting the booking constraints. If all reservoirs may be produced more optimally by removing the booking constraints, all owners may benefit from this when appropriate commercial arrangements are in place. We will compare two production strategies. The first production strategy optimizes locally, at distinct time intervals. At given intervals the production is prioritized so that the maximum amount of oil is produced. In the second production strategy a fixed weight is assigned to each reservoir. The reservoirs with the highest weights receive the highest priority.
-
Huseby, Arne (2008). Oriented matroid systems. Statistical research report (Universitetet i Oslo, Matematisk institutt). 2.
Show summary
The domination invariant has played an important part in reliability theory. While most of the work in this field has been restricted to various types of network system models, many of the results can be generalized to much wider families of systems associated with matroids. A matroid is an ordered pair (F,M), where F is a nonempty finite set and M is a collection of incomparable subsets of F, called circuits, satisfying certain closure properties. For any given matroid (F,M) where F = E ∪ {x} we can associate a reliability system with component set E and with minimal path sets P. Previous papers have explored the relation between undirected network systems and matroids. In this paper the main focus is on directed network systems and their relation to oriented matroids. Oriented matroids are a special type of matroids where the circuits are signed sets. Using these signed sets one can, e.g., obtain a set-theoretic representation of the direction of the edges of a directed network system. Classical results for directed network systems include the fact that the signed domination is either +1 or -1 if the network is acyclic, and zero otherwise. It turns out that these results can be generalized to systems derived from oriented matroids. Several classes of systems for which the generalized results hold will be discussed. These include oriented versions of k-out-of-n-systems and a certain class of systems associated with matrices.
-
Huseby, Arne; Eide, Kristina; Isaksen, Stefan; Natvig, Bent & Gåsemyr, Jørund Inge (2008). Advanced discrete event simulation methods with application to importance measure estimation.
Show summary
In the present paper we use discrete event simulation in order to analyze a binary monotone system of repairable components. Asymptotic statistical properties of such a system, e.g., the asymptotic system availability and component criticality, can easily be estimated by running a single discrete event simulation on the system over a sufficiently long time horizon, or by working directly on the stationary availabilities. Sometimes, however, one needs to estimate how the statistical properties of the system evolve over time. In such cases it is necessary to run many simulations to obtain a stable curve estimate. At the same time one needs to store much more information from each simulation. A crude approach to this problem is to sample the system state at fixed points of time, and then use the mean values of the states at these points as estimates of the curve. Using a sufficiently high sampling rate a satisfactory estimate of the curve can be obtained. Still, all information about the process between the sampling points is thrown away. To handle this issue, we propose an alternative sampling procedure where we utilize process data between the sampling points as well. This simulation method is particularly useful when estimating various kinds of component importance measures for repairable systems. As explained in (Natvig and Gåsemyr 2008) such measures can often be expressed as weighted integrals of the time-dependent Birnbaum measure of importance. By using the proposed simulation methods, stable estimates of the Birnbaum measure as a function of time are obtained and combined with the appropriate weight function, thus producing the importance measure of interest.
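The Birnbaum measure that these weighted integrals build on can be illustrated at fixed component reliabilities by brute-force enumeration of the structure function. This is feasible only for tiny systems; the simulations estimate the time-dependent analogue of the same quantity. The function name and the tiny examples are our own:

```python
from itertools import product

def birnbaum(structure, n, p):
    """Birnbaum importance I_B(i) = h(1_i, p) - h(0_i, p) by enumeration.

    `structure` maps a 0/1 state vector to the binary system state and
    p holds the component reliabilities.  Enumeration is exponential
    in n, so this only illustrates the definition.
    """
    def h(i, b):                      # system reliability with x_i pinned to b
        r = 0.0
        for x in product((0, 1), repeat=n):
            if x[i] != b:
                continue
            prob = 1.0
            for j, xj in enumerate(x):
                if j != i:
                    prob *= p[j] if xj else 1 - p[j]
            r += prob * structure(x)
        return r
    return [h(i, 1) - h(i, 0) for i in range(n)]
```

For a two-component series system with reliabilities (0.9, 0.8), component 0 has Birnbaum importance 0.8 and component 1 has 0.9: the weaker component's partner is the more critical one.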
-
Huseby, Arne; Eide, Kristina; Isaksen, Stefan; Natvig, Bent & Gåsemyr, Jørund Inge (2008). Advanced discrete event simulation methods with application to importance measure estimation. Statistical research report (Universitetet i Oslo, Matematisk institutt). 11.
Show summary
In the present paper we use discrete event simulation in order to analyze a binary monotone system of repairable components. Asymptotic statistical properties of such a system, e.g., the asymptotic system availability and component criticality, can easily be estimated by running a single discrete event simulation on the system over a sufficiently long time horizon, or by working directly on the stationary component availabilities. Sometimes, however, one needs to estimate how the statistical properties of the system evolve over time. In such cases it is necessary to run many simulations to obtain a stable curve estimate. At the same time one needs to store much more information from each simulation. A crude approach to this problem is to sample the system state at fixed points of time, and then use the mean values of the states at these points as estimates of the curve. Using a sufficiently high sampling rate a satisfactory estimate of the curve can be obtained. Still, all information about the process between the sampling points is thrown away. To handle this issue, we propose an alternative sampling procedure where we utilize process data between the sampling points as well. This simulation method is particularly useful when estimating various kinds of component importance measures for repairable systems. As explained in Natvig and Gåsemyr (2009) such measures can often be expressed as weighted integrals of the time-dependent Birnbaum measure of importance. By using the proposed simulation methods, stable estimates of the Birnbaum measure as a function of time are obtained. Combined with the appropriate weight function the importance measures of interest can be estimated.
-
Huseby, Arne & Haavardsson, Nils Fridthjov (2008). A framework for multi-reservoir production optimization. Statistical research report (Universitetet i Oslo, Matematisk institutt). 4.
Show summary
When a large oil or gas field is produced, several reservoirs often share the same processing facility. This facility is typically capable of processing only a limited amount of commodities per unit of time. In order to satisfy these processing limitations, the production needs to be choked, i.e., scaled down by a suitable choke factor. A production strategy is defined as a vector valued function defined for all points of time representing the choke factors applied to reservoirs at any given time. In the present paper we consider the problem of optimizing such production strategies with respect to various types of objective functions. A general framework for handling this problem is developed. A crucial assumption in our approach is that the potential production rate from a reservoir can be expressed as a function of the remaining producible volume. The solution to the optimization problem depends on certain key properties, e.g., convexity or concavity, of the objective function and of the potential production rate functions. Using these properties several important special cases can be solved. An admissible production strategy is a strategy where the total processing capacity is fully utilized throughout a plateau phase. This phase lasts until the total potential production rate falls below the processing capacity, and after this all the reservoirs are produced without any choking. Under mild restrictions on the objective function the performance of an admissible strategy is uniquely characterized by the state of the reservoirs at the end of the plateau phase. Thus, finding an optimal admissible production strategy is essentially equivalent to finding the optimal state at the end of the plateau phase. Given the optimal state a backtracking algorithm can then be used to derive an optimal production strategy. We will demonstrate this on a specific example.
-
Huseby, Arne; Løvseth, Marthe & Wright, Jan (2008). Helhetlig risikomodell basert på flygbarhetsindeks.
Show summary
Bakgrunnen for prosjektet er at Avinor er forpliktet til å oppfylle kravene i BSL E 3-2 for alle lufthavner og på bakgrunn av denne få en teknisk/operativ godkjenning hvert 5. år. Ved manglende oppfyllelse skal kompenserende tiltak identifiseres og evalueres ved hjelp av risikoanalyse. Dagens godkjenningssystem innebærer at risikoreduserende tiltak ofte iverksettes i en tilnærmet vilkårlig rekkefølge, og tildels uten fokus på tiltak som øker sikkerheten mest. Avinor ønsker derfor å utvikle en kvantitativ risikoanalysemodell som kan gi et bedre grunnlag for å prioritere ulike risikoreduserende tiltak. Modellen som er utviklet, tar for seg det totale risikobildet rundt flygning i tilknytning til en lufthavn. Dermed kan en få synliggjort hvor bidragene til den totale risikoen ligger, og hvilke tiltak som gir mest redusert risiko i forhold til kostnaden. Risikomodellen tar hensyn til usikkerhet i ytre forhold ved at det for hver flyplass er angitt sannsynlighetsfordelinger for variasjon i f.eks. værforhold. Beregning av risikomålene gjøres ved Monte Carlo simulering. Effekten av usikkerheten gjenspeiles i risikonivået på den enkelte flyplass. Modellen produserer flere forskjellige risikomål. Ved analyse av resultatene fra modellberegningene, vil man kunne veie de ulike målene opp mot hverandre for å komme frem til en anbefalt prioriteringsrekkefølge. Flygbarhetsindeks beskriver risikoen er ved en enkelt flygning på en lufthavn i henhold til en skala fra 1 til 10. Et høyt indekstall indikerer høy risiko, mens et lavt tall indikerer lav risiko. For en gitt flyplass angis det karakterer for et sett med risikoområder. Karakterene for de ulike områdene veies sammen etter risikoområdenes relevans for de ulike flygefasene. Den sammenveide verdien kalles flygbarhetsindeksen og karakteriserer det samlede risikonivået ved en gitt flyplass. 
Using aircraft accident statistics, the risk model converts the flyability index into a probability of a fatal accident (an accident with at least one fatality) per flight. An airport with a high flyability index for a given phase of flight is assigned a correspondingly high accident probability in that phase. The risk model then uses the adjusted accident probabilities in an event tree, so that the overall probability of a fatal accident per flight is obtained. A crucial prerequisite for this conversion to produce credible results is that the model is calibrated. In the present prototype, only a preliminary calibration has been carried out. A more thorough recalibration based on results from a broad range of airports should, however, be performed. Nevertheless, the relative differences between the airports (the ranking) will probably be approximately correct, even if the calibration has not been completed. Accident probabilities are further converted into return periods, i.e., the average "distance" between two accidents. The lower the risk level at a given airport, the longer the return periods. By using information about the number of passengers per flight at a given airport, the consequence of a fatal accident can be computed in terms of the number of fatalities. The consequence calculations are based on the probability distribution of the number of passengers, combined with the expected fraction of fatalities in the different phases of flight given an accident in those phases. In addition to assigning grades based on the current condition of an airport, the model also allows grades corresponding to the condition after measures have been implemented. The model can thus give an indication of how much the flyability index is reduced by implementing various measures. Costs can also be specified for each measure, so that one can assess which measures yield the greatest effect per krone invested.
-
Huseby, Arne & Rabbe, Marit (2008). Predicting airport runway conditions based on weather data.
Show summary
Slippery runways represent a significant risk to aircraft, especially during the winter season. Thus, having reliable methods for identifying such conditions is very important. However, measuring the runway friction with satisfactory precision is very difficult. While many different measurement devices have been developed, it is hard to find equipment that produces stable and consistent results. Furthermore, in order to measure friction, the runway needs to be shut down. Thus, in order to avoid severe delays to the traffic, such measurements cannot be carried out too frequently. In the present paper we present the results of a large-scale study of runway conditions carried out during two winter seasons at two Norwegian airports. The main goal was to develop a methodology for identifying slippery runway conditions using weather data in addition to runway reports. Throughout the two seasons various kinds of weather data were collected, e.g., air and ground temperature, humidity, precipitation, visibility and wind. Using these data, a scenario-based weather model for slippery conditions was developed. The model was validated using flight data from a large number of flights. Using these data we computed several indicators reflecting how the aircraft were affected by the runway conditions. By comparing indicator values from landings where the weather model predicted slippery conditions with the corresponding values from landings on dry runways, we were able to verify that the weather model selected the correct landings.
-
Natvig, Bent; Eide, Kristina; Gåsemyr, Jørund Inge; Huseby, Arne & Isaksen, Stefan (2008). Simulation based analysis and an application to an offshore oil and gas production system of the Natvig measures of component importance in repairable systems. Statistical research report (Universitetet i Oslo, Matematisk institutt), 10.
Show summary
In the present paper the Natvig measures of component importance for repairable systems, and their extended versions, are analysed for two three-component systems and a bridge system. The measures are also applied to an offshore oil and gas production system. According to the extended version of the Natvig measure, a component is important if it both strongly reduces the expected system uptime by failing and strongly reduces the expected system downtime by being repaired. The results include a study of how different distributions affect the ranking of the components. All numerical results are computed using discrete event simulation. In a companion paper (Huseby, Eide, Isaksen, Natvig and Gåsemyr 2008) the advanced simulation methods needed in these calculations are described.
-
Haavardsson, Nils Fridthjov & Huseby, Arne (2007). The modelling of multisegment production profiles using hybrid systems and ordinary differential equations.
-
Huseby, Arne (2007). Advances in Risk Analysis and Decision Sciences.
-
Huseby, Arne (2007). Aircraft Movements and Stochastic Hybrid Systems.
-
Huseby, Arne (2007). Oriented Matroid Systems and Signed Domination.
-
Huseby, Arne (2007). Safe Winter Operations, Statistical Analysis.
-
Huseby, Arne (2007). Signed Domination of Oriented Matroid Systems.
Show summary
The domination invariant has played a central part in reliability theory. Most of the work in this field has been restricted to various types of network system models. However, many of the results can be generalized to much wider families of systems associated with matroids. A matroid is an ordered pair (F, M), where F is a nonempty finite set and M is a collection of incomparable subsets of F, called circuits, satisfying certain closure properties. For any given matroid (F, M) and x in F we can associate a reliability system with component set E = F \ x. Previous papers have explored the relation between undirected network systems and matroids. In this paper the main focus is on directed network systems and their relation to oriented matroids. Oriented matroids are a special type of matroid where the circuits are signed sets. Using these signed sets one can, e.g., obtain a set-theoretic representation of the direction of the edges of a directed network system. Classical results for directed network systems include the fact that the signed domination is either 1 or -1 if the network is acyclic, and zero otherwise. It turns out that these results can be generalized to systems derived from oriented matroids. Several classes of systems for which the generalized results hold will be discussed. These include oriented versions of k-out-of-n systems and a certain class of systems associated with matrices.
-
Huseby, Arne (2007). Signert dominasjon og rettede matroidesystemer.
-
Huseby, Arne; Aarrestad, Olav; Norheim, Armann & Rabbe, Marit (2007). Safe Winter Operation Project.
Show summary
Safe landings on slippery runways have been a challenge for decades. Airlines, airport operators and authorities recognize the need for improving today's methods for measuring and reporting runway braking action, both with respect to the precision of the data and their timing. The SWOP project addresses this challenge and aims at developing an equivalent "factor" to today's friction coefficient, resting on meteorological data in combination with in-depth knowledge of the contamination at hand. The approach is multidisciplinary and puts the pilots' need for precise and timely information on prevailing conditions at the centre. There are no commercial restrictions on the results of the work; thus the know-how and models developed by the project are freely available to airlines, airport operators and authorities. The main idea has been to study the relationship between prevailing weather conditions and corresponding runway conditions. To study this, meteorological data have been continuously logged for two consecutive winter seasons (2004/2005 and 2005/2006). In parallel, runway reports and flight recorder data have been logged for a large number of actual landings at Oslo International Airport Gardermoen and Tromsø Airport. Comparing prevailing weather and runway conditions makes it possible to identify meteorological phenomena that trigger slippery runways. At the same time, considerable attention has been given to defining ground-to-air reporting that is meaningful and acceptable from a pilot's point of view. One of the most important results of the project is a weather model. The quality of the model is validated against indicators based on the collected flight data. These calculations clearly show that the weather model works well, as the landings identified as potentially slippery have flight indicator values significantly different from the other landings.
Thus, the model represents a major step towards a method for describing runway conditions as a function of the current status of the continuously logged weather parameters. By combining the results from the weather model with other available sources, pilots can be updated on prevailing conditions on a (close to) continuous basis. The weather model is based on measurements of air temperature, runway temperature, type of precipitation, relative humidity and visibility. The model describes six different scenarios known to cause slippery runways. The scenarios are precisely defined based on criteria representing the above-mentioned measurements. For any point in time within a range of 4 hours back in time, meteorological measurements are used to assess whether any of the six scenarios have occurred. If so, the runway is classified as potentially slippery. This information can then be combined with runway report data to produce a final classification of the runway conditions as "Good", "Medium" or "Poor". The analysis has revealed significant differences between runway condition reports and results produced by the weather model. In most cases these differences can be explained by outdated runway condition reports or recent measures taken by ground staff to improve runway conditions. This suggests that implementing a weather model in addition to existing runway reporting routines will improve the assessment and ground-to-air reporting of prevailing runway conditions. Further work is still needed to fully coordinate the use of such a weather model and existing runway reporting routines. A weather model such as the one developed through the project should be combined with several support tools/systems in the assessment of prevailing runway conditions. Suitable support tools/systems, individual competence, hands-on experience and insight into local weather phenomena will together decide the quality of the information reported from ground to air.
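To make the scenario mechanism concrete, here is a heavily simplified classifier sketch. The three criteria below are invented stand-ins for illustration only; they are not the six SWOP scenarios, and the thresholds are hypothetical.

```python
def classify_runway(measurements):
    """Hypothetical sketch of a scenario-based runway classifier.

    'measurements' is a list of dicts with air temperature (deg C),
    runway temperature, precipitation type and relative humidity (%),
    covering the last few hours.  The scenario criteria below are
    illustrative stand-ins, not the ones defined in the SWOP project.
    """
    def slippery_scenario(m):
        freezing_rain = m["precip"] == "rain" and m["runway_temp"] <= 0.0
        snow_near_zero = m["precip"] == "snow" and -3.0 <= m["air_temp"] <= 1.0
        frost = (m["precip"] == "none" and m["runway_temp"] <= 0.0
                 and m["humidity"] >= 90.0)
        return freezing_rain or snow_near_zero or frost

    # If any scenario occurred within the window, flag the runway.
    if any(slippery_scenario(m) for m in measurements):
        return "potentially slippery"
    return "no scenario"
```

The real model would additionally merge this flag with runway report data to produce the final "Good"/"Medium"/"Poor" classification.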
-
Huseby, Arne & Evenseth, John (2007). Veslefrikk Redevelopment Project, Schedule Risk Analysis Report.
-
Huseby, Arne & Haavardsson, Nils Fridthjov (2007). Multisegment production profile models, a hybrid system approach. Statistical research report (Universitetet i Oslo, Matematisk institutt), 2.
Show summary
When an oil or gas field development project is evaluated, having a satisfactory production model is very important. Since the first attempts in the 1940s, many different models have been developed for this purpose. Such a model typically incorporates knowledge about the geological properties of the reservoir. When such models are used in a total value chain analysis, however, economic and strategic factors also need to be taken into account. In order to do this, flexible modeling tools are needed. In this paper we demonstrate how this can be done using hybrid system models. In such models the production is modeled by ordinary differential equations representing both the reservoir dynamics and strategic control variables. The approach also allows us to break the production model into a sequence of segments. Thus, we can easily represent various discrete events affecting the production in different ways. The modeling framework is very flexible, making it possible to obtain realistic approximations to real-life production profiles. At the same time the calculations can be done very efficiently. The framework can be incorporated in a full-scale project uncertainty analysis.
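The segment-switching idea can be sketched with a minimal two-segment model. The plateau/exponential-decline form, the Euler integration and all parameter values below are assumptions made for illustration; they are not the paper's actual equations.

```python
def production_profile(v0, k, plateau_rate, dt=0.01, t_max=40.0):
    """Minimal sketch of a two-segment hybrid production model (assumed
    form).  Segment 1: constant plateau rate.  A discrete event switches
    to segment 2 (exponential decline, dv/dt = -k*v) as soon as the
    potential rate k*v drops to the plateau rate.
    """
    v, t, profile = v0, 0.0, []
    while t < t_max:
        # Hybrid switch: produce at the plateau rate while the reservoir
        # can sustain it, otherwise follow the decline dynamics.
        rate = plateau_rate if k * v > plateau_rate else k * v
        v = max(v - rate * dt, 0.0)   # explicit Euler step
        profile.append((t, rate))
        t += dt
    return profile
```

Further segments (e.g. a build-up phase or shut-in events) would be added in the same way, as extra discrete switching conditions on the ODE right-hand side.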
-
Eisinger, Siegfried; Sutter, Esther & Huseby, Arne (2006). Component importance measures in complex systems.
Show summary
In this paper a contribution has been made to specifying requirements for improvement measures for a wide class of field application models, namely multi-state systems consisting of multi-state components changing state at discrete times. Following the requirement list, benchmark models can be defined which are suitable for testing the performance of improvement measures. A simple benchmark model has been proposed in this paper. Starting from the classical importance measures 'Improvement potential' and 'Birnbaum's measure', two new importance measures have been proposed, namely the Covariance and the 'Component blaming'-based measures. These measures yield promising results with respect to the stated requirements. Both the Covariance and the 'Component blaming' measures are rather robust and do not complicate the simulation considerably.
-
Huseby, Arne (2005). Trends and Local Effects in Aviation Accident Rates Related to Deregulation. Statistical research report (Universitetet i Oslo, Matematisk institutt), 7.
Show summary
When analyzing flight accident data over some period of time, it is clear that the rates of serious accidents per year show a steady decline. In the present paper we study the accident rates for general aviation in the USA for the period 1960-2003, with special emphasis on the period around the US deregulation, i.e., the period around 1980. The data used here are obtained from the US FAA (Federal Aviation Administration). It is shown that both the total accident rates and the fatal accident rates are overall on a steady decline. We have fitted different regression models to the data. Among these models, the loglinear model gives the best fit, indicating that the rates are flattening out. For the total accident rates there appears to be an even stronger tendency towards flattening. On the other hand, for the fatal accident rates there still appears to be a potential for a future decline. From a long-term perspective we find no indication that the trends are affected significantly by events like deregulation. Still, the accident rates around the deregulation point are indeed lower than one could expect. Thus, for a limited time such events may have a positive effect in terms of increased risk awareness.
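A loglinear trend model of the kind referred to above can be fitted by ordinary least squares on the log rates. The synthetic series below is a stand-in for the FAA data, which are not reproduced here.

```python
import numpy as np

def fit_loglinear(years, rates):
    """Fit the loglinear trend model log(rate) = a + b*year by ordinary
    least squares and return the coefficients (a, b)."""
    X = np.column_stack([np.ones(len(years)), years])
    a, b = np.linalg.lstsq(X, np.log(rates), rcond=None)[0]
    return a, b

# Synthetic declining series standing in for the 1960-2003 accident rates:
# rate = 50 * exp(-0.03 * (year - 1960)), i.e. a 3% annual decline.
years = np.arange(1960, 2004)
rates = 50.0 * np.exp(-0.03 * (years - 1960))
a, b = fit_loglinear(years, rates)
```

A negative slope b corresponds to an exponentially declining rate, which flattens out in absolute terms; that is the sense in which the loglinear fit indicates flattening.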
-
Huseby, Arne; Natvig, Bent & Herbjørnsrud, Dag (2005, 22. april). Høy terrorfare på Huseby.
Aftenposten Aften.
Show summary
The report stating that the terror threat around an American embassy at Huseby would be "moderate" does not meet professional standards. The terror threat will be high, according to two researchers.
-
Huseby, Arne (2004). Advanced modelling and sensitivity analysis in ODRisk.
-
Huseby, Arne (2004). Aktiviteter og utfordringer ved Matematisk institutt.
-
Huseby, Arne (2004). An LCC-study of a high voltage effect switch.
-
Huseby, Arne (2004). An LCC-study of a transformation station.
-
Huseby, Arne (2004). CMC - Conditional Monte Carlo simulation program for System Reliability Estimation.
-
Huseby, Arne (2004). LCC-analyser i Statnett.
-
Huseby, Arne (2004). Real Option Methodology for Offshore Development Projects.
-
Huseby, Arne (2004). Safe winter operations on contaminated runways - three different approaches.
-
Huseby, Arne (2004). System reliability evaluation using conditional Monte Carlo simulation.
Show summary
The paper considers the problem of estimating system reliability using Monte Carlo simulation. By conditioning on suitable functions of the component state vector, the Monte Carlo estimate may converge much faster to the true reliability. In the paper we demonstrate how this can be done using a combination of upper and lower bounds and component counts. The method is especially well suited for estimating the reliability of network systems. For such systems the method can be optimized using the well-known domination invariant. After presenting the main ideas, a few illustrative examples are included, as well as a comparison with other similar methods.
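As a toy illustration of the conditioning idea (not the paper's domination-based optimization), one can condition on the number of working components in an i.i.d. system: the conditional expectation of the structure function given the count is estimated from uniform random subsets and mixed with the exact binomial weights. The 2-out-of-3 system and all parameter values below are illustrative assumptions.

```python
import math
import random

def conditional_mc_reliability(structure, n, p, n_sim=2000, seed=0):
    """Sketch of conditional Monte Carlo for a system of n i.i.d. binary
    components with reliability p.  Conditions on the number S of working
    components: given S = s the working set is uniform over size-s
    subsets, so E[phi | S = s] is estimated by sampling such subsets and
    mixed with the exact binomial probabilities P(S = s).
    """
    rng = random.Random(seed)
    reliability = 0.0
    for s in range(n + 1):
        weight = math.comb(n, s) * p**s * (1 - p)**(n - s)
        working = 0
        for _ in range(n_sim):
            subset = set(rng.sample(range(n), s))
            state = [1 if i in subset else 0 for i in range(n)]
            working += structure(state)
        reliability += weight * working / n_sim
    return reliability

# 2-out-of-3 system: works iff at least two components work.
two_of_three = lambda x: 1 if sum(x) >= 2 else 0
est = conditional_mc_reliability(two_of_three, n=3, p=0.9)
```

For a k-out-of-n structure the conditional expectation is deterministic given the count, so the estimator has zero variance here; for general structures the variance is merely reduced relative to crude Monte Carlo.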
-
Huseby, Arne & Osnes, Eivind (2004). Må pasienten få det verre for å få det bedre?. Uniforum.
ISSN 1891-5825.
(16)
-
Huseby, Arne (2004). Exact Simulation of Binary Variables given their Sum. Statistical research report (Universitetet i Oslo, Matematisk institutt), 10.
Show summary
The paper considers the problem of simulating a vector, X, of n independent binary variables conditioned on their sum, S. For a fixed value of S an exact simulation method is provided in Huseby and Naustdal [4]. In certain situations, however, it is of interest to generate an increasing sequence of binary vectors X1 < ··· < Xn, such that the s-th vector is distributed as the vector X given S = s, s = 1,...,n. If all the variables of the vector X are identically distributed, it can be shown that this is equivalent to generating a random permutation of the index set {1,...,n}. For more details about this, see Huseby and Naustdal [4]. In the present paper, however, we provide a simulation algorithm for the case when the variables of the vector X do not necessarily have the same distribution. This algorithm utilizes the fact that the distribution of a sum of independent binary variables is always log-concave.
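The fixed-sum case can be sketched with a standard sequential sampler: a dynamic-programming pass computes the Poisson-binomial tail probabilities, which then give the exact conditional probability of each bit in turn. This is a generic construction consistent with the abstract; the paper's coupled construction across s = 1,...,n is not reproduced here.

```python
import random

def sample_given_sum(p, s, rng=None):
    """Exact draw of independent Bernoulli variables X_1..X_n with
    success probabilities p, conditioned on X_1 + ... + X_n = s."""
    rng = rng if rng is not None else random.Random()
    n = len(p)
    # tail[i][t] = P(X_{i+1} + ... + X_n = t), computed backwards.
    tail = [[0.0] * (s + 1) for _ in range(n + 1)]
    tail[n][0] = 1.0
    for i in range(n - 1, -1, -1):
        for t in range(s + 1):
            tail[i][t] = (1 - p[i]) * tail[i + 1][t]
            if t > 0:
                tail[i][t] += p[i] * tail[i + 1][t - 1]
    x, remaining = [], s
    for i in range(n):
        if remaining == 0:
            x.append(0)          # remaining sum exhausted: rest must be 0
            continue
        # P(X_i = 1 | remaining sum) by Bayes, using the tail table.
        num = p[i] * tail[i + 1][remaining - 1]
        den = num + (1 - p[i]) * tail[i + 1][remaining]
        bit = 1 if rng.random() < num / den else 0
        x.append(bit)
        remaining -= bit
    return x
```

Each draw costs O(n·s) for the table plus O(n) for the sampling pass, and every returned vector has sum exactly s by construction.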
-
Huseby, Arne (2004). Importance Measures for Multicomponent Binary Systems. Statistical research report (Universitetet i Oslo, Matematisk institutt), 11.
Show summary
In this paper we review the theory of importance measures for multicomponent binary systems, starting out with the classical Birnbaum measure. We then move on to various time-independent measures for systems which do not allow repairs, including the Barlow and Proschan measure and the Natvig type 1 measure. For the case with repairs we discuss a measure suggested by Barlow and Proschan, along with some new suggestions. We also present some new results regarding importance measures for sets of components. In particular, we present a generalization and a new representation of the Natvig type 1 set importance measure. We also indicate how the set measures can be extended to the case with repairs.
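As a pointer to the classical starting point, Birnbaum's measure I_B(i) = h(1_i, p) - h(0_i, p), where h is the system reliability function, can be computed by brute-force state enumeration for small systems. The series system used below is an illustrative example, not one taken from the paper.

```python
import itertools

def birnbaum_importance(structure, p):
    """Birnbaum's measure I_B(i) = h(1_i, p) - h(0_i, p), computed by
    full enumeration of component states -- a toy sketch, only suitable
    for small binary systems."""
    n = len(p)

    def h(i, value):
        # System reliability with component i pinned to 'value'.
        total = 0.0
        for x in itertools.product([0, 1], repeat=n):
            if x[i] != value:
                continue
            prob = 1.0
            for j in range(n):
                if j != i:
                    prob *= p[j] if x[j] else 1 - p[j]
            total += prob * structure(list(x))
        return total

    return [h(i, 1) - h(i, 0) for i in range(n)]
```

For a two-component series system, I_B(1) equals the reliability of component 2 and vice versa, which matches the classical closed-form result.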
-
Huseby, Arne (2004). PGSRisk, Fast modelling of development and production project portfolios. [CD].
-
Huseby, Arne (2004). UiOs plass i det moderne kunnskapssamfunn.
-
Huseby, Arne; Naustdal, Morten & Vårli, Ingeborg (2004). System reliability evaluation using conditional Monte Carlo methods.
Show summary
The paper shows how Monte Carlo methods can be improved significantly by conditioning on a suitable variable or vector. In particular, this principle is applied to system reliability evaluation. Different choices of variables to condition on lead to different approaches. We start out by using upper and lower bounds on the structure function of the system, and develop an efficient method for sampling from the resulting conditional distribution. Another approach is to use the sum of the component state variables. In relation to this, an efficient algorithm for simulating a vector of independent Bernoulli variables given their sum is presented. By using this algorithm one can generate such a vector in O(n) time, where n is the number of variables. Thus, simulating from the conditional distribution can be done just as efficiently as simulating from the unconditional distribution. The special case where the Bernoulli variables are i.i.d. is also considered. For this case the reliability evaluation can be improved even further. In particular, we present a simulation algorithm which enables us to estimate the entire system reliability polynomial expressed as a function of the common component reliability. If the component reliabilities are not too different from each other, a generalized version of the improved conditional method can be used in combination with importance sampling. Finally, we outline how the two conditioning methods can be combined in order to get even better results.
-
Huseby, Arne & Osnes, Eivind (2004). Budsjettet for 2005 og konsekvenser for virksomheten ved grunnenhetene. Åpent brev til rektor. Uniforum.
ISSN 1891-5825.
(14)
-
Huseby, Arne (2003). MODRisk - Generic Modular Influence Diagram Framework.
Show summary
MODRisk is a generic framework for developing tailor-made risk analysis software. Such software products can be developed using a common Java-based framework which is configured through various XML files describing the modules of the program. The end-user can use such a program to quickly build large-scale risk models without having to deal with the complex internal structure of the modules. The framework also enables the end-user to export the finished models to a complete influence diagram model readable by the software product Riscue.
-
Huseby, Arne (2003). Modelling production profile uncertainty.
-
Huseby, Arne (2002). Development of an associated field based on information from a planned test well.
Show summary
In relation to the development of a given oil field, one is considering the development of a second, associated field. Before this decision is made, a test well is to be drilled. By modelling the potential information from this well, it is possible to calculate the value of this well for the project. A methodology for doing this is presented.
-
Huseby, Arne (2002). ODRisk 2.0 - A software tool for evaluating uncertainties in relation to offshore activities on the Norwegian continental shelf.
Show summary
ODRisk is a software tool for analyzing uncertainties related to oil and gas production and investments on the Norwegian continental shelf. The new version offers improved models for representing time uncertainties. Moreover, the program now reads data directly from Excel spreadsheets.
Published Nov. 30, 2010 11:20 PM
- Last modified Oct. 22, 2014 10:47 PM