In this presentation, I will discuss methods to quantify model risk. At its core, a valuation or risk measurement model implicitly assigns probabilities to scenarios, with each scenario determining a set of cash flows or losses. Alternative models assign different probabilities to different scenarios. The literature on “robustness” and uncertainty provides tools for gauging the plausibility of alternative models relative to a baseline model. Among a set of plausible alternatives, we can identify the worst-case model, meaning the one that would produce the largest deviation from the baseline model. The difference in valuation of a security or portfolio under the baseline model and the worst-case alternative yields a bound on model risk at each level of plausibility. This framework lends itself to practical implementation through Monte Carlo simulation, taking advantage of an explicit representation for the worst-case alternative. I will illustrate this method through problems of portfolio analysis, derivatives valuation, and the measurement of counterparty risk.
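To make the explicit worst-case representation concrete, here is a minimal Monte Carlo sketch in the spirit of the robustness literature, assuming a relative-entropy measure of plausibility; the function name and the parameter theta (which indexes the plausibility level) are illustrative, not the talk's own notation.

```python
import numpy as np

def worst_case_bound(losses, theta):
    """Worst-case expected loss under a relative-entropy constraint.

    Under an entropy budget, the worst-case alternative model reweights
    each simulated baseline scenario by exp(theta * loss), normalized --
    the explicit representation that makes the bound computable by plain
    Monte Carlo. theta > 0 indexes the plausibility level.
    """
    w = np.exp(theta * (losses - losses.max()))   # numerically stabilized tilting
    w /= w.sum()
    worst_mean = np.dot(w, losses)                # expectation under worst case
    rel_entropy = np.dot(w, np.log(w * len(losses)))  # divergence from baseline
    return worst_mean, rel_entropy

rng = np.random.default_rng(0)
losses = rng.normal(0.0, 1.0, 100_000)            # baseline-model loss scenarios
for theta in (0.1, 0.5, 1.0):
    m, r = worst_case_bound(losses, theta)
    print(f"theta={theta}: worst-case mean={m:.3f}, rel. entropy={r:.3f}")
```

The gap between the baseline mean and the worst-case mean at a given entropy level is the model-risk bound described above.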
*Value and Capital Management: A Handbook for the Finance and Risk Functions of Financial Institutions, Wiley 2015 (forthcoming 2017 in Chinese by the Shanghai University of Finance and Economics Press, 2017 in Korean by Pakyoung-sa Publishing, and 2018 in Japanese).
Using weekly credit default swap premiums for 35 financial firms, we analyze the credit risk of each of these companies and their statistical linkages, placing special emphasis on the 2005–2012 period. As the model captures firm-specific credit risk and dependence across the firms, it serves as a building block to construct a systemic risk measure. We find increases in systemic risk contributions for both insurance and banking subsectors during the crisis period. We also detect a unidirectional causal effect from banks to insurers when accounting for heteroskedasticity.
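The abstract does not spell out the estimation pipeline; one hedged sketch of a heteroskedasticity-adjusted causality check is to pre-filter each CDS series with a GARCH(1,1) model and then run a Granger test on the standardized residuals. Series names, the placeholder data, and the lag order below are all assumptions for illustration.

```python
import numpy as np
import pandas as pd
from arch import arch_model
from statsmodels.tsa.stattools import grangercausalitytests

def garch_standardize(x):
    """Fit a GARCH(1,1) and return standardized residuals, removing the
    heteroskedasticity that can distort causality tests."""
    res = arch_model(x, vol="GARCH", p=1, q=1).fit(disp="off")
    return res.std_resid

# bank_cds, insurer_cds: weekly CDS premium changes (placeholder data here)
rng = np.random.default_rng(1)
bank_cds = pd.Series(rng.standard_t(5, 400))
insurer_cds = pd.Series(0.3 * bank_cds.shift(1).fillna(0) + rng.standard_t(5, 400))

z = pd.DataFrame({
    "insurer": garch_standardize(insurer_cds),
    "bank": garch_standardize(bank_cds),
}).dropna()

# H0: 'bank' does not Granger-cause 'insurer'
# (statsmodels convention: second column is tested as the cause of the first)
grangercausalitytests(z[["insurer", "bank"]], maxlag=4)
```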
Flood risk in Florence serves as a real-world case for testing the dynamics.
The National Highway Traffic Safety Administration (NHTSA) concluded its National Motor Vehicle Crash Causation Survey (NMVCCS) in 2008. The NMVCCS analyzed the events leading up to a motor vehicle crash to determine what was causing automobile accidents. This study, which found that 93% of accidents are caused by human error, is often referenced to justify and quantify automated vehicles’ accident reduction potential. However, this study was never intended to be applied to automated vehicles. Currently celebrating its 100th year, the Casualty Actuarial Society fulfills its mission to advance actuarial science through a singular focus on research and education for property/casualty actuarial practice. Among its 6,200 members are experts in property-casualty insurance, reinsurance, finance, risk management, and enterprise risk management. The Casualty Actuarial Society has created an Automated Vehicles Task Force (CAS AVTF) to research the technology’s risks and their implications for insurance and risk management. To this end, the Task Force has re-evaluated the NMVCCS in the context of an automated vehicle world. It found that 49% of accidents contain at least one limiting factor that could disable the technology or reduce its effectiveness. The safety of automated vehicles should not be determined by today’s standards; things that cause accidents today may or may not cause accidents in an automated vehicle era. Rather, things like the vehicle’s failure rate (after accounting for any fail-safes, infrastructure investments, and driver interactions) and unavoidable accidents (e.g., falling rocks) should be the gauge by which they are measured. Safety metrics should also consider additional criteria that would not be part of today’s standards and safety concerns, as automation introduces additional risks to consider. This report details the CAS AVTF’s re-evaluation of the NMVCCS and notes areas for future research.
In non-life insurance, territory-based risk classification is useful for various insurance operations including marketing, underwriting, ratemaking, etc. This paper proposes a spatially dependent frequency-severity modeling framework to produce territorial risk scores. The framework applies to aggregated insurance claims, where the frequency and severity components examine the occurrence rate and average size of insurance claims in each geographic unit, respectively. We employ bivariate conditional autoregressive models to accommodate the spatial dependency in the frequency and severity components, as well as the cross-sectional association between the two components. Using town-level claims data of automobile insurance in Massachusetts, we demonstrate applications of the model output (territorial risk scores) in ratemaking and market segmentation.
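As a stylised illustration of how such model output becomes a territorial risk score, suppose we have posterior means of the town-level spatial effects from the fitted frequency and severity components; all names and numbers below are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical posterior means of town-level CAR random effects, as might be
# extracted from a fitted bivariate frequency-severity model.
towns = pd.DataFrame({
    "town": ["A", "B", "C"],
    "freq_effect": [0.10, -0.05, 0.20],   # log-scale spatial effect on claim rate
    "sev_effect":  [0.05,  0.15, -0.10],  # log-scale spatial effect on claim size
})

# Pure premium per exposure is rate * mean severity, so on the log scale the
# territorial effects add; exponentiating gives a relativity (risk score) of
# expected aggregate cost versus a statewide baseline of 1.0.
towns["risk_score"] = np.exp(towns["freq_effect"] + towns["sev_effect"])
print(towns)
```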
We study the problem of valuing life insurance contracts in the presence of taxes and future profits. The basic framework consists of the classical finite state Markov chain model describing the possible states of the policyholder and a stochastic model for the financial market. One approach to model the liabilities is to introduce a full simulation model for the relevant states of the policyholder and the payments associated with the contracts. We discuss how one can alternatively adopt analytical methods such as Thiele's differential equation for the state-wise reserves and Kolmogorov's forward equations for the transition probabilities for determining the market values of the various cash flows arising from the contracts. More precisely, we determine the cash flows for guaranteed and unguaranteed payments, tax payments and future profits for the owners. We also discuss how the market values of the cash flows can be determined without explicitly deriving the underlying cash flows. The cash flows for unguaranteed payments, taxes and profits will typically depend on the term structure, which is uncertain. We refer to these cash flows as term structure dependent cash flows.
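For readers unfamiliar with the analytical route, here is a minimal numerical sketch of Thiele's differential equation for the reserve of a simple two-state (alive/dead) term insurance, solved backwards from the terminal condition. The mortality law and premium rate are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Thiele's ODE for the reserve V(t) of a term insurance with sum insured S,
# premium rate pi and force of mortality mu, terminal condition V(T) = 0:
#   V'(t) = pi + r*V(t) - mu(t) * (S - V(t))
r, S, T = 0.02, 100_000.0, 10.0
mu = lambda t: 0.0005 * np.exp(0.09 * (40 + t))   # Gompertz-type force of mortality
pi = 2_930.0                                      # roughly the equivalence premium here

def thiele(t, V):
    return pi + r * V - mu(t) * (S - V)

# integrate backwards from t = T to t = 0
sol = solve_ivp(thiele, (T, 0.0), [0.0], dense_output=True, max_step=0.05)
print(f"reserve at t=0: {sol.sol(0.0)[0]:.2f}")   # close to 0 for a fair premium
```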
We start by covering the key risks in retirement, such as interest rate risk, inflation risk, investment and reinvestment risk, and longevity risk. We then look at the components of a retirement financial strategy: investment strategy, the strategy for investing the pension pot; withdrawal strategy, the strategy for withdrawing cash from the pension pot to finance expenditures; and longevity insurance strategy. A good product for delivering retirement income needs to offer: accessibility, a degree of flexibility to withdraw funds on an ad hoc basis; inflation protection, either directly or via investment performance, with minimal involvement by individuals who do not want to manage investment risk; and longevity insurance. It is difficult for a single product to meet all these requirements, but a combination of drawdown and a deferred (inflation-linked) annuity does, for example. So a well-designed retirement income programme will have to involve a combination of products. Next we discuss the withdrawal strategy, and note that there is no safe fixed withdrawal rate that is guaranteed to last for the lifetime of the retiree (apart from a life annuity). Alternatives are: withdraw the annuitised value of the fund each year, known as the ‘equivalent annuity’ strategy; draw only the ‘natural’ income from the fund, defined as the ‘pay-out of dividends from income-generating investments’; auto-rebalancing, withdraw from asset classes that experienced the highest growth during the year; and cashflow reserve (or bond) ladder or bucket, hold enough in deposits or short-maturing bonds to meet the next two years of expenditure. We then consider the longevity insurance strategy, which determines when longevity insurance is purchased and when it comes into effect. This essentially boils down to the choice between buying an immediate annuity when it is needed, and buying a deferred annuity at the point of retirement, with the deferred annuity beginning to make payments when it is needed. We end by designing a default retirement income/expenditure plan.
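To make the ‘equivalent annuity’ withdrawal rule concrete, here is a small sketch; the flat discount rate and fixed planning horizon are simplifying assumptions, and a practical version would use a life table.

```python
def annuity_factor(n_years, rate):
    """Present value of 1 per year for n_years at a flat discount rate."""
    return sum((1 + rate) ** -t for t in range(1, n_years + 1))

def equivalent_annuity_withdrawal(fund, age, horizon_age=100, rate=0.03):
    """'Equivalent annuity' rule: each year withdraw the annuitised value
    of the remaining fund over the remaining planning horizon."""
    n = horizon_age - age
    return fund / annuity_factor(n, rate)

# e.g. a 65-year-old with a 300,000 pot planning to age 100
print(round(equivalent_annuity_withdrawal(300_000, 65), 2))
```

Because the rule is recomputed on the remaining fund each year, it can never exhaust the pot early, but the withdrawal amount fluctuates with investment performance.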
The content of this talk will not be finalised until 2018.
risk management. Various relevant papers are to be found on my website: www.math.ethz.ch/~embrechts
We undertake a detailed case study on Human Mortality Database demographic data from European countries, and we use the extracted features to better explain the term structure of mortality in the UK over time for male and female populations when compared to a pure Lee-Carter stochastic mortality model. This demonstrates that our feature extraction framework, and the resulting multi-factor mortality model, improve both in-sample fit and, importantly, out-of-sample mortality forecasts by a non-trivial gain in performance.
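For context, the Lee-Carter baseline referred to above can be fitted in a few lines via an SVD of the centred log-rate matrix; this sketch uses synthetic data purely to show the mechanics.

```python
import numpy as np

def lee_carter(log_mx):
    """Basic Lee-Carter fit: log m(x,t) ~ a(x) + b(x) * k(t).

    log_mx: (ages x years) matrix of log central death rates.
    Centring by the row means makes sum(k) = 0 automatically.
    """
    a = log_mx.mean(axis=1)                       # age pattern
    U, s, Vt = np.linalg.svd(log_mx - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                   # normalise so sum(b) = 1
    k = s[0] * Vt[0] * U[:, 0].sum()              # period index
    return a, b, k

# toy data: 5 ages, 20 years of synthetic improving mortality
rng = np.random.default_rng(2)
ages, years = 5, 20
true_k = -np.arange(years) * 0.05
log_mx = (-4 + 0.08 * np.arange(ages)[:, None] + 0.2 * true_k
          + rng.normal(0, 0.01, (ages, years)))
a, b, k = lee_carter(log_mx)
print(np.round(k, 2))   # downward-trending period index
```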
This presentation will cover ongoing work by Sander Devriendt, Katrien Antonio, Edward (Jed) Frees and Roel Verbelen.
In the life insurance literature, a novel decomposition approach was recently proposed that uses martingale representations to define meaningful risk decompositions of future liabilities. The applications include risk management, product design, and capital regulation. This talk shows how this decomposition approach can be further developed for application in health insurance. Besides discussing the different sources of risk, such as mortality risk, morbidity risk, and financial risk, special attention will also be given to the time dynamics.
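In symbols, the decomposition idea can be sketched as follows; the notation is illustrative, with L the discounted future liability and (F_k) the information filtration.

```latex
% The martingale M_k = E[L | F_k] telescopes into increments, each
% attributable to one period's new information; splitting each increment
% by risk source (mortality, morbidity, financial) yields the decomposition.
\[
  L \;=\; \mathbb{E}[L]
  \;+\; \sum_{k=1}^{T}
  \underbrace{\bigl(\mathbb{E}[L \mid \mathcal{F}_k]
      - \mathbb{E}[L \mid \mathcal{F}_{k-1}]\bigr)}_{\text{risk contribution of period }k}.
\]
```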
Looking at health-linked life annuities from a technical perspective, we note that an appropriate product design can improve the profitability and/or the risk profile of the insurer’s portfolio. In particular, we will first prove that the care pension is less exposed to pricing (and reserving) risks than other LTC insurance products. Then, we will show that underwritten annuities can raise the size and the profitability of a life annuity portfolio without worsening its risk profile.
While the theory of actuarial and statistical methods has developed gradually over time from the mid-seventeenth century, it is only relatively recently that the computer has had its impact on the practical application of the ideas. Increasingly, decisions affecting our lives are being based on formal decision-making processes coded within computers. Often these processes are very elaborate, using data no single human could hope to understand, or even manually examine. Sometimes the processes are adaptive, changing according to obscure internal mechanisms as new data become available. While clearly such developments hold huge promise for improving the human condition, they do not come without risks. In particular, the data may be of uncertain quality. The different dimensions of data quality are examined, categorised as bad data (not the data you want, but a distorted version), invisible data (not just the data you have, but also the data you’d like), changing data (not the data you’ve got, but the data you’ll have), alternative data (not the data you’ve got, but the data you would have had), and misleading data (not the data you’ve got, but the data you think you’ve got). Examples are given showing the potential adverse impact on our decisions and our lives, and strategies for tackling the problems of poor data quality via detection, prevention, and correction are briefly explored.
The present paper aims at exploring possible applications of the frailty concept and related stochastic assumptions in the context of health insurance (disability annuities and long-term care annuities, in particular), hence generalizing the probabilistic structure of the multi-state models. A more realistic setting should allow for individual random frailty affecting disablement, recovery, and mortality. Special attention will be paid to the volatility of the benefits paid by the insurer (and hence to its risk profile), assessed in this more realistic framework.
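A hedged sketch of the kind of probabilistic structure alluded to: the transition intensities of the multi-state model are scaled by an individual frailty Z, with transition-specific exponents so that frailty can, for example, raise disablement and mortality while lowering recovery. The parametrisation below is illustrative, not the paper's.

```latex
% g -> h ranges over disablement, recovery and death transitions;
% the exponents alpha_{gh} let frailty act differently on each transition.
\[
  \mu_{gh}(x \mid Z) \;=\; Z^{\alpha_{gh}}\,\mu_{gh}(x),
  \qquad Z \sim \mathrm{Gamma}(\delta,\delta),\quad \mathbb{E}[Z]=1.
\]
```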
An effective data architecture is essential to realizing the opportunities of AI within actuarial science.
Genetic testing for inherited heart disorders (cardiomyopathies) is becoming widespread. In countries where genetic test results are admissible for insurance underwriting, genetic counsellors are believed to advise clients to obtain life or health insurance before taking a genetic test for a suspected cardiomyopathy. Using Hypertrophic Cardiomyopathy (HCM) as an example, we consider the implications for insurance, especially as regards pre-existing conditions and diagnostic as opposed to predictive genetic testing.
Floods are among the major natural hazards, and suitable models for the respective risks are of particular importance for the management of natural disasters. In this talk we discuss various approaches towards modelling flood risk events, together with the challenges that arise in practical implementations. Particular emphasis will be given to the choice of the underlying spatial and temporal dependence model, the modelling of extremes, and diversification possibilities for this type of risk. We illustrate the analysis with concrete case studies for European floods.
The talk introduces some of these techniques and tools and explores their potential for life insurance, as well as their limitations. The ideas are illustrated by small real-world examples drawn from the Swiss Re data monitoring pool ('Swiss Re Bestandsmonitoring') and recent Swiss Re studies on the insurability of HIV-positive people.
Therefore we apply several multivariate data-analytic methods to predict German DI incidence rates. We compare these different methods and share some of the results and their application in product design and pricing, as well as in process design for underwriting (UW) and claims. For these applications, careful interpretation and validation by market experts and medical doctors is crucial.
Advanced open loss modeling technology enables actuaries to take on a wider role in providing more credible information for pricing and underwriting catastrophe-exposed lines of business. Actuaries can provide more actionable information to senior executives before, during, and after the events. This session will include actual examples of model customizations and how they resulted in more accurate catastrophe loss estimates and impacted the decision-making process.
On October 7, 1994, within the Department of Mathematics of the ETH Zurich, we founded RiskLab in collaboration with the banking and insurance industry. A key premise was the notion of precompetitive research, i.e. research which is both relevant to the global financial industry and academically publishable. In this talk I will briefly sketch the establishment of such a bridge-building center and discuss successes and failures, both from an organizational and from a research point of view. Drawing on what I learned from the establishment and running of RiskLab, I will give advice on possible future cross-disciplinary centers with related ambitions. One such center within the ETH Zurich is the ETH Risk Center, established in 2011. In contrast to RiskLab, which was restricted to the Department of Mathematics, the Risk Center bundles about 20 professors from 8 different departments. The research topics treated also go beyond banking and insurance. A key aspect of my presentation will be the discussion of mainly actuarially relevant research topics treated within either RiskLab or the Risk Center. Examples may include the modeling of dependence with examples from insurance, finance and engineering, the modeling of catastrophic events, the analysis of business interruption scenarios, and the challenges facing the financial industry in a changing technological environment. Towards the end of my talk I will make some comments on the teaching for, and duties of, future actuaries.
In the final part of the talk we link backtesting approaches based on PIT values to the literature on comparative testing of forecasting procedures using concepts like elicitability and consistent or proper scoring rules. We show that the two approaches can reveal different deficiencies of risk models and thus provide complementary tools in the model validator’s armoury.
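As a reminder of the key ingredient, VaR is elicitable: it minimises the expected pinball (quantile) score, so competing VaR forecasts can be ranked by their average realised scores; this is the comparative-testing side of the argument (ES is not elicitable on its own and requires a joint VaR/ES scoring function).

```latex
% The pinball score S(r, y) = (1{y <= r} - alpha)(r - y) is consistent for
% the alpha-quantile: the forecast with the smaller average realised score
% is the better forecasting procedure.
\[
  \mathrm{VaR}_\alpha(Y) \;=\; \arg\min_{r}\;
  \mathbb{E}\!\left[\bigl(\mathbf{1}\{Y \le r\} - \alpha\bigr)\,(r - Y)\right].
\]
```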
Cyber risk, and more broadly, data information security risk takes on increased importance in today’s digital economy. There are several challenges in the quantification of cyber risks, which call for new actuarial theories for cyber risks. Firstly, there is a convoluted relationship between threats (number of cyberattacks) and vulnerability (likelihood of a weakness being exploited). Secondly, there is an interlocking relationship between firms’ cyber security spending and the required provision for the residual loss. Conceptually, the provision for the Annual Loss Expectancy can be viewed as the insurance premium for a full risk transfer, which depends on the level of information security investment. Thirdly, a firm’s information system has multiple data assets facing multiple areas of vulnerability, which need to be accounted for in the optimal allocation of resources of cybersecurity investment. In this talk I present an actuarial economic theory, with mathematical equations, for the combined effect of security investments in addressing cyber threats and vulnerability, from which one can derive the required provision for the residual Annual Loss Expectancy. I will discuss implications in cyber insurance product design, including the features of pre-breach prevention/mitigation and post-breach response. I will also discuss the externality effects of cybersecurity investment and the need for private-sector collective actions.
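The talk's own equations are not reproduced here, but the flavour of the trade-off can be sketched with the well-known Gordon-Loeb breach-probability function, in which security investment z reduces vulnerability v, and the residual Annual Loss Expectancy plays the role of a premium-like provision. All parameter values below are placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def breach_prob(z, v, alpha=1e-5, beta=1.0):
    """Gordon-Loeb class-I breach function: investment z reduces the
    baseline vulnerability v (probability a weakness is exploited)."""
    return v / (alpha * z + 1) ** beta

def total_cost(z, v, t, loss):
    """Security spend plus residual Annual Loss Expectancy:
    ALE(z) = threat frequency * breach probability * loss severity."""
    return z + t * breach_prob(z, v) * loss

# illustrative numbers: vulnerability 0.6, 5 attacks/yr, 2m loss per breach
res = minimize_scalar(total_cost, bounds=(0.0, 5e6),
                      args=(0.6, 5, 2e6), method="bounded")
z_star = res.x
ale = 5 * breach_prob(z_star, 0.6) * 2e6
print(f"optimal investment: {z_star:,.0f}; residual ALE provision: {ale:,.0f}")
```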
The world of technology and its options are developing at an increasing speed. While topic areas like Big Data or Real-time Analytics are starting to be digested by the insurance industry, new technology topics like IoT, Blockchain, Robotics or Cognitive Computing are making their way into widespread, pervasive usage. The author will provide patterns describing how these technology options support the transformation of many insurance companies from the classical “covered risk” perspective of their product portfolio into service providers (“care takers”) addressing the underlying life situations of their customers in a (more) holistic way. The presentation will contain many illustrative real-life examples of what this shift may look like and why it solves real problems for the persons affected, the companies involved and the insurance company taking care of a specific problem area. These examples will, by nature, cross the lines of the classical separation of life, health and P&C, as the real-life situations of humans are very likely to ignore these classifications. From a mathematical point of view, this extension will be accompanied by the need for models that can be used not only to predict hazardous constellations, but also to give “optimal” (in whatever sense) advice and resulting actions on how to avoid them. The development of these models, together with their interplay with the risk models, the models predicting customer behavior and the operational models needed to handle individual, contextual support actions, will be the next frontier for actuaries and mathematicians (models composed from sub-models).
Advanced statistical analyses can enhance traditional actuarial methods to highlight trends and behaviours which can be crucial to the running of a healthcare portfolio. This presentation outlines current new thinking in methodology, with practical examples.
We discuss the design of pension products. The theoretical context is decision making under uncertainty with notions of preferences like risk aversion, habit formation, and resolution of uncertainty. The practical questions we address concern guaranteed benefits, smoothing of investment returns and...would you actually like to know when you are going to die?
Dirk Jonker, Dutch Actuary of the Year, shares his personal story of how he transformed from a classical retirement actuary into a data science entrepreneur solving big people puzzles for companies. Get inspired with tons of visual examples and see how you can be relevant (and needed) in a context far beyond pensions and insurance.
It is a well-known phenomenon that the costs of health care have increased much more than GDP or the general inflation rate. One component of the health care cost increase is medical inflation. Obviously, medical inflation is an important factor for calculating premiums and trends in health insurance. But what is medical inflation? And what do we know about it? There is quite a large number of publications on medical inflation; however, serious empirical research on it is rare. The presentation describes the various definitions of medical inflation, presents an international comparison, and attempts a prediction of future medical inflation.
Drug-related consumption and expenses continue to increase every year. This is particularly true for private health insurers (PKV), which reimburse pharmaceutical innovations. In comparison to the compulsory sickness insurance, only a small number of regulations allow for price reductions. This PKV characteristic may be beneficial for the insured person and the physician, because therapy planning and personalized drug selection occur independently of budget restraints. However, medical treatment of high quality is challenged, especially in the case of chronic diseases. The aim of this study is to estimate drug-related expenses by age and sex for the present and the future.
In this talk I will provide a cross-sectoral overview of securities lending, then focus more explicitly on the insurance space, and discuss the perspective of regulators addressing the issue of systemic risk contribution via an activities-based approach. I will then outline a portfolio model to illustrate the main trade-offs at play when designing and managing securities lending programs, demonstrating in turn how a holistic, risk-based approach can provide a good representation of the risk profile of such activities. Finally, I will explore how a Solvency II-type framework may be used to understand the main risks channeled by securities lending operations.
After a brief summary of some background information, we present the relevant mathematical steps applied to convert the previous incidence and mortality rates for “care levels” to those for “care degrees”. Finally, we discuss the risk of becoming in need of care, bearing in mind the longevity risk.
The United Nations Joint Staff Pension Fund (UNJSPF) is a 60+ year old defined benefit plan providing retirement, disability, and death benefits to over 120,000 staff members of the United Nations and 23 other international organizations. The Fund pays benefits to over 72,000 retirees and beneficiaries in over 190 countries in 15 different currencies. Assets are close to US$60 billion. This paper provides background on the UNJSPF, focusing on the singular aspects of its governance, plan design, and actuarial and funding approach. The governance of the UNJSPF is bifurcated, with assets and liabilities managed separately and technical coordination provided through asset-liability management studies. The United Nations General Assembly has ultimate responsibility for the UNJSPF but has created a tripartite Pension Board and a secretariat to administer the Fund. Assets are managed mostly internally, under the responsibility of the United Nations Secretary-General. Key provisions of the UNJSPF consist of a benefit formula based on final average remuneration and service, with some variants regarding accrual rates, retirement age and payout options. In particular, the Fund has a unique two-track system which allows retirees to receive benefits in local currency while comparing their value to that of the original benefit established in US dollars. The UNJSPF actuarial approach includes establishing its own actuarial assumptions, a long-term open group funding method, and a sophisticated approach to determining the value of its two-track system. A key objective is to maintain a constant contribution rate. Finally, a Committee of Actuaries, as well as an Investment Committee and an Audit Committee, assist the stakeholders.
The global actuarial profession is now going through a period of significant change regarding actuarial education. This has been prompted by a number of actuarial professional associations around the world acknowledging that the world has changed, and that what will be needed from the actuarial profession is also changing very fast. At a global level, this has been led by the Syllabus Review Task Force of the International Actuarial Association (IAA). This Task Force has come up with a new global actuarial syllabus setting out the minimum standard which the Task Force believes is needed to be an effective actuary in the 2020s. This new global syllabus has recently been adopted officially by the IAA, and is already influencing changes taking place in the syllabuses of many actuarial associations around the world. Some of these changes include the incorporation of the mathematical techniques and professional impacts of the big data revolution, as well as more focus on the delivery skills of an actuary, including communication and professionalism. There is also a focus on defining the depth of knowledge and thinking skills required for each part of the syllabus, to help define more clearly the competencies needed for an actuary. All of this will allow the actuary to operate in an increasing range of roles and industries, using a powerful technical toolkit backed by the professional promise.
In this talk, we present challenges related to understanding, modelling, hedging and monitoring longevity risk. We first describe longevity risk and its potential consequences for the insurance and pension industry. We then describe how to model longevity risk and uncertainty in the presence of very heterogeneous demographic viewpoints. In particular, we introduce a population dynamics approach to longevity modelling. We present partial hedging solutions and the associated basis risk. We conclude with operational monitoring solutions. Numerical illustrations are given based on simulations and real-world insurance portfolios.
The German legislator introduced optional covers in German public health insurance in 2007. In my presentation I will explain the background of this decision, the different products which were introduced in the market, and details of the regulatory environment. I will describe the pricing of such products, which requires particular methods that are not commonly used in health insurance. The final part of the talk will deal with the performance of the products in the market, their profitability, and the impact on the claims costs of the health insurers.
Forecasting medical trends remains one of the most significant components of healthcare pricing. This presentation analyses the sources of trend, from utilisation effects to provider behaviour, and in addition summarises commonly used methods to control future trends.
In Germany, long-term care insurance was introduced as a pay-as-you-go (PAYG) financing scheme in 1995. This system, however, is not sustainable under demographic change, with a growing number of very old people (beneficiaries) and a shrinking share of young people financing long-term care insurance. These demographic developments result either in rising contributions (as we have witnessed in recent years) or in rationing insurance benefits. To see how a capital-funded system would have coped with demographic change, we will simulate the premium path that would have emerged if long-term care insurance had been introduced as a capital-funded system for the whole German population in 1995. We will use the calculation model of the private mandatory long-term care insurance (“Private Pflegepflichtversicherung”) to calculate the starting premium that would have been necessary to insure the respective cohorts of the population. The basis for this calculation will be the 1995 information on the benefits basket and the probability of benefit claims. Furthermore, we will also take into account the politically intended premium cap and the resulting cost-sharing between cohorts. On this basis, we will simulate the premium path successively over the years by taking into account all events requiring a premium adjustment (such as changing benefits, higher benefit claims or rising life expectancy). In this way, we will generate a realistic premium path of the fictitious capital-funded long-term care insurance that can be compared to the contribution burden in the actual PAYG system. This comparison will be made using exemplary insureds with different socio-economic characteristics. In a second step, we will create a premium/contribution forecast for the fictitious capital-funded system and the existing PAYG long-term care system, respectively, to highlight the future difference in the premium/contribution burden of the two systems.
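A stylised sketch of the two calculation steps that drive such a premium path: the equivalence-principle starting premium, and the reserve-offset premium adjustment used in German private health and LTC insurance. All numbers below are placeholders, not results of the study.

```python
def level_premium(benefit_pv, annuity_pv):
    """Equivalence-principle net premium: PV of benefits divided by the
    PV of an annuity of 1 per year payable while premiums are due."""
    return benefit_pv / annuity_pv

def adjusted_premium(new_benefit_pv, new_annuity_pv, reserve):
    """Premium after an adjustment event (higher claims probabilities,
    rising life expectancy, benefit changes): the accumulated ageing
    reserve offsets part of the recalculated benefit value."""
    return (new_benefit_pv - reserve) / new_annuity_pv

# toy illustration of the mechanics
premium = level_premium(benefit_pv=40_000.0, annuity_pv=20.0)   # start in "1995"
reserve = 6_000.0                                               # accumulated ageing reserve
# adjustment event: claims probabilities rise, benefit PV jumps to 48,000
premium_after = adjusted_premium(48_000.0, 18.0, reserve)
print(round(premium, 2), round(premium_after, 2))
```

Iterating the adjustment step over each event year produces the fictitious premium path that the study compares with the actual PAYG contribution burden.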