Category Archives: Analytics
Felix von Leitner, Code Blau GmbH, Source Code Optimization, here. Top shelf presentation. I would adjust this money shot though for the competitive FinQuant folks. Yes, optimizing memory hierarchy performance is a really important goal. But simply optimizing your code’s memory performance without addressing the fact that you are running on a little superscalar, superpipelined parallel machine may set you apart from the apathetic coders out there; it doesn’t win you many competitive races. There are two primary optimization targets for FinQuant code that needs to exhibit competitive performance:
1. Instruction Level Parallelism (ILP) and
2. Multi-level Cache Hierarchies
If you miss the ILP, you can lose close to an order of magnitude of execution performance. If your code is missing in L2 or L3 needlessly, you could be dumping multiple orders of magnitude of execution performance. If you get both of these right in FinQuant analytics, and you have the right algorithms and top of the line off-the-shelf microprocessors, then there is no competitor running significantly faster than you (in any race you care about). Where does this fail to hold? Something like session oriented protocol processing for exchange order routing – custom hardware with FPGAs can demonstrate much better latency performance (say an order of magnitude improvement, 10 µs to 400 ns) than off-the-shelf servers running software, no matter how cleverly you optimize. Where does this hold? Standard FinQuant applications like Monte Carlo simulation, most Fixed Income P&L and Risk, even something as simple as Black Scholes.
• Memory hierarchy is the only important optimization goal these days
• Use mul instead of shift: 5 cycles penalty.
• Conditional branch mispredicted: 10 cycles.
• Cache miss to main memory: 250 cycles.
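To make the ILP point concrete, here is a minimal sketch (mine, not from the talk) of the same reduction written two ways. The single-accumulator loop serializes on the add latency; the four independent accumulators let a superscalar core overlap the adds, and the sequential access keeps the prefetcher feeding L1/L2:

```cpp
#include <cstddef>

// Each add waits on the previous one: one long dependency chain.
double sum_serial(const double* x, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        s += x[i];
    return s;
}

// Four independent dependency chains expose ILP to the out-of-order core.
// (Floating point reassociation changes the rounding slightly.)
double sum_ilp(const double* x, std::size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    for (; i < n; ++i) s0 += x[i];   // tail
    return (s0 + s1) + (s2 + s3);
}
```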
Marco Avellaneda, NYU, Algorithmic and High-frequency trading: an overview, here.
Algorithmic trading: the use of programs and computers to generate and execute (large) orders in markets with electronic access.
Almgren and Chriss, NYU, Dec 2000, Optimal Execution of Portfolio Transactions, here.
We consider the execution of portfolio transactions with the aim of minimizing a combination of volatility risk and transaction costs arising from permanent and temporary market impact. For a simple linear cost model, we explicitly construct the efficient frontier in the space of time-dependent liquidation strategies, which have minimum expected cost for a given level of uncertainty. We may then select optimal strategies either by minimizing a quadratic utility function, or by minimizing Value at Risk. The latter choice leads to the concept of Liquidity-adjusted VAR, or L-VaR, that explicitly considers the best tradeoff between volatility risk and liquidation costs.
Found some old prose for a study I wrote on Counterparty Valuation Adjustment for OTC derivatives. I was thinking to pull this together for publication back in the day and then saw Jon Gregory’s CVA book was already out on the shelves, so this prose kind of got lost. I had not looked at it for a couple of years and the details on the per-trade mitigants were not foremost in my memory when I reread this. It isn’t bad. The Analytics are really quite straightforward. Getting the client data clean and lined up, and the grid/cache layout of the Monte Carlo grid execution, are really the whole game here.
Counterparty Valuation Adjustment Analytics
Once you have the Reference Data and Trade Inventory in hand, the problem is to apply suitable CVA models allowing a production CVA computation to produce valuation, sensitivity, and explanatory information in a timely fashion for the desk and P&L reporting. Each desk’s product group maintains a series of models coded in an analytics library for the P&L valuation of each of their particular products. For each trade the analytics compute the expected fair value of the position given a suitable series of underlying prices, spreads, and market levels for that particular product. As part of their daily P&L batch they retrieve market data and compute the MTMs, risk sensitivities, and PAA for each active trade in their inventory on the current business day. Now for CVA, the product group and Research develop a model for estimating the expected future MTM exposure, not just for a single product trade (and its hedges), but for a P&L batch trade portfolio corresponding to a particular counterparty or even a specific Master Agreement signed with a specific counterparty. In this case the model evolves a set of relevant prices, spreads, and market levels forward in time in order to calculate an estimate of the total MTM exposure of multiple products simultaneously. Whereas the product group’s P&L pricing model has to get expected fair value for product A and be well behaved with respect to the other products used to hedge product A, the CVA model sort of has to get a fair price for all the products in a portfolio simultaneously by evolving a larger correlated set of underlying prices, spreads, and market levels forward in time. Remember, someone is going to try to hedge the resulting CVA risk.
The Asset Charge and Liability Benefit are the main valuation components of the CVA numbers used to describe the credit exposure due to fluctuations observed in the credit spreads of OTC derivative counterparties. One important distinction to note upfront is that the CVA numbers are treated as portfolio level P&L quantities; the CVA numbers do not generally get broken down to the trade level, unlike MTM P&L. We will see why that is by first describing how CVA numbers are computed on a single trade and then examining how CVA numbers are generally computed on a portfolio of trades subject to the same ISDA Master Agreement (or, absent that, the same Legal Entity x counterparty).
Counterparty risk exists on those individual trades for which the current or potential future replacement cost from the Dealer’s perspective may be non-zero. Think of two trades entered into by Party A, a dealer. The first trade is a 5Y payor swap sales trade with Party B, a client of the Dealer. Party B default protection is currently trading at 320 bps over USD Libor. The second trade is a 5Y receiver swap, exactly offsetting the first trade, between Party A and Party C. Party C default protection is currently trading at 165 bps over USD Libor. The first trade is marked to market at $100 today and the second trade at -$100. From Party A’s perspective there is no market risk on the hedged position; however, since Party B and Party C default protection trades differently (note the different spreads over Libor), there is counterparty risk for Party A. The trade currently marked at $100 is unlikely to exactly offset the second trade with Party C because the market expects that Party B has a significantly greater chance to default on its payment obligations than Party C. Notice there is nothing special about the underlying swap – it could as well be an FX option, commodity swap, or a local market swaption.
Looking more closely at the first trade between Party A and Party B, we see that the positive MTM trade is slightly less positive when you consider Party B’s market implied expected capacity to pay – this is the CVA quantity called the Asset Charge. Similarly, Party B, in the event of its own default, will only pay Recovery Rate × $100 to Party A, so the liability arising from the first trade’s MTM is, once again, slightly less negative (from Party B’s perspective) – the Liability Benefit to Party B.
Asset Charge (Party A) = – Liability Benefit (Party B)
One method to quantify the magnitude of the CVA adjustment (e.g., the Asset Charge) to the MTM to reflect the counterparty risk is to imagine that in addition to printing the 5Y payor swap, Party A implicitly sold Party B an option to knock out the 5Y payor swap on default (represented by the dotted arrow in the figure), with a termination fee of Recovery Rate × the MTM of the 5Y payor swap at the time Party B defaults. The value of the CVA Asset Charge is identically the value of the knock-out option and therefore also the magnitude of the Liability Benefit.
So how does one compute the Asset CVA on a single 5Y Payor Swap? The issue in determining CVA is that, unlike coupons in fixed rate bonds, swap cash flows are not certain and future values can be positive or negative with asymmetric outcomes in the event of default by either counterparty. Let’s assume we employ a Monte Carlo analytic framework even though in certain cases closed form solutions or other faster numerical approximations may be used in practice. One important factor in the Monte Carlo framework is the ability to use the Front Office valuation for the day zero mark to market. The mark from which the process evolves is the P&L mark so one might expect that the major approximation error to monitor is the Monte Carlo convergence in the context of the market factor evolution/diffusion model.
The CVA Monte Carlo framework dictates that we compute PVs along a set of paths of future interest rates using current market (forward) rates and implied volatilities indexed by a series of tenors. This is likely to be the dominant part of the runtime of the entire CVA computation. Any good runtime estimates need more information on the specifics of the quantitative model and numerical approximation frameworks selected by the product groups and Research.
The estimated data requirement for 2000 paths and 300 tenors at 8 bytes a double is, worst case, 4.8 MB per trade (presumably, we are running 300 tenors out to 30Y, so not all tenors will be needed for a 5Y swap). Assuming 1MM trades in scope we are looking at 4.8 TB (worst case) of intermediate storage for PVs. From a computational runtime perspective I think you already know that the computation needs to reside entirely in a single process memory address space (in, say, a single grid processor after the embarrassingly parallel MC computation is load balanced across a grid) to avoid absolutely egregiously bad memory performance.
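As a minimal sketch of what that per-trade grid looks like (illustrative names, not any production system), the PV buffer is just a contiguous paths × tenors array kept in one address space so the later exposure passes stay in cache:

```cpp
#include <cstddef>
#include <vector>

// Intermediate PV grid for one trade: paths x tenors doubles, row = path.
struct PVGrid {
    std::size_t paths;    // e.g. 2000 Monte Carlo paths
    std::size_t tenors;   // e.g. 300 tenors out to 30Y
    std::vector<double> pv;

    PVGrid(std::size_t p, std::size_t t) : paths(p), tenors(t), pv(p * t) {}
    double&       at(std::size_t i, std::size_t t)       { return pv[i * tenors + t]; }
    const double& at(std::size_t i, std::size_t t) const { return pv[i * tenors + t]; }
};

// 2000 paths * 300 tenors * 8 bytes ≈ 4.8 MB per trade; 1MM trades ≈ 4.8 TB worst case,
// which is why the grid for a given trade (or netting set) should live contiguously
// in a single grid process rather than be scattered across the cluster.
```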
The tenors cover the remaining time to maturity of the 5Y payor swap. For a given set of N simulations let PVi(t) denote the mark to market exposure at simulation path i and tenor t. Additionally, we need to segregate the positive and negative contributions to the expected exposure by defining posPV and negPV. Note that when we take the max() and min() we lose the expectation linearity past this point in the process of computing CVA.
posPVi(t) = max(0, PVi(t))
negPVi(t) = min(0, PVi(t))
The computation requires the valuation of a swap at a sequence of tenors out to the maturity of the swap, at which point the mark to market is zero. Additionally the computation determines the EPE and ENE, the average positive mark to market (posPV) and the average negative mark to market (negPV). The data volume at this step is proportional to the number of counterparties times the tenors.
EPE(t) = 1/N * ∑(i=1,N) posPVi(t)
ENE(t) = 1/N * ∑(i=1,N) negPVi(t)
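A minimal sketch of that exposure-profile pass, following the formulas above and assuming the row-major paths × tenors layout from the earlier sketch:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// pv is the paths x tenors grid for one trade, row-major (path-major).
void exposureProfile(const std::vector<double>& pv,
                     std::size_t paths, std::size_t tenors,
                     std::vector<double>& epe,    // out: EPE(t) per tenor
                     std::vector<double>& ene) {  // out: ENE(t) per tenor
    epe.assign(tenors, 0.0);
    ene.assign(tenors, 0.0);
    for (std::size_t i = 0; i < paths; ++i) {
        for (std::size_t t = 0; t < tenors; ++t) {
            double v = pv[i * tenors + t];
            epe[t] += std::max(0.0, v);   // posPV_i(t)
            ene[t] += std::min(0.0, v);   // negPV_i(t)
        }
    }
    for (std::size_t t = 0; t < tenors; ++t) {
        epe[t] /= static_cast<double>(paths);   // EPE(t) = (1/N) * sum_i posPV_i(t)
        ene[t] /= static_cast<double>(paths);   // ENE(t) = (1/N) * sum_i negPV_i(t)
    }
}
```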
Compute the Unilateral CVA, or Asset CVA, by discounting the loss on default * EPE(t) termwise per tenor with the risky discount factor to obtain a scalar value. The Liability CVA is similarly discounted by the Firm’s loss on default and risky discount factors. Typically, a Unilateral CVA has been applied to trades where the expected counterparty exposure is positive (i.e., a receivable from the counterparty), occasionally referred to as the Asset CVA or Asset Charge.
Asset CVA = ∑(t=1,T) delta t * EPE(t) * (1 – Recovery Rate) * CP.rdf(t)
Liability CVA = ∑(t=1,T) delta t * ENE(t) * (1 – Dealer Recovery Rate) * Dealer.rdf(t)
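A minimal sketch of those two discounted sums; the tenor accruals (delta t), recovery rates, and risky discount factor curves are treated as inputs supplied by the product group and Research:

```cpp
#include <cstddef>
#include <vector>

// Asset CVA = sum_t dt[t] * EPE(t) * (1 - CP recovery) * CP risky discount factor
double assetCVA(const std::vector<double>& epe,
                const std::vector<double>& dt,
                const std::vector<double>& cpRdf,
                double cpRecovery) {
    double cva = 0.0;
    for (std::size_t t = 0; t < epe.size(); ++t)
        cva += dt[t] * epe[t] * (1.0 - cpRecovery) * cpRdf[t];
    return cva;
}

// Liability CVA = sum_t dt[t] * ENE(t) * (1 - Dealer recovery) * Dealer risky discount factor
double liabilityCVA(const std::vector<double>& ene,
                    const std::vector<double>& dt,
                    const std::vector<double>& dealerRdf,
                    double dealerRecovery) {
    double cva = 0.0;
    for (std::size_t t = 0; t < ene.size(); ++t)
        cva += dt[t] * ene[t] * (1.0 - dealerRecovery) * dealerRdf[t];
    return cva;
}
```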
Unilateral CVA can be defined as the difference between the risk-free portfolio value and the true portfolio value that takes into account the possibility of a counterparty default. Bilateral CVA requires adding a term to the Unilateral CVA that accounts for the risk the Dealer Legal Entity, as a party to the trade, poses to the counterparty.
Bilateral CVA = Asset CVA + Liability CVA
The Firm’s current CVA exposure (Asset CVA) to the counterparty is a function of current MTMs (as we just discussed) as well as the effect of any legally enforceable exposure mitigations. We are going to review the exposure mitigants at the single contract level now. Ignoring credit derivative hedges, at the individual trade level the applicable exposure mitigants include:
- Stop Loss and Recouponing
- Optional Early Termination and
- Reverse Walk Away Clauses and Extinguishing Derivatives.
Margin and collateral are applicable for discussion at the single trade level but really get implemented at the counterparty account level. Typically, an investment bank has an entire independent department of folks running margin for the Firm based on a centralized database of accounts opened for counterparties and their trading/legal entities at the inception of the trading relationship. Historically, the challenge to the desk is almost always the integration between the counterparty-account-level collateral-on-hand reports from Margin and the P&L book-level MTM and risk reports from the desk for a specific trader. Margin happens to be a portfolio exposure mitigant that can be discussed at the single trade level, so we will take advantage of that and cover it here before we proceed to the portfolio CVA computation.
Margin agreements for OTC derivatives are typically in the form of an ISDA Credit Support Annex to the main ISDA Master Agreement. Among other things the CSA sets a ratings based schedule of Collateral thresholds. In a vastly simplified description, if at a given time a counterparty has a Moody’s rating of A the schedule is used to determine a dollar (in general, some stipulated currency) level for the value of the collateral that must be held in the margin account. The value of the MTM minus the posted collateral must be less than the threshold. If the threshold is breached the Margin folks call up the counterparty and request no less than a minimum transfer amount (also in the CSA) of collateral to restore the threshold integrity. So, in effect, the greater the Firm’s CVA exposure to a counterparty, on a particular trade, the more collateral the Margin folks are holding. Think of the positive MTM of the trade as a loan to the counterparty and the margin folks as maintaining a dynamic portfolio of cash-like assets in a counterparty account securing the loan. The CVA computation can model margin by bounding the expected PV of the trade by the residue after the counterparty margin call. Below we show the residue as the remainder after dividing the PV by the ratings based threshold (we are ignoring the Minimum Transfer Amount here) assuming that the margin agreement is bilateral (i.e., both the counterparty and Dealer post collateral per the CSA).
posPVi(t) = max(0, res(PVi(t), CP Threshold))
negPVi(t) = min(0, res(PVi(t), Dealer Threshold))
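A minimal sketch of the margin-bounded exposure, taking the remainder description above literally (Minimum Transfer Amount ignored, as in the text; the thresholds are illustrative CSA inputs):

```cpp
#include <algorithm>
#include <cmath>

// res(pv, threshold): remainder of the PV modulo the ratings-based threshold,
// per the simplified residue description in the text.
inline double res(double pv, double threshold) {
    return std::fmod(pv, threshold);
}

// Counterparty posts collateral above its threshold; the Dealer posts above its own.
inline double posPV(double pv, double cpThreshold) {
    return std::max(0.0, res(pv, cpThreshold));
}

inline double negPV(double pv, double dealerThreshold) {
    return std::min(0.0, res(pv, dealerThreshold));
}
```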
Then use posPV and negPV to compute EPE and ENE (and CVA) as above. We are just describing the tip of the iceberg that is Margin and Collateral modelling. From a computational runtime perspective Margin has a non-trivial cost which we will account for in estimates presented later in the discussion.
In several trading jurisdictions where legal concerns about enforceability of margin agreements have arisen, a trade may include specific contractual clauses to limit distressed counterparty exposures as well as loss limits in the event of defaults. A stop loss agreement is an MTM threshold trigger at which the contract unwinds. The parties to a stop loss exercise change both their market and counterparty risk profiles. Recouponing is an MTM threshold triggered unwind followed by reprinting the terms of the unwound trade as a par trade. So after the unwind fee is paid, the parties to the recouponing exercise maintain the same market risk but not the same counterparty credit risk. Third party guarantees may be used to offset counterparty default risk. Think of the once popular monoline wraps of synthetic CDO tranches, for example. The credit risk of the guaranteed tranche is dependent on the correlated default probability of the counterparty and its guarantor. Optional Early Termination is negotiated on a per trade basis, allowing Party A and Party B to terminate and settle a trade at a predefined date (or schedule of dates) prior to the nominal maturity date. Finally, reverse walk away clauses or extinguishing derivatives are gaining some traction. These are trade clauses, invoked at the time of a counterparty default, stating that the Firm (as Party A to the trade) has no claim for the MTM of the trade. The Firm is short a binary credit default option and is exposed to recovery rate market risk. Analysis of the possibility of the Firm actually exercising these clauses against a counterparty in distress must evaluate the value of the trading relationship. The simple observation that the Firm is in effect terminating the trading relationship with the distressed counterparty by invoking these clauses, and causing additional economic hardship to the client when they can least afford it, limits the exercise of these options. In the case of a counterparty default, or the counterparty exercising against the Firm, the trading relationship is presumably concluded in any case, making exercise of these trade-by-trade negotiated options more plausible. From a computational runtime perspective, the additional cost of these clauses on top of the inventory lookup runtime cost already enumerated is small. The evaluation runtime cost seems likely to be small as well.
Some of the most significant counterparty portfolio exposure mitigants are the close-out netting provisions defined in the ISDA Master Agreements. The general idea is that a Firm should not have to recognize the potential exposure with respect to a given trade’s counterparty exposure if that exposure is adequately hedged by another trade (or set of trades). To discuss netting we need to extend the CVA computation discussion from the single trade case to the portfolio context.
For a given set of N simulations let PVij(t) denote the mark to market exposure at simulation path i for the jth trade in the netting group at tenor t. Netting at the PV level lets you choose the order of summation since expectation linearity is, so far, preserved. In this case we choose to net at the PV level and maintain the respective margin thresholds:
posPVi(t) = max(0, res(∑(j=1,J)PVij(t), CP Threshold))
negPVi(t) = min(0, res(∑(j=1,J)PVij(t), Dealer Threshold))
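A minimal sketch of netting at the PV level: sum the J trades in the netting set at each path/tenor node first, then apply the residue and the max/min truncation, per the formulas above:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// tradePVs[j] holds trade j's PV at a single (path, tenor) node; the thresholds
// are the CSA levels from the margin sketch above.
double nettedPosPV(const std::vector<double>& tradePVs, double cpThreshold) {
    double net = 0.0;
    for (double pv : tradePVs) net += pv;                 // sum_j PV_ij(t)
    return std::max(0.0, std::fmod(net, cpThreshold));    // res(), then truncate at zero
}

double nettedNegPV(const std::vector<double>& tradePVs, double dealerThreshold) {
    double net = 0.0;
    for (double pv : tradePVs) net += pv;
    return std::min(0.0, std::fmod(net, dealerThreshold));
}
```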
The average of all positive MTM values is called the “Expected Positive Exposure” or EPE. Similarly we can calculate “Expected Negative Exposure” or ENE. EPE (and ENE) use market rates and implied volatilities rather than historic vols. Assume that all the trade-by-trade mitigants are handled separately and aggregated appropriately to get an accurate CVA result. The data volume at this step is proportional to the number of counterparties times the tenors.
EPE(t) = 1/N * ∑(i=1,N) posPVi(t)
ENE(t) = 1/N * ∑(i=1,N) negPVi(t)
As in the single trade case,
Asset CVA = ∑(t=1,T) delta t * EPE(t) * (1 – Recovery Rate) * CP.rdf(t)
Liability CVA = ∑(t=1,T) delta t * ENE(t) * (1 – Dealer Recovery Rate) * Dealer.rdf(t)
This worst case data volume can be brought down a bit by collecting the trades per Master Agreement or counterparty: ~50K master agreements versus perhaps 1MM trades. From a runtime perspective, even with the reduction in the data volume, it seems likely the computation needs to reside in a single memory address space (a single grid process) from computation inception to completion. All in, the Master Agreement portion of the CVA computation costs the runtime for the production of the PVs (the dominant part of the runtime) plus the computation of the posPV, negPV, EPE, ENE, Asset CVA, and Liability CVA. posPV and negPV should be computed as partial sums alongside the initial PV computation. Let’s allocate 1000 cycles for cache misses for each of the N paths. The min() and max() compares cost 10 cycles per path. Let’s assume all million trades are in the netting group, but since we accumulate them as partial sums in the original PV calculation we only need to account for one incremental addition per trade more or less hitting the L1 caches, or ~4MM cycles per path. The residue will cost you a divide, multiply, and subtraction – call it 50 cycles (the divide is probably dominant at ~25 cycles). So the posPV and negPV look like 2000 paths * ~4MM cycles (all in) = 8 billion cycles at 3 GHz, or about 3 seconds of runtime. The runtime of the EPE and ENE computation is 4000 adds and a couple of multiplies and is completely dominated by the posPV and negPV runtime, even if the execution suffers L1 cache misses on every single one of the 4000 adds. Ditto the Asset CVA and Liability CVA computations, so all in there is about 3 seconds of aggregate runtime (with a contemporary microprocessor and competitive code) for all the Master Agreements on top of the product group’s PV computation.
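A back-of-envelope check of that arithmetic (all inputs are the illustrative numbers above, not measurements):

```cpp
#include <cstdio>

int main() {
    const double paths         = 2000;   // Monte Carlo paths
    const double cyclesPerPath = 4e6;    // ~1MM netted adds per path mostly hitting L1, plus misc
    const double clockHz       = 3e9;    // a 3 GHz core
    const double totalCycles   = paths * cyclesPerPath;   // 8e9 cycles
    std::printf("%.1f billion cycles ~ %.1f seconds\n",
                totalCycles / 1e9, totalCycles / clockHz); // ~2.7 s, call it 3 seconds
    return 0;
}
```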
There is a demand for computing Economic CVA and measures for reporting Cost of Funding as well as Asset and Liability CVA. Risk sensitivities, distressed market Scenarios, and PAA are required at a small multiplicative runtime cost to that already outlined in the preceding CVA valuation discussion.
Nicole Hemsoth, HPCWire, LINPACK Creator Sheds Light on Emerging HPC Benchmark, here. Interestingly both LINPACK and HPCG are of limited relevance to a major chunk of Wall Street analytics. On The Street if you ain’t a Magoo and you’re missing in L2, you’re doing it wrong. Efficient use of the NUMA interconnect by the Top Gun Erlang Low Latency Analytics boys, surely Sir you jest?
Back in June during the International Supercomputing Conference (ISC), we discussed the need for a potential alternative to the current LINPACK benchmark, which is the sturdy yardstick by which supercomputing might is measured, with its creator, Dr. Jack Dongarra.
At that time, he discussed a new benchmarking effort that is taking shape with the input of several collaborators, called the high performance conjugate gradient (HPCG) benchmark. The news about this effort drew a great deal of positive reaction from the scientific computing community in particular as it is more in tune with the types of modern and future simulations that are actually running on LINPACK top-ranked systems on the Top500. This new benchmark will be announced in further detail tomorrow (Tuesday) during the Top500 announcement and will be made available to be tested across a wider array of systems.
W. Ben Hunt, Epsilon Theory, A Game of Sentiment, here. So Sally Kellerman reads for MASH, and Altman tells her she has the best role in the movie, and she’s not sure the part has enough lines. 30+ min of burn for Bogut, Z-Bo 26-15 – 2 stls and good %s, but Pierce sucked hard and I had to watch it on TeeVee because Joe Johnson is now on the wire. Farmers ended up pulling Mo Williams off the wire and leaving Iso Joe even though Mo barely gets 30+ burn in the loaded Blazer backcourt. Johnson cannot be this bad the entire year, Prokhorov must be losing his shit realizing that at this point in KG’s career, Prokhorov can probably score on him, if you give him ten tries standing directly under the basket, with the ball, and his dribble.
But the most interesting aspect of the CK game played on the Island of the Green-Eyed Tribe is the role of the Missionary. It is the public statement of information, not the prevalence of private information or beliefs, that forces movement in the CK game. The public statement is what creates Common Knowledge, even if all of that knowledge was already there privately. Everyone must see that everyone else sees the same thing in order to unlock that privately held information and drive individual decisions and behavior.
Cardiff Garcia, FT Alphaville, GS: QE’s portfolio rebalancing effect has been underestimated, here. Index pricing looks interesting with the limited corp bond liquidity.
Here is the explanation from Edgerton:
Unlike equities, many corporate bonds trade infrequently. In the current iBoxx US investment grade index, for example, about 15% of bonds in the index trade less than once a month, and only about 65% trade on any given day. Even fewer bonds trade multiple times per day in substantial sizes. Thus prices and yields for a large fraction of the bonds that are aggregated into published indices must be estimated by the providers of the index data each day.
Unfortunately, it appears that the procedures used to estimate these prices do not incorporate all information available on each day, because future movements in bond indices are easily forecastable well into the future. To illustrate, we regress daily changes from 2010 to present in the Bank of America-Merrill Lynch BBB index yield on contemporaneous and lagged daily changes in 5-year Treasury rates and daily changes in spreads on the 5-yr CDX index of corporate default swaps, a more liquid credit market instrument. Exhibit 1 graphs the cumulative effect over time of a 1 bp increase in 5-year Treasury yields and a 1 bp increase in the CDX index spread on BBB index yields.
Kurt Helin, Rotoworld, Patrick Beverley Latest News, here. Ongoing study of what Happiness really is. Wired McBob and Nene, Pulled ZBo and Nash from the wire. Lead Farmers are going to try and make a go of it with a bunch of players who only do a couple things well. Ibaka, Aston, KMart, Zbo, Varejao, and Bogut. I don’t understand where Ibaka has disappeared to, it’s messing up the plan. Patrick Beverley looks good in warmups, feels improved, and increased his standing with the Rockets by not playing.
Patrick Beverley reportedly “looked good” in Monday’s pregame warmups according to Pro Basketball Talk’s Kurt Helin, and also said that he “feels improved.” The Rockets got torched defensively tonight and in particular Jeremy Lin and the perimeter group, leaving Rockets beat writers to bemoan the absence of Beverley at regular intervals throughout the night. He actually gained value on a night he didn’t play, and while we’re not saying it will happen it wouldn’t be surprising if he accelerated his timetable given the totality of the situation. Given Beverley’s likely stat output and chance he could eventually start for one of the NBA’s best fantasy teams, he shouldn’t be on waiver wires. Nov 5 – 1:45 AM
Chris Paul scored 23 points on 7-of-13 shooting (1-of-3 from deep, 8-of-8 from the line) with three rebounds, 17 assists and two steals in the Clippers’ 137-118 win over the Rockets on Monday.
Fantasy’s No. 1 play scoffs at your James Harden and Steph Curry selections as he has taken Doc Rivers’ suggestion to get more aggressive under complete consideration.
Dan McCrum, FT Alphaville, Puny human analysts to be crushed by algorithmic steamroller, here.
The world’s most successful hedge fund strategy is both the most secretive and the one that you could never invest in: Renaissance Technology’s Medallion fund. The fund ignores the mainstream finance literature, preferring to scoop up experienced cryptographers and mathematicians required to sign intimidating non-compete clauses, and it is willing to trade signals in the noise that work even if it doesn’t understand why.
According to II, however, new systems on offer to analyse the flood of news headlines and tweets can cost from $5,000 to $20,000 a month, less than the annual pay of a junior hedge fund analyst.
We’re reminded of AQR, the hedge fund firm built upon a former Goldman Sachs prop desk that has become an all-round quant-based asset manager. Each year it offers a $100,000 prize to get an early look at unpublished papers from the world of academic finance.
S.L. Mintz, Institutional Investor, Analysts Beware! A Machine Has Its Eye on Your Job, here.
You are invited to submit a paper for the 2014 AQR Insight Award. A $100,000 prize will be awarded to the most important unpublished finance paper that provides significant, novel, practical solutions for institutional investors and financial advisors. The judges will have discretion to select up to three papers to share the prize.
Dan McCrum, FT Alphaville, The rate exit, Credit Suisse edition, here.
The Swiss bank has followed the lead of UBS in deciding that core fixed income trading is just too expensive, now that the whole flight to safety trade is over and lucrative over the counter business is dwindling. As the FT reported last month:
Regulators around the world have raised capital requirements in the wake of the financial crisis and this, combined with subdued markets and a move towards central clearing, has prompted banks to rethink how they run their debt trading.
Matt Levine, Bloomberg, Levine on Wall Street: Ditch Your Index Funds, here.
Stock-picking is back
Actively managed mutual funds have been so maligned for so long that I guess they were due for a good year, and so Reuters reports that “Some 57 percent of U.S. funds run by active managers are beating their benchmark indexes this year.” This seems to be attributable to the fact that stocks are moving less together and doing more of their own thing: Implied correlations “have fallen to their lowest since October 2007 after peaking in 2011.” “It’s a stock-picker’s market,” as they say, and boy do they ever say that. (Not Reuters, I mean, just the generic “they.” Though Reuters too.) One lesson here is that investing success is built on a series of meta-skills and meta-judgments: Sometimes it is nice to be able to pick stocks (or to pick fund managers who pick stocks), other times wisdom lies in indexing, and knowing in advance which times will be which has obvious value.
Richard White, Open Gamma, Pricing and Risk Management of Credit Default Swaps, here. That paper doesn’t look too bad on a first read.
For historical reasons (mainly due to CDS trades coming from the bond world), the risk management around CDSs has followed a bump and reprice methodology rather than the analytic sensitivities used in the (mathematically similar) interest rates world. There is no reason to prefer these (one-sided finite difference) approximations, other than it is what people are accustomed to seeing. For the ISDA model it is easy to produce analytical first-order risk factors.
In doing all this, I produced what turned out to be a lengthy document, which can be found on our website:
The paper describes general CDS pricing before detailing the ISDA model. It then discusses analytic risk factors and gives explicit formulae for the ISDA model. It goes on to discuss hedging and portfolio rebalancing – both as they are currently performed, and how they can be performed with analytic sensitivities. Finally I briefly discuss other models for CDS pricing.
Matt Levine, Bloomberg, Commodity Trader Didn’t Really Believe in Market Prices, here.
So: That’s not supposed to happen! My colleague William Cohan recently wrote that JPMorgan’s “biggest mistake” in the London Whale situation was that the bank relied on traders to mark their own books of over-the-counter derivatives, rather than having an independent valuations group rigorously test valuations against third-party data. You need that rigorous valuation control because valuing structured over-the-counter trades is hard, and leaves some room for judgment. And the Whale traders used that wiggle room in bad faith, mismarking their opaque unlisted positions by something like 2 percent to 4 percent.
Interesting, Levine is one of the best at outing fools and clowns publicly, gladly, and most mirthfully, but here he has allowed impurities to slip into his latest examination of Tomfoolery. Wisty knows Tomfoolery and insists on its purity for the discriminating Pink I reader.
The case made here is that there is no sensible reason to ever mark a liquid instrument away from mid. Therefore every trading desk that allows variance in the m2m marks away from the commonly accepted mid is a fool being wiggled by a Whale. But this is obviously not true – witness the conversion of interest rate swap mark-to-market models to OIS discounting during the recent credit crisis, QED. The confusion is assuming a m2m mark is like the location of a ball moving on a frictionless surface in a closed system (no exogenous forces). They are not the same:
A. there is typically no commonly accepted security valuation model, like f=ma, to give you a mark and a series of future marks that are correct for any frame of reference, and
B. Markets are not closed systems – there are exogenous forces (e.g., Congress, The Fed, China, CFTC, The SIP feed, etc., etc.).
One of the reasons it is difficult to regulate trading desks is there are numerous circumstances where the trader’s mark is in fact the “best” mark (aka shinola). The regulator’s job, the controller’s job, and the journalist’s job is to distinguish shit from shinola. The fun part is, since these guys cannot remember STEM after Algebra 2, they can’t reliably tell if you are quoting Ito’s Lemma or reading from Buzzfeed. Read the Cohan piece, he obviously has no idea what is going on with JPM/London Whale P&L marking, but he has read a bunch of articles and wrote a book about something and so is guessing that he’s looking at marking fraud and wants us to do something about it. Here, Levine and Cohan left some shinola in the shit, and that’s nasty.
Barry Ritholtz, The Big Picture, An Illustrated Book of Bad Arguments, here.
I deal with bad arguments all the time, constantly swatting away silly arguments and foolish rhetoric. This little book does a nice job summarizing them all.
John Hull and Alan White, Using Hull-White Interest-Rate Trees, here. So you need this in Cont et al. for the contingency of collateral posting on the variation margin.
The Hull-White tree building procedure is a flexible approach to constructing trees for a wide range of different one-factor models of the term structure. The tree is constructed in such a way that it is exactly consistent with the initial term structure. In this article we have shown how the basic procedure presented in our earlier paper can be extended. Some of these extensions involve the use of analytic results and some involve changing the geometry of the tree to reflect special features of the derivative under consideration. We have devoted some time in this article to a discussion of what happens when the volatility parameters are made time-dependent. It is not difficult to extend the Hull-White tree to incorporate time-dependent parameters so that the prices of caps or swap options (or both) are matched. However, this is liable to result in unacceptable assumptions about the evolution of volatilities.