Deus Ex Macchiato, Is it time to centralise capital calculation? here. Forgot about this oldie. These are the VaR results from reporting banks on the same sample portfolio, varying by up to a factor of ten. Can you see the Clown Car now? I guess you could centralize the Clown Car so there is some sort of planning; maybe the clowns would occasionally give the same answers to the same questions. But this is Sisyphean; they'd still be clowns, albeit somewhat standardised clowns. Hey, here's a crazy idea, forget the clowns for a minute, maybe find out why the Whale didn't know the P&L of his positions in January and got caught off-sides? Does Anybody Remember Laughter?
As Dealbreaker acidly puts it, banks rarely differ from each other by more than a factor of ten. It's no wonder, then, that investors are losing trust in capital ratios. The answer is clear. Centralize and standardise capital calculation. Throw all those internal models away, and use one common, regulator-developed approach. Now that a lot of progress has been made on trade reporting, the data infrastructure exists to do this — or at least it wouldn't be too hard to extend what does exist to do it. Developing the models would be a huge undertaking, but compared to having each large bank do it individually, a central infrastructure would be cheaper and more reliable, and anyway you could pick the best of individual banks' methodologies. You could even spin out the VaR teams from four or five leading banks into the new central body — just don't pick the bank whose IRC is less than 10% of the average answer…
Shipkevitch, CFTC Law, CFTC’s Bart Chilton Warns of ‘Swapification’ of Futures Markets, here.
Finally, while I'm interested in hearing the concerns about futurization, I am more concerned about a silent creeper. That is, the "swapification" of the futures markets. Specifically, I'm concerned that the conversion of certain standardized cleared swaps will be under-regulated–under-regulated–in the futures markets. It may be block rules or something else, but we need to be cautious about converting certain swaps to futures in an attempt to export the deregulated, opaque swaps trading model to these new futures markets. Let's be cautious about allowing lax oversight of these futures contracts, regardless of how they were treated before they were futurized.
Jon Skinner, The OTC Space, How Big is the OTC Really, here.
So how does OTC compare with other markets? Comparing global market values at the same point in time, i.e. end-2011, we have:
Instrument          Global Market Value ($trn)
Bonds               93
Loans               64
Equities            54
OTC Derivatives     27
Lisa Pollack, Alphaville, Ten times on the board: I will not put "optimising regulatory capital" in the subject line of an email, here. Interesting material. Hagan is a really good quant, in the business for years after leaving JPL, I think. He did the dominant contemporary model for Rates volatility. He is in a really shitty position with JPM CIO blowing up, and he now has Pollack jumping all over his dick about VaR. What a nightmare.
More trivially, we find ourselves contemplating whether Hagan deserves a moniker, as an example of the importance of those who support trading. Ideas thus far include: HAL, Miles Dyson, Potter!, and that type of fish that cleans whales. Further ideas and thoughts welcome in the space below.
Lisa Pollack, FT Alphaville, Risk limits are made to be broken, here. Shades of Strother Martin, but Pollack cannot, or will not, get that the VaR is for dopes. Nobody even checks the VaR to see if the sign on the valuation is always right. There are banks where the Firm VaR uses market data from a month ago and misses a chunk of the positions on the trading books because "it's hard." JPM VaR is a space age rocket science operation in comparison to those guys. But don't lose sight of the point – VaR, with or without the rocket science, is a Clown Car operation in the best of circumstances. When things get too tense in an adverse market and the regulators and government officials need to be distracted – out comes the VaR Clown Car, and one clown after another exits the car to the delight, amazement, and eternal entertainment of the assembled officials, while the traders scramble to safety. On a good day it is not that unreasonable to blow through the VaR limits, though maybe not quite as much as Pollack is showing.
Pollack is super entertaining but she's dancing in the weeds. It's like giving the Titanic owners infinite shit because the ship split in half before it sank. "How you gonna sail a ocean liner across the Atlantic Ocean in TWO pieces? Jesus! Who in charge here, Clowns?" I'll bet you that the London Whale and probably Ina Drew were as surprised as the rest of us that VaR Clowns were required. Once they are "out of the car," detailed analysis, cross-referencing, and checking their historical statements, testimony, and publications is somewhat less than illuminating, because they are, after all … clowns.
Big picture question: what do risk limits mean at JPMorgan? CEO Jamie Dimon explains:
JPMorgan Chase personnel, from Mr. Dimon on down, all told the Subcommittee that the risk limits at CIO were not intended to function as “hard stops,” but rather as opportunities for discussion and analysis.
Omg, look at all that opportunity for discussion about the CIO’s synthetic credit portfolio!!!
Lisa Pollack, Alphaville, Humongous credit derivative cake proves inedible, here.
THE MODEL GOT IT WRONG. ALL THE THEORETICAL UNDERPINNINGS OF VALUATION HAVE BROKEN DOWN AND THE VOLATILITY HAS BROKEN ALL HISTORICAL AND WORSE CASE BANDS.
03/23/2012 06:20:09 BRUNO IKSIL, JPMORGAN CHASE BANK, says: i did not fail, here.
Mr. Iksil later told the JPMorgan Chase Task Force investigation that he had not been able to sell as much credit protection as he would have liked (which would have generated more carry and profits to keep pace with the high yield rally). He said that two risk metrics – the “VaR” and “CS01” – prevented him from doing so. He later wrote in an email: “[T]he need to reduce VAR – RWA and stay within the CS01 limit prevented the book from being long risk enough.”
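For the non-quants: the CS01 Iksil is talking about is just the book's P&L for a one basis point move in credit spreads, usually computed by bump-and-reprice. Here is a minimal sketch under toy assumptions (a flat hazard rate via the credit triangle, continuous discounting, quarterly premiums, invented numbers; this is emphatically not JPM's actual methodology):

```python
# Minimal bump-and-reprice CS01 sketch for a single-name CDS position.
# Flat hazard rate, continuous discounting, quarterly premium grid --
# illustrative assumptions only, not JPMorgan's actual CS01 machinery.
import math

def risky_annuity(spread, recovery=0.4, maturity=5.0, r=0.02, freq=4):
    """PV of a 1-per-year premium stream paid until default, risky-discounted."""
    lam = spread / (1.0 - recovery)      # "credit triangle": flat hazard from spread
    dt = 1.0 / freq
    return sum(dt * math.exp(-(r + lam) * dt * i)
               for i in range(1, int(maturity * freq) + 1))

def cds_mtm(notional, coupon, spread, **kw):
    """Long-protection MTM: pay `coupon`, market now charges `spread`."""
    return notional * (spread - coupon) * risky_annuity(spread, **kw)

def cs01(notional, coupon, spread, bump=1e-4, **kw):
    """P&L of a 1bp widening in the market spread, by bump-and-reprice."""
    return (cds_mtm(notional, coupon, spread + bump, **kw)
            - cds_mtm(notional, coupon, spread, **kw))

# e.g. $10bn long protection, 100bp coupon, market at 120bp:
print(cs01(10e9, 0.0100, 0.0120))   # ~ +$4.5m per bp of widening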
When asked about the February trading activity, the OCC told the Subcommittee that the CIO traders apparently believed that the prices in the markets were wrong, and that the traders had a strategy to defend their positions and keep the prices from falling by taking on more of them. Mr. Macris later said that all of the trades and losses were "well-communicated" to CIO management, meaning that his supervisors were fully informed about the status of the SCP book.
US Senate, JPMorgan Chase Whale Trades: A Case History of Derivatives Risks and Abuses, here. Very instructive to see how this plays out. After a quick read it still seems like the desk's P&L correlation cooker simply failed, and all the rest of the remarking, reporting, and VaR adjustments were simply compensating factors that are now under the spotlight. It is not like the conclusions here are so wide of the mark, but they don't seem to get a handle on why this evolved the way it did. The only reasonable explanation is the Whale was flying blind on CDX tranches and got his positions exposed. It is not the case that the Whale or the CIO didn't know the market levels of his CDX and CDS trades, exactly. They had to move the CDX/CDS marks to preserve the hedge to the correlation book P&L, which was out.
The ability of CIO personnel to hide hundreds of millions of dollars of additional losses over the span of three months, and yet survive internal valuation reviews, shows how imprecise, undisciplined, and open to manipulation the current process is for valuing credit derivatives. This weak valuation process is all the more troubling given the high risk nature of synthetic credit derivatives, the lack of any underlying tangible assets to stem losses, and the speed with which substantial losses can accumulate and threaten a bank's profitability. The whale trades' bad faith valuations exposed not only misconduct by the CIO and the bank's violation of the derivative valuation process mandated in generally accepted accounting principles, but also a systemic weakness in the valuation process for all credit derivatives.
Gretchen Morgenson and Joshua Rosner, Reckless Endangerment: How Outsized Ambition, Greed, and Corruption Created the Worst Financial Crisis of Our Time, here.
A Washington Post Notable Nonfiction Book for 2011. One of The Economist's 2011 Books of the Year.
In Reckless Endangerment, Gretchen Morgenson exposes how the watchdogs who were supposed to protect the country from financial harm were actually complicit in the actions that finally blew up the American economy. Drawing on previously untapped sources and building on original research from coauthor Joshua Rosner—who himself raised early warnings with the public and investors, and kept detailed records—Morgenson connects the dots that led to this fiasco. Morgenson and Rosner draw back the curtain on Fannie Mae, the mortgage-finance giant that grew, with the support of the Clinton administration, through the 1990s, becoming a major opponent of government oversight even as it was benefiting from public subsidies.
Matt Levine, DealBreaker, Senate Subcommittee Feasting On Whale Today, here.
CIO’s most senior quantitative analyst, Patrick Hagan, who joined the CIO in 2007 and spent about 75% of his time on SCP projects, told the Subcommittee that he was never asked at any time to analyze another portfolio of assets within the bank, as would be necessary to use the SCP as a hedge for those assets. In fact, he told the Subcommittee that he was never permitted to know any of the assets or positions held in other parts of the bank.
First, some background: Under Dodd-Frank, the CFTC was given the task of regulating the $300 trillion market for swaps in the U.S. The basic point was to bring light to a dark market and prevent another AIG by pushing as much of the over-the-counter swaps market as possible onto exchanges where prices and volume are posted. With about 80 percent of those swaps rules written, according to CFTC Chairman Gary Gensler, and a bunch of them now in effect, traders have begun "futurizing their swaps"—that is, trading futures contracts instead of entering into swaps deals. Some say that's a clever way around Dodd-Frank. Others see it as merely a natural evolution of financial instruments.
Whatever the reason, it’s happening. And as arcane as the details may be, the potential consequences are enormous, as evidenced by Thursday’s packed house. The general consensus of those present was that Thursday was the most crowded CFTC hearing in recent memory. Lawyers and lobbyists lined the walls; congressional staffers and industry suits packed the chairs. More than 150 people crammed into the CFTC’s main conference room, and a healthy number of folks watched on TVs in the hallway outside.
Dodd-Frank has upended the derivatives market, and in the shakeout that follows, there will be winners and losers. Perhaps those with the most at stake are IntercontinentalExchange (ICE) and the Chicago Mercantile Exchange (CME), the two biggest futures exchanges in the U.S. As more companies and traders start favoring futures over swaps, the two exchanges stand to capture a much bigger portion of that activity. The potential losers? Dealers such as Goldman Sachs (GS) that have done a lot of swaps business. Standing at the back of the room, Chris Giancarlo, chair of the Wholesale Markets Brokers' Association, likened the fight over swaps and futures to "the Maginot Line for the exchanges."
Easley, de Prado, & O'Hara, SSRN, The Volume Clock: Insights into the High Frequency Paradigm, here. Note the LFT structural weaknesses.
Over the last two centuries, technological advantages have allowed some traders to be faster than others. We argue that, contrary to popular perception, speed is not the defining characteristic that sets High Frequency Trading (HFT) apart. HFT is the natural evolution of a new trading paradigm that is characterized by strategic decisions made in a volume-clock metric. Even if the speed advantage disappears, HFT will evolve to continue exploiting Low Frequency Trading’s (LFT) structural weaknesses. However, LFT practitioners are not defenseless against HFT players, and we offer options that can help them survive and adapt to this new environment.
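The paradigm is easy to show in code: sample the tape by traded volume instead of by the wall clock, so "time" speeds up when the market is busy. A toy sketch; the tick stream and bucket size are invented:

```python
# Minimal volume-clock sketch: resample a tick stream into bars that each hold
# (roughly) equal traded volume rather than equal elapsed time. `ticks` is a
# hypothetical (price, size) stream; the bucket size is arbitrary.

def volume_bars(ticks, bucket_volume):
    bars, bar = [], None
    for price, size in ticks:
        if bar is None:
            bar = {"open": price, "high": price, "low": price,
                   "volume": 0.0, "pv": 0.0}
        bar["high"] = max(bar["high"], price)
        bar["low"] = min(bar["low"], price)
        bar["volume"] += size
        bar["pv"] += price * size
        if bar["volume"] >= bucket_volume:       # close the bar on volume, not time
            bar["close"] = price
            bar["vwap"] = bar["pv"] / bar["volume"]
            bars.append(bar)
            bar = None                           # overflow isn't split: sketch-grade
    return bars

ticks = [(100.0, 300), (100.1, 500), (99.9, 400), (100.2, 900), (100.0, 100)]
print(volume_bars(ticks, bucket_volume=1000))
```

Run on real data, a busy hour produces many bars and a dead afternoon few, which is the whole point of the volume-clock metric.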
Nerval's Lobster, Mars Rover Curiosity: Less Brain Power Than Apple's iPhone 5, here. So, remember that 8-hours-down-to-238-seconds, 2011 award-winning credit valuation and risk computation on the million-dollar FPGA supercomputer with Dataflow acceleration? That's the Apollo Creed here, on a good day. There are days (e.g., July 27, 2018, with 57.6 million km between Mars and Earth) when you can send the entire credit portfolio to Mars, compute the entire risk and valuation for the portfolio in the down time on the spare computer in the Mars Rover, then send the results back to Earth, and finish in ~360 seconds. That's just about 50% slower than the 2011 award-winning credit valuation and risk computation on the million-dollar FPGA supercomputer with Dataflow acceleration. So the message here is, I guess: if your computing infrastructure is on Mars … and has less brain power than an iPhone 5 … then you are probably not going to be at the very top of the USD fixed/float Vanilla Swap League tables … on most days. But … if you own an iPhone 5 here on Earth … you have more brain power … than the 2011 award-winning credit valuation and risk computation on the million-dollar FPGA supercomputer with Dataflow acceleration?
“To give the Mars Rover Curiosity the brains she needs to operate took 5 million lines of code. And while the Mars Science Laboratory team froze the code a year before the roaming laboratory landed on August 5, they kept sending software updates to the spacecraft during its 253-day, 352 million-mile flight. In its belly, Curiosity has two computers, a primary and a backup. Fun fact: Apple’s iPhone 5 has more processing power than this one-eyed explorer. ‘You’re carrying more processing power in your pocket than Curiosity,’ Ben Cichy, chief flight software engineer, told an audience at this year’s MacWorld.”
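For the record, the back-of-envelope behind the ~360 seconds is mostly speed-of-light tax; a quick check (distance from the gag above, light time only, ignoring DSN bandwidth and the actual compute) puts the radio round trip at ~384 s, about 1.6x the FPGA's 238 s, so the ballpark holds:

```python
# Back-of-envelope on the Mars gag: at the July 27, 2018 close approach the
# radio round trip alone eats the budget before the Rover computes anything.
C_KM_S = 299_792.458              # speed of light, km/s
dist_km = 57.6e6                  # Mars-Earth distance at closest approach
one_way_s = dist_km / C_KM_S      # ~192 s each way
round_trip_s = 2 * one_way_s      # ~384 s there and back
print(round_trip_s, round_trip_s / 238)   # vs. the 238 s FPGA run: ~1.6x
```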
In the case of the synthetic credit portfolio of JPMorgan’s CIO, they had a good three months to build positions that would subsequently cause billions of dollars of losses. Our previous post outlined how, according to the bank’s Task Force Report, the CIO was going to unwind profitable high yield shorts at the beginning of 2012. Instead, the unit ended up building those positions further, along with long positions in the Markit CDX.NA.IG.9 index that were meant to hedge and finance them.
Positioning for credit losses, the JP Morgan way, here.
If it’s alright by you, FT Alphaville has a confession to make. This whole London Whale thing, the billions that JPMorgan lost as a result of the actions of its Chief Investment Office primarily in the first quarter of 2012… we kinda made a cottage industry of trying to figure out what the trades were. Not that it was just us, mind you.
Naturally, we had been hoping that we’d finally get some answers when the Task Force Report came out last week. The report has revealed in painful detail how a large, well-respected bank can get so much wrong. There were bad risk management practices, model deficiencies, spreadsheet errors, complacent management and more. But trade details? That’s left for us to piece together from various scraps.
Can haz spredshetz, here. I do not find the Cupcake Police case compelling. The Cupcake Police were clearly passengers in the story. As a side effect, though, Pollack explains reasonably carefully how P&L and risk theoretically work on an OTC derivatives desk.
Spreadsheet errors sure are a fun, but serious, topic. The last time FT Alphaville dove into JPMorgan’s Task Force Report on its losses in synthetic credit thanks to the bank’s Chief Investment Office, we took you through the blunders around their shiny new VaR model (that didn’t work). This time we want to introduce you to the spreadsheets with valuation errors.
In order for any of this to make sense, we need to re-introduce you to the CIO's Valuation Control Group (VCG). At FT Alphaville, we previously called them the "cupcake police" when explaining the importance of empowering the back office (VCG-type teams) to challenge front office marks, thereby ensuring more accurate reporting.
Tyler Durden, Zerohedge, Irony 101 Or How the Fed Blew Up JPMorgan’s “Hedge” in 22 Tweets, here. Durden says the cash register is innocent because the Fed did it.
Many pixels have been ‘spilled’ trying to comprehend what exactly JPMorgan were up to, where they are now, and what the response will likely end up becoming. Our note from last week appears, given the mainstream media’s ‘similar’ notes after it, to have struck a nerve with many as both sensible and fitting with the facts (and is well worth a read) but we have been asked again and again for a simplification. So here is our attempt, in 22 simple tweets (or sentences less than 140 characters in length) to describe what the smartest people in the room did and in possibly the most incredible irony ever, how the Fed (and the Central Banks of the world) were likely responsible for it all going pear-shaped for Bruno and Ina.
Matt Levine, DealBreaker, Turns Out Global Regulators Are Fine With Using Credit Ratings To Decide What Banks Can Do, here.
Isn’t that strange? The argument against using ratings in setting bank capital is some combination of (A) “the rating agencies are dumb and corrupt” and (B) “credit ratings don’t measure market risk”; the argument in favor is some combination of (C) “ratings measure the risk of default – i.e. permanent devaluation of bank assets – which is really what capital is designed to guard against” and (D) “well, do you have a better idea of how to measure that?” And so there is much fulminating against giving official power to ratings, and not so much done about actually stripping them of that power in capital regulation.
Subrahmanyam, Tang, and Wang, SSRN, Does the Tail Wag the Dog? The Effect of Credit Default Swaps on Credit Risk, here.
Credit default swaps (CDS) are derivative contracts that are widely used as tools for credit risk management. However, in recent years, concerns have been raised about whether CDS trading itself affects the credit risk of the reference entities. We use a unique, comprehensive sample covering CDS trading of 901 North American corporate issuers, between June 1997 and April 2009, to address this question. We find that the probability of both a credit rating downgrade and bankruptcy increase, with large economic magnitudes, after the inception of CDS trading. This finding is robust to controlling for the endogeneity of CDS trading. Beyond the CDS introduction effect, we show that firms with relatively larger amounts of CDS contracts outstanding, and those with relatively more “no restructuring” contracts than other types of CDS contracts covering restructuring, are more adversely affected by CDS trading. Moreover, the number of creditors increases after CDS trading begins, exacerbating creditor coordination failure for the resolution of financial distress.
Lisa Pollack, Alphaville, Footnote 74: FACEPALM, here; and A tempest in a spreadsheet, here. Funny, but getting lost in the weeds. This is important because Pollack is one of the dozen or so folks who could end up writing the London Whale book that’ll get cited for decades. The 130+ pages in the JPM report dance around a lot, recounting a sequence of events without simply stating what obviously happened.
The cash register that JPM built for tracking the running value of the securities owned by the London Whale broke, probably in March or April 2012, and it could not be fixed before losing several billion dollars. Curiously, the "cash register" in this case is less euphemistic than you might have expected. The VaR, the risk managers, and most of the people not directly on the CIO trading desk weave in and out of the official narrative, but they are mostly irrelevant to what originally happened. They are passengers in a sad story. It really looks like the problem was either the code that read the market data to compute the inputs to the P&L calculator (the spreadsheet) or the P&L calculator itself (the supercomputer). The report doesn't really carefully dissect this issue; not sure why. If the problem was A, the spreadsheet model for calibrating the correlations and the hazard rates for inputs, I bet the CIO desk and quants are/were more than smart and motivated enough to fix it or patch the underlying spreadsheet and analytics packages before losing much money. The CIO folks all probably remembered, all too vividly, how correlations behaved with the GM and Ford junk downgrades in May 2005 and designed their new correlation cooker to do something "better." If the problem was B, programming the new "supercomputer," I could see them not having enough time to fix the situation. B … final answer.
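For orientation, here is roughly what option A's "spreadsheet model for calibrating the correlations and the hazard rates" has to produce on the hazard side, sketched with the crude credit-triangle shortcut lambda = spread / (1 - recovery). Illustrative only, not the West End analytics:

```python
# Spreadsheet-grade sketch of the hazard-rate half of such a calibration:
# the "credit triangle" lambda = spread / (1 - recovery), applied piecewise
# per tenor to build a survival curve. Deliberately crude; it shows the kind
# of object the P&L calculator needs as input, not JPM's actual analytics.
import math

def survival_curve(par_spreads, recovery=0.4):
    """par_spreads: {tenor_years: cds_spread}; returns {tenor: P(survival)}."""
    curve, q, t_prev = {}, 1.0, 0.0
    for t, s in sorted(par_spreads.items()):
        lam = s / (1.0 - recovery)          # flat hazard over (t_prev, t]
        q *= math.exp(-lam * (t - t_prev))  # survival decays at rate lam
        curve[t] = q
        t_prev = t
    return curve

print(survival_curve({1: 0.0060, 3: 0.0090, 5: 0.0120}))
```

Getting this wrong quietly poisons every downstream tranche price, which is why a calibration bug is such a plausible candidate for the original failure.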
The report says there is "some evidence" that pressure was put on the reviewers to get on with approving the model in January because of the risk limit breaches being incurred with the old model around then. For example, as quoted above: "In an e-mail to Mr. Hogan on January 25, Mr. Goldman reported that the new model would be implemented by January 31 'at the latest' and that it would result in a 'significant reduction' in the VaR."
Hence the Model Review Group “may have been more willing to overlook the operational flaws apparent during the approval process.”
Back to the modeler though. He used to work at Numerix (a vendor), where a repricing model had been “developed under his supervision” that JPMorgan normally used in VaR calculations. The Numerix analytic suite had been approved by the Model Review Group. But the modeler, when developing the new VaR model, developed his own suite — called “West End”. This suite was not reviewed in advance of the new VaR model being rolled out, but rather only had a limited amount of backtesting completed on it.
Felix Salmon, Reuters, How does JP Morgan Respond to a crisis? here. Figuratively, it's the Pink Iguana that got them, not the Black Swan.
The report doesn’t say how many eight-sigma events the CIO has ever seen: my guess is that this is the only one. But here’s an idea of how crazy eight-sigma events are: under a normal distribution, they’re meant to happen with a probability of roughly one in 800 trillion. The universe, by contrast, is roughly 5 trillion days old: you could run the universe a hundred times, under a normal distribution, and still never see an eight-sigma event. If anything was a black swan, this was a black swan. And it didn’t help JP Morgan’s “tail risk book” one bit.
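Salmon's arithmetic checks out; a three-line verification of the two-sided 8-sigma normal tail against the age of the universe in days:

```python
# Checking Salmon's numbers: the two-sided 8-sigma tail of the normal
# distribution vs. the age of the universe in days.
from scipy.stats import norm

p = 2 * norm.sf(8.0)          # P(|Z| > 8) ~ 1.2e-15
print(f"one in {1/p:.3g}")    # ~ one in 8e+14, i.e. "800 trillion"
print(f"universe: {13.8e9 * 365.25:.3g} days")   # ~ 5e+12 days
```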
Matt Levine, DealBreaker, JPMorgan Dissects A Whale Carcass, here. Unknown Unknowns are hard to cope with.
How should one read JPMorgan’s Whale Report? I suppose “not” is an acceptable answer; the Whale’s credit derivatives losses at JPMorgan’s Chief Investment Office are old news by now, though perhaps his bones point us to the future. One way to read it is as a depressing story about measurement. There were some people and whales, and there was a pot of stuff, and the people and whales sat around looking at the stuff and asking themselves, and each other, “what is up with that stuff?” The stuff was in some important ways unknowable: you could list what the stuff was, if you had a big enough piece of paper, but it was hard to get a handle on what it would do. But that was their job. And the way you normally get such a handle, at a bank, is with a number, or numbers, and so everyone grasped at a number.
Lisa Pollack, Alphaville, The London Whale, an oral history, here. Link to the task force report from JPM. Lisa is still not sure if the tranches did the Whale in.
It’s history, JPMorgan Task Force Report style.
Or rather, it’s a mostly oral history, lacking in technical detail, and it’s not all independently verified. Oh, and heavily reliant on one guy.
Oh, the report pg. 120 says tranches don’t do so good in VaR.
Appendix A: VaR Modeling
VaR is a metric that attempts to estimate the risk of loss on a portfolio of assets. A portfolio's VaR represents an estimate of the maximum expected mark-to-market loss over a specified time period, generally one day, at a stated confidence level, assuming historical market conditions. Through January 2012, the VaR for the Synthetic Credit Portfolio was calculated using a "linear sensitivity model," also known within the Firm as the "Basel I model," because it was used for purposes of Basel I capital calculations and for external reporting purposes.

The Basel I model captured the major risk facing the Synthetic Credit Portfolio at the time, which was the potential for loss attributable to movements in credit spreads. However, the model was limited in the manner in which it estimated correlation risk: that is, the risk that defaults of the components within the index would correlate. As the tranche positions in the Synthetic Credit Portfolio increased, this limitation became more significant, as the value of the tranche positions was driven in large part by the extent to which the positions in the index were correlated to each other. The main risk with the tranche positions was that regardless of credit risk in general, defaults might be more or less correlated.
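The "linear sensitivity model" the appendix describes is about ten lines of code, which is rather the point. A minimal historical-simulation sketch with invented numbers (one risk factor, one delta, and, notably, no tranche correlation risk anywhere in it):

```python
# A minimal sketch of the VaR the appendix describes: a linear-sensitivity
# ("Basel I model"-style) historical simulation. One risk factor, one delta;
# all numbers invented. There is no tranche correlation risk in here, which
# is exactly the limitation the appendix flags.
import numpy as np

rng = np.random.default_rng(0)
moves_bp = rng.normal(0.0, 5.0, size=264)  # stand-in for 264 historical daily spread moves, bp

cs01 = -4.5e6            # book P&L per +1bp of spread (long risk: widening hurts)
pnl = cs01 * moves_bp    # linear scenario P&Ls: delta times historical move

def hist_var(pnl, confidence=0.95):
    """1-day VaR: the loss exceeded on only (1 - confidence) of the days."""
    return -np.quantile(pnl, 1.0 - confidence)

print(f"95% 1-day VaR: ${hist_var(pnl):,.0f}")
```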
Hmm, do you think the Gaussian copula did better than the VaR with the tranches?
On January 30, the Model Review Group authorized CIO Market Risk to use the new model for purposes of calculating the VaR for the Synthetic Credit Portfolio beginning the previous trading day (January 27). Once the new model was implemented, the Firm-wide 10-Q VaR limit was no longer exceeded. Formal approval of the model followed on February 1. The formal approval states that the VaR calculation would utilize West End and that West End in turn would utilize the Gaussian Copula model to calculate hazard rates and correlations. It is unclear what, if anything, either the Model Review Group or CIO Market Risk did at the time to validate the assertion that West End would utilize the Gaussian Copula model as opposed to some other model, but that assertion later proved to be inaccurate.
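For readers keeping score at home, here is roughly what "utilize the Gaussian Copula model to calculate hazard rates and correlations" cashes out to: a one-factor copula coupling single-name default times. A toy simulation with invented parameters, emphatically not the actual West End suite:

```python
# One-factor Gaussian copula, toy version: couple single-name default times
# through a common factor. Invented parameters; not the West End suite.
import numpy as np
from scipy.stats import norm

def defaults_by_horizon(n_names=125, rho=0.3, lam=0.02, horizon=5.0,
                        n_paths=20_000, seed=0):
    rng = np.random.default_rng(seed)
    m = rng.standard_normal((n_paths, 1))         # common factor
    e = rng.standard_normal((n_paths, n_names))   # idiosyncratic factors
    x = np.sqrt(rho) * m + np.sqrt(1 - rho) * e   # correlated latent variables
    tau = -np.log(1.0 - norm.cdf(x)) / lam        # default times, flat hazard
    return (tau < horizon).sum(axis=1)            # defaults per path

d = defaults_by_horizon()
print(d.mean(), np.percentile(d, 99))  # correlation fattens the many-default tail
```

Tranche P&L lives and dies on rho: crank it up and probability mass shifts into the many-default scenarios, which is exactly the correlation risk the old linear model ignored.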
Surely the new correlation calibration for the Gaussian copula spreadsheet made it into a productionized version for overnight runs so the P&L worked right, doh.
In early May 2012, in response to the recent losses in the Synthetic Credit Portfolio, Mr. Venkatakrishnan asked an employee in the Model Review Group to perform a review of the West End analytic suite, which, as noted, the VaR model used for the initial steps of its calculations. The West End analytic had two options for calculating hazard rates and correlations: a traditional Gaussian Copula model and a so-called Uniform Rate model, an alternative created by the modeler. The spreadsheet that ran West End included a cell that allowed the user to switch between the Gaussian Copula and Uniform Rate models.

The Model Review Group employee discovered that West End defaulted to running Uniform Rate rather than Gaussian Copula in this cell, including for purposes of calculating the VaR, contrary to the language in the Model Review Group approval. Although this error did not have a significant effect on the VaR, the incident focused the reviewer's attention on the VaR model and ultimately led to the discovery of additional problems with it.
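The failure mode generalizes well beyond spreadsheets: a user-visible switch whose default quietly disagrees with what was approved, and no reconciliation check anywhere. A hypothetical illustration (the real toggle was a spreadsheet cell; this config dict and these names are made up):

```python
# The failure mode in miniature: a default that quietly disagrees with the
# approved configuration, and nothing checks. Purely illustrative; the real
# West End toggle was a spreadsheet cell, not this hypothetical config dict.
APPROVED_MODEL = "gaussian_copula"

def calibrate(spreads, config=None):
    config = config or {}
    model = config.get("model", "uniform_rate")   # default != approved model
    if model != APPROVED_MODEL:
        # The production control that was effectively missing: reconcile the
        # live setting against the Model Review Group approval, loudly.
        raise ValueError(f"model {model!r} is not the approved {APPROVED_MODEL!r}")
    # ... run the approved calibration ...
    return {"model": model}

try:
    calibrate({"IG9": 0.0120})       # no config supplied -> silent default wins
except ValueError as err:
    print(err)
```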
I'm gonna mark this as the 16-May "Gaussian Copula Kills Again" post having called it.