You are currently browsing the tag archive for the ‘JPM’ tag.
Deus Ex Macchiato, Is it time to centralise capital calculation? here. Forgot about this oldie. These are the VaR results from reporting banks on the same sample portfolio varying by up to a factor of ten. Can you see the Clown Car now? I guess you could centralize the Clown Car so there is some sort of planning, maybe the clowns would occasionally give the same answers to the same questions. But this is Sisyphean, they’d still be clowns, albeit somewhat standardised clowns. Hey, here’s a crazy idea, forget the clowns for a minute, maybe find out why the Whale didn’t know the P&L of his positions in January and got caught off-sides? Does Anybody Remember Laughter?
As Dealbreaker acidly puts it, banks rarely differ from each other by more than a factor of ten. It’s no wonder, then, that investors are losing trust in capital ratios. The answer is clear. Centralize and standardise capital calculation. Throw all those internal models away, and use one common, regulator developed approach. Now that a lot of progress has been made on trade reporting, the data infrastructure exists to do this — or at least it wouldn’t be too hard to extend what does exist to do it. Developing the models would be a huge undertaking, but compared to having each large bank do it individually, a central infrastructure would be cheaper and more reliable, and anyway you could pick the best of individual banks’ methodologies. You could even spin out the var teams from four or five leading banks into the new central body — just don’t pick the bank whose IRC is less than 10% of the average answer…
Shipkevitch, CFTC Law, CFTC’s Bart Chilton Warns of ‘Swapification’ of Futures Markets, here.
Finally, while I’m interested in hearing the concerns about futurization, I am more concerned about, a silent creeper. That is, the “swapification” of the futures markets. Specifically, I’m concerned that the conversion of certain standardized cleared swaps will be under-regulated–under-regulated–in the futures markets. It may be block rules or something else, but we need to be cautious about converting certain swaps to futures in an attempt to export the deregulated, opaque swaps trading model to these new futures markets. Let’s be cautious about allowing lax oversight of these futures contracts, regardless of how they were treated before they were futurized.
Jon Skinner, The OTC Space, How Big is the OTC Really, here.
So how does OTC compare with other markets? Comparing global market values at the same point in time i.e. end 2011, we have:
Instrument          Global Market Value ($trn)
Bonds               93
Loans               64
Equities            54
OTC Derivatives     27
Lisa Pollack, Alphaville, Ten times on the board: I will not put “optimizing regulatory capital” in the subject line of an email, here. Interesting material. Hagan is a really good quant, in the business for years after leaving JPL, I think. He did the dominant contemporary model for Rates volatility. He is in a really shitty position with JPM CIO blowing up, and he now has Pollack jumping all over his dick about VaR. What a nightmare.
More trivially, we find ourselves contemplating whether Hagan deserves a moniker, as an example of the importance of the role those supporting trading play. Ideas thus far include: HAL, Miles Dyson, Potter!, and that type of fish that cleans whales. Further ideas and thoughts welcome in the space below.
Lisa Pollack, FT Alphaville, Risk limits are made to be broken, here. Shades of Strother Martin, but Pollack cannot, or will not, get it that the VaR is for dopes. Nobody even checks the VaR to see if the sign on the valuation is always right. There are banks where the Firm VaR uses market data from a month ago and misses a chunk of the positions on the trading books because “it’s hard.” JPM VaR is a space age rocket science operation in comparison to those guys. But don’t lose sight of the point – VaR, with or without the rocket science, is a Clown Car operation in the best of circumstances. When things get too tense in an adverse market and the regulators and government officials need to be distracted – out comes the VaR Clown Car and one clown after another exits the car to the delight, amazement, and eternal entertainment of the assembled officials, while the traders scramble to safety. It is not that unreasonable to blow through the VaR limits, maybe not quite as much as Pollack is showing, on a good day.
Pollack is super entertaining but she’s dancing in the weeds. It’s like giving the Titanic owners infinite shit because the ship split in half before it sank. “How you gonna sail a ocean liner across the Atlantic Ocean in TWO pieces? Jesus! Who in charge here, Clowns?” I’ll bet you that the London Whale and probably Ina Drew were as surprised as the rest of us that VaR Clowns were required. Once they are “out of the car,” detailed analysis, cross referencing, and checking their historical statements, testimony, and publications is somewhat less than illuminating, because they are after all … clowns.
Big picture question: what do risk limits mean at JPMorgan? CEO Jamie Dimon explains:
JPMorgan Chase personnel, from Mr. Dimon on down, all told the Subcommittee that the risk limits at CIO were not intended to function as “hard stops,” but rather as opportunities for discussion and analysis.
Omg, look at all that opportunity for discussion about the CIO’s synthetic credit portfolio!!!
Lisa Pollack, Alphaville, Humongous credit derivative cake proves inedible, here.
THE MODEL GOT IT WRONG. ALL THE THEORETICAL UNDERPINNINGS OF VALUATION HAVE BROKEN DOWN AND THE VOLATILITY HAS BROKEN ALL HISTORICAL AND WORSE CASE BANDS.
03/23/2012 06:20:09 BRUNO IKSIL, JPMORGAN CHASE BANK, says: i did not fail, here.
Mr. Iksil later told the JPMorgan Chase Task Force investigation that he had not been able to sell as much credit protection as he would have liked (which would have generated more carry and profits to keep pace with the high yield rally). He said that two risk metrics – the “VaR” and “CS01” – prevented him from doing so. He later wrote in an email: “[T]he need to reduce VAR – RWA and stay within the CS01 limit prevented the book from being long risk enough.”
When asked about the February trading activity, the OCC told the Subcommittee that the CIO traders apparently believed that the prices in the markets were wrong, and that the traders had a strategy to defend their positions and keep the prices from falling by taking on more of them. Mr. Macris later said that all of the trades and losses were “well-communicated” to CIO management, meaning that his supervisors were fully informed about the status of the SCP book.
US Senate, JPMorgan Chase Whale Trade: A Case History of Derivatives Risk and Abuses, here. Very instructive to see how this plays out. After a quick read it still seems like the desk’s P&L correlation cooker simply failed and all the rest of the remarking, reporting, and VaR adjustments were simply compensating factors that are now under the spotlight. It is not like the conclusions here are so wide of the mark, but they don’t seem to get a handle on why this evolved the way it did. The only reasonable explanation is the Whale was flying blind on CDX tranches and got his positions exposed. It is not the case that the Whale or the CIO didn’t know the market levels of his CDX and CDS trades, exactly. They had to move the CDX/CDS marks to preserve the hedge to the correlation book P&L, which was out.
The ability of CIO personnel to hide hundreds of millions of dollars of additional losses over the span of three months, and yet survive internal valuation reviews, shows how imprecise, undisciplined, and open to manipulation the current process is for valuing credit derivatives. This weak valuation process is all the more troubling given the high risk nature of synthetic credit derivatives, the lack of any underlying tangible assets to stem losses, and the speed with which substantial losses can accumulate and threaten a bank’s profitability. The whale trades’ bad faith valuations exposed not only misconduct by the CIO and the bank’s violation of the derivative valuation process mandated in generally accepted accounting principles, but also a systemic weakness in the valuation process for all credit derivatives.
Gretchen Morgenson and Joshua Rosner, Reckless Endangerment: How Outsized Ambition, Greed, and Corruption Created the Worst Financial Crisis of Our Time, here.
A Washington Post Notable Nonfiction Book for 2011
One of The Economist’s 2011 Books of the Year
In Reckless Endangerment, Gretchen Morgenson exposes how the watchdogs who were supposed to protect the country from financial harm were actually complicit in the actions that finally blew up the American economy. Drawing on previously untapped sources and building on original research from coauthor Joshua Rosner—who himself raised early warnings with the public and investors, and kept detailed records—Morgenson connects the dots that led to this fiasco. Morgenson and Rosner draw back the curtain on Fannie Mae, the mortgage-finance giant that grew, with the support of the Clinton administration, through the 1990s, becoming a major opponent of government oversight even as it was benefiting from public subsidies.
Matt Levine, DealBreaker, Senate Subcommittee Feasting On Whale Today, here.
CIO’s most senior quantitative analyst, Patrick Hagan, who joined the CIO in 2007 and spent about 75% of his time on SCP projects, told the Subcommittee that he was never asked at any time to analyze another portfolio of assets within the bank, as would be necessary to use the SCP as a hedge for those assets. In fact, he told the Subcommittee that he was never permitted to know any of the assets or positions held in other parts of the bank.
In the case of the synthetic credit portfolio of JPMorgan’s CIO, they had a good three months to build positions that would subsequently cause billions of dollars of losses. Our previous post outlined how, according to the bank’s Task Force Report, the CIO was going to unwind profitable high yield shorts at the beginning of 2012. Instead, the unit ended up building those positions further, along with long positions in the Markit CDX.NA.IG.9 index that were meant to hedge and finance them.
Positioning for credit losses, the JP Morgan way, here.
If it’s alright by you, FT Alphaville has a confession to make. This whole London Whale thing, the billions that JPMorgan lost as a result of the actions of its Chief Investment Office primarily in the first quarter of 2012… we kinda made a cottage industry of trying to figure out what the trades were. Not that it was just us, mind you.
Naturally, we had been hoping that we’d finally get some answers when the Task Force Report came out last week. The report has revealed in painful detail how a large, well-respected bank can get so much wrong. There were bad risk management practices, model deficiencies, spreadsheet errors, complacent management and more. But trade details? That’s left for us to piece together from various scraps.
Can haz spredshetz, here. I do not find the Cupcake Police case compelling. The Cupcake Police were clearly passengers in the story. As a side effect, though, Pollack is explaining reasonably carefully how P&L and Risk theoretically work on an OTC derivatives desk.
Spreadsheet errors sure are a fun, but serious, topic. The last time FT Alphaville dove into JPMorgan’s Task Force Report on its losses in synthetic credit thanks to the bank’s Chief Investment Office, we took you through the blunders around their shiny new VaR model (that didn’t work). This time we want to introduce you to the spreadsheets with valuation errors.
In order for any of this to make sense, we need to re-introduce you to the CIO’s Valuation Control Group (VCG). At FT Alphaville, we previously called them the “cupcake police” when explaining the importance of empowering the back office (VCG-type teams) to challenge front office marks, thereby ensuring more accurate reporting.
Tyler Durden, Zerohedge, Irony 101 Or How the Fed Blew Up JPMorgan’s “Hedge” in 22 Tweets, here. Durden says the cash register is innocent because the Fed did it.
Many pixels have been ‘spilled’ trying to comprehend what exactly JPMorgan were up to, where they are now, and what the response will likely end up becoming. Our note from last week appears, given the mainstream media’s ‘similar’ notes after it, to have struck a nerve with many as both sensible and fitting with the facts (and is well worth a read) but we have been asked again and again for a simplification. So here is our attempt, in 22 simple tweets (or sentences less than 140 characters in length) to describe what the smartest people in the room did and in possibly the most incredible irony ever, how the Fed (and the Central Banks of the world) were likely responsible for it all going pear-shaped for Bruno and Ina.
Lisa Pollack, Alphaville, Footnote 74: FACEPALM, here; and A tempest in a spreadsheet, here. Funny, but getting lost in the weeds. This is important because Pollack is one of the dozen or so folks who could end up writing the London Whale book that’ll get cited for decades. The 130+ pages in the JPM report dance around a lot, recounting a sequence of events without simply stating what obviously happened.
The cash register that JPM built for tracking the running value of the securities owned by the London Whale broke, probably in March or April 2012, and it could not be fixed before losing several billion dollars. Curiously, the “cash register” in this case is less euphemistic than you might have expected. The VaR, the risk managers, most of the people not directly on the CIO trading desk weave in and out of the official narrative but they are mostly irrelevant to what originally happened. They are passengers in a sad story. It really looks like the problem was either the code that read the market data to compute the inputs to the P&L calculator (the spreadsheet) or the P&L calculator itself (the supercomputer). The report doesn’t really carefully dissect this issue, not sure why. If the problem was A. the spreadsheet model for calibrating the correlations and the hazard rates for inputs, I bet the CIO desk and quants are/were more than smart and motivated enough to fix it or patch the underlying spreadsheet and analytics packages before losing much money. The CIO folks all probably remembered, all too vividly, how correlations behaved with the GM and Ford junk downgrades in May 2005 and designed their new correlation cooker to do something “better.” If the problem was B. programming the new “supercomputer,” I could see them not having enough time to fix the situation. B … final answer.
The report says there is “some evidence” that pressure was put on the reviewers to get on with approving the model in January because of the risk limit breaches being incurred with the old model around then. For example, as quoted above: “In an e-mail to Mr. Hogan on January 25, Mr. Goldman reported that the new model would be implemented by January 31 “at the latest” and that it would result in a “significant reduction” in the VaR.”
Hence the Model Review Group “may have been more willing to overlook the operational flaws apparent during the approval process.”
Back to the modeler though. He used to work at Numerix (a vendor), where a repricing model had been “developed under his supervision” that JPMorgan normally used in VaR calculations. The Numerix analytic suite had been approved by the Model Review Group. But the modeler, when developing the new VaR model, developed his own suite — called “West End”. This suite was not reviewed in advance of the new VaR model being rolled out, but rather only had a limited amount of backtesting completed on it.
David Murphy, Alphaville, The JPMorgan Whale’s regulatory motive, here. Just a wild guess for a movie plot – Whale takes leveraged position in CDX tranches that are no longer heavily traded like back in the day, say 2005. In May 2012 the big, and now publicly exposed, hedge is in the CDX series 9 where the Whale gets picked off by the hunch/pounce/kill boys. The new correlation/hazard rate cooker (the code that computes the inputs to the gaussian copula model) has a problem, maybe with Kodak maybe something else, and the Whale’s desk risk and the 238 second near real-time run time Credit P&L is shot – they are flying totally blind. They try to buy time marking the spreads to cover the model’s flaked out P&L and Risk while they fix it. The risk and regulatory requirements change while all this is going on so there is some JPM Risk executive who now wants someone to explain to him what this all means to his VaR model. Nobody has time to talk to him cause the VaR is just for mouth-breathers and there is a real problem here that folks need to think through. I wonder if it was helpful to debug the flaky model in the FPGA supercomputer once the CIO P&L went out, probably not, right? Bruno, Achilles, and probably even Ina got to read up on Verilog programming back in April 2012, cool thx prize winning Dataflow supercomputer implementation of the gaussian copula … Maserati, Bellagio, Bellagio, Kasparov.
If JPMorgan had just bought, say, senior tranche protection on a credit index, then while the bank’s position would indeed have been crash-hedged, it would have generated significant earnings volatility as the bonds would not have been marked-to-market but the derivatives would have been. In particular, in a tightening credit environment, such as we had earlier in the year as the ECB injected liquidity into the banking system, the derivatives would have lost money without a corresponding accounting gain on the bonds.
One way around this accounting mismatch is to restructure the derivatives position. The idea is still to be long crash protection — again, by buying protection on senior tranches, for example — but to offset this by also selling protection on the index. If done correctly this position will be indifferent to small moves in credit spreads (‘delta neutral’), but it will make money if there is a big increase in spreads.
This removes the fair value volatility from the position at the cost of introducing correlation risk: the amount of index you need to sell is a function of the correlation between the names in the index, so you have to readjust your hedge as the market price of correlation changes.
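Murphy’s delta-neutral construction is just a ratio of spread sensitivities. A minimal sketch with invented numbers (these CS01s are purely illustrative, not JPM’s actual sensitivities):

```python
# Delta-neutral tranche hedge sketch (all numbers illustrative).
# Long protection on a senior tranche, short protection on the index,
# sized so small parallel spread moves net to roughly zero P&L.
tranche_notional = 1_000_000_000        # $1bn senior tranche protection
tranche_cs01_per_mm = 300.0             # $ P&L per 1bp move, per $1mm notional
index_cs01_per_mm = 450.0

# CS01 of the tranche leg (long protection gains as spreads widen)
tranche_cs01 = tranche_cs01_per_mm * tranche_notional / 1e6

# Index notional to sell so the index leg's CS01 offsets the tranche's
hedge_notional = tranche_cs01 / index_cs01_per_mm * 1e6
print(f"sell index protection on ${hedge_notional:,.0f} notional")
```

The catch Murphy describes: the tranche CS01 itself moves with the market price of correlation, so this ratio has to be rebalanced as correlation changes.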
Deus Ex Macchiato, Whale Watching, the official tour, here. I think this is David Murphy again. Nice website, I should read it more frequently.
The firm’s main problem at this point was that two goals were in conflict. On one hand their position was so large (if unnoticed by regulators) that they would get crushed if they tried to leave too fast: on the other, they needed to leave to reduce capital. The solution, of course, was to try to change how capital was calculated.
the concern that an unwind of positions to reduce RWA would be in tension with “defending” the position. The executive therefore informed the trader (among other things) that CIO would have to “win on the methodology” in order to reduce RWA.
Chris Wilson, Yahoo, What would your signature look like if Jack Lew wrote it? (Interactive), here.
Now, Yahoo News exclusively brings you the Jack Lew Signature Generator. Just type in your name, hit the button, and see what your name would look like in his, er, signature style.
Felix Salmon, Reuters, How does JP Morgan Respond to a crisis? here. Figuratively, it’s the Pink Iguana that got them, not the Black Swan.
The report doesn’t say how many eight-sigma events the CIO has ever seen: my guess is that this is the only one. But here’s an idea of how crazy eight-sigma events are: under a normal distribution, they’re meant to happen with a probability of roughly one in 800 trillion. The universe, by contrast, is roughly 5 trillion days old: you could run the universe a hundred times, under a normal distribution, and still never see an eight-sigma event. If anything was a black swan, this was a black swan. And it didn’t help JP Morgan’s “tail risk book” one bit.
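Salmon’s “one in 800 trillion” is the two-sided normal tail probability, and it checks out in a couple of lines (the universe-age figure is his, roughly 13.8bn years):

```python
import math

# Two-sided probability of a move more than 8 standard deviations
# from the mean under a normal distribution.
p = math.erfc(8 / math.sqrt(2))      # ≈ 1.24e-15
one_in_days = 1 / p                  # ≈ 8e14, i.e. ~800 trillion

universe_days = 13.8e9 * 365.25      # ~5 trillion days
print(f"one event per {one_in_days:.2e} days")
print(f"≈ {one_in_days / universe_days:.0f} universe lifetimes per event")
```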
Matt Levine, DealBreaker, JPMorgan Dissects A Whale Carcass, here. Unknown Unknowns are hard to cope with.
How should one read JPMorgan’s Whale Report? I suppose “not” is an acceptable answer; the Whale’s credit derivatives losses at JPMorgan’s Chief Investment Office are old news by now, though perhaps his bones point us to the future. One way to read it is as a depressing story about measurement. There were some people and whales, and there was a pot of stuff, and the people and whales sat around looking at the stuff and asking themselves, and each other, “what is up with that stuff?” The stuff was in some important ways unknowable: you could list what the stuff was, if you had a big enough piece of paper, but it was hard to get a handle on what it would do. But that was their job. And the way you normally get such a handle, at a bank, is with a number, or numbers, and so everyone grasped at a number.
Lisa Pollack, Alphaville, The London Whale, an oral history, here. Link to the task force report from JPM. Lisa is still not sure if the tranches did the Whale in.
It’s history, JPMorgan Task Force Report style.
Or rather, it’s a mostly oral history, lacking in technical detail, and it’s not all independently verified. Oh, and heavily reliant on one guy.
Oh, the report pg. 120 says tranches don’t do so good in VaR.
Appendix A: VaR Modeling
VaR is a metric that attempts to estimate the risk of loss on a portfolio of assets. A portfolio’s VaR represents an estimate of the maximum expected mark-to-market loss over a specified time period, generally one day, at a stated confidence level, assuming historical market conditions. Through January 2012, the VaR for the Synthetic Credit Portfolio was calculated using a “linear sensitivity model,” also known within the Firm as the “Basel I model,” because it was used for purposes of Basel I capital calculations and for external reporting purposes.

The Basel I model captured the major risk facing the Synthetic Credit Portfolio at the time, which was the potential for loss attributable to movements in credit spreads. However, the model was limited in the manner in which it estimated correlation risk: that is, the risk that defaults of the components within the index would correlate. As the tranche positions in the Synthetic Credit Portfolio increased, this limitation became more significant, as the value of the tranche positions was driven in large part by the extent to which the positions in the index were correlated to each other. The main risk with the tranche positions was that regardless of credit risk in general, defaults might be more or less correlated.
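The “linear sensitivity” approach the appendix describes can be sketched as sensitivities times historical spread moves. This is a toy historical-simulation illustration, not JPM’s actual Basel I model; positions, CS01s, and market data are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented inputs: CS01s in $ per 1bp of spread widening for three
# index/tranche positions, and 250 days of historical spread moves (bp).
cs01 = np.array([120_000.0, -80_000.0, 45_000.0])
spread_moves = rng.normal(0.0, 2.0, size=(250, 3))

# Linear-sensitivity P&L: each day's P&L is sensitivity times that
# day's historical move, summed across positions. No convexity, and
# (as the appendix notes) no view on default correlation.
daily_pnl = spread_moves @ cs01

# One-day 95% VaR: the loss exceeded on only 5% of historical days.
var_95 = -np.percentile(daily_pnl, 5)
print(f"1-day 95% VaR: ${var_95:,.0f}")
```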
Hmm, do you think the gaussian copula did better than the VaR with the tranches?
On January 30, the Model Review Group authorized CIO Market Risk to use the new model for purposes of calculating the VaR for the Synthetic Credit Portfolio beginning the previous trading day (January 27). Once the new model was implemented, the Firm-wide 10-Q VaR limit was no longer exceeded. Formal approval of the model followed on February 1. The formal approval states that the VaR calculation would utilize West End and that West End in turn would utilize the Gaussian Copula model to calculate hazard rates and correlations. It is unclear what, if anything, either the Model Review Group or CIO Market Risk did at the time to validate the assertion that West End would utilize the Gaussian Copula model as opposed to some other model, but that assertion later proved to be inaccurate.
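For reference, a one-factor Gaussian copula is a small amount of code. This is a generic textbook sketch, not West End (whose internals the report doesn’t disclose): correlated default indicators driven by a common factor, with default probabilities derived from a flat hazard rate. All parameters are illustrative.

```python
import math
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: 125 names (CDX-sized), flat 2% hazard rate,
# 30% copula correlation, 5-year horizon.
n_names, n_sims = 125, 100_000
hazard, rho, horizon = 0.02, 0.3, 5.0

# Default probability from the hazard rate, and its normal threshold.
p_default = 1.0 - math.exp(-hazard * horizon)
threshold = NormalDist().inv_cdf(p_default)

# Latent variables X_i = sqrt(rho)*M + sqrt(1-rho)*Z_i; name i defaults
# in a scenario when X_i falls below the threshold.
M = rng.standard_normal(n_sims)
Z = rng.standard_normal((n_sims, n_names))
X = math.sqrt(rho) * M[:, None] + math.sqrt(1.0 - rho) * Z
defaults = (X < threshold).sum(axis=1)

# Correlation is what fattens the many-defaults tail that senior
# tranches care about.
print(f"mean defaults: {defaults.mean():.1f}")
print(f"P(more than 30 defaults): {(defaults > 30).mean():.4f}")
```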
Surely the new correlation calibration for the gaussian copula spreadsheet made it into a productionized version for overnight runs so the P&L worked right, doh.
In early May 2012, in response to the recent losses in the Synthetic Credit Portfolio, Mr. Venkatakrishnan asked an employee in the Model Review Group to perform a review of the West End analytic suite, which, as noted, the VaR model used for the initial steps of its calculations. The West End analytic had two options for calculating hazard rates and correlations: a traditional Gaussian Copula model and a so-called Uniform Rate model, an alternative created by the modeler. The spreadsheet that ran West End included a cell that allowed the user to switch between the Gaussian Copula and Uniform Rate models.

The Model Review Group employee discovered that West End defaulted to running Uniform Rate rather than Gaussian Copula in this cell, including for purposes of calculating the VaR, contrary to the language in the Model Review Group approval. Although this error did not have a significant effect on the VaR, the incident focused the reviewer’s attention on the VaR model and ultimately led to the discovery of additional problems with it.
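The failure mode is worth spelling out because it is so generic: a selector whose default disagrees with what the approval document says, so every caller that doesn’t explicitly choose gets the unapproved model. A hypothetical sketch (names and outputs invented; West End was a spreadsheet cell, not Python):

```python
APPROVED_MODEL = "gaussian_copula"

def calibrate(spreads, model="uniform_rate"):
    """Hypothetical stand-in for West End's calibration step.

    The bug class: the default value of `model` silently disagrees
    with the approved configuration.
    """
    if model == "gaussian_copula":
        return {"model": model, "hazard": 0.021}   # placeholder output
    if model == "uniform_rate":
        return {"model": model, "hazard": 0.019}   # placeholder output
    raise ValueError(f"unknown model: {model}")

result = calibrate([100.0, 105.0, 98.0])           # caller omits model=
if result["model"] != APPROVED_MODEL:
    print(f"WARNING: ran {result['model']}, approval says {APPROVED_MODEL}")
```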
I’m gonna mark this as 16-May Gaussian Copula Kills Again “called it.”
Frank Partnoy and Jesse Eisinger, The Atlantic, What’s Inside America’s Banks? here.
Some four years after the 2008 financial crisis, public trust in banks is as low as ever. Sophisticated investors describe big banks as “black boxes” that may still be concealing enormous risks—the sort that could again take down the economy. A close investigation of a supposedly conservative bank’s financial records uncovers the reason for these fears—and points the way toward urgent reforms.
Matt Levine, DealBreaker, Turns Out Wells Fargo Doesn’t Just Keep Your Deposits In A Stagecoach Full Of Gold Ingots, here.
There are lots of reasons for opacity in bank financial statements but surely a lot of them have to do with market making. For one thing: derivatives, a major villain in the Atlantic piece. Basically, OTC derivatives market-making doesn’t net as cleanly as does, like, buying and selling publicly traded shares of stock. In cash equities, you buy 100 shares of IBM from one customer and sell 100 shares to another customer and clip two cents and are left with zero shares. In derivatives, you buy $100mm of 7-year Libor swaps from one customer and sell $100mm of 6-year Libor swaps to another customer and sell $10mm of 8-year Libor swaps to a third; you’re left “flat” – i.e. zeroish DV01 – but have $210mm of derivatives notional for six-plus years. If you were running a directional investing business with Wells Fargo’s balance sheet – $1.4-ish trillion – and ended up with $2.8 trillion in derivatives notional you’d be … aggressive; if you were running a matched book then $2.8 or $28 or $280 trillion are all at least theoretically possible and one is not necessarily riskier than another.
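Levine’s three-swap book can be checked with a crude rule of thumb (assume, purely for illustration, about $100 of DV01 per $1mm notional per year of maturity; real swap DV01s depend on the curve and the coupon):

```python
# (notional, years): +notional = long rate risk, -notional = short.
# The trades are Levine's example; the sign convention is ours.
trades = [
    ( 100_000_000, 7),   # buy  $100mm of 7-year swaps from a customer
    (-100_000_000, 6),   # sell $100mm of 6-year swaps to another
    ( -10_000_000, 8),   # sell $10mm  of 8-year swaps to a third
]

def dv01(notional, years, per_mm_per_year=100.0):
    # Crude rule of thumb, for illustration only.
    return notional / 1e6 * years * per_mm_per_year

net_dv01 = sum(dv01(n, y) for n, y in trades)
gross_notional = sum(abs(n) for n, _ in trades)
print(f"net DV01 ≈ ${net_dv01:,.0f}; gross notional ${gross_notional:,.0f}")
```

Zeroish DV01, but $210mm of notional sitting on the books for six-plus years, which is exactly Levine’s point about why market-making balloons gross derivatives exposure.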