Deus Ex Macchiato, Is it time to centralise capital calculation? here. Forgot about this oldie. These are the VaR results from reporting banks on the same sample portfolio, varying by up to a factor of ten. Can you see the Clown Car now? I guess you could centralize the Clown Car so there is some sort of planning; maybe the clowns would occasionally give the same answers to the same questions. But this is Sisyphean: they’d still be clowns, albeit somewhat standardised clowns. Hey, here’s a crazy idea, forget the clowns for a minute, maybe find out why the Whale didn’t know the P&L of his positions in January and got caught offsides? Does Anybody Remember Laughter?
As Dealbreaker acidly puts it, banks rarely differ from each other by more than a factor of ten. It’s no wonder, then, that investors are losing trust in capital ratios. The answer is clear. Centralize and standardise capital calculation. Throw all those internal models away, and use one common, regulator-developed approach. Now that a lot of progress has been made on trade reporting, the data infrastructure exists to do this — or at least it wouldn’t be too hard to extend what does exist to do it. Developing the models would be a huge undertaking, but compared to having each large bank do it individually, a central infrastructure would be cheaper and more reliable, and anyway you could pick the best of individual banks’ methodologies. You could even spin out the VaR teams from four or five leading banks into the new central body — just don’t pick the bank whose IRC is less than 10% of the average answer…
Shipkevitch, CFTC Law, CFTC’s Bart Chilton Warns of ‘Swapification’ of Futures Markets, here.
Finally, while I’m interested in hearing the concerns about futurization, I am more concerned about a silent creeper. That is, the “swapification” of the futures markets. Specifically, I’m concerned that the conversion of certain standardized cleared swaps will be under-regulated in the futures markets. It may be block rules or something else, but we need to be cautious about converting certain swaps to futures in an attempt to export the deregulated, opaque swaps trading model to these new futures markets. Let’s be cautious about allowing lax oversight of these futures contracts, regardless of how they were treated before they were futurized.
Jon Skinner, The OTC Space, How Big is the OTC Really, here.
So how does OTC compare with other markets? Comparing global market values at the same point in time i.e. end 2011, we have:
Instrument          Global Market Value ($trn)
Bonds               93
Loans               64
Equities            54
OTC Derivatives     27
Lisa Pollack, Alphaville, Ten times on the board: I will not put “optimizing regulatory capital” in the subject line of an email, here. Interesting material. Hagan is a really good quant, in the business for years after leaving JPL, I think. He did the dominant contemporary model for Rates volatility. He is in a really shitty position with JPM CIO blowing up, and he now has Pollack jumping all over his dick about VaR. What a nightmare.
More trivially, we find ourselves contemplating whether Hagan deserves a moniker, as an example of the importance of those who support trading. Ideas thus far include: HAL, Miles Dyson, Potter!, and that type of fish that cleans whales. Further ideas and thoughts welcome in the space below.
Lisa Pollack, FT Alphaville, Risk limits are made to be broken, here. Shades of Strother Martin, but Pollack cannot, or will not, get it that the VaR is for dopes. Nobody even checks the VaR to see if the sign on the valuation is always right. There are banks where the Firm VaR uses market data from a month ago and misses a chunk of the positions on the trading books because “it’s hard.” JPM VaR is a space age rocket science operation in comparison to those guys. But don’t lose sight of the point – VaR, with or without the rocket science, is a Clown Car operation in the best of circumstances. When things get too tense in an adverse market and the regulators and government officials need to be distracted – out comes the VaR Clown Car, and one clown after another exits the car to the delight, amazement, and eternal entertainment of the assembled officials, while the traders scramble to safety. It is not that unreasonable to blow through the VaR limits on a good day, maybe not quite as much as Pollack is showing.
Pollack is super entertaining but she’s dancing in the weeds. It’s like giving the Titanic owners infinite shit because the ship split in half before it sank. “How you gonna sail a ocean liner across the Atlantic Ocean in TWO pieces? Jesus! Who in charge here, Clowns?” I’ll bet you that the London Whale and probably Ina Drew were as surprised as the rest of us that VaR Clowns were required. Once they are “out of the car,” detailed analysis, cross referencing, and checking their historical statements, testimony, and publications is somewhat less than illuminating, because they are after all … clowns.
Big picture question: what do risk limits mean at JPMorgan? CEO Jamie Dimon explains:
JPMorgan Chase personnel, from Mr. Dimon on down, all told the Subcommittee that the risk limits at CIO were not intended to function as “hard stops,” but rather as opportunities for discussion and analysis.
Omg, look at all that opportunity for discussion about the CIO’s synthetic credit portfolio!!!
Lisa Pollack, Alphaville, The London Whale, an oral history, here. Link to the task force report from JPM. Lisa is still not sure if the tranches did the Whale in.
It’s history, JPMorgan Task Force Report style.
Or rather, it’s a mostly oral history, lacking in technical detail, and it’s not all independently verified. Oh, and heavily reliant on one guy.
Oh, the report pg. 120 says tranches don’t do so good in VaR.
Appendix A: VaR Modeling
VaR is a metric that attempts to estimate the risk of loss on a portfolio of assets. A portfolio’s VaR represents an estimate of the maximum expected mark-to-market loss over a specified time period, generally one day, at a stated confidence level, assuming historical market conditions. Through January 2012, the VaR for the Synthetic Credit Portfolio was calculated using a “linear sensitivity model,” also known within the Firm as the “Basel I model,” because it was used for purposes of Basel I capital calculations and for external reporting purposes.

The Basel I model captured the major risk facing the Synthetic Credit Portfolio at the time, which was the potential for loss attributable to movements in credit spreads. However, the model was limited in the manner in which it estimated correlation risk: that is, the risk that defaults of the components within the index would correlate. As the tranche positions in the Synthetic Credit Portfolio increased, this limitation became more significant, as the value of the tranche positions was driven in large part by the extent to which the positions in the index were correlated to each other. The main risk with the tranche positions was that regardless of credit risk in general, defaults might be more or less correlated.
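The report’s definition — maximum expected mark-to-market loss over a horizon, at a confidence level, from historical market conditions — can be sketched in a few lines. A minimal historical-simulation VaR on hypothetical daily P&L; a toy illustration only, not the Basel I linear sensitivity model or anything JPM actually ran:

```python
import numpy as np

def historical_var(daily_pnl, confidence=0.99):
    """One-day VaR by historical simulation: the loss level that daily
    mark-to-market losses exceed only (1 - confidence) of the time."""
    losses = -np.asarray(daily_pnl)           # positive numbers = losses
    return np.percentile(losses, 100 * confidence)

# Hypothetical daily P&L in $mm over a 252-day lookback window.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=10.0, size=252)
print(f"1-day 99% VaR: ${historical_var(pnl):.1f}mm")
```

Note this is a pure percentile of past moves — which is exactly why it says nothing about correlation dynamics inside a tranche book.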
Hmm, do you think the gaussian copula did better than the VaR with the tranches?
On January 30, the Model Review Group authorized CIO Market Risk to use the new model for purposes of calculating the VaR for the Synthetic Credit Portfolio beginning the previous trading day (January 27). Once the new model was implemented, the Firm-wide 10-Q VaR limit was no longer exceeded. Formal approval of the model followed on February 1. The formal approval states that the VaR calculation would utilize West End and that West End in turn would utilize the Gaussian Copula model to calculate hazard rates and correlations. It is unclear what, if anything, either the Model Review Group or CIO Market Risk did at the time to validate the assertion that West End would utilize the Gaussian Copula model as opposed to some other model, but that assertion later proved to be inaccurate.
Surely the new correlation calibration for the gaussian copula spreadsheet made it into a productionized version for overnight runs so the P&L worked right, doh.
In early May 2012, in response to the recent losses in the Synthetic Credit Portfolio, Mr. Venkatakrishnan asked an employee in the Model Review Group to perform a review of the West End analytic suite, which, as noted, the VaR model used for the initial steps of its calculations. The West End analytic had two options for calculating hazard rates and correlations: a traditional Gaussian Copula model and a so-called Uniform Rate model, an alternative created by the modeler. The spreadsheet that ran West End included a cell that allowed the user to switch between the Gaussian Copula and Uniform Rate models.

The Model Review Group employee discovered that West End defaulted to running Uniform Rate rather than Gaussian Copula in this cell, including for purposes of calculating the VaR, contrary to the language in the Model Review Group approval. Although this error did not have a significant effect on the VaR, the incident focused the reviewer’s attention on the VaR model and ultimately led to the discovery of additional problems with it.
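For readers keeping score on what the approval language is fighting about: the general shape of a one-factor Gaussian copula is public knowledge. A toy Monte Carlo sketch — all names and parameters here are made up, and this is emphatically not West End — shows why the correlation input, not the average default risk, drives a tranche’s value:

```python
import numpy as np
from statistics import NormalDist

def tranche_loss(rho, p=0.02, n_names=100, attach=0.0, detach=0.03,
                 n_sims=20000, seed=0):
    """Expected loss on an [attach, detach] tranche (fraction of tranche
    notional) under a one-factor Gaussian copula with asset correlation rho.
    Zero recovery assumed for simplicity."""
    c = NormalDist().inv_cdf(p)                 # per-name default threshold
    rng = np.random.default_rng(seed)
    m = rng.standard_normal((n_sims, 1))        # common market factor
    z = rng.standard_normal((n_sims, n_names))  # idiosyncratic factors
    assets = np.sqrt(rho) * m + np.sqrt(1 - rho) * z
    pool_loss = (assets < c).mean(axis=1)       # fraction of names defaulting
    tranche = np.clip(pool_loss - attach, 0, detach - attach) / (detach - attach)
    return tranche.mean()

# Same 2% single-name default probability throughout; only rho changes.
for rho in (0.1, 0.3, 0.6):
    print(f"rho={rho}: equity-tranche expected loss = {tranche_loss(rho):.3f}")
```

With identical single-name default risk, the equity-tranche expected loss moves materially as rho moves — which is precisely the correlation risk the report says the old Basel I linear sensitivity VaR could not see.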
I’m gonna mark this as 16-May Gaussian Copula Kills Again “called it.”
Susan Dominus, NYT, The Woman Who Took the Fall for JPMorgan Chase, here. Nice reporting, nice story arc, but it is like filming all of Citizen Kane and concluding that Rosebud was the name of an unsuccessful hedge Charles Foster Kane put on in troubled economic times. Big story misses the money shot and, wow, this was the mother of all money shots. Working backwards, this is a trade that reportedly lost $5B. Experienced hitters with massive resources and yards of balance sheet like this do not drop even a small percentage of that kind of P&L on a wobbly macro hedge. That is totally a “dog ate my homework” line. Without the benefit of knowing where the London Whale P&L went, the only rational options are A. some sort of epic unreported, undetected fraud or B. a massive prop trading quant model derailment. Given the circumstantial Gaussian Copula history, the professional background of the players, and the timing, you have to bet B. The fact that the NY Times, WSJ, Bloomberg, or FT cannot pull this story together has fascinating repercussions. One is that the quantitative literacy of the financial press is apparently so modest that repeatedly saying “VaR” to them generally stops any critical thought and further line of inquiry. Kind of like saying “it” to the Knights of Ni or “CDO” to NYT Finance reporters. Lisa Pollack, where are you?
The trouble that eventually ended Drew’s career at the bank started out, the bank argues, as a precaution, the same kind of precaution, in fact, that set her on a successful career path at Chemical Bank: a major hedge against the possibility of a credit crisis.
Back in 2007, the bank asked the London office to execute a credit derivative hedge that would protect the bank in the event of a major crisis. (Some credit derivatives are, essentially, a bet on an outcome, like a corporation or government defaulting on their financial obligations.) The hedge not only protected the bank but also made money in 2008 when the markets collapsed.
Following the crisis, the team in London, including Iksil, continued to expand the position. (A credit trader’s position can be thought of as a collection of bets on outcomes.) Iksil’s position was eventually so large that he became known as the London Whale before his identity was confirmed. At some point in December of last year, a former executive from the group says, Drew checked in with Macris and Martin-Artajo about the position while the two men were in New York. They answered, but the executive, who understood the trade, remembers thinking that they did not give as full an answer as they could have. “I think they glossed over details to the point where Ina knew the product, the size they were trading, but she did not know what the true P.& L.” — profit and loss — “impact could possibly be in a stressful scenario,” he said. She was asking the right questions, he said, but did not seem to be picking up on what was not being said. Why didn’t he say anything? The usual reasons: less than total certainty, resistance to jumping rank, faith in Iksil’s judgment. Plus, he liked the guy.
Anh Nguyen, Computerworld UK, Top JP Morgan and UBS IT execs leave for HPC vendor, here. Wonder if they are going to sell the gaussian copula FPGA implementation to another bank? Computerworld UK appears to be an utter stranger to Irony.
The two new appointments are Stephen Weston, who was managing director and global head of applied analytics at JP Morgan, and Steven Hutt, former managing director and global head of credit analytics at UBS.
Weston was responsible for a major IT project using Maxeler technology at JP Morgan, which enabled the investment bank to run risk analysis and price its global credit portfolio in near real-time.
Almost missed the quotes:
“The overwhelming benefits of dataflow technology are directly measurable and undeniable,” said Weston.
“Having experienced them first-hand at JP Morgan, my new mission is to help the finance industry understand the revolution in value that this technology brings through making real-time computation and scenario analysis feasible, for both the largest and the most complicated problems.”
He said “revolution in value,” very clever.
Mr Dimon, and his lieutenants, are not only coy, they are Olympic freakin’ champions in the Art of Coy. They should hold seminars on it. Do they have an invisibility cloak for this unwind or what?!
Frankly, the whole JPMorgan Whale saga is starting to feel a lot like taking an exam. Panic, followed by resignation, then the late night conversations about the meaning of life, then the exam/earnings call itself, and then… a massive anti-climax.
But the more correct call here is that the Gaussian Copula is the FinQuant equivalent of the Doomsday Machine in Dr. Strangelove; in the words of the good doctor: WHY DIDN’T YOU TELL THE WORLD, EH?
Lisa Pollack, A back office failure to put right, here. Given a new correlation calibration for the copula, it is not unusual to see single name CDS marks moved so that the correlation hedges behave as expected. That’s always a fight with the Back Office that historically goes in favor of the correlation trader. It is sort of reasonable. The single name CDS marks are just inputs to the model. If the model runs some sort of best fit, it is changing the CDS marks internally anyway. The fact that the traders run this optimization manually by overwriting the CDS marks in front of the controllers is an artifact of not coding up the best-fit logic in the model. This back office failure with models like this will not be put right anytime soon.
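The “best fit” the desk runs by hand over the marks could in principle be coded into the model. A minimal sketch of the shape of such a recalibration — `model_price` and the market quotes here are hypothetical stand-ins, not anyone’s production fit:

```python
import numpy as np

def calibrate_rho(model_price, market_prices,
                  rhos=np.linspace(0.01, 0.99, 99)):
    """Grid-search least-squares fit of a single correlation parameter:
    pick the rho minimising squared error between model and market tranche
    prices. `model_price(rho)` returns one model price per tranche."""
    errs = [np.sum((model_price(r) - market_prices) ** 2) for r in rhos]
    return rhos[int(np.argmin(errs))]

# Hypothetical: made-up market quotes and a toy linear "model".
market = np.array([0.60, 0.25, 0.05])  # equity / mezz / senior
model = lambda r: np.array([0.70 - 0.30 * r,
                            0.20 + 0.10 * r,
                            0.02 + 0.08 * r])
print(f"best-fit rho = {calibrate_rho(model, market):.2f}")
```

The point is that once the fit lives inside the model, the controllers see one calibrated parameter instead of a trader overwriting the input CDS marks by hand.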
As Pollack points out, the trader and the research folks know the models better than the controllers. Let’s pause for a bit to drive this point home. Think of the models like a 5×5 basketball game. On the one side there is how the UK Basketball team knows the 5×5 basketball game (think London Whale = Booger Cousins); on the other side there’s how the guys down at the Y know the 5×5 basketball game (think controllers at the Princeton Y). This is not a case where, if the guys at the Y put in a little more time, they are going to get it like the UK guys. Moreover, there is little chance of getting Booger Cousins, or any other UK player, to show up at your Y. The guys at the Y are going to do their thing and the UK guys are going to do their thing. The similarities (court, two baskets, ten people, a ball, wood floor) are only nominal (e.g., Uncle Drew: “Where’d you get them shoes, at the hospital?”). For all we know the correlation calibration computation is changing weekly when the MTMs and the hedges are stable. When the hedges are stressed the correlations are probably being recooked daily. The only Back Office controller who is going to follow all this computational detail is going to be someone already working for the desk.
The question here is why did JPM reload balance sheet into the Gaussian Copula? We already know the Back Office story here, it’s the same old story.
Chris Whalen seemed to turn his attention to the riskiness of relying on correlation assumptions when trading tranches. Good point.
On pricing, though, there are consensus prices for tranches too, it’s just that they aren’t in CCPs yet. Even if there aren’t consensus prices for some of them, there are other ways of verifying prices using an independent provider, e.g. Markit Totem.
Lisa Pollack, Alphaville, Regulator captured, a case study, here. This is like a discussion of what exactly was wrong with Capone’s tax records and how he could have done a better job in tax preparation to avoid legal problems down the road. Depending on how JPM books the London Whale’s hedge position and the underlying, it is entirely possible there are portions of the aggregate position that are absent from the VaR (or so grossly approximated that the MTM dynamics are effectively absent from the VaR). If there isn’t a massive story about where the London Whale P&L went, then this is a story about the model used by the London Whale to manage the books. The regulator VaR vision is not likely to give a big heads up that CIO management is going to notice. The interesting question here is how the position escaped the control of the quantitative model the London Whale was actually using to manage his book.
These are all real smart people. Here is an educated guess on how this went down:
They probably ran the Gaussian Copula on credit portfolios back in the day of the SCDO. They saw the historical correlation calibration break down in 2005, 2007, and finally 2008. They got a supercomputer in 2009 to run the Gaussian Copula. It took three years to port the code to FPGAs; they made movies about how they ported the code and put them on YouTube, found a new correlation calibration mechanism, and reloaded correlation trading via GC in standard tranches in 2012.
The WSJ reported on Thursday that JPMorgan’s regulators will conduct a thorough review of the bank’s models, according to “people close to the situation”.
Thanks to a letter from the Office of the Comptroller of the Currency to Senator Sherrod Brown, we know that one particular model — the VaR model that JPMorgan’s Chief Investment Office switched to in January 2012, and which failed to alert management to outsized risks the division was taking — did not require regulatory approval before being used.
It is therefore understandable that regulators are trying to pick up the pieces of their own damaged reputations by conducting a big model review.
But that won’t be enough.
Felix Salmon, The dangerous Gaussian copula function, here. Felix briefly deconstructs MacKenzie’s GC paper.
MacKenzie is a very smart sociologist, who understands quants and copula functions much more deeply than I ever did. (And, like most journalists, I forgot nearly all of what I ever knew about them within weeks of writing the article.) His paper is largely sociological, and I wouldn’t recommend reading it if you don’t like running across phrases like “the beginnings of a typology of mechanisms of counterperformativity”. But the good news is that if you want an English-language translation, Lisa Pollack has done an amazing job of extracting the interesting bits, and there’s no reason for me to try to replicate what she’s already done so well.
Lisa Pollack, Approaching the Beach, here.
CNBC is reporting on Wednesday that JPMorgan has sold a substantial amount of its loss-making synthetic credit portfolio:
JPMorgan Chase has sold off 65 percent to 70 percent of its losing “London Whale” position, which led to a multibillion-dollar trading loss for the bank, CNBC reported on Wednesday.
In the past month, the bank said its chief investment office has sold the majority of its long holding in the CDX IG-9 10-year index.
To which we raise an eyebrow and say, “really?”
On Tuesday, FT Alphaville discussed how it baffles us that the regulators didn’t spot the Whale’s positions earlier. We presented this graph using DTCC data, to give a sense of just how much the risk-taking activity in the Markit CDX.NA.IG.9 index grew since the beginning of the year. This is the one credit index that we know for certain that JPMorgan is in, i.e. even the regulator is on record about it.
Zerohedge, ’Just The Facts’ On The JPM ‘Whale’ Unwind Rumor, here.
The bottom line is that the JPM unwind call is speculation.
Nothing factually provides any evidence that they have done any actual unwinds and in fact the lack of movement in IG9 tranche net notionals means we assume they continue to hold the tail-risk hedge – though have likely taken on opposing positions in IG18 and HY18 to reduce exposure overall.
Not understanding that the huge underperformance in the last quarter would leave a lot of credit options traders dramatically in-the-money or out-of-the-money – and likely led to the pick-up in volumes this week being used as evidence of the JPM unwinds occurring – is a mistake. The single-name CDS roll and index options expiry have a huge impact on volumes (as we may see next week when DTCC provides the delayed data – though note that the DTCC data is aggregated across maturity, so a roll would not impact it specifically).
No-one knows – though we assume some efforts have been made to at least reduce exposure as we noted.
MacKenzie and Spears, ‘The Formula That Killed Wall Street’? The Gaussian Copula and the Material Cultures of Modelling, June 2012, here. MacKenzie is always worth a read.
This paper presents a predominantly oral-history account of the development of the Gaussian copula family of models, which are used in finance to estimate the probability distribution of losses on a pool of loans or bonds, and which were centrally involved in the credit crisis. The paper draws upon this history to examine the articulation between two distinct, cross-cutting forms of social patterning in financial markets: organizations such as banks; and what we call ‘evaluation cultures’, which are shared sets of material practices, preferences and beliefs found in multiple organizations. The history of Gaussian copula models throws light on this articulation, because those models were and are crucial to intra- and inter-organizational co-ordination, while simultaneously being ‘othered’ by members of a locally dominant evaluation culture, which we call the ‘culture of no-arbitrage modelling’. The paper ends with the speculation that all widely-used derivatives models (and indeed the evaluation cultures in which they are embedded) help to generate inter-organizational co-ordination, and all that is special in this respect about the Gaussian copula is that its status as ‘other’ makes this role evident.
Joseph Cotterill, Alphaville, Copula culture, here. Get ready for BOOM time, this should be good. They’re starting to figure it out Re: the London Whale and the copula. I thought Salmon might puzzle it out first but it looks like Alphaville is closing fast. I wonder how goofy I have to make the posting title for these folks to catch on? Pink I has the top three positions in Google search for the string “london whale pollack copula”, here, this morning; maybe they have to use Bing at FT.com?
Lisa Pollack, The formula that Wall Street never believed in, here.
In ‘The Formula That Killed Wall Street’? The Gaussian Copula and the Material Cultures of Modelling, Donald MacKenzie and Taylor Spears present a history of the development of the one-factor Gaussian copula model, which is used to price various structured products, including Collateralised Debt Obligations (CDOs). As the title of the paper suggests, the model has many critics and has had a lot of blame placed at its feet.
Oh, but I can think of one big Wall Street player who believed in it, their nickname rhymes with Mundane Tail.