Category Archives: References
Matt Levine, Bloomberg, Blackstone Made Money on Credit-Default Swaps With This One Weird Trick, here. CDS valuation has always had these large information asymmetries which make it more expensive to be a market maker on any of the proposed CDS exchanges. I don’t think exchange traded vanilla USD interest rate swaps would have this problem to the same degree.
Really, the only reason to cover this story is its majestic beauty. Which is a great reason to cover it, don’t get me wrong; it’s just that aesthetic appreciation of clever derivatives trades is sort of a specialized niche. Certainly “The Daily Show” didn’t muster much admiration and instead spent seven minutes criticizing everyone else for not covering the story. This is wrong. This trade is so lovely that the proper reaction is to love it and cherish it and hold it close to your heart, not to complain that nobody else does.
There’s one other reason not to worry unduly about this trade, and I hesitate to bring it up, but: It’s not really as bad as it looks. I mean, yes, it is very, very clever. It achieves the second-highest goal of any financial engineering, which is to create genuine value for both parties to a transaction (here, Blackstone and Codere) by taking that value from some third party who’s not in the room. So that is great. As Blackstone spins it:
We love Jon Stewart and he continues to be one of the funniest people on TV. But the somewhat boring truth is that we cooperated with Codere and its advisors to save it from bankruptcy or liquidation. We provided capital when no one else would, which allowed the Company to live and fight another day. And they could provide that capital efficiently because they took some value from their CDS writers.
Marco Avellaneda, NYU, Algorithmic and High-frequency trading: an overview, here.
Algorithmic trading: the use of programs and computers to generate and execute (large) orders in markets with electronic access.
Almgren and Chriss, NYU, Dec 2000, Optimal Execution of Portfolio Transactions, here.
We consider the execution of portfolio transactions with the aim of minimizing a combination of volatility risk and transaction costs arising from permanent and temporary market impact. For a simple linear cost model, we explicitly construct the efficient frontier in the space of time-dependent liquidation strategies, which have minimum expected cost for a given level of uncertainty. We may then select optimal strategies either by minimizing a quadratic utility function, or by minimizing Value at Risk. The latter choice leads to the concept of Liquidity-adjusted VAR, or L-VaR, that explicitly considers the best tradeoff between volatility risk and liquidation costs.
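The linear-cost model in the abstract has a closed-form solution, and it is short enough to sketch. Below is a minimal NumPy version of the continuous-time approximation of the optimal holdings trajectory, x(t) = X·sinh(κ(T−t))/sinh(κT) with urgency κ = √(λσ²/η); the function name and all parameter values are mine for illustration, not the paper's.

```python
import numpy as np

def almgren_chriss_trajectory(X, T, n, sigma, eta, lam):
    """Optimal liquidation holdings under the Almgren-Chriss linear
    temporary-impact model (continuous-time approximation).

    X     -- initial position (shares)
    T     -- liquidation horizon (e.g. days)
    n     -- number of trading intervals
    sigma -- price volatility per sqrt(time unit)
    eta   -- temporary impact coefficient
    lam   -- risk aversion; lam = 0 recovers a straight-line schedule
    """
    t = np.linspace(0.0, T, n + 1)
    if lam == 0.0:                            # risk-neutral: linear sell-down
        return X * (1.0 - t / T)
    kappa = np.sqrt(lam * sigma**2 / eta)     # higher risk aversion -> sell faster
    return X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

# Sweeping lambda traces the efficient frontier of schedules the paper describes.
for lam in (0.0, 1e-7, 1e-6):
    x = almgren_chriss_trajectory(X=1e6, T=5.0, n=5, sigma=0.95, eta=2.5e-6, lam=lam)
    print(f"lambda={lam:.0e}  holdings: {np.round(x).astype(int)}")
```

Each point on that frontier trades expected impact cost against variance; the risk-averse schedules front-load the selling, which is exactly the E-cost/variance tradeoff the abstract is describing.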
Albanese, et al., Jan 2011, Coherent Global Market Simulations For Counterparty Credit Risk, here. CVA floating point hacks.
To explain how greater performance can be achieved by doing more calculations, let us recall a few traits of the silicon economics affecting microchip and board designs. In recent times, there was in fact a radical shift in this landscape.
It used to be that:
(i) Computing capabilities were limited by the ability of ALUs to execute floating point and integer arithmetics
(ii) Memory was expensive and a scarce resource
(iii) Most algorithms were single-threaded and parallelism was best brokered transparently by middleware layers dispatching jobs to large grid farms
(iv) Code was best written in native C++ optimized in such a way to speed up the execution of a great variety of bespoke algorithms.
Although these practices are still widespread in the transition period we are living through, the underlying technology has now shifted quite radically.
(a) Nowadays, it is relatively cheap to populate microchips with highly capable ALU cores. The 8-socket CPU boards of the emerging generation entail as many as 80 cores capable of hyperthreading in the case of Intel or 96 cores in the case of AMD. Even more extreme ALU counts are seen in the GPU space where the AMD Firepro GPUs have 1600 cores and nVidia Fermis have 512.
(b) Memory is relatively cheap and readily available up to terabyte scale, thus enabling single node technology for portfolio processing as a viable alternative to grid computing.
(c) The clock frequency and bandwidths of data paths are not keeping pace with the compute power of ALUs and the massive memory available, rendering the memory bottleneck tighter than ever within the bounds of cost effective designs.
(d) Vastly different microchip architectures have emerged, including SIMD multiprocessors with up to 16-32 data registers located in discrete GPU parts as in the nVidia Fermi and ATI FireStream, the multicore MIMD designs on CPU boards by Intel and AMD and the emerging MIMD-SIMD hybrid fusion architectures, the Intel Sandy Bridge and AMD Bulldozer.
(e) MIMD and SIMD designs are characterized by radically different threading models: SSE2/SSE3/AVX primitives rule with CPUs while the lightweight, no-frills threading models in CUDA/OpenCL are used for GPUs.
(f) Cache hierarchies for MIMD architectures are complex and involve up to 2 MB per core. GPUs instead are nearly cacheless except for a modest amount of shared memory located on individual SIMD microprocessors.
(g) On the programming language side we see the merit of a bifurcation away from catch-all C++ coding. On the one hand, the variety of architectures motivates a revival of interest in low-level optimization of basic building block algorithms. On the other hand, the complexity of multi-threaded orchestration in shared memory designs using large scale in-memory processing motivates the use of higher level languages. Features such as garbage collection, managed thread pools and support for service oriented architectures are in fact essential for complexity management.
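Points (b) and (c) above are the crux of the "more calculations can be faster" thesis: ALU throughput has outrun memory bandwidth, so recomputing a quantity can compete with fetching a precomputed one. A toy NumPy illustration of that lookup-versus-recompute trade; everything here is my own sketch, not the authors' code, and the timings will vary by machine:

```python
import time
import numpy as np

n = 20_000_000
x = np.random.rand(n).astype(np.float32)

# "Lookup" style: a precomputed sqrt table plus a gather (memory-bound).
table = np.sqrt(np.linspace(0.0, 1.0, 1 << 20, dtype=np.float32))
idx = (x * ((1 << 20) - 1)).astype(np.int64)

t0 = time.perf_counter()
via_table = table[idx]        # scattered reads through the memory system
t_lookup = time.perf_counter() - t0

t0 = time.perf_counter()
via_alu = np.sqrt(x)          # pure arithmetic over a streaming access pattern
t_compute = time.perf_counter() - t0

print(f"gather from table: {t_lookup:.3f}s   recompute sqrt: {t_compute:.3f}s")
```

On recent hardware the straight recomputation is usually at least as fast as the table walk, despite doing "more" arithmetic, which is the memory-bottleneck point the paper is making about CVA workloads.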
Nassim Taleb is a former trader who wrote a textbook on options and market making, and then became more philosophical in his best seller Fooled by Randomness, and now in The Black Swan. His big idea is that sometimes, unexpected things happen: countries dissolve into anarchy, wars start, unknown authors become famous. His secondary ideas are variations on this theme: that people, especially experts, are generally biased, overconfident, and rationalize past events so they appear deterministic. Stated baldly, these assertions are hardly novel but true enough, and one can argue about their relevance in various cases. As a highly popular presentation of ideas near to my interests and vocation, I think it is worth critically examining whether there is anything to his particular contribution to the literature on cognitive biases or social failures. My conclusion, in short, is no.
Burkhard Bilger, The New Yorker, Auto Correct, here.
What separates Levandowski from the nerds I knew is this: his wacky ideas tend to come true. “I only do cool shit,” he says. As a freshman at Berkeley, he launched an intranet service out of his basement that earned him fifty thousand dollars a year. As a sophomore, he won a national robotics competition with a machine made out of Legos that could sort Monopoly money—a fair analogy for what he’s been doing for Google lately. He was one of the principal architects of Street View and the Google Maps database, but those were just warmups. “The Wright Brothers era is over,” Levandowski assured me, as the Lexus took us across the Dumbarton Bridge. “This is more like Charles Lindbergh’s plane. And we’re trying to make it as robust and reliable as a 747.”
gothamist, An Interview With Steve Reich, Who Rewrote Radiohead, here.
A couple of years ago I saw a performance of “Music for 18 Musicians” at Carnegie Hall and you were one of the musicians. I think people have this image of a classical composer as being elevated above the musicians that perform their music. I’m not sure if they’re aware of Beethoven or Bach or Brahms or Bartok or Copland or a lot of other people, I think if they’re ignorant about all those people, they don’t know anything about anything.
John Mellor-Crummey, Rice, Current Research, here.
It is increasingly difficult for application developers writing complex scientific programs to attain a significant fraction of peak performance on modern microprocessor-based computer systems. Largely, this problem stems from the difficulty of expressing the application in a form that can effectively exploit the high degree of instruction-level parallelism and deep memory hierarchies present in these systems. Furthermore, the complexity of these systems makes it difficult to pinpoint performance bottlenecks.
To address this issue, we have developed HPCToolkit — a novel suite of multi-platform tools for performance analysis of sequential and parallel programs.
Wikipedia, History of blogging, here. Guessing that nessus was the first guy to get rolled by internet trolls and that really optimistic Indian guy, Palith Balakrishnabati, still can’t drive 55? So ber is responsible for Matt Drudge and Robert Scoble? That seems like a heavy load to bear.
Usenet was the primary serial medium included in the original definition of the Internet. It featured the moderated newsgroup, which allowed all postings in a newsgroup to be under the control of an individual or small group. Most such newsgroups were simply moderated discussion forums; however, in late 1983, mod.ber was created, named after and managed by Brian E. Redman; he and a few associates regularly posted summaries of interesting postings and threads taking place elsewhere on the net. Another moderated newsgroup, rec.humor.funny, started on August 7, 1987, and remained active as of 2011.
Rob Beschizza, boing boing, Baldrick knighted, here.
The BBC: “Blackadder star Sir Tony Robinson has received his knighthood from Prince William in a ceremony at Buckingham Palace.”
Sophie Pinkham, n+1, Scandinavian Style, here. Six volumes of Scandinavian hyperscrupulosity, who says no?
Karl Ove Knausgaard’s 3,500-page, six-volume magnum opus, My Struggle, is made from the material of the author’s daily life. The book has been described as an autobiographical novel, sometimes with “novel” in scare quotes, to indicate its excessive truthfulness. Like the author, the narrator is called Karl Ove Knausgaard, and, like the author, he is a Norwegian writer who lives in Stockholm with his second wife, the poet Linda Bostrom. As Knausgaard has explained in many interviews, his intention in writing My Struggle was to be absolutely honest, no matter how much shame this might cause.
Matt Levine, Bloomberg, Finance Ph.D.s Are Pretty Good at Finance If They Do Say So Themselves, here. Shades of Mr. Snipes in Passenger 57. Bargs killed it last night for the Knicks, does he have a Ph.D.?
And so the field is a parade of papers that identify factors reliably associated with statistically significant outperformance. But last month Ranadeb Chaudhuri, Zoran Ivkovich, Joshua Pollet and Charles Trzcinka won. They wrote the most perfect possible finance paper, so everyone else can stop. Well, actually, they can’t, because Chaudhuri et al.’s conclusion is “the world needs more finance papers.” Sort of. Here (via Tyler Cowen) is the abstract:
Several hundred individuals who hold a Ph.D. in economics, finance, or other fields work for institutional money management companies. The gross performance of domestic equity investment products managed by individuals with a Ph.D. (Ph.D. products) is superior to the performance of non-Ph.D. products matched by objective, size, and past performance for one-year returns, Sharpe Ratios, alphas, information ratios, and the manipulation-proof measure MPPM. Fees for Ph.D. products are lower than those for non-Ph.D. products. Investment flows to Ph.D. products substantially exceed the flows to the matched non-Ph.D. products. Ph.D.s’ publications in leading economics and finance journals further enhance the performance gap.
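For the curious, two of the measures the abstract names are one-liners. A minimal sketch of the Sharpe ratio and the information ratio on synthetic monthly returns; the annualization convention and every number below are my assumptions, and MPPM and factor alphas are omitted:

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0, periods=12):
    """Annualized Sharpe ratio from periodic (here monthly) returns."""
    excess = np.asarray(returns) - rf
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

def information_ratio(returns, benchmark, periods=12):
    """Annualized information ratio: active return over tracking error."""
    active = np.asarray(returns) - np.asarray(benchmark)
    return np.sqrt(periods) * active.mean() / active.std(ddof=1)

rng = np.random.default_rng(0)
bench = rng.normal(0.006, 0.04, 36)          # three years of monthly benchmark returns
fund = bench + rng.normal(0.002, 0.01, 36)   # a fund with a small, noisy edge
print(f"Sharpe: {sharpe_ratio(fund):.2f}   IR: {information_ratio(fund, bench):.2f}")
```

The matched-pairs design in the paper amounts to computing these on Ph.D. and non-Ph.D. products with similar objective, size, and past performance, then comparing the distributions.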
Colossal, This Programmable 6,000-Part Drawing Boy Automata is Arguably the First Computer and It Was Built 240 Years Ago, here.
Designed and built in the late 1770s by Swiss-born watchmaker Pierre Jaquet-Droz, with help from his son Henri-Louis and Jean-Frédéric Leschot, this incredible little robot is called simply The Writer. Jaquet-Droz was one of the greatest automata designers to ever live and The Writer is considered his pièce de résistance. On the outside the device is deceptively simple: a small, barefoot boy perched at a wooden desk holding a quill, easily mistaken for a toy doll. But crammed inside is an engineering marvel: 6,000 custom-made components work in concert to create a fully self-contained programmable writing machine that some consider to be the oldest example of a computer.
In this clip from BBC Four’s documentary Mechanical Marvels: Clockwork Dreams, hosted by Professor Simon Schaffer, we go behind the scenes to learn just how this remarkably complex 240-year-old device was designed and constructed. The entire clip is well worth a watch; in fact, here’s another bit about Merlin’s gorgeous silver swan automaton:
Daniel Nadler, Institutional Investor, The Code-Free Movement Reaches Capital Markets, here.
Quants are hard to recruit, expensive to compensate and often require days to produce static, individual reports – few of which are integrated with one another.
Capital Market Quants eat pungent spicy food for lunch, talk incessantly about shitty old pre-CGI Sci-Fi movies, and then drone on and on about their latest Fantasy Basketball strategy or something they read in a book until you want to shoot yourself. Then they tell you: the code has bugs, and that premature optimization is the root of all evil, and that the production machine is inadequate, and the really smart guys work at the other place. Kind of like Cheech and Chong’s Mexican Americans song, here. Oh, Patrick Beverley played last night.