Cesar Torres, Ars, How swords, track changes, and Amazon led to The Mongoliad: Book Two, here. Between these guys and Tears of Steel, I am just catching on to some kind of Open Source thing, but I am not sure what it is.
Stephenson’s project recruited other established authors like Greg Bear, Erik Bear, Mark Teppo, and Nicole Galland. New sci-fi voices—like Joseph Brassey and Cooper Moo—were invited to create a community that could generate an epic story as a finished product.
Tyler Cowen, Marginal Revolution, Introducing MRUniversity, here.
We think education should be better, cheaper, and easier to access. So we decided to take matters into our own hands and create a new online education platform toward those ends. We have decided to do more to communicate our personal vision of economics to you and to the broader world.
Steven Frank, IEEE Spectrum, Review: MITx’s Online Circuit Design and Analysis Course, here.
MIT’s Anant Agarwal has a thing for chain saws. The professor of electrical engineering and computer science said so himself as he welcomed his vast horde of online students. And it was a horde: More than 150 000 of us from dozens of countries had signed up for MIT’s inaugural MOOC, or massively open online course, which began in early March and ended in June. The course, dubbed 6.002x, was an adaptation of MIT’s undergraduate class in circuit design and analysis and was part of the university’s MITx initiative, which aims to offer anyone with an Internet connection access to a selection of its courses. Participants were lured by some powerful enticements: the prestige of MIT, the opportunity to learn from a renowned professor, and the price—free. Although MIT has made course materials publicly available for over a decade, this is its first online class involving scheduled instruction, supervision, and testing. Only participants who formally signed up for the 6.002x course can earn a credential certifying successful completion; MIT has not announced when the course will be offered again.
Sally Wiener Grotta, Daniel Grotta, IEEE Spectrum, Self-Publishing 101, here.
Journalist A.J. Liebling once famously noted, “Freedom of the press is guaranteed only to those who own one.” But with the advent of electronic publishing—and ancillary technologies such as online stores, e-readers, and print-on-demand (POD) systems—anyone with a computer and an Internet connection can be a publisher. For better or worse, whether they’re barely literate neophytes or New York Times best-selling authors, writers can publish whatever they want without having to get approval from an agent, a selection committee, or a peer review process. Online retailers like Amazon.com, Apple, and Barnes & Noble also pay a significantly higher percentage of per-copy revenue than traditional publishing houses, giving writers the potential to earn much more money. So it’s no wonder that many authors (including us) are bypassing conventional publishers.
True, e-publishing means that authors are responsible for all stages of a book’s production, including graphic design and marketing, but this isn’t that much different from the trend authors have experienced with many print book publishers in recent years. For example, for our last three books, our publishers (Peachpit Press and John Wiley & Sons) provided a template for our word processing program to allow us to format our manuscripts so they could be sent directly to a printing house. We were expected to do (or pay someone else to do) our own editing, copyediting, and proofreading, as well as secure permissions to quote copyrighted material. The publishers’ prepublication publicity consisted solely of sending out press releases and review copies.
Bissell, NYT Sunday Book Review, Neal Stephenson’s Novel of Computer Viruses and Welsh Terrorists, here. Like a big slab of Buttah.
Let us say that novelists are like unannounced visitors. While Norman Mailer and Saul Bellow pound manfully on the door, Jonathan Franzen and Zadie Smith knock politely, little preparing you for the emotional ferociousness with which they plan on making themselves at home. Neal Stephenson, on the other hand, shows up smelling vaguely of weed, with a bunch of suitcases. Maybe he can crash for a couple of days? Two weeks later he is still there. And you cannot get rid of him. Not because he is unpleasant but because he is so interesting. Then one morning you wake up and find him gone. You are relieved, a little, but you also miss him. And you wish he’d left behind whatever it was he was smoking, because anything that allows a human being to write six 1,000-page novels in 12 years is worth the health and imprisonment risk.
High Frequency Traders, Packet Processing in High Frequency Trading, here.
HFT: The low-latency space is moving very quickly. How is it possible to ensure that solutions are future-proof?
Eric: Our technology has a number of benefits that can be used to ensure we are always up to date. First of all, we have a portable solution which runs on all the industry leading multi-core platforms. As those platforms become quicker and more powerful we will automatically benefit. There is also a trend in the industry for increasing numbers of cores. When we have more cores we will be able to distribute the packets to a larger number of cores, which means that within a latency budget we will be able to manage a larger bandwidth. This means that when the number of transactions increases we will be able to allocate them to more cores. Our technology can be implemented on a single processor board but also be extended over several boards if you need more cores to process the packets. Regarding networking features, we will be able to use more processing capability to implement more sophisticated features while also keeping the latency budget at the same level. Thanks to technology improvements we will have more processor cycles and we will be able to do more. We are very well positioned to benefit from processor technology improvement and fulfil future HFT requirements.
Funny; solutions that are ensured to be future-proof. This is why we can’t have nice things and also explains why there are Credit Default Swaps and the Bellagio.
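Snark aside, the fan-out Eric is describing is easy enough to sketch. A toy version in Python (the worker count, the hash-dispatch rule, and the empty handle() are stand-ins invented for the sketch, not anything from their product):

```python
import queue, threading

NUM_WORKERS = 4                                    # stand-in for "more cores"
queues = [queue.Queue() for _ in range(NUM_WORKERS)]

def handle(packet):
    pass                                           # stand-in for real per-packet processing

def worker(q):
    while True:
        packet = q.get()
        handle(packet)
        q.task_done()

for q in queues:
    threading.Thread(target=worker, args=(q,), daemon=True).start()

# Dispatch: hash each flow onto a worker, so adding workers raises the
# bandwidth that can be handled inside the same per-packet latency budget.
for flow_id in range(100):
    queues[hash(flow_id) % NUM_WORKERS].put(("payload", flow_id))

for q in queues:
    q.join()                                       # wait for the backlog to drain
```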
DeLong, MARTIN WOLF: PANIC HAS BECOME ALL TOO RATIONAL, here.
Suppose that in June 2007 you had been told that the UK 10-year bond would be yielding 1.54 per cent, the US Treasury 10-year 1.47 per cent and the German 10-year 1.17 per cent on June 1 2012. Suppose, too, you had been told that official short rates varied from zero in the US and Japan to 1 per cent in the eurozone. What would you think? You would think the world economy was in a depression. You would have been wrong if you had meant something like the 1930s. But you would have been right about the forces at work: the west is in a contained depression; worse, forces for another downswing are building, above all in the eurozone. Meanwhile, policy makers are making huge errors.
Naked Capitalism, The Real Bombshell in the MF Global Post Mortem, here.
But the real stunner comes early in the report, and the media write ups thus far seem to have missed it completely. Recall that the trade that felled MF Global was one directed by Corzine, and has been depicted as a repo-to-maturity trade, in which the maturity of the repo matched that of the underlying asset exactly. That in turn allowed the trade to be treated as off balance sheet, which was helpful in presenting the firm’s results to ratings agencies and analysts.
The bet that commentators focused on was that the European governments would not default before the maturity of the short-term trades, and the transactions allegedly would have worked out had MF Global survived. (Note that press commentary has focused on an Italian bond). The problem has been widely described as one of short-term price moves, namely, that Corzine and other managers allegedly did not know that if the price of the maturing bonds it bought fell more than 5%, it would have to post more collateral, and that adverse price moves triggered the liquidity crisis.
It turns out this description of the trade isn’t accurate. It never was a real repo to maturity, as in maturity match funded externally. The funding was two days shorter than the maturity of the asset. But, no joke, MF Global dressed that up internally and somehow got accountants and regulators to buy off on this bogus characterization. And even worse, this scheme produced book profits at the expense of liquidity, the real scarce commodity at the firm:
Taleb, Prologue to his book due out Q4, here. Black Swan was a pretty good read. I expect no less from Antifragile.
FT on copulas. Johnny Cash gave it up for default correlation.
thedeal.com: CDS holders hindering GM debt restructuring
Infectious Greed: Two Contrarian Views on CDS
Fed Dallas: Debunking Derivatives Delirium
Geanakoplos: The Leverage Cycle. A unique discussion of the overall effect of having to post collateral in the mortgage and CMO markets, from an economic modeling context. Take a look at Geanakoplos’s Yale home page for more papers, some of which are written for lay-person consumption. Also remarkable that the Tobin Chair of Economics at Yale doesn’t come right out and say CDS did it, but he pretty much figuratively puts CDS at the scene of the crime and is checking whether CDS has a gun permit.
Assume you are approached for an opinion on the computational performance of a piece of software that computes the mark to market, risk, and P&L explanatories for a randomly selected 5 year vanilla single name credit default swap. Let’s say the software completes the computation in 1 second on a contemporary microprocessor in 2009. Assuming the computation output is accurate, correct, and complete, what benchmarks can you use to get a ballpark sense of how good that 1 second average elapsed time really is?
Let’s do a back-of-the-envelope estimate of all the floating point multiplies and adds required for this credit default swap processing on a single microprocessor core. Patterson gives you that a contemporary microprocessor takes 4 clocks for a floating point multiply and 200 clocks to go to DRAM (an L1 and L2 cache miss). Let’s assume we are working with a 65 nm Intel Pentium running at 3.6 GHz and that a floating point add on this core takes 2 clocks. So if my actual CDS computation is about half floating point adds and half fp multiplies, then in one second I get over a billion (1,000,000,000) operations, theoretically, to compute the required outputs. If I need to use fp divides, they are expensive at 20 clocks per divide, and exponentials (vectorized) cost anywhere from 10 to 30 clocks depending on the accuracy requirements. Just for round numbers let’s go with a billion fp ops per second as an average.
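To make that arithmetic concrete, here is the same estimate as a few lines of Python (the 3.6 GHz clock, 2-clock add, and 4-clock multiply are the assumed figures above, not measurements):

```python
# Back-of-the-envelope fp throughput, using the clock counts assumed above.
CLOCK_HZ = 3.6e9          # 65 nm Pentium-class core at 3.6 GHz (assumed)
CLOCKS_PER_FP_ADD = 2     # assumed
CLOCKS_PER_FP_MUL = 4     # per Patterson's figure quoted above

# Half adds, half multiplies -> 3 clocks per fp op on average.
avg_clocks_per_op = 0.5 * CLOCKS_PER_FP_ADD + 0.5 * CLOCKS_PER_FP_MUL
ops_per_second = CLOCK_HZ / avg_clocks_per_op

print(f"~{ops_per_second:,.0f} fp ops per second")  # ~1,200,000,000: "over a billion"
```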
At inception a 5 year OTC vanilla default swap has 20 premium payment dates, which are actually fixed to be IMM dates. The net present value of these scheduled premium payments is determined by discounting each by the risky discount factor determined by the cooked credit curve. The net present value of the other swap leg, the protection leg, is determined by integrating the product of the default arrival probability and the default payout amount over the term of the premium leg. Since the hazard rate is typically piecewise constant (and the recovery rate constant over the future of the swap), the integral can be closely approximated by a weighted summation of expected protection values at each of the premium cashflow dates, the 20 IMM dates for a 5 year swap. Assuming all fp is double precision, each of the 20 cash flow dates requires 8 bytes, so the entire NPV sum requires 160 bytes and certainly fits in the L1 cache (probably in the register file as well). So, let’s estimate that all the mtm, delta, gamma, and explanatory evaluations for the single vanilla credit default swap require 100 mtm evaluations. I don’t think it’s likely that there are even 30 separate mtm evaluations required, so 100 seems conservative. Thus 100 times we need to sum 20 discounted cashflows and 20 expected payoffs on default. Let’s assume the code has not been optimized, so you need 300 clocks for each valuation; times 100 evaluations, you need 30,000 clocks to do the valuations if the computation is hitting the L1 cache. Let’s assume we don’t actually hit the L1 cache all the time and performance on the evals requires an extra 90,000 clocks for memory waits, for a total of 120,000 clocks. That accounts for roughly 33 microseconds of compute time (certainly less than 120 microseconds, so the remaining estimates hold), so the balance of the time must be spent cooking curves.
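A toy version of that valuation, just to show where the 20-term sums come from. The flat hazard rate, flat riskless rate, 40% recovery, and 100 bp coupon are illustrative assumptions; a real implementation would discount off a cooked, piecewise-constant hazard curve and handle day counts and accrued premium:

```python
import math

def cds_npv(coupon=0.01, hazard=0.02, rate=0.03, recovery=0.40,
            n_periods=20, dt=0.25, notional=1.0):
    """Toy 5Y CDS mark-to-market: 20 quarterly premium dates, premium leg
    risky-discounted, protection leg approximated as a sum over the same
    dates (piecewise-constant hazard collapsed here to a single flat rate)."""
    premium_leg = 0.0
    protection_leg = 0.0
    for i in range(1, n_periods + 1):
        t_prev, t = (i - 1) * dt, i * dt
        surv_prev = math.exp(-hazard * t_prev)   # survival to previous date
        surv = math.exp(-hazard * t)             # survival to this date
        df = math.exp(-rate * t)                 # riskless discount factor
        premium_leg += coupon * dt * surv * df                        # paid only if alive
        protection_leg += (1.0 - recovery) * (surv_prev - surv) * df  # paid on default
    # NPV to the protection buyer: receive protection, pay the premium.
    return notional * (protection_leg - premium_leg)

print(cds_npv())  # a 100 bp coupon against a 2% hazard: small positive value
```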
Let’s assume this code has to cook the underlying credit curve once for each of the 100 evaluations above. Again it seems unlikely that the par curve and the various perturbed curves for computing the first and second divided differences, plus the scenario curves, will amount to even 30 cooked curves; we cost it at 100 cooked curves to be conservative. Let’s assume the bootstrap process has to fit five annual par spreads (although typically only 3 of the five are marked). Furthermore let’s assume that the fp operation count of fitting the 5Y spread is the same as for the 4Y, 3Y, 2Y, and 1Y spreads (although they obviously require fewer fp ops). Fixing the 5Y point involves some root finding algorithm for a fairly smooth and well behaved function. Presumably you can use Brent or even simple bisection at the cost of, say, 5 full mtm evaluations at 300 clocks apiece. These 1,500 clocks need to be executed at each of the terms 1Y (I know, not really), 2Y, 3Y, 4Y, and 5Y for a cost of 7,500 clocks. For 100 cooked curves that is 750,000 clocks. Let’s assume the L1 cache misses induce a factor of 10 memory wait state penalty, so the cost is 7,500,000 clocks.
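And a sketch of the “cooking” step itself, reusing the toy cds_npv above. Real bootstrapping fixes a piecewise-constant hazard segment by segment at each tenor, as described; a single flat hazard is enough here to show the root-finding cost we are counting clocks for. Plain bisection is used; Brent would converge in fewer evaluations:

```python
def bootstrap_flat_hazard(par_spread, lo=1e-6, hi=5.0, tol=1e-10):
    """Root-find the flat hazard rate at which the toy CDS above has zero NPV
    when its coupon equals the quoted par spread (plain bisection)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cds_npv(coupon=par_spread, hazard=mid) > 0.0:
            hi = mid   # protection leg too rich -> hazard too high
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# A (made-up) 100 bp 5Y quote implies a hazard of roughly spread / (1 - recovery).
print(bootstrap_flat_hazard(0.01))
```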
Now we are talking about using some clocks: the curve cooking plus the evaluation comes to 7,500,000 + 120,000, or 7,620,000 clocks. Out of the 3.6 billion clocks available per second we have managed to use less than 0.3% of them to retire the required operations, with tons of wasted clocks to buffer our estimations. Through this back-of-the-envelope estimation we can find about 3 milliseconds of work to complete the whole computation end to end on a contemporary microprocessor. So that one second elapsed time performance we started off evaluating isn’t looking so much like A-list performance. It’s missing normal performance by roughly a factor of 300. It appears to perform like competitive code running on a microprocessor from the year Titanic won the Oscar for best movie (1997).
BTW, to eval 100 OTC trades – full mtm, risk, and P&L explanatories on the same credit curve – looks like about 20 milliseconds using these estimates. The ballpark time required to run mtm, risk, and explanatories on all the non-terminated credit default swaps existing worldwide today: assume 50,000 different curves and 10,000,000 non-terminated CDS, then it’s 50,000 × 7.5 ms + 10,000,000 × 0.12 ms, about 26.25 minutes. Let’s call it an hour, because there are some OTC CDS with terms greater than 5Y and we need to leave some more buffer for general programmer apathy/cluelessness. But it would take a code rewrite; you’d have to get one of those 65 nm Intel microprocessors, all the trades from DTCC, and all the marks from Markit. On the other hand you could run it on your home computer after work, before dinner. And if you cannot afford a new computer but you do all the other stuff on your old PC clunker, I’m thinking 2.5 hours and you’re done with the world’s overnight credit batch in time to catch the end of South Park with the kids. I’m totally serial.
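For what it’s worth, the arithmetic behind those last two figures, using the deliberately generous per-unit costs quoted above (7.5 ms per cooked curve family, 0.12 ms per trade) rather than the corrected ones:

```python
# Totals from the per-unit costs quoted in the text (generous, uncorrected).
MS_PER_CURVE_FAMILY = 7.5   # cook par + perturbed + scenario curves for one name
MS_PER_TRADE_EVAL = 0.12    # mtm, risk, and P&L explanatories for one CDS

# 100 trades on a single curve: about 20 ms.
print(1 * MS_PER_CURVE_FAMILY + 100 * MS_PER_TRADE_EVAL, "ms")    # 19.5 ms

# The worldwide book, per the assumptions above.
n_curves, n_trades = 50_000, 10_000_000
total_minutes = (n_curves * MS_PER_CURVE_FAMILY + n_trades * MS_PER_TRADE_EVAL) / 60_000
print(total_minutes, "minutes")                                   # 26.25 minutes
```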
Active Sources for iPhone reading on the train
Blankfein’s speech to the Council of Institutional Investors spring conference in Washington, here.
The 10 Principles are ready and published in the FT, here. Yves Smith says it is a must read and Felix Salmon says it is a listicle of utter genius. I’m telling you NTT is a Peter Sellers character with a copy of Copleston next to the keyboard, a stochastic calculus class in the past, and a math PhD diploma on the wall. The screenplay almost writes itself.
Lots going on. Standard CDS contracts are changing so fast that by the time folks make up their minds whether the old CDS contracts did it or didn’t do it, there will be nothing trading that resembles the old CDS contracts. 100/500 will kick in for new CDS in North America towards the end of March. The pathway to a central counterparty via NYSE/Euronext, ICE, and/or CME is being cleared for CDS by the SEC. Just as the clearinghouse gets set up for CDS, the observation is made by Stanford Prof. Duffie that setting an exchange up for the $27 trillion CDS market and not for the $458 trillion Rates market is a little weird. How much longer before the new quarterly rolled 5Y CDS contract gets a CUSIP and requires a lookup in the enterprise security database?
Portfolio.com runs a piece by Salmon arguing that the Gaussian Copula killed Wall Street. David Li must be thrilled with that line of investigation even with Falkenblog mounting a defense. Jonathan Jarvis/Vimeo runs a Credit Crisis video that lots of people watch.
Front-to-Back default swap analysis is interesting because default swaps are comparatively new securities, with the first contracts printed in the mid-1990s. By 2008, the worldwide OTC default swap market had grown to become a $60 trillion market offering, among other things, a simple, inexpensive way to short corporate and sovereign credit that did not exist previously.
In addition to the high volume trading, there is a lot of contemporary action in credit derivatives. In 2002, Warren Buffett famously labeled derivatives “financial weapons of mass destruction,” a couple of years before disclosing that Berkshire Hathaway carried billions of dollars of them on its books. Reuters reporter James B. Kelleher headlines “Buffett’s ‘time bomb’ goes off on Wall Street,” explaining that “On Main Street, insurance products protect people from the effects of catastrophes. But on Wall Street, specialized insurance known as credit default swaps are turning a bad situation into a catastrophe.” In May 2008, the ratings agency Moody’s issued a warning that counterparty risk in the credit default swap market posed a greater threat to banks and dealers than other OTC derivatives markets such as interest rate swaps. So we conclude that credit default swaps are more dangerous than all the other derivatives. We are further told that default swaps are a nefarious tool used by short sellers; an obscure unregulated financial world created in 1997 by a cell deep in a bulge bracket bank; a financial Ebola virus threatening to infect Main Street, yet the Street just keeps on printing them. I suppose you could sum up the recent action as terrifyingly dangerous yet at the same time oddly compelling and inevitable.