Category Archives: Talk
London Quant Group, Chairman’s Blog, Reflections on the Autumn Seminar – September 2013, here.
The LQG Autumn Seminar was held at Pembroke College, Oxford over 2.5 days in early September and, on the basis of the feedback we received, was generally regarded as awesome.
Thirteen speakers covered a variety of subjects ranging from differing views on what’s actually going on with minimum volatility and low beta investing, through new ideas on risk parity and fundamental indexation, to high frequency analysis and even “psychic returns” in investment. As usual the debate was sometimes intense, and as usual some speakers survived better than others…
Gil Tene, Infoq, How NOT to Measure Latency, here. Targets Azul GC ultimately, but interesting.
Gil Tene discusses some common pitfalls encountered in measuring and characterizing latency, demonstrating and discussing some false assumptions and measurement techniques that lead to dramatically incorrect reporting results, and covers simple ways to sanity check and correct these situations.
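One pitfall the talk dwells on is coordinated omission: if a load generator waits for each response before sending the next request, a long stall shows up as a single slow sample instead of being charged to every request that should have been issued during it. Here is a minimal sketch of the effect (my own illustration, not code from the talk; service_time() and the 1 ms intended interval are made up):

import random

def service_time():
    # Hypothetical service: usually 1 ms, with a rare 1-second stall.
    return 1.0 if random.random() > 0.0001 else 1000.0

def naive_samples(n):
    # Back-to-back measurement: each stall contributes one slow sample.
    return [service_time() for _ in range(n)]

def corrected_samples(n, interval_ms=1.0):
    # Constant-rate measurement: requests scheduled every interval_ms also
    # queue behind a stall, so the stall is charged to each request it delayed.
    samples = []
    for _ in range(n):
        t = service_time()
        samples.append(t)
        while t > interval_ms:  # back-fill the requests the stall suppressed
            t -= interval_ms
            samples.append(t)
    return samples

def p999(xs):
    return sorted(xs)[int(len(xs) * 0.999)]

random.seed(1)
print("naive p99.9:    ", p999(naive_samples(100_000)))      # looks like ~1 ms
print("corrected p99.9:", p999(corrected_samples(100_000)))  # hundreds of ms

The naive percentile looks clean because the stalls are undersampled; the corrected one reflects what a client arriving at a steady rate would actually experience.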
The Trading Show, 3-4 Dec 2013, NY, here.
The Trading Show New York brings together the entire spectrum of the quant and electronic trading community.
Confirmed participants include:
Daniel Nadler, CEO, Kensho Finance
Manoj Narang, CEO, Tradeworx
Marcos Lopez de Prado, Head of Quant Trading & Research, HETCO
Erik Gordon, CTO, Trillium Trading
Attilio Meucci, Chief Risk Officer, KKR
Danny Vinik, BI, Larry Summers Gave An Amazing Speech Explaining The Fundamental Economic Problem Of Our Time, here.
“Imagine a situation where natural and equilibrium interest rates have fallen significantly below zero,” Summers said. “Then conventional macroeconomic thinking leaves us in a very serious problem because we all seem to agree that whereas you can keep the federal funds rate at a low level forever, it’s much harder to do extraordinary measures beyond that forever, but the underlying problem may be there forever.”
There are a couple of other ways we could attack this.
– The Fed could allow for greater inflation, thus incentivizing people who are hoarding money to spend now.
– We could also move to a cashless society where all money is electronic. This would make it impossible to hoard cash outside the bank, allowing the Fed to cut interest rates to below zero, spurring people to spend more.
Rob Beschizza, boing boing, Robot will beat you at Rock Paper Scissors even faster now, here. 100% win ratio. You knew after chess fell to the robots that something like this was on the way. Probably good at calling coin flips as well.
It uses high-speed recognition and reaction, rather than prediction.
Technically, the robot cheats because it reacts extremely quickly to what the human hand is doing rather than making a premeditated simultaneous action as the rules state.
Taking just one millisecond (ms) – a thousandth of a second – to recognise what shape the human hand is making, it then chooses a winning move and reacts at high speed.
Version one completed its shape 20ms after the human hand; version two finishes almost simultaneously.
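The decision logic itself is trivial once the shape is known; everything hard lives in the millisecond vision step. A toy sketch of the react-to-win rule (recognize_hand() is a hypothetical stand-in for the robot’s high-speed recognizer):

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def react(recognize_hand):
    # recognize_hand() is assumed to return "rock", "paper" or "scissors"
    # about 1 ms after the human starts forming the shape; answering with
    # BEATS[...] then wins every round: reaction, not prediction.
    return BEATS[recognize_hand()]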
Numerix, Blog, Advanced OIS Discounting – Building Proxy OIS Curves When OIS Markets are Illiquid or Nonexistent, here.
The G5 currencies (USD, EUR, GBP, JPY and CHF), along with a few others, have well-developed Overnight Index Swap (OIS) markets, enabling practitioners to construct OIS curves which can then be used to discount derivative cash flows collateralized in that currency. The OIS curve is also used to strip projection curves for the different LIBOR tenors from quotes of collateralized vanilla swaps.
However, many currencies do not have Overnight Index Swap markets, or the OIS markets are very illiquid. Constructing an OIS curve in these currencies is a much more difficult exercise. How can practitioners use vanilla swap quotes in the target currency plus cross-currency basis swaps to simultaneously strip both the implied OIS discounting curve and the projection curve? And what if the vanilla swap market is illiquid at the tenor you need but liquid at other tenors?
Join Numerix on Wednesday, November 6th at 10:00am EST as featured speaker Dr. Ion Mihai, Quantitative Analyst at Numerix, discusses how to build proxy OIS curves from available market information in currencies where the Overnight Index Swap market is not well developed. Dr. Mihai will discuss:
- OIS discounting basics: review of the standard curve stripping approach
- What if there is no OIS curve?
- Simultaneous calibration of discounting and projection curves
- Assumptions behind the curve stripping approaches
- Examples
Attendance is complimentary; registration is required. Space is limited, reserve your seat today!
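As background for the first bullet, here is a minimal sketch of standard single-curve OIS stripping, under toy assumptions (annual fixed legs, unit notional, consecutive yearly pillars, made-up par rates, and none of the real day-count or payment conventions):

def bootstrap_ois(par_rates):
    # par_rates: {tenor_years: par OIS rate}, consecutive yearly tenors.
    # A par swap reprices to zero: rate * sum_{i<=n} P(0,i) = 1 - P(0,n),
    # so each new quote pins down exactly one new discount factor P(0,n).
    dfs = {}
    annuity = 0.0
    for n in sorted(par_rates):
        r = par_rates[n]
        df = (1.0 - r * annuity) / (1.0 + r)
        dfs[n] = df
        annuity += df
    return dfs

print(bootstrap_ois({1: 0.0030, 2: 0.0045, 3: 0.0065, 4: 0.0085, 5: 0.0105}))

The webinar’s harder case is when no such OIS quotes exist: the discount factors must then be implied jointly with the projection curve from vanilla and cross-currency basis swap quotes, turning this one-unknown-per-quote recursion into a simultaneous calibration.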
Gelman, Stat Modeling, On Blogging, here. Like the guy in the comments who says Clive James drags on, nice work.
The necessary conceit of the essayist must be that in writing down what is obvious to him he is not wasting his reader’s time. The value of what he does will depend on the quality of his perception, not on the length of his manuscript. Too many dull books about literature would have been tolerably long essays; too many dull long essays would have been reasonably interesting short ones; too many short essays should have been letters to the editor. If the essayist has a literary personality his essay will add up to something all of a piece. If he has not, he may write fancily titled books until doomsday and do no good. Most of the criticism that matters at all has been written in essay form. This fact is no great mystery: what there is to say about literature is very important, but there just isn’t all that much of it. Literature says most things itself, when it is allowed to.
In this talk we will introduce the particle method and show how it solves a wide variety of smile calibration problems:
- calibration of the local volatility model with stochastic interest rates
- calibration of stochastic local volatility models, possibly with stochastic interest rates and stochastic dividend yield
- calibration of multi-asset local volatility-local correlation models to the smile of a basket, possibly with stochastic volatility, stochastic interest rates, and stochastic dividend yields
- calibration of path-dependent volatility models and path-dependent correlation models.
The particle method is a Monte Carlo method where the simulated paths interact with each other to ensure that a given market smile is fitted. PDE methods typically do not work for these high-dimensional models. The particle method is not only the first available exact simulation-based method. It is also robust, easy to implement, and fast (it is as fast as a standard Monte Carlo algorithm), as many numerical examples will show. As of today, it is the most powerful tool for solving smile calibration problems. Icing on the cake for those who like maths: there are nice mathematics behind the scenes, namely the theory of McKean stochastic differential equations and the propagation of chaos.
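To make that concrete, here is a heavily simplified sketch of the particle method applied to one of the listed problems, calibrating a stochastic local volatility model. Everything specific here is an illustrative assumption rather than the speaker’s algorithm: a Heston-style variance process, a Gaussian kernel with Silverman bandwidth, and toy parameters. The idea carried over from the description above is the interaction: at each step the leverage l(t,S) is chosen so that l(t,S)^2 E[V_t | S_t = S] matches the Dupire local variance, with the conditional expectation estimated on the particle cloud itself.

import numpy as np

def particle_slv(sigma_loc, n=2000, steps=50, T=1.0, s0=100.0,
                 kappa=1.0, theta=0.04, xi=0.5, v0=0.04, seed=0):
    # sigma_loc(t, S): Dupire local volatility, assumed already stripped
    # from the market smile.
    dt = T / steps
    rng = np.random.default_rng(seed)
    S = np.full(n, s0)
    V = np.full(n, v0)
    for k in range(steps):
        t = k * dt
        # Kernel regression of V on S over the particle cloud: E[V | S_t].
        # (+1e-8 guards the degenerate first step where all spots coincide.)
        h = 1.06 * S.std() * n ** -0.2 + 1e-8            # Silverman bandwidth
        w = np.exp(-0.5 * ((S[:, None] - S[None, :]) / h) ** 2)
        cond_var = (w @ V) / w.sum(axis=1)
        # Choose leverage so the model's conditional variance hits Dupire's.
        lev = sigma_loc(t, S) / np.sqrt(np.maximum(cond_var, 1e-12))
        dW = rng.standard_normal(n) * np.sqrt(dt)
        dZ = rng.standard_normal(n) * np.sqrt(dt)
        S = S * np.exp(lev * np.sqrt(V) * dW - 0.5 * lev ** 2 * V * dt)
        V = np.maximum(V + kappa * (theta - V) * dt + xi * np.sqrt(V) * dZ, 0.0)
    return S

# Sanity check: with a flat 20% local vol, the calibrated mixture should
# approximately reproduce lognormal terminal spots.
spots = particle_slv(lambda t, S: 0.20 + 0.0 * S)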
Felix Salmon, Reuters, Whither bond returns? here.
El-Erian explains that the consistent and gratifying numbers posted by fixed-income portfolios were the result of a “virtuous circle”, where four different drivers all fed each other and helped push returns upwards:
- A secular fall in interest rates;
- A consistently negative correlation between fixed-income returns and equity returns, over a six-month time horizon;
- Monster flows into the asset class;
- Direct support from central banks.
Now, however, those four drivers are coming to an end. Interest rates are at zero; they can’t fall any lower. Over the long term, they have nowhere to go but up. Total assets in fixed-income portfolios rose from $4.7 trillion in 1998 to $12.1 trillion at the beginning of 2013 — but now flows have turned negative, and investors are pulling their money out of bonds. And while the Fed is still spending some $85 billion a month buying bonds in the secondary market, that isn’t going to last forever: the taper is coming, sooner or later.
Erik Demaine, Presburger Award Talk, here.
Gelman, Stat Modeling, Bayes Related, here. The second book is published through GitHub.
The second book is Probabilistic Programming and Bayesian Methods for Hackers, by Cameron Davidson-Pilon with contributions from many others. This book is a bit more polished and less conversational (which is good in some ways and bad in others; overall it’s a plus for book #2 to differ in some ways from book #1, so you can get something out of reading each of them), with more of a focus on business-analytics sorts of problems. I didn’t read this book in detail either, but from what I saw, I like it too!
WINE 2013: The 9th Conference on Web and Internet Economics, here.
Over the past decade, research in theoretical computer science, artificial intelligence, and microeconomics has joined forces to tackle problems involving incentives and computation. These problems are of particular importance in application areas like the Web and the Internet that involve large and diverse populations. The Conference on Web and Internet Economics (WINE, formerly Workshop on Internet & Network Economics) is an interdisciplinary forum for the exchange of ideas and results on incentives and computation arising from these various fields.
Co-located with The 10th Workshop on Algorithms and Models of the Web Graph (WAW 2013), WINE 2013 will be held from December 11, 2013 through December 14, 2013 in Cambridge, MA, USA. The conference will feature invited speakers, tutorials, paper presentations, and a poster session. Invited talks and accepted papers and posters will be presented from December 12, 2013 through December 14, 2013; a tutorial program will take place on December 11, 2013.
Quentin Hardy, NYT, Stark Earnings for Intel Reflect Its Changing Market, here. AVX2 and the multiple FPUs per core in Haswell could be the end of the road for general purpose commodity floating point architecture improvements for a long time. The server manufacturers are falling behind and the demand for Intel chips is slipping. You will still get some improvements from x86 feature size reductions, but the architecture doesn’t look like it is going to change much.
There used to be a world of Cray and Amdahl, and then Steve Wallach and Burton Smith, where you had to hook up to The Guy who could deliver competitive floating point cycles. Sometime, call it ten years ago, that world where DARPA and NSF underwrote your floating point performance got replaced with a world where Joe and Suzy Sixpack underwrote your commodity floating point performance. It’s been a great ride, and we should thank the Sixpacks, but it’s nearing the end. It is reasonably clear that the Sixpacks want to use their chip real estate for wireless, mobile, solid battery life, and retina displays, not vectorized ILP hacks connected to coherent multicore caches. So someone else is going to pay for the R&D. It’s a good bet DARPA and NSF will play a role, if only because it’s not like Navier-Stokes simulation codes are going to run themselves. But who is going to be The Guy who delivers exceptional floating point performance for the computation I care about in 2016? It’s going to be a small niche market without much of the upside Wallach and Smith were chasing back in the day. This is not going to be an “all boats rise” kind of paradigm. However, there should be niche low latency demand of similar size to Spread’s 500MM USD Chicago to NY fiber run.
Folks will eventually realize that 1. there are no commodity microprocessor incremental floating point performance improvements on the horizon, 2. their parallel code is terrible, and 3. GPUs don’t solve their computational problems. Maybe your Broker Dealer can be the new age Burton Smith for automated trading. That is almost certainly what the Dataflow FPGA boys sold to the London Whale, so we know the sales pitch works. Now all we need to do is to get the technology/architecture right so the customer doesn’t hemorrhage P&L in such a spectacular fashion right after they roll out to production. To put it in perspective, among the 10 greatest white elephants the Maginot Line only cost 5bn francs and took 10 years to build, whereas the London Whale’s supercomputer credit batch reportedly cost 7bn USD and took 3 years to assemble. At least the Germans had to go around the Maginot Line. The London Whale was flying blind pretty much right after the Dataflow Supercomputer Realtime Credit batch went into production.
For years, Intel executives scoffed at potential threats to its computer chip business from makers of less expensive chips for video games and mobile phones. The largest maker of chips, Intel continued to focus on putting those chips in personal computers, where they could be sold at a high profit margin.
That strategy appears to have run its course. The quality of mobile and gaming chips made by other companies has improved to a point where they run most of the world’s mobile phones and tablets. And partly because more people are turning to those mobile devices, PC sales are waning.
The move to mobile devices started to hurt Intel’s results in recent quarters, but the quarterly earnings that the company reported on Wednesday were particularly stark. Net income was $2 billion, or 39 cents a share, a drop of 29 percent from a year earlier. Revenue was $12.8 billion, down 5 percent.
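For scale on the Haswell remark above, a back-of-the-envelope peak-FLOP count (standard arithmetic, not from the article): AVX2 registers are 256 bits wide, so they hold 4 doubles, and Haswell has two FMA units per core, each retiring one fused multiply-add per lane per cycle.

# Peak double-precision throughput per Haswell core (back-of-the-envelope):
doubles_per_vector = 256 // 64   # AVX2: 256-bit registers / 64-bit doubles = 4
fma_units = 2                    # Haswell: two FMA ports per core
flops_per_fma = 2                # a fused multiply-add counts as 2 FLOPs
flops_per_cycle = doubles_per_vector * fma_units * flops_per_fma      # = 16
ghz, cores = 3.0, 4              # illustrative desktop part, not a spec sheet
print(flops_per_cycle * ghz * cores, "GFLOP/s peak")                  # 192.0

That 16 FLOPs/cycle/core is double Sandy Bridge’s 8, which is exactly the kind of one-off architectural jump the commentary above argues we should not count on repeating.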
Macro Man, about/faqs, here. Solid.
Q. Why do you write the blog?
A. A number of reasons, but we have found the discipline required to set down thoughts on a daily basis, subjected to public scrutiny, to be highly useful. It provides an archive of our thinking at any particular point in time, and enforces a quality control of ideas. The comments section has proven to be a valuable resource, both in terms of providing feedback to our own thinking and in the ideas that readers occasionally share: a few of the latter have ended up in our portfolios. Finally, it provides a forum for the occasional rant, which is always useful to let off steam.
Jeff Birnbaum, 60 East Technologies, AMPS, here. hat tip sp.
Join our CEO, Jeffrey M. Birnbaum, on June 10th, 2013, as he delves into the world of big data and high performance, covering the cutting edge of CPUs, storage, and networking.
Sam Ro and Rob Wile, BI, What in the World is Going On? here. My Treasury position needs some improvement.
“Something happened in the middle of May,” said investing god Jeff Gundlach as he began his latest webcast on the state of the global markets and the economy.
He was referring to how global interest rates quietly rallied and how the Japanese stock market fell spectacularly.
He notes that the magnitude of the interest rate rally isn’t unusual. Having said that, Gundlach believes rates will stay low thanks to a “put” by the Federal Reserve. Should rates rise, Gundlach believes the Fed would actually expand quantitative easing. This is because high interest rates would put too much pressure on the economy, and it would cause Federal interest expenses to become too onerous.
“I certainly think the Fed is going to reduce quantitative easing,” he said. But he attributes the reduction to the shrinking Federal deficit.
“I’m starting to like long-term Treasuries,” said Gundlach as he predicted the 10-year Treasury yield would end the year at 1.7%.
Scott Aaronson, Talk @ Microsoft Research, So You Think Quantum Computing Is Bunk? here. Really nice talk.
In this talk, I’ll take an unusual tack in explaining quantum computing to a broad audience. I’ll start by assuming, for the sake of argument, that scalable quantum computing is ‘too crazy to work’: i.e., that it must be impossible for some fundamental physical reason. I’ll then investigate the sorts of radical additions or changes to current physics that we seem forced to contemplate in order to justify such an assumption. I’ll point out the many cases where such changes seem ruled out by existing experiments, or by no-go theorems such as the Bell Inequality. I’ll also mention two recent no-go theorems for so-called ‘epistemic’ hidden-variable theories: one due to Pusey, Barrett, and Rudolph, the other to Bouland, Chua, Lowther, and myself. Finally, I’ll discuss my 2004 notion of a ‘Sure/Shor separator,’ as well as the BosonSampling proposal [A.-Arkhipov 2011] and its recent experimental realizations—which suggest one possible route to falsifying the Extended Church-Turing Thesis more directly than by building a universal quantum computer.
There appears to be a loophole in Goedel’s Incompleteness Theorem. It was vaguely perceived for a long time but not clearly identified. (Thus, Goedel believed informal arguments can answer any math question.) Closing this loophole does not seem obvious and involves Kolmogorov complexity. (This is unrelated to the well-studied complexity quantifications of the usual Goedel effects.) I consider extensions U of the universal partial recursive predicate (or, say, Peano Arithmetic). I prove that any U either leaves unresolved an n-bit input (statement) or contains nearly all information about the n-bit prefix of any r.e. real (which is n bits for some r.e. reals). I argue that creating significant information about a SPECIFIC math sequence is impossible regardless of the methods used. Similar problems and answers apply to other unsolvability results for tasks allowing non-unique solutions, e.g. non-recursive tilings.
D. Hilbert asked if formal arithmetic can be consistently extended to a complete theory. The question was somewhat vague since an obvious answer was “yes”: just add to the axioms of Peano Arithmetic (PA) a maximal consistent set, clearly existing albeit hard to find. K. Goedel formalized this question as the existence among such extensions of recursively enumerable ones and gave it a negative answer. Its mathematical essence is the lack of total recursive extensions of the universal partial recursive predicate.
This negative answer apparently was never accepted by Hilbert, and Goedel himself had reservations:
“Namely, it turns out that in the systematic establishment of the axioms of mathematics, new axioms, which do not follow by formal logic from those previously established, again and again become evident. It is not at all excluded by the negative results mentioned earlier that nevertheless every clearly posed mathematical yes-or-no question is solvable in this way. For it is just this becoming evident of more and more new axioms on the basis of the meaning of the primitive notions that a machine cannot imitate.” (Goedel. 1961 “The modern development …”)
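For orientation on the abstract’s Kolmogorov-complexity angle, a standard neighboring fact (Chaitin’s incompleteness theorem, stated here for context only; the abstract claims something stronger about the extensions U): for any sound, recursively axiomatized theory $T$ extending PA there is a constant $c_T$ such that

\[ T \nvdash K(x) > c_T \quad \text{for every string } x, \]

even though $K(x) > c_T$ is true of all but finitely many $x$ (only finitely many strings have complexity below a fixed bound). So $T$ certifies high Kolmogorov complexity for no specific string, echoing the abstract’s claim that no method can create significant information about a specific mathematical sequence.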