You are currently browsing the category archive for the ‘Question’ category.
Julia La Roche, Business Insider, Ken Langone Trashes The NYTimes And Destroys A CNBC Guest For Relying On It For Information, here. OK, I guess you can only cite the Times Book Review or David Carr on the Red Carpet at the Oscars without getting clocked in an argument on TeeVee. So, now it is 2012 and there is no newspaper of record? Maybe truth is like an exhaustible natural resource and the NYT’s limited supply ran out before the Business section could get some.
Here’s an excerpt from a transcript:
Ken Langone: “I have a question for you, sir. Do you have any specific facts to substantiate the opinion you just put on this television show?”
Nelson Lichtenstein: Well, yes. Walmart has been in the past, has repeatedly been cited.
Langone: No, I’m asking you, if — I’m asking you if you have any facts. Don’t cite somebody else. That’s hearsay.
Lichtenstein: Do I, no, I’m not an investigator of Walmart and I read the New York Times article and the New York Times article was pretty devastating.
Langone: Oh, the New York Times, that pillar of journalism that never has any opinions. Let me tell you what. In a selection of who is more likely to be telling the truth, Walmart or the Times, the Times doesn’t even get in the race. Well, let me say this. You’re saying your source is the New York Times? Shame on you as a professor…
Salmon, Sell-side research isn’t inside information, here.
So let’s let brokerages’ clients trade what they like, so long as they’re not trading on genuinely inside information from the company in question. If we’re going to be serious about the Volcker Rule, and prevent the brokerages from trading for their own account, the least we can do is let them monetize their analysts’ research as best they can.
HPC Wire, NVIDIA’s Bill Dally Talks 3D Chips and More at GTC, here. Keep an eye out for Dally interviews and talks.
The conversation turned to Stanford and what Dally views as the University’s most promising research. He mentioned a program where researchers are looking to take supercomputing interconnect technology and deliver it to commercial datacenters. Stanford University has worked with Cray on the Dragonfly interconnect for the Cascade system and began pitching the technology to Google and Facebook. According to him, they loved the technology because of its low latency. The Stanford team plans to test the design on a small FPGA cluster and if everything goes as planned, they’ll start looking for a commercial adopter.
The Register, Inside Nvidia’s GK110 monster GPU, here.
At the tail end of the GPU Technology Conference in San Jose this week, graphics chip juggernaut and compute wannabe Nvidia divulged the salient characteristics of the high-end “Kepler2” GK110 GPU chips that are going to be the foundation of the two largest supercomputers in the world and that are no doubt going to make their way into plenty of workstations and clusters in the next several years.
If you just want awesome graphics, then the dual-chip GTX 690 graphics card, which is based on the smaller “Kepler1” GK104 GPU chip, which Nvidia previewed back in March, is what you want. And if you want to do single-precision floating point math like mad, then the Tesla K10 coprocessor, also sporting two GK104 chips, is what you need to do your image processing, signal processing, seismic processing, or chemical modeling inside of server clusters.
NYT, Discord at Key JPMorgan Unit Is Faulted in Loss, here. Juicy but ultimately not moving the discovery forward. Where is the P&L on JPM’s counterparties? The loss can get larger or smaller, internal squabbles can spill into view, and there can be televised Congressional hearings but it’s all sort of missing the big picture. Who said “Follow the money?” They were smart.
John Scalzi, Epic troll, Straight White Male: The Lowest Difficulty Setting There Is, here. Found it via Brad DeLong, here. I cannot determine if DeLong knows about internet trolls in the same way he knows about, oh, say, Eurozone austerity. But he is DeLong and we’re not, so it all sort of evens out in the limit.
Dudes. Imagine life here in the US — or indeed, pretty much anywhere in the Western world — is a massive role playing game, like World of Warcraft except appallingly mundane, where most quests involve the acquisition of money, cell phones and donuts, although not always at the same time. Let’s call it The Real World. You have installed The Real World on your computer and are about to start playing, but first you go to the settings tab to bind your keys, fiddle with your defaults, and choose the difficulty setting for the game. Got it?
Okay: In the role playing game known as The Real World, “Straight White Male” is the lowest difficulty setting there is.
This means that the default behaviors for almost all the non-player characters in the game are easier on you than they would be otherwise. The default barriers for completions of quests are lower. Your leveling-up thresholds come more quickly. You automatically gain entry to some parts of the map that others have to work for. The game is easier to play, automatically, and when you need help, by default it’s easier to get.
NYT, The Facebook Offering: How It Compares, here. Killer IPO chart from 1980 to Facebook. Log valuation and first day pop look the best.
Blodget, Great Job, Wall Street and Facebook! The IPO Was Perfectly Priced, here.
This price level was ideal for almost everyone involved–with the exception of short-term traders who bought the stock only to instantly flip it. (And no one should cry for them).
With such a modest pop, Facebook and its selling shareholders did not leave tens or hundreds of millions (or even billions) of dollars on the table–an expensive mistake that most companies make.
Salmon, Much ado about nothing, here. Salmon is great and always will be; Felix TV … not feeling it so much.
This of course helps to point up just how silly all the Facebook IPO hype really was. Yes, Facebook is now a public company, but it’s still controlled by Mark Zuckerberg, and the IPO itself was a bit of a farce: delayed at the open, artificially supported by the underwriters at the close, and mainly serving to demonstrate that a brand-new company, which no one knows how to value, trading at a stratospheric valuation, can still somehow end up trading within an incredibly narrow range on enormous daily volume.
For that, you can probably thank the surprisingly old-fashioned book-building process, where a team of investment bankers took Facebook on a classic roadshow, complete with a slick and rather embarrassing video, all for a record-low fee of 1.1% of the proceeds. Still, never mind the low fee: the bankers were paid to do a job, and they did it, providing a rock-solid bid at exactly $38 per share and thereby sending a clear signal to any potential future client: we’re never going to let investors lose money on the first day. Frankly, there are worse ways of spending money to try to bolster your reputation.
Damodaran, Musings on Markets, Facebook and “Field of Dreams”: Hoodies, Hubris and Hoopla, here.
Bottom line: Facebook, in spite of its ubiquitous presence in our lives, is just one company and not a very big one (at least in terms of revenues and earnings) yet. The market will obsess about it tomorrow but it will move on very quickly to the next worry, fear or fad.
Roger McNamee, Roger and Mike’s Hypernet Blog, here.
In December 2010 I decided to open source my investment strategy, in the form of a slide show and presentation called Ten Hypotheses for Tech Investing. When you open source ideas, you expose them to improvement. I presented the Ten Hypotheses to many smart people, including executives at Google, Facebook, Twitter, Yelp, the New York Times, Wall Street Journal, NBC, and many others … and they shredded them. It was fantastic!!!
From that process emerged ideas like the Hypernet. While I characterize the Hypernet as a hypothesis, it already exists. We use it every day. In reality, the first four of the Ten Hypotheses are new interpretations of the present, rather than predictions about the future. This blog post will focus on those four hypotheses.
Which is better: McNamee@TED2011, here, or Weyland@TED2023, here? Octagon cage match? Oh, I don’t know, I guess I’ll take McNamee after I finish selling some 10Y US Treasuries in the 401K account so I can get some Facebook shares on Monday.
Business Insider, And Now JP Morgan’s $2 Billion Trading Loss Is Already $3 Billion (And Counting), here. Ok so the reported losses are growing.
Jamie Dimon said it could get worse… and it is.
The JP Morgan trading loss that was $2 billion four days ago is now $3 billion, report Nelson Schwartz and Jessica Silver-Greenberg in the New York Times.
Because every hedge fund in the world knows JP Morgan is stuck in a position so big that it can’t unwind it… and they’re betting against it.
Zerohedge, Jamie Dimon “Invited” To Testify Before Senate, here. And there will be compelling TV on its way.
Update: JPMORGAN SAYS DIMON TO AGREE TO TESTIFY TO SENATE. Ummmm, there was an option?
As everyone (or at least Zero Hedge) long expected, JPM’s prop trading debacle just got political and senators are about to demonstrate to the world just how little they understand about modern IG9-tranche pair trades. Expect to hear much more about JPM’s “shitty” prop deal.
Zerohedge, So How Are JPM’s Prop “Counterparties” Faring? here. But Bluecrest and Blue Mountain are not reporting like they are Party B. There are reports that Boaz Weinstein/Saba is Party B but no actual P&L numbers being whispered. Odd?
Now one thing we know is that when it comes to reporting one’s results to an aggregator: when you have a profit you never under-represent it. And in this special case, since the funds are likely eager to recruit more like-minded hedge funds to their side of the trade, the best way to do it is by showing profits.
Which, for the early part of May, when the bulk of the JPM losses took place, are oddly missing for the two biggest players across from JPM…
So: where are the profits really going?
Lisa Pollack, Alphaville, Recap and tranche primer, here; and The high yield tranche piece, here. Pollack is going to get the book deal for the London Whale, clearly. Once she nails down the positions maybe we will get to the Gaussian Copula substory. Chances seem to be improving that this story is about a bunch of smart guys who tried to resurrect a dead quant model. Maybe it would be better to recast the story as a Zombie Quant model or go with the J Depp/Dark Shadows/Vampire Quant model. But even though this story has massive potential to connect with loads of people, Pollack is not locating Party B P&L, nor has anyone else. It’s a problem if you cannot get that puzzle piece. Plus there is way more premeditation here than just the Keystone Cops stuff that started happening on 6 Apr. Maybe there is some offshore vehicle hole getting filled up whose reference entity name cannot be spoken? That would change the story’s complexion, right? Maybe the London Whale is a red herring?
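For anyone who never met the corpse being exhumed here, a minimal one-factor Gaussian copula sketch of correlated defaults and a tranche loss. Everything is illustrative: the pool size, default probability, correlation, recovery, and the 3-7% mezzanine tranche are made-up teaching numbers, not anyone’s actual book.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def tranche_loss(attach, detach, pool_loss):
    """Fraction of a tranche's notional eaten, given pool loss fractions."""
    return np.clip(pool_loss - attach, 0.0, detach - attach) / (detach - attach)

# One-factor Gaussian copula: asset value A_i = sqrt(rho)*M + sqrt(1-rho)*e_i,
# name i defaults when A_i falls below the threshold implied by its default prob.
n_names, n_sims = 125, 20_000          # CDX-index-sized pool (illustrative)
rho, p_default, recovery = 0.30, 0.02, 0.40
threshold = NormalDist().inv_cdf(p_default)

M = rng.standard_normal((n_sims, 1))              # common (market) factor
eps = rng.standard_normal((n_sims, n_names))      # idiosyncratic factors
A = np.sqrt(rho) * M + np.sqrt(1 - rho) * eps
defaults = A < threshold
pool_loss = defaults.mean(axis=1) * (1 - recovery)   # fraction of pool notional lost

# Expected loss on an illustrative 3-7% mezzanine tranche
el = float(tranche_loss(0.03, 0.07, pool_loss).mean())
print(f"expected 3-7% tranche loss: {el:.3%}")
```

The whole tranche-correlation game lives in `rho`: crank it up and losses migrate from the equity tranche into the mezzanine and senior ones, which is exactly the sensitivity the model was famously bad at calibrating.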
Coverage of the $2bn (now $3bn) loss emanating from JPMorgan’s Chief Investment Office on its synthetic credit portfolio continues apace, and FT Alphaville’s tour continues too.
The desire to understand what the trade was and the rationale behind it continues to bug us and many others. Interestingly, some of the discussion of late has come full circle. Bloomberg kicked off the London Whale saga on April 6th, and their follow-up on April 9th contained a detail that has now come back into the narrative. This time, though, it’s more than a mere sidenote — more on this in a minute.
While these more recent explanations are satisfying, we’re still scratching our heads a bit.
The challenge remains: to find trades that have managed to deteriorate with the speed that CEO Jamie Dimon has claimed they have — small in the first quarter, $2bn “all in the second quarter”, and “it kind of grew as the quarter went on”.
Now, credit tranches, which are leveraged positions on credit indices that themselves already involve a lot of leverage, could do this if the model used to determine hedge ratios wasn’t up to the task or if the trades were just outright foolish.
NYT, Joe Weisenthal vs. the 24-Hour News Cycle, here.
Weisenthal is the lead financial blogger for Business Insider, a Web site that covers the worlds of technology and finance with a mix of pithy reporting, snarky analysis and slide shows. Lots of slide shows. The site is run by Henry Blodget, a former Wall Street technology analyst who, after accepting a lifetime ban for deceptively hyping stocks during the dot-com boom, has devoted himself to figuring out exactly what our monkey brains desire and then producing more of it. The result, which now draws 15 million viewers each month, resembles an earnest version of Gawker or an unabashedly capitalistic Huffington Post. Each item is wrapped with a loud, blunt headline — a recent sample: “The Next 19 Hours Will Be Critical for the Global Economy” — and decorated with a picture that illustrates the story or one of a beautiful woman. Ideally, both.
Interesting piece in the Times about Weisenthal at Business Insider. Nominally we peek at the day-to-day activities and recent personal history of Blodget’s lead financial blogger. The methodology shift in the coverage of financial markets represented in the “figuring out what our monkey brains desire” quote needs more elaboration. There is something significant happening in the shift from the NYT narrative style of financial market coverage to the more episodic Blodget and Felix Salmon styles of coverage. Needs to be articulated properly.
Al Pacino, Any Given Sunday, here; or Clint Eastwood, here. Now we are going to have to endure a seemingly endless stream of Taleb’s gloating i-have-been-warning-you-about-VaR-for-years interviews; the FinQuant equivalent of the Icky Shuffle. Look Team Firm Risk, you all have seen It’s a Wonderful Life, right? Well every time a bank gets their bell rung, Taleb gets another Fox and Friends interview.
Naked Capitalism, JP Morgan Loss Bomb Confirms That It’s Time to Kill VaR, here.
One of the amusing bits of the hastily arranged JP Morgan conference call on its $2 billion and growing “hedge” losses and related first quarter earning release was the way the heretofore loud and proud bank was revealed to have feet of clay on the risk management front. Jamie Dimon said that the bank had determined that its value at risk model was “inadequate” and it would be using an older model. And no wonder. The Financial Times report contained this bombshell:
JPMorgan also restated its “value at risk”, a measure of maximum possible daily losses, of the CIO [the unit that executed the trading strategy that blew up] in the first quarter from $67m to $129m.
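That $67m-to-$129m restatement is the whole VaR problem in one line: the number is a model output, not a measurement. A toy historical-simulation VaR on synthetic P&L shows how much the lookback window alone moves the figure; the regimes, volatilities, and dollar units here are invented for illustration.

```python
import random

random.seed(42)

def historical_var(pnl, confidence=0.99):
    """1-day historical-simulation VaR: the loss level exceeded on
    roughly (1 - confidence) of the days in the window."""
    losses = sorted(-x for x in pnl)                   # losses as positive numbers
    k = min(int(len(losses) * confidence), len(losses) - 1)
    return losses[k]

# Synthetic daily P&L ($m): a calm regime followed by a volatile one.
calm = [random.gauss(0, 10) for _ in range(200)]
rough = [random.gauss(0, 40) for _ in range(60)]

var_before = historical_var(calm)          # window calibrated on calm days only
var_after = historical_var(calm + rough)   # same desk once the rough days enter
print(f"99% VaR, calm window only: ${var_before:.0f}m")
print(f"99% VaR, incl. rough days: ${var_after:.0f}m")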
“Synthetic credit portfolio”. That’s the book where the $2bn in mark-to-market losses took place for JP Morgan, according to an announcement made on Thursday. A result which has now cost them their AA- rating from Fitch and landed them on negative outlook with S&P, as announced late on Friday.
FT Alphaville has analysed the credit trades that might be in that portfolio, in an attempt to reason through what may have gone on. The fact, however, remains that we know precious little. Why is that? Is it acceptable that, after the financial crisis, this can happen to a bank, let alone a systemically important one like JP Morgan?
Got a buck that says you cannot find a Firm Risk person on 13 May 2012 who knows substantially more about the positions than Lisa Pollack.
Zerohedge, Double or Nothing: How Wall Street is Destroying Itself, here.
This fragile business model is in fact descended from the Martingale roulette betting system. Martingale is the perfect example of the failure of theory, because in theory, Martingale is a system of guaranteed profit, which I think is probably what makes these kinds of practices so attractive to the arbitrageurs of Wall Street (and of course Wall Street often selects for this by recruiting and promoting the most wild-eyed and risk-hungry). Martingale works by betting, and then doubling your bet until you win. This — in theory, and given enough capital — delivers a profit of your initial stake every time. Historically, the problem has been that bettors run out of capital eventually, simply because they don’t have an infinite stock (of course, thanks to Ben Bernanke, that is no longer a problem). The key feature of this system— and the attribute which many institutions have copied — is that it delivers frequent small-to-moderate profits, and occasional huge losses (when the bettor runs out of money).
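The quoted failure mode (frequent small-to-moderate profits, occasional huge losses) drops straight out of a toy simulation. The stakes, bankroll, win probability (European roulette red), and session length below are all illustrative.

```python
import random

random.seed(1)

def martingale_session(bankroll, stake=1, p_win=18 / 37, rounds=20):
    """Play `rounds` Martingale cycles; returns (final_bankroll, busted)."""
    for _ in range(rounds):
        bet = stake
        while bet <= bankroll:
            if random.random() < p_win:
                bankroll += bet        # a win recoups all prior losses + one stake
                break
            bankroll -= bet
            bet *= 2                   # the Martingale step: double after a loss
        else:
            return bankroll, True      # cannot cover the next doubling: busted
    return bankroll, False

results = [martingale_session(100) for _ in range(2000)]
busted = sum(b for _, b in results)
avg = sum(r for r, _ in results) / len(results)
print(f"sessions grinding out +1 per cycle: {(len(results) - busted) / len(results):.0%}")
print(f"sessions that hit the wall:         {busted / len(results):.0%}")
print(f"average final bankroll:             {avg:.1f} (started at 100)")
```

Most sessions end with a tidy little profit; the rest slam into the bet they can no longer cover, and the average across all sessions comes out below the starting stake. That is the “theory of guaranteed profit” meeting a finite balance sheet.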
Re: Picking a FinQuant Platform, here. Smart guy at a meeting this week points out you cannot simply compare quoted feature sizes between different foundries (Intel, TSMC, and GlobalFoundries, for example) without noting that different things may be getting measured when a feature size is quoted, and that the variation in the process can be significant. Both seem to be reasonable points for follow up, so here goes the reference search.
International Technology Roadmap for Semiconductors, website about the ITRS, here.
The objective of the ITRS is to ensure cost-effective advancements in the performance of the integrated circuit and the products that employ such devices, thereby continuing the health and success of this industry.
ITRS, 2011 Assessment, Executive Summary, here.
The 2011 ITRS has kept the MPU chip size model unchanged from the 2009 and 2010 versions. The Design ITWG had updated the MPU model in the 2009 ITRS, based upon their most recent available data and models. The new data and model indicate that logic transistor size is improving at the rate of the lithography (0.7 linear, 0.5 area reduction every technology cycle). Therefore, in order to keep the MPU chip sizes flat to the 140 mm2 target, the number of transistors can be doubled only every technology cycle. The technology cycle rate is projected to be on a 2-year cycle through 45 nm/2010, and turn to a three-year cycle after 2010. Therefore the transistors per MPU chip can double only every three years after 2013, unless increased chip size is allowed for specific applications which have markets that can afford the higher costs.
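The quoted cadence is easy to sanity-check: at the fixed 140 mm2 die target, a 0.7x linear (0.5x area) shrink doubles the transistor budget once per technology cycle, and all that changes is the cycle length. A back-of-envelope sketch; the starting transistor count is invented, not an ITRS figure.

```python
# Transistors per fixed-size (140 mm^2 target) MPU die, doubling once per
# technology cycle; the cycle stretches from 2 years to 3 after 2013.
count, year = 1.0e9, 2010        # illustrative starting point
schedule = [(year, count)]
while year < 2025:
    cycle = 2 if year < 2013 else 3
    year += cycle
    count *= 2                   # 0.7x linear shrink = 0.5x area = 2x transistors
    schedule.append((year, count))

for y, c in schedule:
    print(f"{y}: {c / 1e9:.0f}B transistors")
```

The practical upshot of the 2-year-to-3-year stretch: the per-die transistor budget FinQuant code gets to play with grows a third slower after 2013 than the Moore's Law rhythm everyone has priced in.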
Hiroshi Iwai, Tokyo Institute of Technology, Technology Roadmap for 22nm CMOS and beyond, here. See pgs. 14-17 in presentation slides.
EE Times, Globalfoundries, TSMC square off in litho, here.
Globalfoundries, TSMC, and, of course, Intel Corp., also have slightly different strategies in lithography, which is a big factor in scaling to bring products to market. At the SPIE event this week, Intel elaborated on its lithography strategy. Intel sees extreme ultraviolet (EUV) as its primary option for next-generation lithography (NGL). Maskless remains an option at Intel.
Globalfoundries is also high on EUV, and has suddenly warmed up to maskless lithography. TSMC is more bullish on maskless for NGL, but it is also warming up to EUV. In optical, TSMC has mainly relied on ASML Holding NV as its sole lithography vendor, while Globalfoundries prefers a dual-source strategy with ASML and Nikon Corp.
Hot Hardware, Global Alliance: Intel, IBM, GlobalFoundries, TSMC, and Samsung Announce New Partnership, Sep 2011, here.
Five of the largest and most advanced semiconductor manufacturers have signed a joint development and research agreement in what might be a record-breaking partnership. It’s not unusual to see IBM, Samsung, and GloFo pairing up (all three companies are part of the Common Platform Alliance), but the presence of Intel and TSMC is noteworthy.
The five companies have committed to a $4.4B investment in New York State that’s intended to create 6900 jobs, including 2500 high-tech positions in Albany, East Fishkill, Utica, and Canandaigua (the author’s home). As some of you will recall, GlobalFoundries is building its own new Fab 8 in New York State as well. The company began installing equipment earlier this year and will be capable of up to 60,000 wafer starts a month once the facility ramps up to full production.
ExtremeTech, The dream is dead: AMD gives up its share in GlobalFoundries, here.
According to the new agreement, AMD has negotiated a “take or pay” agreement for wafer prices in 2012, as well as a “framework” for pricing in 2013. A take-or-pay agreement is a contract in which the buyer (AMD) either accepts a product or pays the manufacturer a penalty. In 2011, AMD negotiated a wafer price arrangement with GF in which it only paid for “good” dies. Scuttlebutt indicated that GF was quite unhappy with this deal, as it left the company losing money on every Llano wafer it could build.
So what did AMD get? Manufacturing flexibility. Previously, Sunnyvale had agreed to manufacture 28nm APUs solely with GlobalFoundries. This new agreement voids that arrangement, freeing AMD to work with TSMC and other foundries. It’s not an agreement that came cheap — not only is AMD giving up its 8.8% equity share of GF, it’s agreed to pay the manufacturer some $425 million by the end of Q1 2013. AMD will take a $703M charge against the transaction.
It’s a tad odd for AMD to be paying GF, given that AMD is giving up its valuable shares, but there’s a likely reason behind it. GlobalFoundries invested significant amounts of capital in order to meet AMD’s timeline for 32nm and 28nm parts. The foundry built a 28nm SOI variant — 28nm-SHP — specifically for AMD. With AMD now looking for other foundry partners, GF is left with production lines it can’t flip a switch and convert to bulk silicon. The $425 million probably accounts for some of that cost.
Semi MD, website, here.
Semiconductor Manufacturing and Design was created by engineers and journalists to shed light on some of the incredibly complex technology and business issues in manufacturing and designing semiconductors. SemiMD consists of a monthly newsletter, daily updates, videos, research results, roundtable discussions, and a portal that serves as a forum for exchanging ideas and answering questions in this complex market. SemiMD is a joint venture of Sperling Media Group LLC and Extension Media LLC.
SemiMD covers the technical and business news related to semiconductor manufacturing, including DFM, equipment and materials, changes in the supply/demand equation, and related topics.
io9, Wikipedia’s founder will help make academic research available to all, here. Wow, Gowers is doing good. But it looks like Jimmy Wales is going to get to do the historic Berlin “Tear down this paywall” speech. Hope the ACM and IEEE archives open up soon.
Microprocessor Report, website, here. Haven’t seen this in a long while. Used to be fabulous for general purpose HPC stuff. Not sure how much of a player it is these days. Seems to be mostly locked behind paywall.
Tom’s Hardware, Leaked Slide Shows Intel Haswell Set for March-June 2013, here.
Intel is set to launch its new Ivy Bridge processor in April 2012 and will make the move to 22 nm on LGA 1155. It will feature a faster integrated graphics controller, lower TDP, higher clock speeds and a higher overclocking ceiling with the 22 nm process. Right around the corner, Intel is set to execute its “tock” strategy with the Ivy Bridge successor, codenamed Haswell.
Haswell is expected to have Advanced Vector Extensions 2 (AVX2), DX11.1, OpenGL 3.2, Thunderbolt, Transactional Synchronization Extensions (TSX) and Windows 8 support.
It’s AVX2 and possibly the Transactional Synchronization Extensions that you want to think hard about from the FinQuant app side. For example, with AVX2 there is mumbling about giving you 8-way SIMD gather. Think of that as single-clock binary protocol parsing, after everything is aligned and suitable genuflecting has taken place with the compiler switches. Most of the other headline stuff seems to be there to do a bunch of SoC work to fight for mobile systems market share.
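Why gather matters for protocol parsing: message fields sit at fixed but non-contiguous offsets, and a gather instruction (VPGATHERDD and friends) loads eight such 32-bit fields in one go instead of eight scalar loads. A NumPy sketch of the idea, with fancy indexing standing in for the hardware instruction; the message layout and field offsets are invented for illustration.

```python
import numpy as np

msg = np.arange(64, dtype=np.uint8)        # stand-in for a 64-byte wire message
words = msg.view(np.uint32)                # reinterpret as 16 machine words
field_idx = np.array([0, 3, 5, 6, 9, 10, 12, 15])   # word indices of 8 fields
fields = words[field_idx]                  # the "gather": one expression, 8 loads
print(fields)
```

In real AVX2 code the index vector plays the role of `field_idx` and the loads land in one ymm register, which is where the single-clock-parsing daydream comes from, alignment and compiler cooperation permitting.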
HPC Wire, Myricom Claims Lowest UDP, TCP Latency for High Frequency Trading, here.
Myricom DBL 2.0 software has benchmarked application-to-application UDP latency of under 3.5 microseconds and transparent sockets TCP latency of 4.0 microseconds. For HFT applications, DBL enables unmatched networking performance for UDP multicast and TCP order execution, all over industry-standard 10-Gigabit Ethernet.
So, we’re getting expectations set at 3 to 4 microseconds for kernel bypass. Which in turn will presumably be overtaken by RDMA NIC to L3, or will it still cost me multiple microseconds to traverse the TCP stack for ECC, retransmission, and packet ordering?
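For context on those 3-to-4 microsecond claims, it is worth measuring what the ordinary kernel sockets path costs on plain loopback, which is typically an order of magnitude or more slower. A rough sketch; absolute numbers will vary a lot by machine and kernel.

```python
import socket
import statistics
import threading
import time

def udp_echo(sock, n):
    """Echo n datagrams back to their sender."""
    for _ in range(n):
        data, addr = sock.recvfrom(64)
        sock.sendto(data, addr)

N = 2000
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(5.0)
threading.Thread(target=udp_echo, args=(srv, N), daemon=True).start()

payload = b"x" * 32
rtts_us = []
for _ in range(N):
    t0 = time.perf_counter_ns()
    cli.sendto(payload, srv.getsockname())
    cli.recvfrom(64)
    rtts_us.append((time.perf_counter_ns() - t0) / 1000)

# Vendors quote one-way latency; one way is roughly half the round trip.
print(f"loopback UDP median RTT: {statistics.median(rtts_us):.1f} us")
```

Whatever number this prints on your box, halve it for a one-way figure and compare against the 3.5 microseconds claimed for kernel bypass; the gap is the kernel stack traversal that DBL and friends are selling their way around.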
Jones, Evan, Write Latency and Block Size, here. Don’t know him; like the data; I’ll read the blog for a bit. Let’s play “I’m not going to pay a lot for this muffler.” I’m thinking, just for a target, why would I willingly allocate more than 10 microseconds per message on all logging (local, prime broker, regulatory, venue, etc, etc) of cash equity HFT messages with off-the-shelf hardware and no substantial mods to the Linux kernel? When you look under the covers, or you know more than me today, maybe it has to be 20 or 30 microseconds, but for this game you have to explain (account for the time for) anything above 10 microseconds.
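Playing the same budgeting game on the logging side: time an append at a few record sizes, buffered versus fsync’d per record. A rough sketch against a temp file; absolute numbers swing wildly with disk, filesystem, and mount options, so treat the output as a shape, not a benchmark.

```python
import os
import statistics
import tempfile
import time

def append_latency_us(size, n=200, durable=False):
    """Median latency (microseconds) of n appends of `size` bytes."""
    record = b"x" * size
    fd, path = tempfile.mkstemp()
    samples = []
    try:
        for _ in range(n):
            t0 = time.perf_counter_ns()
            os.write(fd, record)
            if durable:
                os.fsync(fd)       # the durable case is where the budget goes
            samples.append((time.perf_counter_ns() - t0) / 1000)
    finally:
        os.close(fd)
        os.unlink(path)
    return statistics.median(samples)

for size in (64, 512, 4096):
    buffered = append_latency_us(size)
    durable = append_latency_us(size, durable=True)
    print(f"{size:5d} B   buffered {buffered:8.2f} us   fsync'd {durable:10.2f} us")
```

The buffered case is the only one that has a prayer of fitting in a 10-microsecond-per-message budget; anything that demands durability per message has to explain itself against the fsync column.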
Honeyman, Linux NFS Client Write Performance, 2001, here. This is one of those deals, when honey speaks you listen even if it was ten years ago, but it can get messy, witness, here. Don’t ask why. Never explain, never complain; Fun People; cwtvomk (Come wipe the vomit off my keyboard), that’s just honey warming up, respect.
Smith, Greg, Tuning Linux for low PostgreSQL latency, here.
Linux Dev Center, Low latency in the Linux Kernel, here.
Torvalds, email archive, here.
Westnet, The Linux Page Cache and pdflush: Theory of Operation and Tuning for Write-Heavy Loads, here.
Working on an HPC for Presidents document:
Besides Hagan, Perold, Shaw, PJW, and Nick Patterson, were there ever any people on the Street or in the City who could “competitively” write and compile an optimized piece of floating point code? If you know of any stories/examples – please send a heads up in the comments. I’m guessing Shaw or Muller must have hired somebody at some point.