Gelman and Basboll, American Scientist, To Throw Away Data: Plagiarism as a Statistical Crime, here.
Much has been written on the ethics of plagiarism. One aspect that has received less notice is plagiarism’s role in corrupting our ability to learn from data: We propose that plagiarism is a statistical crime. It involves the hiding of important information regarding the source and context of the copied work in its original form. Such information can dramatically alter the statistical inferences made about the work.
In statistics, throwing away data is a no-no. From a classical perspective, inferences are determined by the sampling process: point estimates, confidence intervals and hypothesis tests all require knowledge of (or assumptions about) the probability distribution of the observed data. In a Bayesian analysis, it is necessary to include in the model all variables that are relevant to the data-collection process. In either case, we are generally led to faulty inferences if we are given data from urn A and told they came from urn B.
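The urn point is easy to see in a toy simulation (my sketch, not from the article): estimate a mean from the full sample and from a sample where the negative observations were silently discarded, analyzing both as if they were simple random samples.

```cpp
#include <random>

// Estimate a mean two ways: from the full sample, and from a "censored"
// sample where negative observations were quietly dropped. The second
// analysis, done as if the data were a simple random sample, is badly
// biased -- we were handed data from urn A and told it came from urn B.
struct MeansResult { double full; double censored; };

MeansResult estimateMeans(int n, unsigned seed) {
    std::mt19937 gen(seed);
    std::normal_distribution<double> dist(0.0, 1.0);
    double fullSum = 0.0, censSum = 0.0;
    int censCount = 0;
    for (int i = 0; i < n; ++i) {
        double x = dist(gen);
        fullSum += x;
        if (x > 0.0) { censSum += x; ++censCount; }  // hide the negatives
    }
    return { fullSum / n, censSum / censCount };
}
```

With a large sample the full-data estimate sits near the true mean of 0, while the censored estimate converges to E[X | X > 0] = sqrt(2/pi), roughly 0.80 — a large systematic error no amount of extra data fixes.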
I then looked up Acton in Wikipedia and was surprised to see he’s still alive. And he wrote a second book (published at the age of 77!). I wonder if it’s any good. It’s sobering to read Numerical Methods That Work: it’s so wonderful and so readable, yet in this modern era there’s really not much reason to read it. Perhaps William Goldman (hey, I checked: he’s still alive too!) or some equivalent could prepare a 50-page “good parts” version that could still be useful as a basic textbook.
ars technica, Recursion or While Loops: Which is better? here.
The reason why these questions appear so much in interviews, though, is because in order to answer them, you need a thorough understanding of many vital programming concepts—variables, function calls, scope, and of course loops and recursion—and you have to bring the mental flexibility to the table that allows you to approach a problem from two radically different angles, and move between different manifestations of the same concept.
Experience and research suggest that there is a line between people who have the ability to understand variables, pointers, and recursion, and those who don’t. Almost everything else in programming, including frameworks, APIs, programming languages and their edge cases, can be acquired through studying and experience, but if you are unable to develop an intuition for these three core concepts, you are unfit to be a programmer. Translating a simple iterative loop into a recursive version is about the quickest possible way of filtering out the non-programmers—even a rather inexperienced programmer can usually do it in 15 minutes, and it’s a very language-agnostic problem, so the candidate can pick a language of their choice instead of stumbling over idiosyncrasies.
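The interview exercise the article describes is small enough to show in a few lines (my sketch, any language works): the loop’s mutable state becomes function arguments, and the loop’s exit condition becomes the recursion’s base case.

```cpp
#include <vector>
#include <cstddef>

// Iterative version: a plain accumulator loop.
int sumIter(const std::vector<int>& v) {
    int total = 0;
    for (int x : v) total += x;
    return total;
}

// Recursive version: the loop index becomes a function argument,
// the loop-exit condition becomes the base case, and the accumulation
// happens as the calls unwind.
int sumRec(const std::vector<int>& v, std::size_t i = 0) {
    if (i == v.size()) return 0;        // base case: loop exit
    return v[i] + sumRec(v, i + 1);     // recursive step: next iteration
}
```

Both return the same result; the recursive version trades the explicit accumulator for the call stack, which is exactly the mental flip the interview question is probing.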
If you get a question like this in an interview, that’s a good sign: it means the prospective employer is looking for people who can program, not people who have memorized a programming tool’s manual.
Cambridge Community Television, Latke vs Hamentaschen: The Great Debate, here.
An annual MIT tradition since 2003 and an occasional tradition before that! Each team of prominent scholars presents an argument in favor of their respective food. Following the debate, votes are cast, ballots are counted, and the champion is crowned. Don’t miss this exciting debate between the fruit-filled cookie known as the hamentasch and the fried potato pancake otherwise known as the latke!
theOwl84, FastC++: Coding Cpp Efficiently, Efficient Processing of Arrays using SSE/SIMD and C++ Functors, here.
A performance benchmark of the above code gives the following results for arrays larger than the CPU cache size (ca. 4M floats): the pure C++ code processing one element at each loop iteration is the baseline with 100% of the runtime. The 4-element SSE kernel is 3.6% faster and the 8-element kernel runs about 5% faster than the C++ version. The main performance bottleneck is memory bandwidth. When the input arrays are small enough to fit in cache (4K float elements), the results are as follows: 4-element SSE +300% and 8-element +312%.
In summary, this article showed how to split a loop into loop management and computation code. Then, we added SSE intrinsics to process multiple elements of the array at the same time. Last, we further unrolled the loop kernel to hide load latencies. This pattern is valid for a variety of loop kernels and can be extended to accumulation (e.g., sum, dot product) and multiple inputs.
Neves and Aumasson, Implementing BLAKE with AVX, AVX2, and XOP, here.
We first considered the future AVX2 256-bit vector extensions, and identified the most useful instructions to implement add-rotate-xor algorithms, and BLAKE in particular. We wrote assembly implementations of BLAKE-256 and BLAKE-512 exploiting AVX2’s unique features, such as SIMD memory look-up. Although we could test the correctness of our implementations using Intel’s emulator, actual benchmarks will have to wait until 2013 for processors implementing the Haswell microarchitecture. We observed that AVX2 may boost the performance of BLAKE-256 in tree and multistream mode, thanks to its ability to compute two instances with a single vector state.
We then reviewed the applicability of the recent AVX and XOP advanced vector instructions, as found in Intel’s and AMD’s latest CPUs respectively, to implementations of BLAKE-256. While AVX provides a minor speed-up compared to SSE4 implementations, the powerful XOP instructions lead to a considerable improvement of more than 3 and 2 cycles per byte for BLAKE-256 and BLAKE-512, respectively. This is mainly due to the dedicated rotation instructions, and to the vpperm instruction, which allows permuted message blocks to be reconstructed very efficiently. Although message loads take up a considerable number of instructions, our proposed technique of message caching neither improves nor degrades speed, be it on Intel’s or AMD’s hardware.
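For readers unfamiliar with ARX: the rotation that XOP accelerates with a dedicated instruction must otherwise be emulated with two shifts and an OR, which in portable C++ is the usual idiom below (my sketch, not the paper’s code).

```cpp
#include <cstdint>

// Word rotation, the "R" in add-rotate-xor (ARX) designs like BLAKE.
// XOP's vprotd does this in one vector instruction; plain SSE/AVX must
// emulate it with two shifts and an OR, which is where the cycles go.
static inline uint32_t rotr32(uint32_t x, unsigned n) {
    return (x >> n) | (x << (32 - n));  // assumes 0 < n < 32
}
```

BLAKE-256’s G function applies this rotation four times per call with fixed counts, so saving an instruction or two per rotation compounds into the multi-cycle-per-byte gains the paper reports.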
Simon Johnson, NYT, Economix, The Debate on Bank Size is Over, here. Sort of plausible interpretation.
While bank lobbyists and some commentators are suddenly taken with the idea that an active debate is under way about whether to limit bank size in the United States, they are wrong. The debate is over; the decision to cap the size of the largest banks has been made. All that remains is to work out the details.
Ellis Hamburger, The Verge, Apple’s broken promise: why doesn’t iCloud ‘just work’? here.
“I’ve rewritten my iCloud code several times now in the hopes of finding a working solution,” wrote developer Michael Göbel in a blog post, and “Apple clearly hasn’t.” The problem is this: Apple has failed to improve the way it syncs databases (“Core Data”) with iCloud, yet has continued to advertise and market iCloud as a hassle-free solution.
“The promise of iCloud’s Core Data support is that it will solve all of the thorny issues of syncing a database by breaking up each change into a transaction log. Except it just doesn’t work,” said a very prominent developer who asked not to be named in order to stay in Apple’s good graces. iCloud apparently chokes hard on the databases it’s supposed to be so proficient at handling. From a user perspective, this means that despite a developer’s best efforts, data disappears, or devices and data stop syncing with each other.
Natalie Wolchover, Simons Foundation, In Computers We Trust? here. Performance improving deduction mechanism. Simultaneously, the opposite of doping and identical to doping. I suppose these folks are not role models for children?
Shalosh B. Ekhad, the co-author of several papers in respected mathematics journals, has been known to prove with a single, succinct utterance theorems and identities that previously required pages of mathematical reasoning. Last year, when asked to evaluate a formula for the number of integer triangles with a given perimeter, Ekhad performed 37 calculations in less than a second and delivered the verdict: “True.”
Shalosh B. Ekhad is a computer. Or, rather, it is any of a rotating cast of computers used by the mathematician Doron Zeilberger, from the Dell in his New Jersey office to a supercomputer whose services he occasionally enlists in Austria. The name — Hebrew for “three B one” — refers to the AT&T 3B1, Ekhad’s earliest incarnation.
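The triangle count mentioned above can be checked by brute force in a few lines (a naive sketch, nothing like Zeilberger’s symbolic methods): enumerate sides a ≤ b ≤ c summing to the perimeter and keep those satisfying the triangle inequality.

```cpp
// Count triangles with integer sides a <= b <= c and perimeter p --
// the kind of quantity the article describes Ekhad verifying.
int countTriangles(int p) {
    int count = 0;
    for (int a = 1; a <= p / 3; ++a)            // a is the smallest side
        for (int b = a; b <= (p - a) / 2; ++b) {
            int c = p - a - b;                  // forces b <= c
            if (a + b > c) ++count;             // triangle inequality
        }
    return count;
}
```

For perimeter 12, for instance, this finds the three triangles (2,5,5), (3,4,5), and (4,4,4).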
Cullen Roche, Pragmatic Capitalism, “Whatever It Takes”, here. Might turn it up to 11.
Ben Bernanke once said that a determined central bank could always end a deflation. And I guess we’re about to find out whether that’s true or not. What’s currently going on in Japan is rather incredible. They’re throwing the kitchen sink, garbage disposal and all the plumbing at their fight with deflation. Here are some of the headlines from the last 24 hours via Bloomberg:
*NAKASO SAYS BOJ WILL NOT BE BOUND BY PAST POLICIES*
*IWATA: MONETARY POLICY FIRST INFLUENCES ASSET MARKETS*
*IWATA: JAPAN’S DEFLATION BECAUSE OF DEFLATIONARY EXPECTATIONS*
*KURODA SAYS HE WILL DO WHATEVER IT TAKES TO END DEFLATION*
This explicit targeting of stock prices, expectations and a “whatever it takes” mentality could change the future of economic policy. If it works the monetarists will dominate the future of policy. If it fails, it will be another blow to the school. I guess we’ll know the answer in the next 12-18 months…
HPC Wire, Genome Analysis Center Selects Convey for Next-Gen Sequencing, here. Convey is Steve Wallach’s new thing.
Convey’s hybrid-core architecture pairs classic Intel x86 microprocessors with a coprocessor comprised of FPGAs. Particular algorithms—BWA-based alignment, for example—are optimized and translated into code that’s loadable onto the coprocessor at runtime.
Lisa Pollack, Alphaville, Footnote 74: FACEPALM, here; and A tempest in a spreadsheet, here. Funny, but getting lost in the weeds. This is important because Pollack is one of the dozen or so folks who could end up writing the London Whale book that’ll get cited for decades. The 130+ pages in the JPM report dance around a lot, recounting a sequence of events without simply stating what obviously happened.
The cash register that JPM built for tracking the running value of the securities owned by the London Whale broke, probably in March or April 2012, and it could not be fixed before several billion dollars were lost. Curiously, the “cash register” in this case is less euphemistic than you might have expected. The VaR, the risk managers, most of the people not directly on the CIO trading desk weave in and out of the official narrative, but they are mostly irrelevant to what originally happened. They are passengers in a sad story. It really looks like the problem was either the code that read the market data to compute the inputs to the P&L calculator (the spreadsheet) or the P&L calculator itself (the supercomputer). The report doesn’t really dissect this issue carefully; I’m not sure why. If the problem was A. the spreadsheet model for calibrating the correlations and the hazard rates for inputs, I bet the CIO desk and quants are/were more than smart and motivated enough to fix it or patch the underlying spreadsheet and analytics packages before losing much money. The CIO folks all probably remembered, all too vividly, how correlations behaved with the GM and Ford junk downgrades in May 2005 and designed their new correlation cooker to do something “better.” If the problem was B. programming the new “supercomputer,” I could see them not having enough time to fix the situation. B … final answer.
The report says there is “some evidence” that pressure was put on the reviewers to get on with approving the model in January because of the risk limit breaches being incurred with the old model around then. For example, as quoted above: “In an e-mail to Mr. Hogan on January 25, Mr. Goldman reported that the new model would be implemented by January 31 “at the latest” and that it would result in a “significant reduction” in the VaR.”
Hence the Model Review Group “may have been more willing to overlook the operational flaws apparent during the approval process.”
Back to the modeler though. He used to work at Numerix (a vendor), where a repricing model had been “developed under his supervision” that JPMorgan normally used in VaR calculations. The Numerix analytic suite had been approved by the Model Review Group. But the modeler, when developing the new VaR model, developed his own suite — called “West End”. This suite was not reviewed in advance of the new VaR model being rolled out, but rather only had a limited amount of backtesting completed on it.
David Murphy, Alphaville, The JPMorgan Whale’s regulatory motive, here. Just a wild guess for a movie plot – Whale takes leveraged position in CDX tranches that are no longer heavily traded like back in the day, say 2005. In May 2012 the big, and now publicly exposed, hedge is in the CDX series 9, where the Whale gets picked off by the hunch/pounce/kill boys. The new correlation/hazard rate cooker (the code that computes the inputs to the gaussian copula model) has a problem, maybe with Kodak, maybe something else, and the Whale’s desk risk and the 238-second near-real-time Credit P&L run is shot – they are flying totally blind. They try to buy time marking the spreads to cover the model’s flaked-out P&L and Risk while they fix it. The risk and regulatory requirements change while all this is going on, so there is some JPM Risk executive who now wants someone to explain to him what this all means to his VaR model. Nobody has time to talk to him because the VaR is just for mouth-breathers and there is a real problem here that folks need to think through. I wonder if it was helpful to debug the flaky model in the FPGA supercomputer once the CIO P&L went out, probably not, right? Bruno, Achilles, and probably even Ina got to read up on Verilog programming back in April 2012, cool thx prize-winning Dataflow supercomputer implementation of the gaussian copula … Maserati, Bellagio, Bellagio, Kasparov.
If JPMorgan had just bought, say, senior tranche protection on a credit index, then while the bank’s position would indeed have been crash-hedged, it would have generated significant earnings volatility as the bonds would not have been marked-to-market but the derivatives would have been. In particular, in a tightening credit environment, such as we had earlier in the year as the ECB injected liquidity into the banking system, the derivatives would have lost money without a corresponding accounting gain on the bonds.
One way around this accounting mismatch is to restructure the derivatives position. The idea is still to be long crash protection — again, by buying protection on senior tranches, for example — but to offset this by also selling protection on the index. If done correctly this position will be indifferent to small moves in credit spreads (‘delta neutral’), but it will make money if there is a big increase in spreads.
This removes the fair value volatility from the position at the cost of introducing correlation risk: the amount of index you need to sell is a function of the correlation between the names in the index, so you have to readjust your hedge as the market price of correlation changes.
Deus Ex Macchiato, Whale Watching, the official tour, here. I think this is David Murphy again. Nice website, I should read it more frequently.
The firm’s main problem at this point was that two goals were in conflict. On one hand their position was so large (if unnoticed by regulators) that they would get crushed if they tried to leave too fast; on the other, they needed to leave to reduce capital. The solution, of course, was to try to change how capital was calculated.
the concern that an unwind of positions to reduce RWA would be in tension with “defending” the position. The executive therefore informed the trader (among other things) that CIO would have to “win on the methodology” in order to reduce RWA.
Chris Wilson, Yahoo, What would your signature look like if Jack Lew wrote it? (Interactive), here.
Now, Yahoo News exclusively brings you the Jack Lew Signature Generator. Just type in your name, hit the button, and see what your name would look like in his, er, signature style.
Michael Feldman, HPC Wire, HPC Programming in the Age of Multicore: One Man’s View, here. Wellein sounds solid on programming for performance.
At this June’s International Supercomputing Conference (ISC’13) in Leipzig, Germany, Wellein will be delivering a keynote titled Fooling the Masses with Performance Results: Old Classics & Some New Ideas. The presentation was inspired by David H. Bailey’s 1991 classic article Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers, and is intended to follow the humorous tone of Bailey’s original work.
Peter Murray, Singularity Hub, New Video of Army’s Alpha Dog Robot: “This Thing is Awesome”, here.
Most articles about Alpha Dog go kind of like, “Impressive, but man, really loud.” Well Boston Dynamics has just released a video of the new and improved – and much quieter – prototype.
Lee Hutchinson, Ars, $17,000 Linux-powered rifle brings “auto-aim” to the real world, here. I’m going to go ahead and call “First” here on combining the alpha dog and the Linux-powered auto-aim rifle. There may be others that think you could just combine them, but I’m pretty sure you needed Linux to get the alpha dog and the auto-aim to communicate, so their previous “Firsts” don’t count. Ubuntu Linux gets its first quiet mobile auto-aim civilian snipe shortly, calling it, First.
The image displayed on the scope isn’t a direct visual, but rather a video image taken through the scope’s objective lens. The Linux-powered scope produces a display that looks something like the heads-up display you’d see sitting in the cockpit of a fighter jet, showing the weapon’s compass orientation, cant, and incline. To shoot at something, you first “mark” it using a button near the trigger. Marking a target illuminates it with the tracking scope’s built-in laser, and the target gains a pip in the scope’s display. When a target is marked, the tracking scope takes into account the range of the target, the ambient temperature and humidity, the age of the barrel, and a whole boatload of other parameters. It quickly reorients the display so the crosshairs in the center accurately show where the round will go.