First, some background: Under Dodd-Frank, the CFTC was given the task of regulating the $300 trillion market for swaps in the U.S. The basic point was to bring light to a dark market and prevent another AIG by pushing as much of the over-the-counter swaps market as possible onto exchanges where prices and volume are posted. With about 80 percent of those swaps rules written, according to CFTC Chairman Gary Gensler, and a bunch of them now in effect, traders have begun “futurizing their swaps”—that is, trading futures contracts instead of entering into swaps deals. Some say that’s a clever way around Dodd-Frank. Others see it as merely a natural evolution of financial instruments.
Whatever the reason, it’s happening. And as arcane as the details may be, the potential consequences are enormous, as evidenced by Thursday’s packed house. The consensus of those present was that it was the most crowded CFTC hearing in recent memory. Lawyers and lobbyists lined the walls; congressional staffers and industry suits packed the chairs. More than 150 people crammed into the CFTC’s main conference room, and a healthy number of folks watched on TVs in the hallway outside.
Dodd-Frank has upended the derivatives market, and in the shakeout that follows, there will be winners and losers. Perhaps those with the most at stake are IntercontinentalExchange (ICE) and the Chicago Mercantile Exchange (CME), the two biggest futures exchanges in the U.S. As more companies and traders start favoring futures over swaps, the two exchanges stand to capture a much bigger portion of that activity. The potential losers? Dealers such as Goldman Sachs (GS) that have done a lot of swaps business. Standing at the back of the room, Chris Giancarlo, chair of the Wholesale Markets Brokers’ Association, likened the fight over swaps and futures to “the Maginot Line for the exchanges.”
Easley, de Prado, & O’Hara, SSRN, The Volume Clock: Insights into the High Frequency Paradigm, here. LFT structural weaknesses.
Over the last two centuries, technological advantages have allowed some traders to be faster than others. We argue that, contrary to popular perception, speed is not the defining characteristic that sets High Frequency Trading (HFT) apart. HFT is the natural evolution of a new trading paradigm that is characterized by strategic decisions made in a volume-clock metric. Even if the speed advantage disappears, HFT will evolve to continue exploiting Low Frequency Trading’s (LFT) structural weaknesses. However, LFT practitioners are not defenseless against HFT players, and we offer options that can help them survive and adapt to this new environment.
Nerval’s Lobster, Mars Rover Curiosity: Less Brain Power Than Apple’s iPhone 5, here. So, remember that 8-hour-to-238-second, 2011 award-winning credit valuation and risk computation on the million-dollar FPGA supercomputer with Dataflow acceleration? That’s the Apollo Creed here, on a good day. There are days (e.g., July 27, 2018, with 57.6 million km between Mars and Earth) when you could send the entire credit portfolio to Mars, compute the full risk and valuation for the portfolio in the down time on the spare computer in the Mars Rover, send the results back to Earth, and finish in ~360 seconds. That’s just about 50% slower than the 2011 award-winning credit valuation and risk computation on the million-dollar FPGA supercomputer with Dataflow acceleration. So the message here is, I guess: if your computing infrastructure is on Mars … and has less brain power than an iPhone 5 … then you are probably not going to be at the very top of the USD fixed/float vanilla swap league tables … on most days. But … if you own an iPhone 5 here on Earth … you have more brain power … than the 2011 award-winning credit valuation and risk computation on the million-dollar FPGA supercomputer with Dataflow acceleration?
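The light-time arithmetic behind that ~360-second figure is easy to check. Using the 57.6 million km distance quoted above, the round trip of pure signal time already comes to roughly 384 seconds, so read the ~360 as a ballpark figure:

```python
# Sanity-check the Mars round-trip timing quoted above.
# Distance is the text's figure for the 2018-07-27 close approach.
C_KM_PER_S = 299_792.458       # speed of light in km/s (exact)
DISTANCE_KM = 57.6e6           # Earth-Mars distance from the text

one_way_s = DISTANCE_KM / C_KM_PER_S   # ~192 s each way
round_trip_s = 2 * one_way_s           # ~384 s of pure signal time

print(f"one way: {one_way_s:.0f} s, round trip: {round_trip_s:.0f} s")
```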
“To give the Mars Rover Curiosity the brains she needs to operate took 5 million lines of code. And while the Mars Science Laboratory team froze the code a year before the roaming laboratory landed on August 5, they kept sending software updates to the spacecraft during its 253-day, 352 million-mile flight. In its belly, Curiosity has two computers, a primary and a backup. Fun fact: Apple’s iPhone 5 has more processing power than this one-eyed explorer. ‘You’re carrying more processing power in your pocket than Curiosity,’ Ben Cichy, chief flight software engineer, told an audience at this year’s MacWorld.”
Coding the Markets, CPU cache size, here. Cute, maybe I should read this more. Seen similar code to verify the number of FP units per core, processor clock speed, etc. I don’t know off the top of my head how I would verify the number of cores per chip once you get to 6+.
Here’s some code I wrote back in 2009 to figure out the L1 and L2 cache sizes on a Windows host. It works by populating an area of memory with randomized pointers that point back into that same area. The randomization defeats stride-based attempts by CPUs to predict the next cache access. The size of the memory region is stepped up in powers of two, and timings are taken. The timings should show when the region size exhausts the L1 and L2 caches. This code illustrates why control of memory locality is so important in writing cache-friendly code.
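The original code isn’t reproduced here, but the technique can be sketched. The sketch below is my own (the function names are invented), using Sattolo’s algorithm to build a single-cycle random permutation so every access is a dependent, unpredictable load. In Python the interpreter overhead swamps the cache cliffs, so treat this as a structural sketch of what the C version measures:

```python
import random
import time

def make_chase(n):
    """Build a single-cycle random permutation of 0..n-1 (Sattolo's
    algorithm), so chasing idx = table[idx] visits every slot and the
    randomized order defeats stride-based prefetching."""
    table = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)          # j < i guarantees one big cycle
        table[i], table[j] = table[j], table[i]
    return table

def chase(table, steps):
    """Follow the pointer chain for `steps` hops; return seconds per hop."""
    idx = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        idx = table[idx]
    return (time.perf_counter() - t0) / steps

# Step the region size up in powers of two and watch the per-access
# time; in the C version the jumps mark the L1 and L2 boundaries.
for size in (2**k for k in range(10, 18)):
    t = chase(make_chase(size), 100_000)
    print(f"{size:>8} entries: {t*1e9:6.1f} ns/access")
```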
mathbabe, Columbia data science course, week 1: what is data science? here. It was either Knuth or Tupac, I forget which, who said: the world is composed of code, data, and users. Code is pure and good, data is dirty and bad, and users are somewhere in between. One problem with users is if you engage with them it is only a matter of time before they bring you their data.
I’m attending Rachel Schutt’s Columbia University Data Science course on Wednesdays this semester and I’m planning to blog the class. Here’s what happened yesterday at the first meeting.
Rachel started by going through the syllabus. Here were her main points:
- The prerequisites for this class are: linear algebra, basic statistics, and some programming.
- The goals of this class are: to learn what data scientists do, and to learn to do some of those things.
- Rachel will teach for a couple weeks, then we will have guest lectures.
- The profiles of those speakers vary considerably, as do their backgrounds. Yet they are all data scientists.
- We will be resourceful with readings: part of being a data scientist is realizing lots of stuff isn’t written down yet.
- There will be 6-10 homework assignments, due every two weeks or so.
- The final project will be an internal Kaggle competition. This will be a team project.
- There will also be an in-class final.
- We’ll use R and python, mostly R. The support will be mainly for R. Download RStudio.
- If you’re only interested in learning Hadoop and working with huge data, take Bill Howe’s Coursera course. We will get to big data, but not until the last part of the course.
Arnie Cooper, Fast Company, How Algorithms Rule The World, here. Code is good. Code helps get you paid. Code helps you rule the world.
In his new book, Aisle50 cofounder Christopher Steiner counts the (many, many) ways digits have come to dominate. “If you look at who has the biggest opportunity in society right now,” he says, “it’s developers.”
Business Insider, LIBOR Scandal Claims New York-Based Barclays Executive And Trader, here. Data is bad. Data keeps you from getting paid. Data helps the world rule you.
Don’t even send email about data. If the Barcap folks had even said in their bromails “here’s a program that you can run to show the levels of Libor we are comfortable with” they would still be rockin the Bloomberg at 750 Seventh Ave. today.
This is your career
This is your career on data.
The fallout from an investigation into the attempted manipulation of global benchmark interest rates has again rocked Barclays Plc, as the bank recently ousted a top executive and a trader in New York for their roles in the scandal, according to regulatory filings obtained on Tuesday.
Walking Randomly, Vectorizing code to take advantage of modern CPUs, here.
I’ve been playing with AVX vectorisation on Sandy Bridge CPUs off and on for a while now and thought that I’d write up a little of what I’ve discovered. The basic idea of vectorisation is that each core in a modern CPU can operate on multiple values (i.e. a vector) simultaneously per instruction cycle.
Sandy Bridge (and the newer Ivy Bridge) processors have 256-bit wide vector units which means that each CORE can perform certain operations on up to eight 32-bit floats or four 64-bit doubles per clock cycle. So, on a quad core you have 4 vector units (one per core) and could operate on up to 16 doubles or 32 floats per clock cycle.
This all sounds great so how does a programmer actually make use of this neat hardware trick? There are many routes:
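One of those routes is to lean on a library that issues the vector instructions for you. A minimal sketch (my own, not from the post, and assuming NumPy is available) comparing an element-by-element loop with a single whole-array expression the library can map onto the vector units:

```python
import numpy as np

# Same computation two ways: an element-by-element Python loop versus
# one whole-array expression that NumPy can hand to vectorized kernels
# (e.g. four doubles per 256-bit AVX instruction on Sandy Bridge).
def saxpy_loop(a, x, y):
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vec(a, x, y):
    return a * x + y              # one expression over the whole array

x = np.linspace(0.0, 1.0, 10_000)
y = np.ones_like(x)
assert np.allclose(saxpy_loop(2.0, x, y), saxpy_vec(2.0, x, y))
```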
Intel, Intel SPMD Program Compiler, here. Wow.
ispc is a compiler for a variant of the C programming language, with extensions for “single program, multiple data” (SPMD) programming. Under the SPMD model, the programmer writes a program that generally appears to be a regular serial program, though the execution model is actually that a number of program instances execute in parallel on the hardware. (See the ispc documentation for more details and examples that illustrate this concept.)
ispc compiles a C-based SPMD programming language to run on the SIMD units of CPUs and the Intel Xeon Phi™ architecture; it frequently provides a 3x or more speedup on CPUs with 4-wide vector SSE units and 5x-6x on CPUs with 8-wide AVX vector units, without any of the difficulty of writing intrinsics code. Parallelization across multiple cores is also supported by ispc, making it possible to write programs that achieve performance improvement that scales by both number of cores and vector unit size.
There are a few key principles in the design of ispc:
- To build a small set of extensions to the C language that would deliver excellent performance to performance-oriented programmers who want to run SPMD programs on the CPU.
- To provide a thin abstraction layer between the programmer and the hardware—in particular, to have an execution and data model where the programmer can cleanly reason about the mapping of their source program to compiled assembly language and the underlying hardware.
- To make it possible to harness the computational power of SIMD vector units without the extremely low-programmer-productivity activity of directly writing intrinsics.
- To explore opportunities from close coupling between C/C++ application code and SPMD ispc code running on the same processor—to have lightweight function calls between the two languages and to share data directly via pointers without copying or reformatting.
ispc is an open source compiler with a BSD license. It uses the remarkable LLVM Compiler Infrastructure for back-end code generation and optimization and is hosted on github. It supports Windows, Mac, and Linux, with both x86 and x86-64 targets. It currently supports the SSE2, SSE4, AVX1, AVX2, and Xeon Phi “Knight’s Corner” instruction sets.
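The SPMD execution model can be sketched without any of ispc’s machinery. The toy below is my own illustration, not ispc code: one program body runs across a gang of lanes, and a divergent branch becomes masked execution, which is roughly how an SPMD compiler maps `if` onto SIMD hardware:

```python
import math

# SPMD in miniature: one "program" runs across a gang of lanes.
# A divergent `if` becomes masked execution: every lane is visited,
# but only lanes whose mask is on take the "then" side.
def spmd_sqrt_or_zero(lanes):
    mask = [v >= 0 for v in lanes]        # each lane's branch choice
    out = [0.0] * len(lanes)              # "else" value for masked-off lanes
    for i, on in enumerate(mask):         # "then" side, under the mask
        if on:
            out[i] = math.sqrt(lanes[i])
    return out

print(spmd_sqrt_or_zero([4.0, -1.0, 9.0, -2.0]))   # [2.0, 0.0, 3.0, 0.0]
```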
mathbabe, Explain your revenue model to me so I’ll know how I’m paying for this free service, here.
When you find a website that claims to be free for users, you should know to be automatically suspicious. What is sustaining this service? How could you possibly have 35 people working at the underlying company without a revenue source?
We’ve been trained to not think about this, as web surfers, because everything seems, on its face, to be free, until it isn’t, which seems outright objectionable (as I wrote about here). Or is it? Maybe it’s just more honest.
When I go to the newest free online learning site, I’d like to know how they plan to eventually make money. If I’m registering on the site, do I need to worry that they will turn around and sell my data? Is it just advertising? Are they going to keep the good stuff away from me unless I pay?
And it’s not enough to tell me it’s making no revenue yet, that it’s being funded somehow for now without revenue. Because wherever there is funding, there are strings attached.
OK here’s our revenue model.
- Always raise when dealt full house or better unless you are playing baseball.
- Exchange traded interest rate swap market microstructure knowledge will help pay off the kids’ college loans
- Do not pay more than 20 USD for Dwight Howard in the upcoming fantasy basketball season. Bad back; shitty FT%/TOs; and you will not additionally get Rondo and BGriff at par if anyone at your auction is awake.
- The Seventh Seal is better than Seven Samurai but just a little better.
- Always bet on Lipton in a theory fight, footage from early STOC and FOCS here.
- After you know what the floating-point quant code is actually doing, just worry about keeping the FP units busy; don’t think too much about the L2 cache hit rate unless forced.
- Read everything written by Neal Stephenson, Sarah Vowell, Walter Rudin, Barbara Tuchman, Tom Sharpe, John Keegan, Winston Churchill, Woody Allen, Harold N. Shapiro, Flannery O’Connor, and John Toole before reading anything by Nassim Taleb or you won’t get it.
- Running Mountain Lion at home will end in tears but what choice do you really have?
- Stay synchronous unless you are doing IO. There are only 5 people who can really do asynch stuff well and I forgot their names and they are not on LinkedIn. Oh, and avoid doing IO.
- Buster Keaton or Chaplin but sometimes Paul F. Tompkins or Greg Proops and always the old Bedazzled, Chebyshev polynomials, and This Will Destroy You. Definitely not anything running -O2, or written for MIX, or playing at the soul depleting AMC Hamilton 24 (except maybe Dark Knight and Tropic Thunder didn’t totally suck, Oh and Sherlock Holmes and Zoolander, but nothing else, well LOTR, Stepbrothers, Bridesmaids, Dirty Rotten Scoundrels, Ballad of Ricky Bobby, and Gladiator of course).
Tayori Limited, Autosys Solved. To R11 or not R11, here. Backups are good to have if there’s a problem. If you are not sure the backups are working reliably, you might consider delaying the upgrade to R11. There are websites written by people with expertise in upgrading from 4.5 to R11 that you can read, and apparently you can ask questions in the comments.
Why you might consider delaying your upgrade:
You might consider remaining on AutoSys 4.5 if you have particular skills in AutoSys 4.5 or you have a large number of users that would need to be retrained to WCC from either the Motif GUI’s or from the Web Interface. Having said that, I do not feel that those two reasons should warrant staying on AutoSys 4.5, as they are both easy to overcome. The only valid reason I can really think of to remain on AutoSys 4.5 is if the performance you are getting is acceptable and you do not have the technical skills, or resources, available to do an upgrade. The technical skills unavailability will only be an issue if you do not have the funds to get a consultant who has worked with AutoSys R11 to do the upgrade for you.
IEEE Spectrum, RBS Group Banking Nightmare Continues for Some, here. There must be a fairly short list of Enterprise software updates that get an IEEE Spectrum column. We hear that the updated IT software was AutoSys.
Your company needs to ensure critical business transactions are processed on time and without errors. Failed transactions lead to lost revenue and reduced productivity. CA Workload Automation helps prevent this by enabling you to schedule and manage workload processing across your entire IT environment. Using CA Workload Automation you can update workload processing schedules in real-time to accommodate changing business requirements. You will increase IT productivity by validating schedules and finding errors before they cause production outages. You will increase resource utilization by ensuring workloads are continuously being processed.
- Increase productivity through more efficient resource management
- Flexibility to change workload schedules in real time
- Lower IT costs by automating routine tasks
- Prevent production errors by validating schedules
That is all.
josh.com, Island ECN 10th Birthday Source Code Release! here. via Adam Sussman via Joseph Gawronski on LinkedIn. Matching engine code.
In honor of Island’s tenth birthday, I am releasing the source code to the Foxpro matching engine as it appeared on 12/15/2003. This was the last version before the Java migration. While the Java version is some *very* nice code, and ran several orders of magnitude faster – this code is particularly interesting precisely because it ran on such a slow platform and is so easy to read, highlighting the idea that it is usually the architecture and algorithms that matter more than raw platform speed.
You can view the full source code here, but the only part that really matters is the actual matching engine, which is the enter2order procedure.
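For flavor, the core idea behind a price-time-priority matcher like enter2order fits in a short sketch. This is my own toy (the class and method names are invented), not the Island Foxpro code: resting orders queue FIFO at each price level, and an incoming order trades through the best opposite prices first:

```python
from collections import deque

# Toy price-time-priority matcher: my own sketch, not the Island code.
# Resting orders queue FIFO at each price; an incoming order trades
# through the best opposite prices first, then rests any remainder.
class Book:
    def __init__(self):
        self.bids = {}   # price -> deque of (order_id, qty)
        self.asks = {}

    def enter(self, oid, side, price, qty):
        book, cross = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        crossable = (lambda p: p <= price) if side == "buy" else (lambda p: p >= price)
        fills = []
        while qty and cross:
            best = min(cross) if side == "buy" else max(cross)
            if not crossable(best):
                break
            queue = cross[best]
            rid, rqty = queue[0]          # oldest order at the best price
            take = min(qty, rqty)
            fills.append((rid, best, take))
            qty -= take
            if take == rqty:
                queue.popleft()           # resting order fully filled
                if not queue:
                    del cross[best]       # price level exhausted
            else:
                queue[0] = (rid, rqty - take)
        if qty:                           # rest the unfilled remainder
            book.setdefault(price, deque()).append((oid, qty))
        return fills

book = Book()
book.enter("s1", "sell", 100, 5)
book.enter("s2", "sell", 100, 5)        # same price: behind s1 in time
print(book.enter("b1", "buy", 101, 7))  # fills s1 fully, s2 partially
```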
Econ Coding Notes, Chicago Booth, RA Manual: Notes on Writing Code, here. For research assistants. Interested to see the notes on 1. Data and 2. Users. I would imagine the only thing worse than data for the research assistants would be the users.