Take a look again at Flynn's case for Dataflow in his Maxeler video, here. I think the argument playing out in the video is predicated on the observation that Moore's Law ceased delivering extra clock cycles for big commercial microprocessors in about 2002. Let's accept that as true, even though I suspect that if I go check, David Patterson would date that "no more clock cycles for you" crossover to something more like 2006. In the absence of any subsequent breakthroughs in instruction-level parallelism harvesting by compilers, Dataflow architecture now has a window of opportunity to demonstrate its advantages to the market unlike anything Arvind would have experienced in the 80s and 90s. OK?
The other argument Flynn makes is that compiler-assisted parallel coding is challenging and typically has not demonstrated scaling parallel speedups beyond 8 or so conventional cores. Again, fine; that fits my experience from SGI compilers to now, so I have no issue. Parallel programming with pragmas is no way to write code for fun and profit.
The jump to "so therefore dataflow is good for a bunch of general-purpose mathematics" needs further justification; maybe it is, I don't know. In particular, financial applications like closed-form expression evaluation, discounting, lattices, and Monte Carlo for big portfolios of trades/positions all hit the memory hierarchy slightly differently. The vectorization opportunities also vary in degree between these financial mathematics applications. I'm not sure about this, but I think there were science experiments conducted somewhere uptown showing you can teach dolphins Perl and Python, and they can parallelize these position inventory calculations up to speed; just don't let them try to do load balancing, because that will delay the project.
Perhaps Dataflow has some large performance advantage in 2012, but the costs of converting to FPGAs, or of waiting for the latest Xilinx FPGA parts for your supercomputer, more than offset the advantage. Certainly you should not expect a massive popular dataflow movement where all your friends, clutching copies of Dataflow for Dummies, flock around you to find out how to program dataflow in their phone apps. That's not happening. Best case, you get a cool dataflow programming result that makes you a hero in the dataflow community, but as a reward you will simply have to smile inwardly to yourself, knowingly, because really no one else will ever know what you did or what you are talking about. And what about debugging the production infrastructure and code; doesn't that kill you inside a little bit every time you think about it? So, maybe it's just me, but this Dataflow architecture performance advantage had better be pretty big for each of these specific financial computations to account for all the obvious really bad stuff.
Taking Flynn's argument at face value, I would guess there is a crossover time up to which Dataflow architecture can maintain a significant performance advantage over off-the-shelf architecture, assuming everything else stays the same. I would put that crossover closer to the time we see volume production of 16-core microprocessor chips: 2015? But that assumes nothing else changes. Big assumption.