It is possible that we’ll see standardization of componentry around specific projects like Hadoop, although even that seems unlikely given the rampant proliferation of query, import, and other ecosystem projects. But I do not expect to see a standard stack of software for tackling generic Big Data problems, because there really aren’t many generic Big Data problems, inconvenient as that might be from a vocabulary perspective.
Some dates for perspective: Codd's relational model paper was published in 1970, and the first public release of MySQL came in 1995.
It is still very early days: the MapReduce paper was only published in 2004. And that presumes MapReduce is the one true abstraction you need, never mind Dremel or Pregel.