Everyone seems to be playing the "how are we going to program for lots of cores" game since Intel announced their experimental 80-core processor.
Maybe we can go forward by looking backward:
Jonathan LaCour on 2007-02-20
Stelios Sfakianakis on 2007-02-21
You should read Jonathan's comment.
joe on 2007-02-21
Jonathan LaCour on 2007-02-21
Ah, the "throw more hardware at it" approach. That is indeed looking back; I remember in the '90s when one of my employers scaled a Web application by deploying a Sun E4000 for every 8 concurrent connections they needed to support.
People with stupid amounts of money notwithstanding, being able to support 80 concurrent connections* is not worth the money one of these beasts would cost, especially when you'd have to deploy them in pairs for redundancy.
The other concern is memory; processes will have more overhead, and that adds up pretty quickly. Sure, you can pile more on, but again, that's cash out of your pocket.
Erlang is certainly cool, but I join others in the hope that Python will get better at concurrency over time.
* Assuming you're using Apache with the worker MPM; the event MPM isn't ready for prime time, and at the current rate may never be. Lighttpd shows more promise here.
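(For what it's worth, a minimal sketch, not from the thread itself, of the process-per-worker model under discussion, using Python's stdlib `multiprocessing` module. Each worker is a separate OS process, which sidesteps the GIL and can use multiple cores, but carries exactly the per-process memory overhead joe mentions above. The worker count and the `square` task are arbitrary placeholders.)

```python
# Hypothetical sketch: process-based parallelism in Python via the
# stdlib multiprocessing module. Each worker in the Pool is a separate
# OS process, so CPU-bound work can run on multiple cores, at the cost
# of per-process memory overhead.
from multiprocessing import Pool

def square(n):
    # A stand-in for real CPU-bound work (e.g. a request handler).
    return n * n

if __name__ == "__main__":
    # 4 workers is an arbitrary choice; in practice you'd size the
    # pool to the number of cores available.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The trade-off is the one in the comment: forking more processes buys concurrency with transistors and RAM rather than with a smarter in-process model, which is what Erlang-style lightweight processes aim to avoid.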
Mark Nottingham on 2007-02-22
We've hit the wall on single-core speed, even as Moore's Law keeps delivering transistors. The only option at this point is to scale out, with the choices spread between 80 separate machines and an 80-core processor; both ends of that spectrum count as "throw more hardware at it" if you measure hardware in transistors. Your comment confuses me, since you seem to be impugning the "throw more hardware" approach. Do you know a way to scale without throwing more hardware (transistors) at the problem?
joe on 2007-02-22
Mark Nottingham on 2007-02-23
© 2002-14 Joe Gregorio