Big data, small compute – big compute, small data.

I saw the new Harry Potter last night (in IMAX, no less!). Loads of fun. And easier to get into than any other movie. Once in a while it pays to never have time to see movies.

That movie took an immense amount of computing horsepower. And really demonstrates, at least to me, the extent to which network workloads are really beginning to split into two.

When it’s put onto DVD, the movie will weigh in at something like 5 gigabytes. Requested by 250,000 technology executives (or more likely, their kids) on a quiet evening – for display on their new IP TVs or mobile handsets, or through their set top boxes – that’s going to put a real burden on the network. Big data (5 GB), relatively small compute (decode, check for authorization, etc.). Caches will help, but it’s a classic parallel throughput problem.
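The scale of that throughput problem is easy to sketch with a back-of-envelope calculation. The 5 GB and 250,000 figures are from above; the four-hour evening viewing window is my assumption, purely for illustration:

```python
# Back-of-envelope: aggregate demand if 250,000 viewers each pull a
# ~5 GB movie during one quiet evening.
movie_size_gb = 5          # per-copy size, from the post
viewers = 250_000          # concurrent requesters, from the post
window_hours = 4           # assumed viewing window (not from the post)

# Total data moved, in terabytes.
total_tb = movie_size_gb * viewers / 1_000

# Average sustained rate across the window, in gigabits per second
# (x8 converts bytes to bits; 3600 seconds per hour).
avg_gbps = (movie_size_gb * 8 * viewers) / (window_hours * 3600)

print(f"{total_tb:.0f} TB moved, ~{avg_gbps:.0f} Gbit/s sustained")
# → 1250 TB moved, ~694 Gbit/s sustained
```

Over a petabyte moved, and hundreds of gigabits per second sustained – none of it computationally hard, all of it bandwidth-hungry. That’s the shape of the first workload.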

Which is an entirely different workload from the one required by the banking customer I met with last week – who was worrying about Sarbanes-Oxley compliance, and about introducing provisioning technology to manage and audit risk. Running risk analytics over large data warehouses is big compute, small data (nowhere near 5 GB).

The industry keeps trying to solve both problems with the same systems. I’m not sure that’s a useful pursuit – it seems truer every day that there is no one hammer for all nails.
