Titan Supercomputer



Winter_Wolf
2012-10-30, 06:00 PM
Good gods, that is a lot of computer (http://www.tomshardware.com/news/oak-ridge-ORNL-nvidia-titan,18798.html).

Don't think it would fit in my room, though. :smalltongue: Too bad the article neglected to mention what the Dept. of Energy is going to do with their new supercomputer.

TSGames
2012-10-30, 07:08 PM
:smalltongue: Too bad the article neglected to mention what the Dept. of Energy is going to do with their new supercomputer.
With that much computing power the answer is: whatever they want.

Gwyn chan 'r Gwyll
2012-10-30, 07:09 PM
I am proud to say I understood almost not a single word in that article!

Wyntonian
2012-10-30, 08:02 PM
Too bad the article neglected to mention what the Dept. of Energy is going to do with their new supercomputer.

Play Team Fortress? Hack all of the internet at once? Create an algorithm that adds third items to all their lists?

Pokonic
2012-10-30, 08:07 PM
Ooooh...


We could probably get the forums to work with searching if they were moved onto that beauty...:smallbiggrin:

Ravens_cry
2012-10-30, 08:13 PM
<insert Crysis joke here/>

Dr.Epic
2012-10-30, 09:08 PM
Yeah, but it still doesn't compare to...

http://www.hrwiki.org/w/images/f/f8/Strong_Bad_Email_119.png

the Lappy 486

Traab
2012-10-30, 09:13 PM
I had a question about a part of that article.


That translates to 299,008 CPU cores,

If they can link that many cores together, then why are/were quad core desktop computers considered so awesome? Or is it not that they are linked together, but that this computer can do 299k things at once or something?

Eldest
2012-10-30, 11:40 PM
I had a question about a part of that article.



If they can link that many cores together, then why are/were quad core desktop computers considered so awesome? Or is it not that they are linked together, but that this computer can do 299k things at once or something?

That's why it's a supercomputer, and your desktop is just a plain old computer. But yes, running parallel processes is the big advantage of supercomputers over regular computers, in my understanding.

the_druid_droid
2012-10-30, 11:55 PM
Good gods, that is a lot of computer (http://www.tomshardware.com/news/oak-ridge-ORNL-nvidia-titan,18798.html).

Don't think it would fit in my room, though. :smalltongue: Too bad the article neglected to mention what the Dept. of Energy is going to do with their new supercomputer.

Given it's DOE-funded, I'm guessing a big focus will be on energy storage and generation materials, possibly with some climate work and radiation modelling; particle physics simulations or data collation are also a possibility. In short, it'll be for whatever gets the computing time grants funded.

I also read recently that there's a new supercomputer online at the University of Wyoming, called Yellowstone, that will be the first of its kind to be solely dedicated to geoscience research.

EDIT: Oh, it's specifically ORNL, which means it will almost certainly see a lot of work in the nuclear sciences.

tyckspoon
2012-10-31, 12:16 AM
I had a question about a part of that article.



If they can link that many cores together, then why are/were quad core desktop computers considered so awesome? Or is it not that they are linked together, but that this computer can do 299k things at once or something?

Multi-core CPUs brought some of the power of server/supercomputer clusters to the standard home/business user at an affordable and practical price, and greatly multiplied the power available to those servers and supercomputers that were already using multiple physical chips. (For sake of reference: in the Bad Old Days of single-core processors, the cluster in the article would only have had 18,688 computing units available to it... and that's not counting the extra power it now has thanks to the advancements in using graphics processors for general-purpose computing.)

Mikhailangelo
2012-10-31, 02:04 AM
With that much computing power the answer is: whatever they want.

Haha, it's funny, because people once said the same about computers with 32MB hard drives!

BladeofObliviom
2012-10-31, 02:19 AM
Yeah, but it still doesn't compare to...

http://www.hrwiki.org/w/images/f/f8/Strong_Bad_Email_119.png

the Lappy 486

This is the best reference. Just so you know. :smallwink::smallbiggrin:

OracleofWuffing
2012-10-31, 02:22 AM
Too bad the article neglected to mention what the Dept. of Energy is going to do with their new supercomputer.
My bet's on "Wait until the people stop looking, and hit the "roll around in porn and malware" button." :smallbiggrin:

factotum
2012-10-31, 02:36 AM
That's why it's a supercomputer, and your desktop is just a plain old computer. But yes, running parallel processes is the big advantage of supercomputers over regular computers, in my understanding.

It would be almost completely pointless having that many cores in a general-purpose desktop computer, mind you, because the sort of applications the typical person runs aren't written to take advantage of them. It's really hard to write a program that will make efficient use of nearly 300,000 parallel threads of execution, and if you're leaving some of the cores idle, they might as well not be there!
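
To put a rough number on the idle-core point, here's a quick back-of-the-envelope sketch of Amdahl's law in Python; the 5% serial fraction is made up purely for illustration, not a figure for any real code:

# Amdahl's law: even a small serial fraction caps the speedup you can
# get from adding cores. The 5% serial figure below is made up.
def amdahl_speedup(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (4, 16, 1024, 299_008):
    print(f"{cores:>7} cores -> {amdahl_speedup(cores, 0.05):6.1f}x speedup")

# With 5% serial work, even ~300k cores top out below a 20x speedup.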

Gravitron5000
2012-10-31, 02:35 PM
Good gods, that is a lot of computer (http://www.tomshardware.com/news/oak-ridge-ORNL-nvidia-titan,18798.html).

Don't think it would fit in my room, though. :smalltongue: Too bad the article neglected to mention what the Dept. of Energy is going to do with their new supercomputer.

Here's an article from Oak Ridge National Laboratory (http://www.ornl.gov/info/press_releases/get_press_release.cfm?ReleaseNumber=mr20121029-00)

Here's the second paragraph:


Titan, which is supported by the Department of Energy, will provide unprecedented computing power for research in energy, climate change, efficient engines, materials and other disciplines and pave the way for a wide range of achievements in science and technology.

Also so the scientists at Oak Ridge can brag about their fancy new supercomputer.

Dr.Epic
2012-10-31, 02:41 PM
This is the best reference. Just so you know. :smallwink::smallbiggrin:

Nah. Here's the best reference ever seen, done, or eaten. (http://www.hrwiki.org/wiki/Limozeen:_%22but_they%27re_in_space!%22)

scurv
2012-10-31, 04:40 PM
For bonus points, take a guess at how long that beast would take to crack a 10-digit alphanumeric password, assuming numbers, letters and 10 special chars.

TSGames
2012-10-31, 08:06 PM
For bonus points, take a guess at how long that beast would take to crack a 10-digit alphanumeric password, assuming numbers, letters and 10 special chars.
Let's see....
10 + 10 + 26 + 26 = 72 possible values per character
72^10 / 20e12 ≈ 187,195 s = maximum time
187,195 s / 2 ≈ 93,598 s ≈ 1,560 minutes ≈ 26 hours average time

So in ~26 hours it can crack upwards of 95% of passwords in use today. But I have the strangest feeling it may be used for other purposes or at least cracking stronger passwords...
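
If anyone wants to play with the numbers, here's the same back-of-the-envelope estimate as a few lines of Python; the 20e12 guesses per second is just the assumption above, not a measured figure for Titan:

# Same estimate as above; 20e12 guesses/second is an assumption.
alphabet = 10 + 10 + 26 + 26        # digits + 10 specials + lower + upper = 72
length = 10
guesses_per_second = 20e12

keyspace = alphabet ** length        # 72^10, about 3.74e18 combinations
worst_case_s = keyspace / guesses_per_second
average_s = worst_case_s / 2

print(f"worst case: {worst_case_s:,.0f} seconds")
print(f"average:    {average_s / 3600:.1f} hours")    # ~26 hours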


Also, China is already building a bigger one (http://www.computerworld.com/s/article/9233097/China_building_a_100_petaflop_supercomputer?taxonomyId=13)...

Traab
2012-10-31, 08:42 PM
Nah. Here's the best reference ever seen, done, or eaten. (http://www.hrwiki.org/wiki/Limozeen:_%22but_they%27re_in_space!%22)

WRONG! The greatest reference is THIS! (https://www.youtube.com/watch?v=x2rS-ha8DbE)

Trixie
2012-11-02, 10:50 AM
As for the question everyone is asking: isn't the DoE in charge of simulating the behaviour of nuclear materials instead of testing them, which would necessitate the computing power? :smallconfused:


If they can link that many cores together, then why are/were quad core desktop computers considered so awesome? Or is it not that they are linked together, but that this computer can do 299k things at once or something?

If you want to pay $100 million for a desktop computer, sure, go ahead :smallamused:

Also, it's most probably 300K different things at once; parallel programming is hard.

Traab
2012-11-02, 01:59 PM
Now, how do you even set things up to run 300k tasks simultaneously? I mean, it seems like it would take an awful lot of setup time to tell the computer to do this this this this that these and a little of those all at once. In other words, sure it could do 300k math problems at once, but how do you input 300k math problems for it to do?

Gravitron5000
2012-11-02, 02:37 PM
Now, how do you even set things up to run 300k tasks simultaneously? I mean, it seems like it would take an awful lot of setup time to tell the computer to do this this this this that these and a little of those all at once. In other words, sure it could do 300k math problems at once, but how do you input 300k math problems for it to do?

Most of the math problems that this thing will be asked will have to be broken down into multiple steps, and many of those steps can be run at the same time without affecting the results of other steps. Having your program or simulation figure out how to split up the tasks to take advantage of this is the goal, but writing a program that can efficiently do this is no small task. Also, with a beast like this, it is likely that there will be a whole lot of people using it at the same time, running their separate programs, while the operating system running on the computer keeps track of who is running what and, to a certain extent, keeps the users from messing with one another's work.

Trixie
2012-11-02, 02:41 PM
Now, how do you even set things up to run 300k tasks simultaneously? I mean, it seems like it would take an awful lot of setup time to tell the computer to do this this this this that these and a little of those all at once. In other words, sure it could do 300k math problems at once, but how do you input 300k math problems for it to do?

Eh, they probably have one 'computer' per 64-256 cards sending data and collecting results? Getting 128 cards to do 128 things is easy; it's getting them to do one thing in cooperation that's very hard. This also makes the range of problems this computer can tackle much smaller.

In fact, your computer is sending data to 2-8 cores on your CPU and to dozens or hundreds of computing units on your GPU at this very moment; it's nothing special.

Dr.Epic
2012-11-02, 03:19 PM
WRONG! The greatest reference is THIS! (https://www.youtube.com/watch?v=x2rS-ha8DbE)

PFF! That's different. And lame. And differently lame.:smallwink::smalltongue:

the_druid_droid
2012-11-03, 08:44 PM
Now, how do you even set things up to run 300k tasks simultaneously? I mean, it seems like it would take an awful lot of setup time to tell the computer to do this this this this that these and a little of those all at once. In other words, sure it could do 300k math problems at once, but how do you input 300k math problems for it to do?

There are actually a couple of ways this can work. As folks have already mentioned, it's doubtful that any one program will use all the cores available. Most likely, an individual user will take time from a few hundred to a few thousand cores, depending on their job.

Second, how the work is divided up depends on the problem. In general, there are two approaches - data parallelism and task parallelism. In the first, you have lots of items you want to operate on, and you split those items between the cores and have them perform the same operation on each of the bits (for example, dividing a million-item list into 1k chunks and then sorting each chunk and recombining). In the second, each core would have the same set of data, but would do differing things to it.
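
As a toy illustration of the data-parallel case (just a sketch using Python's multiprocessing module, nothing to do with how Titan is actually programmed), you can split a big list into chunks, sort each chunk on a separate worker, and merge the results:

# Toy data parallelism: split a list into chunks, sort each chunk on a
# separate worker process, then merge the sorted chunks back together.
import heapq
import random
from multiprocessing import Pool

def sort_chunk(chunk):
    return sorted(chunk)

if __name__ == "__main__":
    data = [random.random() for _ in range(1_000_000)]
    chunks = [data[i:i + 1000] for i in range(0, len(data), 1000)]

    with Pool() as pool:                        # one worker per local core
        sorted_chunks = pool.map(sort_chunk, chunks)

    result = list(heapq.merge(*sorted_chunks))  # recombine
    assert result == sorted(data)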

In my own experience, scientific applications tend to fall more into the data-parallelism category, where you have large matrices or many data points to process for example.

As far as actually doing the division, several methods exist, one of the most popular of which (as far as I know) is MPI, or the Message Passing Interface. It works in C, C++, and FORTRAN (and possibly some others) and provides an interface that tells the controller how to split and recombine the data between processes running on different cores, without requiring major language modifications.
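
For a flavour of what that looks like, here's a minimal sketch using mpi4py, the Python bindings for MPI (the C/C++/Fortran versions follow the same scatter/gather pattern); it's only an illustration of the idea, not anything Titan actually runs:

# Minimal MPI sketch: the root rank scatters chunks of data, every rank
# works on its own chunk, and the partial results are reduced back.
# Run with something like:  mpirun -n 4 python sum_squares.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    data = list(range(1_000_000))
    chunks = [data[i::size] for i in range(size)]   # one chunk per rank
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)       # each rank receives its chunk
partial = sum(x * x for x in chunk)        # local work
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print("sum of squares:", total)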

nedz
2012-11-04, 12:57 AM
How about this (http://en.wikipedia.org/wiki/Difference_engine) for a reference?

factotum
2012-11-04, 02:45 AM
In fact, your computer is sending data to 2-8 cores on your CPU and to dozens or hundreds of computing units on your GPU at this very moment; it's nothing special.

But your computer probably doesn't spend most of its time at 100% utilisation. When you're spending the money to put together a machine with the sort of computing power we're talking in this thread, if it spends most of its life half idle then you just wasted your money--you could have built one half as powerful that would have done the job just as effectively. Therefore, making sure you get the best possible utilisation out of your 300k cores is a critical factor that isn't something the programmers of desktop machines need to worry themselves over!

Neftren
2012-11-04, 02:40 PM
Eh, they probably have one 'computer' per 64-256 cards sending data and collecting results? Getting 128 cards to do 128 things is easy; it's getting them to do one thing in cooperation that's very hard. This also makes the range of problems this computer can tackle much smaller.

In fact, your computer is sending data to 2-8 cores on your CPU and to dozens or hundreds of computing units on your GPU at this very moment; it's nothing special.

It's actually easier than it sounds to write parallel programs. You can typically put something together using OpenCL, Nvidia CUDA, or cluster existing programs together with MPI in some fashion. Getting one card, two cards, or 128 cards to do the same thing isn't all that different once you have driver-level software/firmware to handle the scheduling aspect.

The problem is that your research interest has to be suitably parallel. This heavily favors matrix-based computations, as an example.
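
To give a sense of why matrix work parallelises so nicely (a toy sketch, unrelated to Titan's real codes): every row of a matrix-vector product can be computed independently, so the rows can simply be farmed out to however many workers you have.

# Toy sketch: rows of a matrix-vector product are independent, so they
# can be handed out to separate worker processes.
import random
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def row_dot(vector, row):
    return sum(a * b for a, b in zip(row, vector))

if __name__ == "__main__":
    n = 512
    matrix = [[random.random() for _ in range(n)] for _ in range(n)]
    vector = [random.random() for _ in range(n)]

    with ProcessPoolExecutor() as pool:
        result = list(pool.map(partial(row_dot, vector), matrix))

    print(len(result), "rows computed in parallel")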





factotum, you don't always want to run at 100% CPU load. Electricity costs really add up when you're running a supercomputer (or even just a data center).

the_druid_droid
2012-11-04, 03:02 PM
The problem is that your research interest has to be suitably parallel. This heavily favors matrix-based computations, as an example.

Yeah, this is quite important. Also, in addition to parallel matrix operations, a lot of simulations these days for materials science, physics and chemistry make use of molecular dynamics approaches, which take the alternate path of assigning groups of atoms/molecules to separate processors for calculation.

I'm not personally sure about the climate work, although depending on their model they may have quite a lot of vector-type calculations.

invinible
2012-11-04, 07:01 PM
I noticed that one of the rules for using the Titan supercomputer is that whatever you are doing needs to require at least 1/5th of the system to be done.

factotum
2012-11-05, 02:32 AM
factotum, you don't always want to run at 100% CPU load. Electricity costs really add up when you're running a supercomputer (or even just a data center).

According to another article I found, this machine uses 9MW of power at full load. At the sort of prices they charge for electricity in the UK (approx 20p per kWh), that would be £1800 an hour or a little under £16 million per annum. Now, what I *can't* find is how much this machine cost to build in the first place--I suspect that number is going to be very high compared to that estimated electricity cost, though.
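
For anyone who wants to check that arithmetic, it's just this (the 9MW and 20p/kWh figures are the assumptions above, not billed rates):

# Back-of-the-envelope running cost: 9 MW at roughly 20p per kWh.
power_kw = 9_000             # 9 MW expressed in kW
price_per_kwh = 0.20         # GBP, rough UK retail figure

per_hour = power_kw * price_per_kwh     # £1,800 per hour
per_year = per_hour * 24 * 365          # a little under £16 million

print(f"£{per_hour:,.0f} per hour, £{per_year:,.0f} per year")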

That's significant, because it goes back to what I was saying earlier--if the machine spends most of its time idle you could have built a less powerful, cheaper one to do the same task.

Winter_Wolf
2012-11-05, 11:34 AM
Maybe they have an unadvertised, private source of power for it. Secret nuclear reactors in basements, people! [/tinfoil hat]

I just thought the original article I linked to was sloppy for not even giving the kinds of projects that needed that kind of power, or at least some ludicrous comparison like running 1000 copies of some kind of software that people might have heard of.

Neftren
2012-11-05, 04:55 PM
According to another article I found, this machine uses 9MW of power at full load. At the sort of prices they charge for electricity in the UK (approx 20p per kWh), that would be £1800 an hour or a little under £16 million per annum. Now, what I *can't* find is how much this machine cost to build in the first place--I suspect that number is going to be very high compared to that estimated electricity cost, though.

That's significant, because it goes back to what I was saying earlier--if the machine spends most of its time idle you could have built a less powerful, cheaper one to do the same task.

I'm going to define "idle" as "0-1% utilisation" at this point, just so we're all on the same page. Correct me if your definition of idle is different, and we can revise from there.

Anyways, back to cost-benefit analysis. Some rough Googling presents a $1.2bn (http://www.dailymail.co.uk/sciencetech/article-2005920/Japan-creates-worlds-fastest-supercomputer-fast-MILLION-desktop-PCs.html) price tag for Japan's K supercomputer, currently #2 on the TOP500 list. Peak consumption is 12.6 MW, though further down the page, it lists 9.89 MW, so let's go with the latter number as the expected figure.

They're comparable I suppose. I'd have to do some more research on Titan though.

I do have to say though, that the arguments presented thus far comparing supercomputers and regular consumer computers don't really apply. First off, any large computing center will probably be running around 60-70% load, but that's an aside (remember electricity scales exponentially). The average user might think "oh, a bigger computer means a faster computer!" when that really isn't always the case. Sure you can use a big computer to accelerate your programs, but what's of more interest is that a bigger computer allows you to solve bigger problems. Some programs only gain a limited benefit from tacking on more cores, and need an alternative approach when switching to parallel.

It's not about building a less powerful, cheaper one to do the same task. Sometimes it's about building a bigger machine that can handle a larger-sized problem, courtesy of the terabyte or more of RAM attached to the thing. Atmospheric simulations come to mind here, if you're trying to model a very large region (or the entire planet, in climate simulations).

Studoku
2012-11-08, 10:32 AM
Maybe they have an unadvertised, private source of power for it.

Row upon row of hamsters in wheels.