1
© CSC - IT Center for Science Ltd. (Tero Tuononen)
Elektroniikkayhdistys, 13 January 2009
• Green IT (in English)
• CSC's supercomputer environment (in Finnish)
• Machine room tour (by pointing and gesturing)
2
Imagine your toaster being the size of a matchbox! 50-150 W on a postage stamp
• Watts per socket stay roughly constant
• Multicore processors demand memory: capacity and bandwidth; each memory DIMM draws 5-15 W
• Sockets per rack are increasing
"Virtualization may offer significant energy savings for volume servers because these servers typically operate at an average processor utilization level of only 5 to 15 percent" (Dietrich 2007, US EPA 2007). "The typical U.S. volume server will consume anywhere from 60 to 90 percent of its maximum system power at such low utilization levels" (AMD 2006, Bodik et al. 2006, Dietrich 2007).
3
Then multiply your problem by thousands
• A standard rack cabinet occupies 0.77 m² / 1.44 m³
• HPC rack density (CPUs and RAM per rack) keeps increasing
• Enter the power! Current system cabinets draw 25-40 kW, this year 60 kW/cabinet, and vendors predict 80-100 kW racks in 2-3 years
• It becomes impossible to feed enough air through the cabinet (a wind-speed issue; a rough airflow estimate is sketched below)
• Water is ~20 times more efficient a coolant than air (in practice)
• Liquid cooling and massive 2-tonne-plus racks: machine rooms face yet another challenge, the sheer mass of the computing infrastructure
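A back-of-the-envelope sketch of the wind-speed issue; the air properties, the ~18 K temperature rise across the rack and the door area are assumptions, not slide values:

```python
# How much air a rack needs to stay cool: Q = rho * V * cp * dT, solved for V.
# Assumptions: air density ~1.2 kg/m^3, specific heat ~1005 J/(kg*K), and an
# ~18 K air temperature rise (roughly the 13-15 C supply / 30-35 C return
# figures quoted later in this deck).

RHO_AIR = 1.2       # kg/m^3
CP_AIR = 1005.0     # J/(kg*K)
DELTA_T = 18.0      # K, assumed rise across the rack

def airflow_m3_per_s(heat_kw: float) -> float:
    """Volumetric airflow needed to carry away heat_kw of sensible heat."""
    return heat_kw * 1000.0 / (RHO_AIR * CP_AIR * DELTA_T)

for rack_kw in (25, 40, 60, 100):
    flow = airflow_m3_per_s(rack_kw)
    # Face velocity if all of it passes through a ~0.8 m^2 rack door (assumed).
    velocity = flow / 0.8
    print(f"{rack_kw:>3} kW rack: {flow:4.1f} m^3/s of air "
          f"(~{velocity:.1f} m/s through a 0.8 m^2 door)")
```

With these assumptions a 100 kW rack already needs several cubic metres of air per second forced through a door-sized opening, which is why the deck turns to liquid cooling.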
4
Why has GREEN IT become an issue? Is it the price of energy we are talking about?
"At a datacenter level we estimate consumption levels in Western Europe to have exceeded 40 TWh in 2007 and this is expected to grow to more than 42 TWh in 2008 ... which translated into €4.4 billion for entire datacenters" (IDC, London, October 2, 2008)
Source: IDC, U.S. Environmental Protection Agency
5
GREEN machine rooms? Metrics and equations for machine room efficiency
• PUE = Power Usage Effectiveness (total facility power / IT power)
• DCiE = Data Center Infrastructure Efficiency = (1 / PUE) * 100%
• And maybe one more coming: DCP = Data Center Productivity (useful work / total facility power)
[Chart: data center consumption figures (TWh) and PUE* values]
Source: The Green Grid; *U.S. Environmental Protection Agency (2007)
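A minimal sketch of the three metrics above; the example power figures are illustrative only:

```python
# The slide's machine-room metrics as simple functions. Example numbers are
# made up for illustration, not measured values.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

def dcie(total_facility_kw: float, it_kw: float) -> float:
    """Data Center Infrastructure Efficiency: (1 / PUE) * 100%."""
    return 100.0 / pue(total_facility_kw, it_kw)

def dcp(useful_work: float, total_facility_kw: float) -> float:
    """Data Center Productivity: useful work per unit of facility power.
    'Useful work' has no fixed unit on the slide; pick one (e.g. MFlops)."""
    return useful_work / total_facility_kw

# Example: 760 kW drawn by the whole facility, of which 475 kW reaches IT gear.
print(f"PUE  = {pue(760, 475):.2f}")     # 1.60
print(f"DCiE = {dcie(760, 475):.1f} %")  # 62.5 %
```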
6
About machine room efficiency
Q: Why can't PUE reach 1.0 (the theoretical minimum)?
A: In order to guarantee the operational environment you need to:
• Provide a reliable power supply in the form of uninterruptible power supplies (UPS), generators, backup batteries, switchgear, cables, rails, ... In general, any electrical component has power losses and an efficiency below 100%; the more of them you put into use, the more power you lose.
• Create coolant (cool water and air), which usually requires a lot of extra energy consumed by cooling chillers, computer room air conditioning (CRAC) units, cooling towers, humidification units, pumps, direct exchange units, ...
Source: The Green Grid, EPA*
7
How to improve machine room efficiency!
Facility power
• Reduce redundancy wherever applicable (Tiers 1-4)
• State-of-the-art transformers (>98%)
• State-of-the-art UPS systems (>95%)
• State-of-the-art switchgear, power cables, rails, ...
• Variable-speed chillers, fans, pumps and CRACs
• Modular, upgradable facility approach
Facility cooling (interior)
• Do not over-cool! Tune the air and water temperatures as high as possible*
• Do not over-size your cooling gear; efficiency is worse at low usage levels
• Hot/cold aisle approach (air)
• Reduce the area and air volume to be cooled
• Liquid-cooled/closed racks (water is ~15x more efficient than air)
*ASHRAE
A rough PUE estimate built from component efficiencies like these is sketched below.
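As a rough illustration (not from the slides), the power-path losses and a cooling overhead can be chained into a PUE estimate; every component figure below is an assumption in the spirit of the bullets above:

```python
# Rough PUE estimate from a chain of power-path efficiencies plus cooling
# overhead. All numbers are illustrative assumptions, not measured values.

transformer_eff = 0.98    # state-of-the-art transformer
ups_eff = 0.95            # state-of-the-art UPS
distribution_eff = 0.99   # switchgear, cables, rails
cooling_overhead = 0.40   # assumed 0.4 W of cooling per 1 W of IT load

it_power_kw = 475.0       # IT load

# Grid power needed to push the IT load through the lossy power path.
power_path_draw = it_power_kw / (transformer_eff * ups_eff * distribution_eff)

# Cooling plant consumption, assumed proportional to the IT load.
cooling_draw = it_power_kw * cooling_overhead

total_facility_kw = power_path_draw + cooling_draw
print(f"PUE ~ {total_facility_kw / it_power_kw:.2f}")  # ~1.48 with these numbers
```

Better components and a leaner cooling plant shrink both terms, which is exactly what the bullet list above is aiming at.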
8
And improving further...
Facility cooling (exterior)
• ...access to a cold water supply, hence no need for large chillers
• District/remote cooling from the local energy company?
• Heat dissipation fed back into the district heating system, etc. (more complex?)
• Large HPC sites consider CHP plants of their own to create power and cooling
• Economizer or water-side free cooling (in a moderate or mild climate region):
• Get the cool water from a river, deep lake, sea or groundwater source and perhaps return it slightly warmer
• Cool/cold (<15 / <7 degrees Celsius) outside air (nights, winter time)
• Permafrost, ice/snow: how likely? Feasibility?
9
Green computing systems? (Green500.org)
In computing technology, "green" is measured by computational operations achieved per watt consumed, i.e. the MFlops/Watt ratio (the higher, the greener). State-of-the-art technology exceeds 530 MFlops/Watt:
• IBM PowerXCell and BlueGene systems
• Top result of 535 MFlops/Watt
Top-dog petascale systems out there; see how the difference in architectures affects power consumption:
• IBM (hybrid Cell, AMD, PowerPC) Roadrunner (2.5 MW): 445 MFlops/W
• Sun (AMD) Ranger (7 MW): 152 MFlops/W
• The hybrid system is ~3x more energy efficient than the traditional x86-based one (a quick cross-check is sketched below)
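As a quick sanity check of the figures above (not part of the slide), each system's efficiency multiplied by its power draw gives its approximate overall performance:

```python
# Cross-check of the Green500-style figures quoted above.
# performance [MFlops] = efficiency [MFlops/W] * power [W]

systems = {
    "IBM Roadrunner (hybrid)": (445.0, 2.5e6),  # MFlops/W, watts
    "Sun Ranger (x86/AMD)":    (152.0, 7.0e6),
}

for name, (mflops_per_watt, watts) in systems.items():
    pflops = mflops_per_watt * watts / 1e9   # MFlops -> PFlops
    print(f"{name}: ~{pflops:.2f} PFlops at {watts / 1e6:.1f} MW")

# Both land near one PFlops, yet Roadrunner gets there with roughly a third
# of the power: 445 / 152 ~ 2.9x better MFlops/W.
```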
10
Trends in 2011-2015: hosting a petaclass system under different scenarios (assuming 3 MW and 1 MW systems, with facility efficiencies of 1.6 and 1.25). The annual energy implied by each combination is worked out below.
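A minimal sketch of the arithmetic behind the scenarios; the electricity price used for the cost column is an assumed figure, not from the slide:

```python
# Annual energy (and an indicative cost) for the four scenario combinations.
# The 0.10 EUR/kWh electricity price is an assumption for illustration only.

HOURS_PER_YEAR = 8760
PRICE_EUR_PER_KWH = 0.10  # assumed

for it_mw in (3.0, 1.0):
    for pue in (1.6, 1.25):
        facility_mw = it_mw * pue
        gwh_per_year = facility_mw * HOURS_PER_YEAR / 1000.0
        cost_meur = gwh_per_year * PRICE_EUR_PER_KWH  # GWh * EUR/kWh = MEUR
        print(f"{it_mw:.0f} MW system, PUE {pue}: {facility_mw:.2f} MW facility, "
              f"{gwh_per_year:.1f} GWh/year, ~{cost_meur:.1f} MEUR/year")
```

The spread is large: a 3 MW system behind a PUE of 1.6 needs about 42 GWh a year, while a 1 MW system behind a PUE of 1.25 needs about 11 GWh.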
11
Some conclusions on Green IT
12
CSC's supercomputer environment: "LOUHI, Pohjan Akka", the flagship of Finnish scientific computing, photographed in the new Pohja machine room in October 2008.
13
What is Louhi and what can it do
• A massively parallel supercomputer built by a traditional supercomputer company (Cray Inc.)
• Uses ordinary processors made by AMD Inc., the same kind as home PCs (about 2,500 of them)
• Operating system is a "tuned" Linux
• Brought into production in phases from April 2007 onwards, now at full scale
• Price approx. €7M, effective lifetime approx. 4 years
• 31st in the world and 9th in Europe by computing power
• Equivalent to roughly 5,000 powerful PCs
• Theoretical computing capacity of roughly 16,000 operations per person on Earth per second (a rough derivation is sketched below)
• Main memory approx. 11 TB
• Disk system 70 TB (hundreds of hard drives)
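A rough derivation of the per-person figure; Louhi's theoretical peak (~100 teraflop/s) and the 2008 world population are assumptions not stated on the slide:

```python
# Sanity check of the "operations per person per second" figure.
# Assumptions (not on the slide): theoretical peak ~100 Tflop/s,
# 2008 world population ~6.7 billion.

peak_flops = 100e12        # assumed theoretical peak, flop/s
world_population = 6.7e9   # assumed 2008 population

print(f"~{peak_flops / world_population:,.0f} operations per person per second")
# ~15,000, in the same ballpark as the ~16,000 quoted on the slide.
```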
14
XT4 compute blade
[Diagram: blade layout showing CPU, memory and network interconnect]
• One rack contains 3 x 8 blades of 4 or 8 CPUs each, plus a fan
• Airflow 1.4 m³/s per rack
17
Louhi's physical dimensions and placement in the machine room
• Footprint 3.6 x 6 m (21.5 m²), height 2 m
• Mass: 15,000 kg
• The entire system sits on 60 x 60 cm floor tiles raised 80 cm on steel pedestals; load capacity 600 kg per pedestal, tile point-load rating 9 kN
• 2 x 10 compute racks and 2 data racks (a rough floor-loading check is sketched below)
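A minimal sketch of the floor-loading arithmetic; the rack count comes from this slide, but the number of feet per rack and the even weight distribution are assumptions:

```python
# Rough floor-loading sanity check. Feet per rack and an even weight split
# are assumptions; mass, footprint and rack count are from the slide.

total_mass_kg = 15_000
footprint_m2 = 21.5
racks = 22               # 2 x 10 compute racks + 2 data racks
feet_per_rack = 4        # assumption

print(f"average load: {total_mass_kg / footprint_m2:.0f} kg/m^2")
print(f"per rack:     {total_mass_kg / racks:.0f} kg")
print(f"per foot:     {total_mass_kg / (racks * feet_per_rack):.0f} kg "
      f"(pedestal rating 600 kg, tile point load 9 kN ~ 900 kg)")
```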
18
Principle of the power feed
• Louhi's electrical power draw is 300-520 kW, fed from two UPS-protected (10 min) distribution boards
• 72 h backup power: a 2,500 hp / 2 MW generator
• Diagram labels: 63 A feeds, 3000 A busbar weighing 100 kg per linear metre
A rough electrical cross-check of these figures is sketched below.
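As a hedged cross-check (not on the slide), the currents implied by these power figures can be estimated; the 400 V three-phase supply and the 0.95 power factor are assumptions:

```python
# Rough electrical cross-check. The 400 V three-phase supply voltage and the
# 0.95 power factor are assumptions, not slide values.
import math

V_LL = 400.0   # line-to-line voltage, volts (assumed)
PF = 0.95      # power factor (assumed)

def current_amps(power_w: float) -> float:
    """Line current of a balanced three-phase load: I = P / (sqrt(3) * V * pf)."""
    return power_w / (math.sqrt(3) * V_LL * PF)

print(f"Louhi at 520 kW: ~{current_amps(520e3):.0f} A total "
      f"(~{current_amps(520e3) / 2:.0f} A per distribution board)")
print(f"2 MW generator:  ~{current_amps(2e6):.0f} A "
      f"(same order as the 3000 A busbar on the slide)")
```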
19
475 kW of electrical power is roughly the heat output of 80 electric sauna stoves (running 24/7), and it all has to be moved away: from the hardware into the air, from the air into the water, ...
• Air side: 13-15 °C supply, 30-35 °C return, 75 m³/s in total, 1.4 m³/s per rack
• Water side: approx. 9 °C in, approx. 17 °C out, approx. 40 l/s
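A minimal sensible-heat sketch of the cooling chain; the air and water properties and the exact temperature rises are assumptions read off the figures above:

```python
# Sensible-heat cross-check of the cooling figures. Material properties and
# the exact temperature differences are assumptions, not slide values.

def air_heat_kw(flow_m3s: float, delta_t: float) -> float:
    """Heat carried by an air stream: Q = rho * V * cp * dT (rho ~1.2, cp ~1.005)."""
    return 1.2 * flow_m3s * 1.005 * delta_t

def water_heat_kw(flow_ls: float, delta_t: float) -> float:
    """Heat carried by a water stream: Q = m * cp * dT (1 l/s ~ 1 kg/s, cp ~4.19)."""
    return flow_ls * 4.19 * delta_t

print(f"one rack, 1.4 m^3/s of air warmed ~18 K: ~{air_heat_kw(1.4, 18):.0f} kW")
print(f"chilled water, 40 l/s warmed ~8 K:       ~{water_heat_kw(40, 8):.0f} kW")
# ~30 kW per rack is consistent with the cabinet powers quoted earlier, and
# ~1.3 MW on the water side lines up with the compressor chiller on the next
# slide rather than with Louhi's 475 kW alone.
```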
20
...from water to alcohol and up to the roof
• Compressor chiller, 1.3 MW
• Glycol pipes up to the roof (12th floor)
• Rooftop condensers
21
Thank you for your interest. Questions?