Data centers should get greener and smaller

The data center of the future has gotten a bit dirty. Quite a lot of muck, actually. So much that a worker unpacks the high-pressure cleaner and hoses it down. The shower leaves the data store unmoved. It is waterproof, after all.

“Natick” is the name of the data center of the future – at least if Microsoft's engineers are to be believed. Two years ago, the software company sank the white steel tank, packed with 864 servers, into the North Sea. This summer it brought Natick back to the surface, covered in algae, small crabs and sea anemones. Inside, the data center was unimpressed by any of this. It did, says Microsoft, exactly what it was supposed to do: compute, and do so more economically and efficiently than its counterparts on land.

The software company is not alone in these research efforts. Other digital companies are also working to make their data centers fit for the future. They are no longer supposed to be cost drivers, energy guzzlers and bottlenecks. Instead, they are to become smaller, greener and more decentralized. And that is urgently needed if the dream of a thoroughly digital world is not to come to an abrupt end.

After all, nothing works without data centers these days: the mostly inconspicuous buildings are the backbone of the Internet. This is where the images that people upload to Instagram are stored. This is where Netflix videos are streamed from, where Google searches are processed, Amazon orders handled and the battles of the computer game “Fortnite” fought. More and more computing happens here – for all those processes that have been outsourced to the nebulous “cloud”, that global, invisible computer. In short: without the estimated eight million data centers around the world, no smartphone would work and no piece of digital information would reach its recipient.

But data centers are also huge power guzzlers. Worldwide, they drew a good 200 terawatt hours from the grid in 2018 – around one percent of global electricity consumption. That is the conclusion Eric Masanet of Northwestern University reaches in a recent study that the engineer and his colleagues published in the journal Science. It could have been worse: since 2010, the global computing load in data centers has increased sixfold, Internet traffic has grown by a factor of ten, and storage capacity by a factor of 25. Yet according to Masanet, power consumption rose by only six percent over the same period. Economical processors and ever more fully utilized servers made this possible.
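A rough back-of-the-envelope sketch makes the efficiency gain concrete. The totals and growth factors are taken from the paragraph above; the arithmetic, written here in Python purely for illustration, is ours:

```python
# Back-of-the-envelope check of the figures above.
# All inputs come from the article; the rest is simple arithmetic.

power_2018_twh = 200          # data-center electricity use in 2018 (TWh)
global_share = 0.01           # roughly one percent of global consumption

compute_growth = 6            # computing load since 2010: x6
power_growth = 1.06           # power consumption since 2010: +6 percent

# Implied change in energy needed per unit of computing work:
energy_per_unit = power_growth / compute_growth
print(f"Energy per unit of computing: {energy_per_unit:.2f}x the 2010 level "
      f"(an efficiency gain of roughly {(1 - energy_per_unit) * 100:.0f} percent)")

# Implied global electricity consumption if 200 TWh is about one percent:
print(f"Implied global consumption: about {power_2018_twh / global_share:,.0f} TWh")
```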

But can it go on like this? In 2023, the number of Internet users worldwide will exceed the five billion mark for the first time, estimates the network equipment provider Cisco. Two thirds of all people will then be able to store their cat pictures online. The amount of data will also grow rapidly, by around 60 percent per year, according to the consulting firm IDC. On that basis, 175 zettabytes – 175 quadrillion megabytes – could be reached by the middle of the decade. IDC estimates that almost half of that data will reside in the cloud, and thus in data centers. Can all of this happen without energy consumption going through the roof?

Not with today's technology. So far, data centers have been rather inhospitable places. Anyone who gets the chance to visit such a temple of data stands amid rows of large cabinets, each stacked to the top with servers, none of them bigger than a half-height kitchen cutlery drawer. Diodes flash; it is loud, cold, dry and drafty.

Servers run hot as soon as they do complex calculations – much like the old laptop when it plays videos for hours

There is a reason for the uninviting environment: servers run hot as soon as they perform complex calculations – much like the old laptop at home on the sofa when it has to play videos for hours. To dissipate the heat, most data centers so far have blown cold air into the server room through holes in the floor. The airflow is directed over the processors and extracted again at the ceiling, by then noticeably warmer from the computers' heat. The principle works quite well for server cabinets drawing up to 20,000 watts.

In the future, however, engineers expect 100,000 watts per cabinet or more. Dissipating such amounts of heat with air alone would be extremely inefficient and expensive. The power needed for cooling, which in today's data centers accounts for between ten and 20 percent of total energy consumption, would rise massively. Data centers are therefore increasingly switching to water cooling.
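A simple heat-balance sketch shows why air cooling hits a wall at these power levels. The rack powers are the article's; the assumption that the cooling air may warm by about 12 degrees on its way through the cabinet is ours:

```python
# Rough estimate of the airflow needed to carry away a rack's heat.
# Assumed values (not from the article): the air warms by 12 K, density 1.2 kg/m^3.
CP_AIR = 1005       # specific heat of air, J/(kg*K)
RHO_AIR = 1.2       # density of air, kg/m^3
DELTA_T = 12        # assumed temperature rise of the cooling air, K

def airflow_m3_per_s(rack_power_w: float) -> float:
    """Volume of air per second needed to absorb rack_power_w of heat."""
    mass_flow = rack_power_w / (CP_AIR * DELTA_T)   # kg/s
    return mass_flow / RHO_AIR                      # m^3/s

for power in (20_000, 100_000):
    print(f"{power/1000:.0f} kW rack: about {airflow_m3_per_s(power):.1f} m^3 of air per second")
```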

Microsoft is taking a different approach. Instead of pumping water through its servers, the software company wants to put its servers into the water. In the white computing cylinder called Natick, which was sunk in June 2018 off the Scottish Orkney Islands at a depth of 35 meters, fresh water from an internal, closed cooling circuit is fed to the processors. The heated water flows through pipes to a heat exchanger, which transfers the energy to the sea – with no risk of a water bill. A similar system keeps the inside of submarines cool.
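For comparison with the airflow estimate above, the same heat-balance formula shows how little water such a closed loop needs to carry away a load on the order of the 240 kilowatts mentioned below. The 10-degree temperature rise of the loop water is an assumed value, not a figure from Microsoft:

```python
# How much water must circulate in a closed cooling loop to carry away ~240 kW?
# The 10 K temperature rise across the servers is an assumption for illustration.
CP_WATER = 4186       # specific heat of water, J/(kg*K)
HEAT_LOAD_W = 240_000 # heat load of the submerged data center (see below)
DELTA_T = 10          # assumed temperature rise of the loop water, K

flow_kg_per_s = HEAT_LOAD_W / (CP_WATER * DELTA_T)
print(f"Required flow: about {flow_kg_per_s:.1f} kg of water per second")
# Roughly 6 liters per second, versus several cubic meters of air per rack above,
# which is why water is the far more compact heat carrier.
```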

The biggest concern in the run-up to Natick was that algae or other sea creatures would settle on the cooling fins of the 14-meter steel tank and impede the heat exchange, says project manager Ben Cutler on the Microsoft website. The engineers therefore experimented with different coatings; even the use of sound and ultraviolet light was considered to drive marine life away. In the end, an alloy of copper and nickel prevailed. The material conducts heat well and at the same time resists the growth of marine organisms, but is somewhat more expensive than conventional heat exchangers.

Fears that the surrounding water would heat up noticeably from the power of the submerged data center – 240 kilowatts, after all – apparently did not materialize. A few meters from the steel cylinder, the measured temperatures were only a few thousandths of a degree Celsius higher than before, writes project manager Cutler in the trade journal IEEE Spectrum. However, the measurement data have not yet been published in independent peer-reviewed journals. It is also unclear what effects huge server farms composed of many individual computing cylinders would have on the marine environment.

For Stockholm's municipal utilities, by contrast, it can't get hot enough. The Swedes are going in the opposite direction: they want to use the waste heat from data centers to heat homes. Water from the cooling systems, at up to 85 degrees Celsius, is fed into the city's existing district heating network. According to the Stockholm engineers, ten megawatts are enough to heat 20,000 apartments. For comparison: a modern large data center, such as those operated by Facebook, reaches 120 megawatts. By 2035, ten percent of the city of Stockholm is to be heated with waste heat from data centers.
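A quick sanity check of those figures is simple division; the assumption that most of a data center's electrical input could be recovered as heat is ours, not the utility's:

```python
# Rough arithmetic on the district-heating figures above.
heat_power_mw = 10          # heat output said to warm 20,000 apartments (MW)
apartments = 20_000
datacenter_mw = 120         # electrical power of a large data center (MW)

per_apartment_kw = heat_power_mw * 1000 / apartments
print(f"Average heating power per apartment: {per_apartment_kw:.1f} kW")

# If most of the electrical input ended up as recoverable heat (an assumption),
# a single 120 MW data center could in principle supply on the order of:
print(f"Apartments per large data center: about {datacenter_mw * 1000 / per_apartment_kw:,.0f}")
```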

Nordic countries are already very popular with data center operators: the climate is frosty, which cuts the cost of cooling, and the electricity is cheap (or, as in Sweden, heavily subsidized) and mostly comes from renewable sources. Facebook, for example, has built a huge data center in Luleå, Sweden, right next to a hydropower plant. The power for the Natick cylinder off the Orkney Islands also comes from wind, sun and waves. According to Microsoft, this shows that a data center can be run on a power mix previously considered “unreliable”.

Based on the weather forecast, the algorithm predicts the hours in which a particularly large amount of green electricity can be expected

Unreliable, but above all impractical: all the big digital corporations claim to run on electricity from renewable sources. In most cases, however, the companies buy green certificates on the global market while the electricity itself comes from the nearest coal-fired power plant. To become greener locally as well, Google has recently been experimenting with a new algorithm in its data centers: based on the weather forecast for the coming day, it predicts the hours in which a particularly large amount of renewable electricity can be expected and schedules non-urgent computing tasks precisely in those periods. As project manager Ana Radovanovic writes on the Internet giant's blog, this includes processing videos and training the company's own translation software.
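The principle can be sketched in a few lines of code. This is not Google's actual scheduler; the forecast values, the job list and the per-hour capacity are invented for illustration, and a real system would also respect deadlines and server capacity:

```python
# Minimal sketch of carbon-aware scheduling: shift deferrable jobs into the
# hours with the highest forecast share of carbon-free electricity.
# All numbers below are invented for illustration.

green_forecast = {          # hour of day -> forecast share of carbon-free power
    0: 0.35, 3: 0.30, 6: 0.45, 9: 0.70,
    12: 0.80, 15: 0.75, 18: 0.50, 21: 0.40,
}

deferrable_jobs = ["video processing", "translation-model training", "batch analytics"]
slots_per_hour = 2          # assumed capacity: how many jobs fit into one hour

# Greenest hours first, then fill them with the flexible workload.
schedule = {}
hours_by_greenness = sorted(green_forecast, key=green_forecast.get, reverse=True)
job_queue = list(deferrable_jobs)
for hour in hours_by_greenness:
    if not job_queue:
        break
    schedule[hour] = [job_queue.pop(0) for _ in range(min(slots_per_hour, len(job_queue)))]

for hour in sorted(schedule):
    print(f"{hour:02d}:00 (green share {green_forecast[hour]:.0%}): {', '.join(schedule[hour])}")
```

Even this greedy version captures the core idea: flexible work migrates toward the hours when the grid is forecast to be cleanest, while time-critical tasks stay untouched.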

“The first results show that the climate-friendly load shifting works,” says Radovanovic, though without giving concrete figures on CO2 savings. The potential is there in any case: by Google's own estimate, only about two thirds of the company's computations have so far run on green electricity. Artificial intelligence is also supposed to help match the cooling systems more closely to predicted computing demand. Cooling the server rooms three degrees less theoretically cuts energy costs by a quarter; in practice, Google claims to have reduced electricity consumption by 30 percent this way.

The problem: up in the north, where it is cool and the electricity is clean, hardly anyone needs all that data. The metropolitan areas are elsewhere. With distance, however, latency grows – the term computer scientists use for the delay in retrieving information. If a data center is 100 kilometers away, it takes a thousandth of a second before it can react to a click. If 5,000 kilometers lie between computer and server, 50 thousandths of a second pass. That is negligible when streaming a movie. But if word processing is moved from the local machine to the cloud, which requires constant interaction with the server, high latency becomes noticeable.
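Those figures follow directly from the signal speed in optical fiber, which as a rule of thumb is about two thirds of the speed of light; that factor is a common approximation, not a value from the article:

```python
# Best-case round-trip latency from distance alone,
# ignoring routing, queuing and processing delays.
SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 0.67          # rule of thumb: light in fiber travels at ~2/3 c

def round_trip_ms(distance_km: float) -> float:
    """Round-trip time in milliseconds for a given one-way distance."""
    return 2 * distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000

for d in (100, 5000):
    print(f"{d} km: about {round_trip_ms(d):.1f} ms round trip")
```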

The trend is therefore towards small, decentralized data centers right on the doorstep. Natick is meant to contribute to this as well: more than half of all people live less than 200 kilometers from a coast, according to Microsoft. Data centers sunk in the sea – efficient, quickly reachable and free of high property costs – could therefore be a good alternative.

But only if a diver doesn't have to drop by for repairs every few days. Data centers like Natick – named after a town in the US state of Massachusetts – therefore operate autonomously, for years, until the end of their planned service life. Off the Orkney Islands, this apparently worked well. According to Microsoft, some servers did fail during the test run, but overall the failure rate was only one eighth that of a comparable data center on land.

Project manager Cutler attributes this to the dry nitrogen atmosphere in the hermetically sealed cylinder, which prevented corrosion and temperature fluctuations. And he points out that no technicians shuffled through the data center, accidentally bumping into servers, yanking out cables or causing other chaos.

However, that could also be achieved without grimy steel cylinders covered in algae and crustaceans: with autonomous, completely maintenance-free data centers on land. Microsoft, Google & Co. are already working on it.


Chip manufacturer: Home office brings Intel sales surge

The expansion of data centers during the corona crisis gave the chip giant Intel a strong boost in the past quarter. Consolidated sales increased 20 percent year over year to $19.7 billion.

The data center business soared 43 percent to $7.1 billion. The corona crisis had made it necessary to expand network capacities because of the shift to working from home and the increased use of streaming services. That demand for chips is now easing, said Intel's chief financial officer George Davis.

At the same time, more notebooks for working from home have been bought in recent months, and Intel benefited with a 7 percent increase in sales to $9.5 billion in its PC chip division. However, the group had to postpone the introduction of a new chip generation by another six months; Intel shares lost more than 10 percent in after-hours trading.

Processors with structure widths of 7 nanometers are now not expected to reach the first computers until the end of 2022, said Intel CEO Bob Swan – a year later than originally targeted. The smaller the structure widths, the more processors fit on a semiconductor wafer during production; in addition, the chips work more efficiently and consume less energy.
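A deliberately simplified model illustrates the economics. The die size and the assumption that chip area shrinks with the square of the structure width are purely illustrative; real scaling is far messier:

```python
# First-order sketch: how many dies fit on a 300 mm wafer if a design that
# needs 150 mm^2 at 10 nm shrank roughly with the square of the feature size.
# This only illustrates the direction of the effect, not real process scaling.
import math

WAFER_DIAMETER_MM = 300
die_area_10nm_mm2 = 150                       # assumed die size at 10 nm

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2

for node_nm in (10, 7):
    die_area = die_area_10nm_mm2 * (node_nm / 10) ** 2
    dies = int(wafer_area / die_area)         # ignores edge losses and defects
    print(f"{node_nm} nm: ~{die_area:.0f} mm^2 per die, roughly {dies} dies per wafer")
```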

Error rate too high

The problem with Intel's 7-nanometer production is that it yields too many unusable chips. Low error rates are essential for profitable chip manufacturing. Intel says it has now found the causes of the problem and is eliminating them.

At Intel, the transition to 10-nanometer technology, on which the company is now focusing for the time being, had already been delayed. The smaller competitor AMD and its manufacturing partners are already producing 7-nanometer chips. Following Intel's quarterly report, AMD shares rose by almost 8 percent. Unlike AMD and various other chip companies, Intel relies on its own production rather than outsourcing it to specialized providers.

The renewed delay at Intel also sheds new light on Apple's recent decision to replace Intel processors in its Mac computers with chips of its own design. In the past, Apple repeatedly had to slow the renewal of its model range because the required Intel processors were not available in time.

“We see a world in which everything basically turns into a computer.”

Despite the problems, Swan was confident about the future. Beyond main processors, he emphasized, more and more Intel semiconductors are finding their way into all kinds of devices. The market is big: “We see a world in which everything basically turns into a computer.” And Intel is geared towards growth.

An important building block for this is Mobileye, an acquisition that supplies automakers with driver assistance systems and is working on technology for autonomous driving. In the past quarter, however, Mobileye's sales dropped 27 percent to $146 million due to weakening car sales.

The bottom line: Intel posted a quarterly profit of $5.1 billion – 22 percent more than a year earlier. However, its sales forecast for the rest of the year fell short of analysts' expectations.


RI2208-LCS: Thomas-Krenn presents standard servers with water cooling

Thomas-Krenn has introduced its first standard server with water cooling under the name RI2208-LCS (Liquid Cooled Server). It is a variant of the universal RI2208 system with a Supermicro mainboard featuring LGA3647 sockets and Intel's C622 chipset. Users can equip the two Xeon SP CPUs with up to 1 TB of RAM, and the 2U system also offers space for up to eight drives.

The water cooling comes from Cloud & Heat and covers, among other things, the processors, chipset, voltage converters and the RAID controller. The server also has coolant connections for installation in a micro data center from the Dresden-based company – a 19-inch rack that contains all the components needed to run the water cooling and provides management tools for administrators.

In its announcement of the system, Thomas-Krenn promises that the RI2208-LCS consumes significantly less power than the conventionally fan-cooled RI2208. Customers can also put the waste heat to use, for example for building heating, and the server is said to produce less noise and attract less dust.

Details on the specifications of the RI2208-LCS can be found on the product page. The manufacturer does not specify a fixed price for the server.

