Depending on the application, it may be necessary to modify the default configuration of the network adapters and of the system/chipset. This slide deck describes common tuning parameters, settings, and procedures that can improve network adapter performance. Different server and NIC vendors may recommend different values, but the general tuning approach should be similar. For the hands-on demo we will use Mellanox ConnectX adapters, and we will therefore implement the settings recommended by Mellanox.
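To give a flavor of the kind of adapter-level tuning the deck walks through, the commands below are an illustrative sketch only: the interface name (`eth2`) and the ring/coalescing values are placeholders, and the vendor's own tuning guide (e.g. Mellanox's performance tuning notes) remains the authority.

```shell
# Illustrative NIC tuning sketch (run as root; interface name and
# values are placeholders -- consult your vendor's tuning guide).

# Grow the RX/TX descriptor rings to reduce drops under bursty load.
ethtool -G eth2 rx 8192 tx 8192

# Enable adaptive RX interrupt coalescing: trades a little latency
# for lower CPU overhead at high packet rates.
ethtool -C eth2 adaptive-rx on

# Stop irqbalance so adapter interrupts can be pinned to cores on
# the NIC's NUMA node instead of being scattered across the system.
systemctl stop irqbalance
```

Because these commands change live kernel and hardware state, they should be tested on a non-production host first and made persistent via the distribution's usual mechanisms.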
Today Nimbus Data Systems announced HALO 2013, an enhanced version of the company’s award-winning storage operating system. HALO 2013 features improved analytics to gauge the performance and efficiency of Nimbus Data flash memory arrays.
With a new REST-based API, HALO 2013 gives administrators full access to all Nimbus features and statistics, facilitating storage management in large multi-vendor data centers. HALO Mobile brings these advanced monitoring features to the palm of your hand, streaming live statistics directly to iOS and Android-based smartphones and tablets.
“Nimbus Data is a pioneer in all-flash storage systems, and today’s announcement extends the first-mover advantage the company has established for itself,” says Benjamin Woo, managing director of Neuralytix, an industry analyst firm. “Nimbus Data recognizes the importance of instrumentation and integration, and providing an open API to the full features of its flash arrays will help drive down total cost of ownership.”
In this video from the GPU Technology Conference, Nvidia CEO Jen-Hsun Huang provides an update on GPU computing for the Cloud.
The NVIDIA GRID Visual Computing Appliance (VCA) is a powerful GPU-based system that runs complex applications such as those from Adobe Systems Incorporated, Autodesk and Dassault Systèmes, and sends their graphics output over the network to be displayed on a client computer. This remote GPU acceleration gives users the same rich graphics experience they would get from an expensive, dedicated PC under their desk.
Information will be the most valuable resource in the 21st century. Operating on large volumes of diverse data sources to get the right actionable insights at the right time presents new challenges and opportunities for system design. Addressing these opportunities requires a rethinking of future server and data center design, with a data-centric focus across both hardware and software. Here, we’ve presented a brief introduction to some recent research activities in this exciting emerging area, with a specific focus on system architecture and systems software.
Over at the Energy Sciences Network, Jon Bashor writes that ESnet’s FasterData.es.net is a popular online repository of tips and tricks for improving network performance. Maintained by Brian Tierney for the last 15 years or so, the site comprises around 120 pages; about 50 percent of the page views are on the pages that focus on tuning Linux hosts for better network performance on network paths above 1 gigabit per second.
“There are a few settings that aren’t defaults and if you use them, they can gain you a lot in terms of performance,” said Tierney, cautioning that the same settings will actually downgrade the performance of slower networks or home routers. “The site contains a lot of arcane details that are hard to memorize, so users can go to a page, copy and paste the settings into their systems.”
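The Linux host tuning pages he refers to center on a handful of kernel settings. As an illustration only (the values below are representative of what such a page recommends, not a copy of the site's current guidance), a host on a path faster than 1 Gbps might carry an /etc/sysctl.conf fragment like:

```
# Allow larger socket buffers so TCP can fill long, fast paths
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
# min / default / max TCP receive and send buffer sizes (bytes)
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432
# Queue more packets at the interface before the kernel drops them
net.core.netdev_max_backlog = 250000
```

The fragment is applied with `sysctl -p` and should be re-tested afterward; as Tierney notes above, the same values can hurt performance on slower networks.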
The OFA User Workshop, April 18-19, provides opportunities to share experiences and learn from a community of OFS users.
The International Developer’s Workshop, April 21-24, will focus on the development and improvement of OFS, as well as major developments in RDMA and related areas. The agenda and more information are available on OpenFabrics.org.
Registration for the two events is now open. More details are available in this month’s OFA Newsletter, which features an interview with Susan Coulter, HPC Network Administrator at Los Alamos National Laboratory.
Over at the Emulex Blog, Sonny Singh writes that the increasing complexity of data center environments and the growth in storage have led to significant concerns about silent data corruption.
What it comes down to is this: without end-to-end protection technology, data corruption can go unnoticed until recovery is difficult and costly, or even impossible. Furthermore, without end-to-end integrity checking, these silent data corruptions can lead to unexpected and unexplained problems.
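The principle is easy to sketch. The toy script below is my illustration, not Emulex's mechanism (real end-to-end protection, such as T10 Protection Information, travels per-block inside the I/O path): it checksums the data when the application writes it and verifies the checksum after reading it back, so a corruption introduced anywhere in between is caught rather than silently returned.

```shell
# Toy end-to-end integrity check: checksum before handing data to
# storage, re-checksum after reading it back, and compare.
store=$(mktemp -d)                                       # stand-in for a storage target
printf 'payload' > data.bin
before=$(sha256sum < data.bin | cut -d' ' -f1)           # checksum at write time
cp data.bin "$store/data.bin"                            # "write" to storage
after=$(sha256sum < "$store/data.bin" | cut -d' ' -f1)   # checksum at read time
if [ "$before" = "$after" ]; then
    echo "integrity OK"
else
    echo "silent corruption detected"
fi
```

A single application-level checksum like this only detects corruption after the fact; block-level schemes verify on every hop so a bad write can be failed immediately instead of discovered at read time.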
This week Mellanox announced that the company now holds approximately 19 percent of the total Q4 2012 market share for 10 GigE products. According to the Crehan market study, Mellanox’s share is also the fastest growing of all vendors.
“The growing demand for Mellanox’s Ethernet solutions is due to the increase in the utilization and efficiency of compute and storage infrastructures that our products enable,” said Gilad Shainer, vice president of marketing development at Mellanox Technologies. “Utilizing Mellanox’s cost-effective, high density, low power Ethernet adapters, switches, cables and software, leading enterprises, cloud and Web 2.0 companies can reduce their IT budget and gain better return on investment on their systems.”
The conference will focus on the following topics: progress of exascale in the European Union, high-performance interconnects, accelerators and parallel I/O, communication libraries (MPI, SHMEM, PGAS), GPU computing (CUDA, OpenCL), Big Data, advanced topics/technologies/development including server and storage systems, and hands-on clustering, networking, troubleshooting, tuning, and optimization. The conference is open to the public and will bring together system managers, researchers, developers, computational scientists, and industry affiliates.
Having been to this event several times, I can tell you that Lugano is one of the most beautiful towns in the world. It’s a solid three-day workshop, and this year they’ll be treating attendees to a boat trip on Lake Lugano with an on-board apero and dinner.