Over at Forbes, Tom Coughlin writes that RDMA extends the capability of fast, direct memory access between computers in a cluster to greater distances, across a Metropolitan Area Network (MAN) or even a Wide Area Network (WAN) that can span continents.
RDMA over a WAN enables some very useful capabilities that can increase the overall power of a clustered computer system. It allows a remote file system to be accessed as though it were local, enabling apparently real-time collaboration. RDMA also makes file transfer over a WAN very efficient: direct data placement is accomplished with little impact on the processors at either end of the transfer. These features are especially valuable for working with the large data files common in many HPC applications. Storage at a Distance will not directly affect conventional client computing, since these devices typically don’t have access to dedicated high-speed Internet connections. However, with the growth of online (cloud) services, RDMA could accelerate many background processes within a given data center and between data centers. This could improve overall cloud performance and enable services such as fast backups and replication of data for data recovery. Thus Storage at a Distance could have a great impact on the overall performance and capabilities available over the Cloud.
Mellanox reveals a single switch that merges InfiniBand and Ethernet technologies for data center solutions.
“Mellanox’s new InfiniBand to Ethernet gateway functionality, built within Mellanox switches, provides the most cost-effective, high-performance solution for unified data center connectivity,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Mellanox’s systems enable data centers to operate at 56Gb/s network speeds while seamlessly connecting to 1, 10 and 40 Gigabit Ethernet networks. Existing LAN infrastructures and management practices can be preserved, easing deployment and providing significant return-on-investment.”
Mellanox introduces an open Ethernet switch initiative designed to give users custom designs and superb return on investment.
“The market’s move toward SDN and open source networking offers a variety of advantages that help drive data center productivity and are not currently available with traditional proprietary software,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Our demonstration with Quagga highlights the power of Open Ethernet to provide the capability to fully customize open source software packages on top of Mellanox 40 and 56GbE switches, enabling our customers to add differentiation and competitive advantages in their networking infrastructure while reducing cost.”
“Performance, availability and scalability requirements of large-scale cloud businesses cannot be met with traditional IT approaches to storage, which typically excel in one of these areas and fall short in another,” said Charles Wuischpard, CEO of Penguin Computing. “To meet the demands of our customers who require storage solutions at the petabyte scale, we based our large-scale storage appliance, Icebreaker CS, on software from Scality. With its distributed shared-nothing architecture and its sophisticated Advanced Resilience Configuration, Scality RING offers excellent storage scalability and great availability without compromising performance.”
Mellanox is speeding up VXLAN with an innovative hardware solution that enables large-scale cloud infrastructures.
“To meet the growing demand for cloud computing services, cloud providers must be able to take full advantage of new software techniques to scale up their cloud networks without reducing the performance or efficiency of the infrastructure,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “With ConnectX-3 Pro, cloud providers will be able to easily scale and grow their business and provide new value-add services while reducing the cost of their cloud infrastructure, ushering in the age of Cloud 2.0.”
Over at Nex7’s Blog, Andrew Galloway from Nexenta Systems writes that while ZFS is one of the most powerful, flexible, and robust filesystems, it has its own share of caveats, gotchas, and hidden “features.”
Deduplication Is Not Free. Another common misunderstanding is that ZFS deduplication, since its inclusion, is a nice, free feature you can enable to gain space savings on your ZFS filesystems/zvols/zpools. Nothing could be further from the truth. Unlike a number of other deduplication implementations, ZFS deduplication happens on the fly as data is read and written. This creates a number of architectural challenges that the ZFS team had to conquer, and the methods by which this was achieved lead to a significant, and sometimes unexpectedly high, RAM requirement. Every block of data in a dedup’ed filesystem can end up having an entry in a database known as the DDT (DeDupe Table). DDT entries need RAM. It is not uncommon for DDTs to grow larger than the available RAM on zpools that aren’t even that large (a couple of TBs). If lookups against the DDT aren’t serviced primarily from RAM or fast SSD, performance quickly drops to abysmal levels. And because enabling or disabling deduplication in ZFS doesn’t do anything to data already on disk, do not enable deduplication without first fully understanding its requirements and architecture. You will be hard-pressed to get rid of it later.
Our friends at Avere are offering a free copy of NAS Optimization for Dummies.
Big NAS performance comes from your ability to scale, eliminate sources of latency, and gain the advantages of the cloud. Get started with Avere Systems’ Special Edition of NAS Optimization for Dummies by Allen G. Taylor.
In this book, you’ll find:
How to configure NAS storage for optimal performance
Ways to reduce the cost of upgrades as your storage needs grow
How to minimize the impact of multiple users hitting the storage systems at the same time
In this video, Nebula CEO Chris Kemp discusses his new product called the Nebula One and the future of cloud computing with Cory Johnson on Bloomberg Television. Kemp was formerly the CTO of NASA IT.
Nebula One brings the cloud to you, under your control, behind your firewall. It is an integrated hardware and software appliance providing distributed compute, storage, and network services in a unified system.
The Nebula One has to be cool — they’ve got Patrick Stewart and Andy Bechtolsheim in their launch video!
Today Nimbus Data Systems announced HALO 2013, an enhanced version of the company’s award-winning storage operating system. HALO 2013 features improved analytics to gauge the performance and efficiency of Nimbus Data flash memory arrays.
With a new REST-based API, HALO 2013 gives administrators full access to all Nimbus features and statistics, facilitating storage management in large multi-vendor data centers. HALO Mobile brings these advanced monitoring features to the palm of your hand, streaming live statistics directly to iOS and Android-based smartphones and tablets.
“Nimbus Data is a pioneer in all-flash storage systems, and today’s announcement extends the first-mover advantage the company has established for itself,” says Benjamin Woo, managing director of Neuralytix, an industry analyst firm. “Nimbus Data recognizes the importance of instrumentation and integration, and providing an open API to the full features of its flash arrays will help drive down total cost of ownership.”