Category Archives: Storage

Why the Mainframe Will Probably Never Truly Be Replaced by Cloud Computing

Over at Wired, Tom Bice writes that while the cloud has certainly made its mark on the computing world, the traditional mainframe remains an excellent option for reliability, security, and even scalability.

“The mainframe is not nearly as trendy as today’s hot topics like Big Data or the cloud, but it continues to serve as the central nervous system of major industries like finance and healthcare, which is something the public cloud has yet to achieve. Over the years, the mainframe has adapted with each new wave of technology to maintain its place at the center of many computing environments. At the same time, today’s mainstream virtualization and security approaches have been part of the mainframe platform for decades.

Read the Full Story.

 

Happy Birthday, Amazon Web Services–The Service Turns 8 Years Old

Over at InfoWorld, David Linthicum writes that cloud computing giant Amazon Web Services (AWS) has now been offering storage as a service for eight years. Not only did AWS redefine how the cloud is used; it now dominates the arena.

“The unique aspect of AWS is that Amazon.com pushed ahead with its own way of doing cloud, rather than try to replicate the work of others. Storage as a service was around then, but AWS’s use of well-defined APIs made the difference. Moreover, AWS presented the value case to developers, helping embed the notion of storage services into actual software. Finally, the Amazon name mattered greatly, thanks to the company’s reputation for having excellent internal technology that supported an amazing scale of operations.
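
Those well-defined APIs remain the heart of the value case Linthicum describes: a developer stores and retrieves objects with a couple of calls. Below is a minimal sketch using boto3, AWS’s current Python SDK (which postdates the launch discussed here); the bucket and key names are hypothetical.

```python
# A minimal sketch of storage as a service via boto3 (AWS's current
# Python SDK). Bucket and key names are hypothetical; credentials are
# read from the environment or ~/.aws/credentials.
import boto3

s3 = boto3.client("s3")

# Write an object: no volumes, file systems, or capacity planning,
# just an API call.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/q1.csv",
    Body=b"region,revenue\nus-east,1200\n",
)

# Read it back.
obj = s3.get_object(Bucket="example-bucket", Key="reports/q1.csv")
print(obj["Body"].read().decode())
```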

Read the Full Story.

Cloud Computing in the Not-So-Distant Future–More, Bigger, Better

Over at InfoWorld, Paul Krill reports from the IDC Directions conference in Silicon Valley on the future of cloud computing. Much of the growth will come from the Big Data side of the cloud, including data center build-outs, Hadoop services, and in-memory databases.

“Speaking at this week’s IDC Directions conference in Silicon Valley, IDC analyst Frank Gens offered projections on a number of technology areas, including growth for cloud computing. “We know the last seven years, folks were building out global data centers. We haven’t seen anything yet. We’ll see a doubling of the footprint.”

Read the Full Story.

MPSTOR Announces New VMware Version of its Orkestra Open Source Software

MPSTOR has announced a new VMware version of its OpenStack-driven cloud software, Orkestra, which repurposes existing VMware infrastructure. Orkestra and the VMware data center work well in concert with OpenStack technologies, providing superb flexibility.

“We were able to save significant capital and operating expenses and avoid vendor lock-in while simply migrating our existing infrastructure onto the open platform using MPSTOR’s VMDK version of Orkestra IaaS,” said Christian Sestu, CTO of cloud resources at Filippetti, a leader in Cloud Computing and IT solutions in Italy.  “It was simple to install and runs seamlessly on top of our existing VMware.”

Read the Press Release.

Silicon Valley Bank and Farnam Street Financial Give Codero $8 Million in Funding

Codero has announced an $8 million round of funding from Silicon Valley Bank and Farnam Street Financial to expand its worldwide data center footprint.

“We have outpaced our industry’s growth, expanding faster than other hosting and cloud providers due to our commitment to providing customers with unparalleled performance, expertise, support and value,” said Emil Sayegh, president and CEO of Codero Hosting. “The support of SVB and Farnam Street Financial helps us accelerate our growth and capitalize on our market success.”

Read the Press Release.

Dropbox Experiences Downtime but Quickly Gets to Work Restoring Services

Over at ITworld, Jeremy Kirk reports that Dropbox went down for a while on Friday. The company immediately set to work fixing the outage, and as of Sunday 99% of users could access their files.

“One of the issues revolved around photos. It disabled photo sharing and turned off a “Photos” tab on dropbox.com. Photos were still available through the desktop client and the “Files” tab on dropbox.com, it (blog post) said. The Photos tab remained disabled on Sunday. “We’re continuing to make a lot of progress restoring full service to all users, and are doing so in careful steps,” it said. Service outages and probes by cyberattackers are some of the biggest concerns for users of cloud-based services.

Read the Full Story.

High Performance Computing in the Cloud Doubles over the Last Two Years

Over at ihotdesk, Paul Sells reports that the High Performance Computing (HPC) world has increasingly turned to the cloud for data storage and compute power.

“Earl Joseph, program vice president for technical computing at IDC, said: “The most surprising findings of the 2013 study are the substantially increased penetration of co-processors and accelerators at HPC sites around the world. [Also of note was] the large proportion of sites that are applying Big Data technologies and methods to their problems and the steady growth in cloud computing for HPC.”

Read the Full Story.

Talkin’ Cloud 100 Report Names Salesforce Top of Its Annual List

Nine Lives Media, a division of Penton, has named Salesforce number one on the list, followed by Amazon Web Services, Microsoft Office 365 and Windows Azure, Oracle Cloud, and Google Apps rounding out the top five.

“The Talkin’ Cloud 100 is the only report and list that offers a full 360-degree view of cloud computing in the IT channel,” said Amy Katz, president, Nine Lives Media. “Our research shows that the cloud ecosystem is thriving, growing and rewarding a range of companies from various channel backgrounds — including pure CSPs, brokers, aggregators, VARs and MSPs.”

Read the Full Story.

UOL Selects Virtustream Software to Deliver Enterprise-Class Cloud Solutions in Brazil

Virtustream today announced that it has partnered with Brazilian internet services giant Universo Online (UOL). The move comes as demand in Brazil for cloud infrastructure services and data management software has risen sharply.

“We are very pleased to support UOL’s entry into the enterprise class cloud market,” said Rodney Rogers, chairman and chief executive officer at Virtustream. “As Brazilian enterprises increasingly focus on gaining efficiencies and reducing costs through cloud solutions, this partnership will provide them with the efficiencies they seek and the enterprise grade security and performance they need, all delivered by a provider they already know and trust.”

Read the Press Release.

Cloud Brings Big Data to All Levels of the Enterprise

At TechRepublic, Nick Hardiman writes that the cloud will play a big role in Big Data. As massive data sets move to the cloud, enterprises of all sizes can take advantage of cost-effective and scalable data analysis.

“For the growing amount of unstructured data produced by social media, sensor networks, and federated analytics data, and for constantly changing data that needs to be replicated to other operating sites or mobile workers, NoSQL technologies better fit those use cases. Unstructured data can be terabytes or even petabytes in size.
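
To make the fit concrete, the sketch below stores differently shaped documents side by side using pymongo, the driver for MongoDB (one NoSQL option among many); the host, database, and field names are hypothetical.

```python
# Minimal sketch: storing schema-free social-media and sensor events
# in MongoDB via pymongo. MongoDB is just one NoSQL option; the host
# and field names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["events"]

# Two documents with different shapes coexist in one collection;
# no schema migration is needed when a new field appears.
events.insert_one({"source": "twitter", "user": "a1", "text": "hello"})
events.insert_one({"source": "sensor", "device": 42, "temp_c": 21.5})

# Query across the heterogeneous documents.
for doc in events.find({"source": "sensor"}):
    print(doc["device"], doc.get("temp_c"))
```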

Read the Full Story.

Real-Time Data Streaming from Google is Here

Over at InformationWeek, Thomas Claburn reports that Google has improved BigQuery, its Web service for intensive data analysis. The new features include real-time data streaming, the ability to query portions of a table, and interface improvements.

“Raj Pai, CEO of social analytics company Claritics, said in a Google case study that time-consuming complex queries of large data sets on Hadoop clusters can be processed by BigQuery in as little as 20 seconds. As a consequence, his company has been able to develop apps four times faster and to spend about 40% less time focused on IT infrastructure.
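
For a feel of the streaming feature, here is a minimal sketch using the modern google-cloud-bigquery Python client (which postdates the interface Claburn covers); the project, dataset, table, and field names are hypothetical.

```python
# Sketch of BigQuery streaming ingest with the google-cloud-bigquery
# client. Project, dataset, table, and field names are hypothetical;
# credentials come from the environment.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")
table_id = "example-project.analytics.page_views"

# Streamed rows become queryable within seconds of insertion, rather
# than waiting for a batch load job to finish.
rows = [
    {"user_id": "a1", "url": "/home", "ts": "2014-03-20T12:00:00Z"},
    {"user_id": "b2", "url": "/docs", "ts": "2014-03-20T12:00:01Z"},
]
errors = client.insert_rows_json(table_id, rows)
if errors:
    print("Insert errors:", errors)
```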

Read the Full Story.

Slidecast: TwinStrata CloudArray – Disaster Recovery as a Service

[youtube http://www.youtube.com/watch?v=F65pkFUCdRg?rel=0&w=511&h=383]

In this slidecast, Nicos Vekiarides from TwinStrata presents: TwinStrata CloudArray 4.5 with DRaaS. The new offering provides on-demand disaster recovery as a service (DRaaS) for VMware users.

“Whether your goals are to increase storage capacity, improve off-site data protection, implement disaster recovery or all three of the above, TwinStrata CloudArray is the most comprehensive storage solution available today,” said Nicos Vekiarides, CEO of TwinStrata. “TwinStrata has made great strides in delivering enterprise-class functionality at a fraction of the cost typically required of storage solutions. What’s exciting is CloudArray 4.5 enables organizations to enjoy a full business continuity plan without the need for backup software or a dedicated disaster site, a once unthinkable proposition.”

Read the Full Story * View the slides * Download the MP3 * Subscribe on iTunes * Subscribe to RSS

RDMA and Storage at a Distance

Over at Forbes, Tom Coughlin writes that RDMA extends fast direct memory access between computers in a cluster to greater distances, within a Metropolitan Area Network (MAN) or even a Wide Area Network (WAN) that can span continents.

“RDMA over a WAN allows some very useful capabilities that can increase the overall power of a clustered computer system. It can provide remote collaboration with a remote file system allowing access as though it were local, enabling apparent real-time collaboration. RDMA also allows very efficient file transfer over a WAN. This direct data placement is accomplished with little impact on the processors on either end of the file transport. These features are very useful for working with large data files such as those common in many HPC applications.

Storage at a Distance will not directly impact conventional client computing since these devices typically don’t have access to dedicated high-speed Internet connections. However, with the growth of online (cloud) services, the use of RDMA could accelerate many background processes within a given data center and between data centers. This could improve overall cloud performance and provide services such as fast backups and replications of data to provide data recovery. Thus Storage at a Distance could have a great impact on the overall performance and capabilities available over the Cloud.
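
True RDMA requires verbs-capable NICs and a library such as libibverbs, so there is no faithful standard-library example in Python. As a loose analogy to the direct data placement described above, the sketch below uses socket.sendfile(), which keeps file bytes out of user space on the sending side; the host, port, and file path are hypothetical.

```python
# Loose analogy only: socket.sendfile() hands the copy to the kernel,
# so file bytes never pass through user space on the sender. Real RDMA
# goes further, bypassing the remote CPU entirely via verbs hardware.
import socket

def send_file(path: str, host: str, port: int) -> int:
    """Send a file with zero user-space copies on the sender."""
    with socket.create_connection((host, port)) as sock, \
            open(path, "rb") as f:
        return sock.sendfile(f)  # kernel-managed transfer

# Hypothetical usage against a peer listening on port 9000:
# sent = send_file("/data/large_dataset.bin", "remote-host", 9000)
```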

Read the Full Story or see Coughlin’s recent OpenFabrics presentation over at inside-Cloud.

Mellanox Announces Integrated InfiniBand to Ethernet Switch System

Mellanox has revealed a single switch that merges InfiniBand and Ethernet technologies for unified data center connectivity.

“Mellanox’s new InfiniBand to Ethernet gateway functionality built within Mellanox switches provides the most cost-effective, high-performance solution for data center unified connectivity solutions,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Mellanox’s systems enable data centers to operate at 56Gb/s network speeds while seamlessly connecting to 1, 10 and 40 Gigabit Ethernet networks. Existing LAN infrastructures and management practices can be preserved, easing deployment and providing significant return-on-investment.”

Read the Full Story.

Mellanox Reveals a Flexible Alternative to Closed-Code Ethernet Switches

Mellanox has introduced an Open Ethernet switch initiative designed to give users customizable designs and a superb return on investment.

“The market’s move toward SDN and open source networking offers a variety of advantages that help drive data center productivity and currently is not available with traditional proprietary software,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Our demonstration with Quagga highlights the power of Open Ethernet to provide the capability to fully customize open source software packages on top of Mellanox 40 and 56GbE switches, enabling our customers to add differentiation and competitive advantages in their networking infrastructure while reducing cost.”

Read the Full Story.

Penguin Computing Unveils Large-Scale Storage Platform

Penguin Computing has revealed its new Icebreaker CS storage platform, which utilizes Scality’s RING Organic Storage software.

“Performance, availability and scalability requirements of large scale cloud businesses cannot be met with traditional IT approaches to storage, which typically excel in one of these areas and fall short in another,” said Charles Wuischpard, CEO of Penguin Computing. “To meet the demands of our customers that require storage solutions at the petabyte scale, we based our large scale storage appliance Icebreaker CS on software from Scality. With its distributed shared-nothing architecture and its sophisticated Advanced Resilience Configuration, Scality RING offers excellent storage scalability and great availability without compromising performance.”

Read the Full Story.

Mellanox Introduces ConnectX-3 Pro Adapter

Mellanox is speeding up VXLAN with an innovative hardware solution that enables large-scale cloud infrastructures.

“To meet the growing demand of cloud computing services, cloud providers must be able to take full advantage of new software techniques to scale-up their cloud networks without reducing performance or efficiency of the infrastructure,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “With ConnectX-3 Pro, cloud providers will be able to easily scale and grow their business and provide new value-add services while reducing the cost of their cloud infrastructure; ushering in the age of Cloud 2.0.”

Read the Full Story.

The Things Nobody Told You About ZFS

Over at Nex7’s Blog, Andrew Galloway from Nexenta Systems writes that while ZFS is one of the most powerful, flexible, and robust filesystems, it does have its own share of caveats, gotchas, and hidden “features.”

“Deduplication Is Not Free. Another common misunderstanding is that ZFS deduplication, since its inclusion, is a nice, free feature you can enable to hopefully gain space savings on your ZFS filesystems/zvols/zpools. Nothing could be farther from the truth. Unlike a number of other deduplication implementations, ZFS deduplication is on-the-fly as data is read and written. This creates a number of architectural challenges that the ZFS team had to conquer, and the methods by which this was achieved led to a significant and sometimes unexpectedly high RAM requirement.

Every block of data in a dedup’ed filesystem can end up having an entry in a database known as the DDT (DeDupe Table). DDT entries need RAM. It is not uncommon for DDTs to grow to sizes larger than available RAM on zpools that aren’t even that large (a couple of TBs). If the hits against the DDT aren’t being serviced primarily from RAM or fast SSD, performance quickly drops to abysmal levels. Because enabling/disabling deduplication within ZFS doesn’t actually do anything to data already on disk, do not enable deduplication without a full understanding of its requirements and architecture first. You will be hard-pressed to get rid of it later.
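
To see why “a couple of TBs” can already be trouble, here is a rough back-of-envelope estimate in Python, assuming on the order of 320 bytes of RAM per DDT entry (a commonly cited ballpark; actual entry sizes vary by platform and pool layout):

```python
# Back-of-envelope DDT memory estimate, assuming roughly 320 bytes of
# RAM per DDT entry (a commonly cited ballpark; the real figure varies
# by platform and pool layout).
DDT_BYTES_PER_ENTRY = 320

def ddt_ram_gib(pool_bytes: float, avg_block_bytes: int) -> float:
    """Estimate RAM needed to keep the whole dedup table in memory."""
    n_blocks = pool_bytes / avg_block_bytes
    return n_blocks * DDT_BYTES_PER_ENTRY / 2**30

two_tib = 2 * 2**40  # a modest 2 TiB pool

# Small 8 KiB blocks (common for zvols) vs. the 128 KiB default:
print(f"8 KiB blocks:   {ddt_ram_gib(two_tib, 8 * 2**10):.0f} GiB")   # ~80 GiB
print(f"128 KiB blocks: {ddt_ram_gib(two_tib, 128 * 2**10):.1f} GiB") # ~5 GiB
```

With small block sizes, the table alone can exceed the RAM of a typical storage server, which is exactly the failure mode Galloway warns about.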

Read the Full Story.