Eucalyptus Systems has announced that the company will be discussing its open source, AWS-compatible private and hybrid cloud solutions at several upcoming shows.
“Eucalyptus Systems provides progressive IT organizations in enterprises and technology businesses with the leading open source software for building AWS-compatible private clouds. Eucalyptus supports industry-standard AWS APIs, including EC2, S3, EBS, ELB, Auto Scaling, CloudWatch, and IAM. By providing an open source platform for cloud computing, Eucalyptus is dedicated to the success of its active and rapidly growing ecosystem of customers, partners, developers and researchers.”
Over at The Nation, Asina Pornwasin reports that Dell made several announcements last week at the annual Dell World conference about the company's direction in emerging technology areas. Chief among these are big data, cloud computing, social media and mobility.
“Our vision, with a consistent strategy through the last five years, has been to become the leading provider of end-to-end scaled solutions. We invested US$13 billion, doubling the enterprise services solution business from about $10 billion to more than $20 billion. And we built across the portfolio. Now, as a private company, we can accelerate our strategy and take a longer-term view of innovation,” Michael Dell said.
It was announced today that Codero has been named to the UP-START Cloud Awards list not once but twice as a leader in cutting-edge cloud hosting solutions. The company was tapped as best in show in the “Best Cloud Hosting Solution” category as well as “Best Hybrid Cloud Solution”.
“It is an honor to be finalists in the 2013 UP-START Cloud awards in two categories. There is a lot of noise and confusion in the cloud computing sector right now and acknowledgement such as this helps set Codero’s best-of-breed technologies apart,” said Emil Sayegh, CEO and president of Codero Hosting. “We’re thrilled that, like our many customers that have adopted our cloud and hybrid solutions, the UP community recognizes the value of our unique and differentiated approach on both our Cloud 2.0 and On-Demand Hybrid hosting solutions.”
Over at TechTarget, George Lawton and Jan Stafford report that Boeing recently revealed, at the WSO2 conference in San Francisco, a new platform-as-a-service (PaaS) line called The Boeing Edge. Boeing says that this platform will expand its core strategies, improve customer service and provide added means for revenue.
“The PaaS platform needed an SOA infrastructure to support the wide disparities between the IT systems of different airlines. Crabbe said some airlines have older systems that work well, and the cost of moving to new platforms could be disruptive. At the same time, airlines want to leverage new technologies, but are capital constrained. Boeing is positioning the PaaS to move data and assemble applications to create better processes and workflows that can cut costs.”
In this video from PuppetConf 2013, Luke Kanies from Puppet Labs discusses how this annual user conference is sparking innovation in the datacenter.
“We’re proud to offer a one-of-a-kind event that brings together a rich community of sysadmins, open source enthusiasts and Puppet Labs partners,” said Luke Kanies, CEO and founder of Puppet Labs. “What binds this gathering together is the desire to learn more from each other about the technologies and best practices that are enabling DevOps, cloud automation and continuous delivery.”
Today at VMworld Booth #302 in San Francisco, CA, TwinStrata announced the release of the latest version of CloudArray, which allows users to remotely access critical data. Not only does this version dramatically simplify disaster recovery, it also integrates with OpenStack platforms.
In May, TwinStrata introduced CloudArray Disaster Recovery as a Service (DRaaS), which provides on-demand disaster recovery for VMware environments. CloudArray 4.7 further extends that capability, enabling organizations to more easily test their disaster recovery plans by providing access to in-cloud, production snapshots of their data from a secondary CloudArray. As a result, customers can conduct fire drills without shutting down their primary site, streamlining the process and reducing the impact of such tests on day-to-day operations.
Today Intel revealed details about its next-generation Intel Atom processor C2000 product family (codenamed “Avoton” and “Rangeley”), as well as outlined its roadmap of next-generation 14nm products for 2014 and beyond. This second generation of Intel’s 64-bit SoCs is expected to become available later this year and will be based on the company’s 22nm process technology and the innovative Silvermont microarchitecture. It will feature up to eight cores with integrated Ethernet and support for up to 64GB of memory.
“Datacenters are entering a new era of rapid service delivery,” said Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel. “Across network, storage and servers we continue to see significant opportunities for growth. In many cases, it requires a new approach to deliver the scale and efficiency required, and today we are unveiling the near and long-term actions to enable this transformation.”
Bryant highlighted Intel’s Rack Scale Architecture (RSA), an advanced design that promises to dramatically increase the utilization and flexibility of the datacenter to deliver new services. As an early adopter, Rackspace Hosting today announced the deployment of new server racks powered by Intel® Xeon processors, Intel Ethernet controllers, and storage accelerated by Intel Solid State Drives.
The new products are expected to deliver up to four times the energy efficiency and up to seven times more performance than the first generation Intel Atom processor-based server SoCs introduced in December last year. Intel has been sampling the new Intel Atom processor server product family to customers since April and has already more than doubled the number of system designs compared to the previous generation.
In this video from ISC’13, Dr. Oliver Tennert from Transtec describes how the company’s HPC Cloud Services and Remote Visualization solutions are empowering customers with flexibility for their computing workloads.
“We have a lot of customers from the scientific as well as financial or telecommunications areas, where analyzing huge amounts of data in the shortest period of time possible is a critical factor of success,” said Dr. Oliver Tennert, Director Technology Management and HPC Solutions at Transtec.
In this video from the AWS Summit 2013 in New York, Jafar Shameem and David Pellerin from Amazon present: Best Practices for HPC in the Cloud.
More and more, the scalable, on-demand infrastructure provided by AWS is being used by researchers, scientists and engineers in Life Sciences, Finance and Engineering to solve bigger problems, answer complex questions and run larger simulations. In this session, we start by talking about the supercomputing-class performance and high-performance storage available to scientists and engineers at their fingertips. We will go over examples of how startups are innovating and how large enterprises are extending their HPC environments. Finally, we walk through some of the common questions that come up as organizations start leveraging AWS for their high performance computing needs.
In this video from the 2013 HPC User Forum, Burak Yenier presents: The HPC Experiment – Paving the Way to HPC as a Service.
For the 2nd Round of the HPC experiment, we will apply the cloud computing service model to workloads on remote Cluster Computing resources in the areas of HPC, Computer Aided Engineering, and the Life Sciences.
“The need for business application-layer security remains universal, and largely unanswered by IaaS and Cloud vendors,” said CohesiveFT CEO Patrick Kerpan. “By combining VNS3 Overlay SDN with IaaS provider infrastructure security features, our customers are able to create and control a multidimensional security solution.”
In this video from Moabcon 2013, Robert Clyde and Chad Harrington from Adaptive Computing discuss the company’s recent announcement that Adaptive has been named as a “Cool Vendor” in Cloud Management by Gartner.
“We believe to be recognized as a ‘Cool Vendor’ by Gartner for our cloud management technology is confirmation of our pioneering work in policy-based optimization for this space,” said Robert Clyde, CEO of Adaptive Computing. “Our Moab Cloud Suite allows enterprise IT leaders and cloud architects to maximize cloud return on investment through cost savings in capacity and management complexity. Moab’s ability to perform ongoing service optimization ensures organizations achieve both agility and service performance with their private cloud.”
The software-defined datacenter vision took the industry by storm in 2012. It represents a prescriptive model that brings the benefits of virtualization to the rest of the datacenter. Expect to see the move towards a software-defined datacenter accelerate in 2013. Networking and infrastructure security represent some of the stickiest issues when it comes to the drive to a more agile datacenter. And because of this strong customer interest in SDDCs, you’ll also see more networking vendors and startups modify their roadmaps to steer towards a software-defined networking strategy.