Category Archives: Hardware

Network Design for the Modern Data Center

As new technologies evolve, data center infrastructure is becoming more complex. That complexity has led to incompatible frameworks and management consoles across the network, storage and server silos. If you are looking for more simplicity and flexibility, consider a modular design, which also lets IT architects swap out individual building blocks when required.

New traffic patterns call for new designs

It’s time to let go of the conventional tree structure if you want to keep up with data center traffic. An any-to-any server/storage mesh means traffic no longer has to travel north-south before it can move east-west. Companies can still use specialized storage switches such as Fibre Channel to link storage devices and servers, but to capture economies of scale, consider consolidating your storage networks with the rest of the data center network. Doing so also reduces the number of siloed, maintenance-heavy networks you have to run.
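To make the sizing trade-off concrete, here is a minimal sketch (not from the article; the port counts and speeds are assumptions) of the oversubscription calculation architects use when deciding how much east-west capacity a leaf switch in such a fabric really has:

```python
# Illustrative only: oversubscription of one leaf switch in a leaf-spine fabric.
def oversubscription_ratio(server_ports, server_speed_gbps,
                           uplink_ports, uplink_speed_gbps):
    """Downlink capacity divided by uplink capacity.
    1.0 is non-blocking; 3.0 means 3:1 oversubscribed east-west."""
    downlink = server_ports * server_speed_gbps
    uplink = uplink_ports * uplink_speed_gbps
    return downlink / uplink

# Hypothetical leaf: 48 x 10 GbE server-facing ports, 4 x 40 GbE spine uplinks.
print(f"{oversubscription_ratio(48, 10, 4, 40):.1f}:1")  # 3.0:1
```

The lower the ratio, the better the fabric carries the any-to-any traffic described above.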

Simplification

Manual provisioning of the data center is becoming impractical, but automating it with tools can be difficult as well, because network complexity lets errors creep in. A three- or four-tier network has far too many failure modes to account for by hand, and assessing how traffic flows through each switch, and how that affects packet delay and loss, is a tough ask too. Cloud computing and virtualization add new challenges of their own, so the network has to evolve to meet them; otherwise you end up with more problems than solutions.
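As a hedged illustration of the kind of pre-flight check an automation tool might run (the switch names, VLANs and subnets below are hypothetical, not from the article), consider a script that refuses to provision when the same VLAN is mapped to different subnets on different switches:

```python
# Hypothetical pre-flight check before pushing switch configuration:
# flag any VLAN that is mapped to more than one subnet across the fabric.
from collections import defaultdict

planned = [
    {"switch": "leaf-01", "vlan": 110, "subnet": "10.1.10.0/24"},
    {"switch": "leaf-02", "vlan": 110, "subnet": "10.1.10.0/24"},
    {"switch": "leaf-03", "vlan": 110, "subnet": "10.1.99.0/24"},  # drift
]

def find_conflicts(entries):
    subnets_by_vlan = defaultdict(set)
    for entry in entries:
        subnets_by_vlan[entry["vlan"]].add(entry["subnet"])
    return {vlan: nets for vlan, nets in subnets_by_vlan.items() if len(nets) > 1}

conflicts = find_conflicts(planned)
if conflicts:
    print("Refusing to provision; conflicting VLAN/subnet mappings:", conflicts)
```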

Commodity Hardware has its benefits

Low-cost commodity hardware running distributed software came into the picture as Google built out its web search and cloud services. The strategy lets you scale fast without huge up-front investments. Traditional data centers have had to spend heavily on upgrades every few years, but commodity hardware gives them the same advantages the big cloud providers enjoy: a distributed software layer abstracts the resources of every cluster of commodity nodes, yielding an aggregate capacity that outstrips even the most powerful monolithic systems.
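A toy sketch of that abstraction, with invented node sizes, shows how a pool of small commodity servers presents as one large aggregate to the software layer above it:

```python
# Toy view of a distributed software layer aggregating commodity nodes.
nodes = [
    {"name": f"node-{i:02d}", "cores": 16, "ram_gb": 64, "disk_tb": 4}
    for i in range(1, 41)  # 40 small, cheap servers (invented sizes)
]

pool = {
    "cores":   sum(n["cores"] for n in nodes),
    "ram_gb":  sum(n["ram_gb"] for n in nodes),
    "disk_tb": sum(n["disk_tb"] for n in nodes),
}

# The scheduler above sees one big machine, not 40 small ones.
print(pool)  # {'cores': 640, 'ram_gb': 2560, 'disk_tb': 160}
```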

Enhanced flexibility

Compartmentalizing technology capabilities into separate silos only makes management harder, and each silo has to handle its scale-out operations on its own. These are just some of the problems with traditional data centers, which are rigid and hard to scale. Rapid advances in technology also force siloed teams to refresh their skills constantly just to keep up with their responsibilities, which is why silo-based infrastructure is becoming increasingly difficult to manage.

Hybrid Clouds

Public clouds have their merits: they offer Internet-accessible compute, storage and other resources to a wide range of users, which is why they have become integral to business IT strategy. Pick the applications that genuinely work well in public clouds, such as those suited to infrastructure as a service; workloads with unpredictable demand in particular benefit from the global elasticity on offer. Self-service resources also make public clouds a good fit for application developers who need quick access to compute and storage.

Focus on service continuity

User expectations have changed with consumerization, and disaster strategies can no longer be purely reactive: whenever there is an interruption, unauthorized cloud-based services start to look tempting. Administrators therefore have to aim for continuous availability rather than merely focusing on recovery after problems arise, and that often means re-architecting the data center, keeping round-trip times low and providing plenty of bandwidth. Application architectures can be distributed across multiple sites and data centers, which lets them scale globally, improves uptime and makes them run more efficiently.
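A quick back-of-the-envelope calculation (the availability figures are assumptions, not from the article) shows why distributing an application across independent sites improves uptime so dramatically:

```python
# Assumed availability figures, purely for illustration.
def combined_availability(sites):
    """Probability at least one site is up, assuming independent failures."""
    p_all_down = 1.0
    for a in sites:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

one_site = 0.995                                  # roughly 1.8 days down per year
two_sites = combined_availability([0.995, 0.995])
print(f"{one_site:.3%} -> {two_sites:.4%}")       # 99.500% -> 99.9975%
```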

End users can be empowered

Data centers have to be more reliable today, and modernizing them ensures you can keep up with the demands of consumerization. It also lets you handle compute-intensive VDI systems and existing virtualized enterprise applications far more effectively.

Let Software Drive

Data centers today need the latest software capabilities, but much of the infrastructure is rigid, with functions baked into field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs). Moving that intelligence into a software-defined layer means administrators can roll out new services without adding hardware, which brings flexibility and cost savings, and improves scalability and uptime as well.
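As a rough, illustrative sketch of what "letting software drive" can mean in practice (the rule format and field names are invented, not any specific vendor's API), a software-defined control plane pushes entries into a match-action table instead of waiting for new hardware:

```python
# Illustrative match-action table; not a real vendor API.
flow_table = []  # evaluated top-down; first match wins

def add_rule(match, action):
    """Push a new forwarding rule from the software control plane."""
    flow_table.append({"match": match, "action": action})

def handle(packet):
    for rule in flow_table:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "drop"

# Rolling out a hypothetical new service is just a software update:
add_rule({"dst_port": 443, "tenant": "web"}, "send_to_load_balancer")
print(handle({"dst_port": 443, "tenant": "web", "src": "10.1.10.5"}))
# -> send_to_load_balancer
```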

Conclusion

Enterprises have to adapt to the latest changes in the business environment to stay competitive. That means growing compute and storage capacity while retaining the ability to add new capabilities easily.

About the author

Ramya Raju is a freelance writer/web designer from India. His web site is Datacenters.pro.

A3Cube Develops Military-Grade Network Interface Controller

This week A3CUBE announced that it has teamed with electronic manufacturing firm AirBorn Inc. to develop an “unbreakable” network interface controller called the RONNIEE RIO.

“We leveraged our experience in the defense and aerospace industries to develop a connector where extreme speed does not come at the expense of availability and robustness,” said Emad Soubh, Director of New Product Development of AirBorn. “Working with A3CUBE, we were able to produce a high-density, high-bandwidth, high-reliability interconnect adapter card ready for the challenges of the next-generation data center, cloud computing platforms, analytics, storage and converged architectures.”

RONNIEE RIO is available in a low-profile PCIe card form factor designed to guarantee maximum availability, military-grade reliability and low-latency capabilities of peer-to-peer operations, RDMA and remote CPUs load and store operations.

Virtustream Takes a New Approach to Take on the Giants of Cloud Computing

Over at Network World, Brandon Butler reports that Virtustream, while not as big as Microsoft or AWS, has carved out a nice niche in the Infrastructure-as-a-Service (IaaS) world by taking a unique consultancy approach.

“Co-CEO and CTO Kevin Reid describes it like this: You can’t just walk into a bank and deposit $100,000; the financial institution would ask questions, making sure the money is not laundered or gained from some illicit activity. Similarly, Virtustream doesn’t just allow customers to swipe a credit card and get access to hundreds of thousands of virtual machines holding sensitive data of its large enterprise customers. “We want to know our customers,” says Reid, who used to manage a consulting firm that was bought by Capgemini before working at Virtustream. “We run more of what could be considered a community cloud, or a country club cloud. None of the workloads in our cloud are unknown to us – we know where they came from.”

Read the Full Story.

Atos Acquires Bull in Bold Move to Bolster Cloud Computing Capabilities

Over at Binary Tribune, the staff reports that IT services provider Atos has purchased Bull for $844 million, a deal that brings it into the HPC market and shores up its cloud and Big Data offerings.

“The Chief Executive Officer of Atos SE – Mr. Thierry Breton, who was also part of the Bull’s team in the period from 1993 to 1997, said in the statement, which was cited by Bloomberg: “Bull’s highly recognized teams in advanced technologies such as high computing power, data analytics management, and cybersecurity ideally complement Atos’ large scale operations.”

Read the Full Story.

Softchoice First in North America to Launch End-to-End FlexPod Solution

Softchoice, based in Toronto, Canada, has announced a top-to-bottom approach to FlexPod services via what it calls its FlexPod Accelerator+ offering.

“Given the widespread popularity and success of FlexPod, we saw a need for companies to fast-track their adoption of the solution,” says Aaron Brooks, Director of Innovation at Softchoice. “FlexPod Accelerator+ helps clients build a converged infrastructure that increases IT’s ability to quickly scale and support the deployment of applications and data, and prepares them for a hybrid infrastructure.”

Read the Press Release.

Codero Announces Fourth Data Center Now Open in Dallas-Fort Worth

Codero Hosting has announced the opening of its flagship data center in Dallas-Fort Worth, TX. The company says its hybrid hosting services will give customers unprecedented performance and provisioning speed, along with full redundancy and enormous scalability.

“The new data center is all about serving customers requiring world-class performance, reliability and scalability running on the most advanced networking technology infrastructure. Whether it’s bare metal dedicated servers, public cloud, private cloud, or our patented On-Demand Hybrid Cloud technology, our expanded data center footprint supports it all,” said Robert Autenrieth, COO of Codero Hosting. “DFW is the hub of connectivity for U.S. bandwidth and offers the industry’s latest technology – everything from power and cooling to density, as well as a variety of choices in bandwidth providers, flexibility in labor pool and unbeatable power costs. All these factors combined to make it a natural choice for our fourth data center in the U.S.”

Read the Press Release.

ShopKeep Garners $25 Million in Venture Funding to Help Small Business POS Needs

ShopKeep, the small business Software-as-a-Service (SaaS) POS specialist, has raised $25 million in its third round of funding. As the competition heats up in the space, the company looks to use the largess to improve its software and go after new markets.

“With the rapid growth of Shopkeep and competitors like Boston-based Leaf, which raised $20 million earlier this year, it’s clear that cloud computing is really making an impact on the business technology industry. By building software which runs remotely, rather than on servers located in a store, these startups can dramatically reduce costs, opening up new markets and erasing much of the maintenance and service fees that drive a large portion of legacy firms’ revenue.

Read the Full Story.

Why the Mainframe Will Probably Never Truly Be Replaced by Cloud Computing

Over at Wired, Tom Bice writes that while the cloud has certainly made its impact on the computing world, the traditional mainframe is still an excellent option for reliability, security and even scalability.

“The mainframe is not nearly as trendy as today’s hot topics like Big Data or the cloud, but it continues to serve as the central nervous system of major industries like finance and healthcare, which is something the public cloud has yet to achieve. Over the years, the mainframe has adapted with each new wave of technology to maintain its place at the center of many computing environments. At the same time today’s mainstream virtualization and security approaches have been part of the mainframe platform for decades.

Read the Full Story.


Seagate Announces 6TB Hard Disk Drive Targeted at the Cloud

Over at GreatResponder, Maria Dehn writes that hard disk giant Seagate has launched a 6TB disk aimed at reducing the bottlenecks that often occur in cloud computing.

“The announcement was made in the wake of the exponentially growing demand of the hard disk drive space and performance in the cloud computing services both private and public clouds. Seagate has designed and developed the most efficient disk drives whose performance is about 25% higher than the highest performing disk drives in the marketplace. It was further explained about the importance of this disk in the domain of cloud computing services that the company has developed this disk that offers industry grade security, self encrypting drive or SED feature, and instant secure erase ISE features.

Read the Full Story.


HP is Looking for a Cloud Application Engineer in our Job of the Week II

Hewlett-Packard in Palo Alto, CA is looking for a Cloud Application Engineer in our Job of the Week II.

“HP Networking Software Engineers play lead roles in multi-discipline teams working on new networking products and solutions. This includes active involvement in product feature definition, hardware feature requirements, SW development and test, customer documentation, and on-going product support. Projects typically involve coordination with internal and external development teams, often in other geographies. Enabling others is as important as personal contribution.

Are you paying too much for your job ads? Not only do we offer ads for a fraction of what the other guys charge, but our inside-Cloud Job Board is powered by SimplyHired, the world’s largest job search engine.

IBM CEO Rometty on What’s in Store for the Future

Over at ZDNet, Larry Dignan reports on IBM CEO Ginni Rometty’s annual letter to shareholders. Rometty’s words point clearly toward less hardware and more software and cloud services as the way to bring Big Blue to where shareholders expect it to be.

“Rometty’s comments won’t be surprising to people familiar with IBM, but the subtext to shareholders revolved around the company’s transition and how it’ll take some time for businesses like cognitive computing to outpace slowing growth in hardware. For shareholders, IBM is paying them to have some patience via dividends, but the company’s last earnings conference call surfaced some analyst angst over the lack of growth even as Big Blue hits earnings projections.

Read the Full Story.

HP is Looking for a Sr. Cloud Application Engineer in our Job of the Week

Hewlett-Packard in Palo Alto, CA is looking for a Sr. Cloud Application Engineer in our Job of the Week.

“HP Networking Software Engineers play lead roles in multi-discipline teams working on new networking products and solutions. This includes active involvement in product feature definition, hardware feature requirements, SW development and test, customer documentation, and on-going product support. Projects typically involve coordination with internal and external development teams, often in other geographies. Enabling others is as important as personal contribution.
Are you paying too much for your job ads? Not only do we offer ads for a fraction of what the other guys charge, but our inside-Cloud Job Board is powered by SimplyHired, the world’s largest job search engine.

Carpathia Launches Federal Advisory Council to Strengthen Cloud Computing Delivery to Federal Agencies

Carpathia takes on the big challenge of identifying and addressing federal cloud computing security and compliance issues.


Read the Press Release.

Silicon Valley Bank and Farnam Street Financial Give Codero $8 Million in Funding

Codero has announced an $8 million round of funding from Silicon Valley Bank and Farnam Street Financial to expand its worldwide data center footprint.

“We have outpaced our industry’s growth, expanding faster than other hosting and cloud providers due to our commitment to providing customers with unparalleled performance, expertise, support and value,” said Emil Sayegh, president and CEO of Codero Hosting. “The support of SVB and Farnam Street Financial helps us accelerate our growth and capitalize on our market success.”

Read the Press Release.

IBM Brings Watson to the Cloud for Supercomputing and Data Analysis

Over at MailOnline, the staff reports that IBM is bringing Watson, its supercomputer and Jeopardy champion, to the cloud. The company is investing $1 billion to house the system in its New York offices and is giving the financial, banking and healthcare industries access to it.

‘IBM has transformed Watson from a quiz-show winner into a commercial cognitive computing breakthrough that is helping businesses engage customers, healthcare organizations personalize patient care, and entrepreneurs build businesses,’ said Michael Rhodin, who will head the new Watson Group. IBM CEO Ginni Rometty said that Watson is built for a world where big data is transforming every industry and every profession. ‘Watson does more than find the needle in the haystack,’ Rometty said in remarks released ahead of the company’s Thursday presentation. ‘It understands the haystack. It understands context.’

Read the Full Story.

IBM Turns to Green Computing When it Comes to the Cloud

Over at VentureBeat, Jordan Novet writes that IBM is looking to lower carbon emissions in the cloud, not only to be environmentally conscious but also to be more competitive in the space.

“The new collaboration with the Trinity researchers resulted in a set of algorithms named Stratus. Carbon dioxide production, electricity cost, and the time it takes to move and crunch data all factor into the researchers’ experimental model, which was based on Amazon Web Services’ popular EC2 public-cloud service. As a result of the work, the researchers managed to drop carbon emissions by 21 percent, IEEE Spectrum reported.
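The article does not publish the Stratus algorithms themselves, but a toy weighted-cost model over the same three inputs it names (carbon intensity, electricity price, and time) gives a feel for how such a scheduler could pick a region; all figures and weights below are invented for illustration:

```python
# Toy cost model in the spirit of the description above. Not the actual
# Stratus algorithm; every number and weight here is made up.
regions = {
    "us-east": {"g_co2_per_kwh": 450, "usd_per_kwh": 0.10, "hours": 2.0},
    "eu-west": {"g_co2_per_kwh": 300, "usd_per_kwh": 0.18, "hours": 2.4},
    "us-west": {"g_co2_per_kwh": 250, "usd_per_kwh": 0.14, "hours": 2.2},
}

WEIGHTS = {"g_co2_per_kwh": 0.5, "usd_per_kwh": 0.3, "hours": 0.2}

def score(metrics):
    # Lower is better; each factor is normalized against the worst region.
    worst = {k: max(r[k] for r in regions.values()) for k in WEIGHTS}
    return sum(WEIGHTS[k] * metrics[k] / worst[k] for k in WEIGHTS)

best = min(regions, key=lambda name: score(regions[name]))
print("Schedule the job in:", best)  # -> us-west under these invented numbers
```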

Read the Full Story.

Intel and 64-bit ARM Processors in Server Arms Race

Over at the EE Herald, the staff reports on the intense competition that 64-bit ARM processor cores for servers have brought to bear on Intel’s and AMD’s dominance.

“With this trend of availability of ARM 64 bit processor core for servers, Intel is now facing a competition from around half a dozen of chip companies who are designing server chips based on ARM 64-bit processor core. The 64-bit ARM processor cores compete with Intel’s server processor chips mainly on the power consumption and the size. This is turning out to be interesting race. It’s like Intel versus group of ARM based server chip vendors. In this there is also a startup Calexda found to design exclusively server chips based on ARM 64 bit arch.

Read the Full Story.

Amazon Web Services Takes on Big Data with Kinesis

Over at InfoWorld, Mikael Ricknäs writes about Kinesis, Amazon’s latest enterprise data-analysis offering. The service, now in public beta, is designed to process massive amounts of real-time data and gives companies plenty of scalability in provisioning and deployment.

“Amazon sees a number of use cases for Kinesis; the service can collect data generated by an application and make it available for identification of slow queries, page views or resource utilization. Kinesis can also collect and analyze financial information in real-time or help game developers see how the players are interacting with their game and each other.
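For a sense of the producer side of such a pipeline, here is a minimal sketch using the boto3 SDK (the SDK choice, stream name, and payload are assumptions, not part of the article):

```python
# Minimal Kinesis producer sketch; the stream "page-views" is hypothetical
# and must already exist in the account.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"page": "/checkout", "user": "u-123", "latency_ms": 87}

kinesis.put_record(
    StreamName="page-views",            # target stream
    Data=json.dumps(event).encode(),    # records are opaque byte blobs
    PartitionKey=event["user"],         # determines which shard gets the record
)
```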

Read the Full Story.