
Blog

 

What Happens Now? Hitachi Data Systems Acquires Sepaton

21 Aug 2014, Posted by Jeska Rayboy in Blog

More consolidation is happening in the storage industry: Hitachi Data Systems has acquired Sepaton. George Crump, an IT analyst whose firm focuses on data storage and virtualization, wrote an interesting article outlining why he believes this is a win-win for both sides:

Hitachi Data Systems (HDS) announced it has acquired Massachusetts-based Sepaton, an established manufacturer of purpose-built backup appliances (PBBAs) that use advanced de-duplication to shorten backup times and minimize backup appliance “sprawl”. The company will become a wholly owned subsidiary of Hitachi Data Systems, which is a division of Hitachi Ltd. of Japan.

Who is Sepaton?

Founded in 2001, Sepaton was one of the early entrants into the disk-based, de-duplication backup market and originally focused on replacing tape-based backup systems (Sepaton’s name is actually “No Tapes” spelled backwards). But as disk backup and de-duplication became more mainstream, Sepaton rightly shifted its focus to the advantages of its data reduction technology, building a base of some 3,000 customers.

Leveraging their ‘DeltaScale’ technology, Sepaton’s PBBAs deliver some of the fastest backup and recovery performance on the market (up to 80TB per hour) in a modular, scalable architecture. Using byte-level de-duplication, Sepaton’s systems provide some of the highest, most consistent data reduction ratios regardless of data type, enabling multiple-PB, single-system capacities.
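The underlying data-reduction idea is straightforward to sketch: split the backup stream into blocks, fingerprint each block, and store any given fingerprint only once, so repeated full backups of mostly unchanged data collapse dramatically. The toy Python below illustrates fixed-block, hash-based de-duplication; it is a minimal sketch of the general concept only, not Sepaton’s byte-level DeltaScale implementation, and the block size and sample data are arbitrary.

```python
import hashlib

def dedup_store(data: bytes, block_size: int = 4096):
    """Split a stream into fixed-size blocks and store each unique block once.

    Returns a block store (hash -> block) and a recipe (ordered list of
    hashes) from which the original stream can be reconstructed.
    """
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # an already-seen block costs nothing
        recipe.append(digest)
    return store, recipe

def restore(store, recipe) -> bytes:
    return b"".join(store[h] for h in recipe)

# Repeated full backups of mostly unchanged data are highly redundant.
stream = b"same old backup data " * 10_000
store, recipe = dedup_store(stream)
assert restore(store, recipe) == stream
stored = sum(len(b) for b in store.values())
print(f"reduction ratio: {len(stream) / stored:.1f}:1")
```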

Did Sepaton need to do this?

Sepaton participates in the fiercely competitive purpose-built backup appliance market. They have had the advantage of focusing on enterprise-level customers with a highly scalable, high-performance feature set that typically appeals to that market. Their challenge, similar to that of any startup or small company selling to the enterprise, is building the credibility to compete effectively. While they may have had a product that some considered better suited to the enterprise, they were at a distinct disadvantage when going up against the likes of EMC.

They also faced the reality that many of their partners eventually became competitors. For example, HP was an early advocate and OEM partner of Sepaton’s, but now competes directly against it with HP’s own StoreOnce technology. The advantage of being part of HDS is that Sepaton gets instant credibility in the market and access to HDS’s resources, channel, and sales organization.

Why did HDS do this?

For their part, HDS had no serious offering in the disk backup appliance market while most of their competitors did, including HP, IBM, EMC, Dell, and even Oracle. HDS does have an enterprise sales organization, and providing it with a quality disk backup appliance that is differentiated from the competition should be an immediate benefit. Sepaton also creates some synergies with HDS’s existing product line: HDS has been providing the hardware platform for Sepaton’s S2100 in the form of its AMS2100 SAS RAID-6-based storage system.

Read More.

Everyone’s well aware that the explosive growth of data is one of the biggest challenges facing organizations of all sizes across nearly every industry. A growing number of cloud service providers (CSPs) are recognizing that they can provide more than just simple storage and economical compute power to their customers who are buried in data. They can also help them manage their Big Data processing needs on an ongoing basis.

Big Data wouldn’t have gotten so much attention over the past few years if it weren’t for the availability of cloud services that enable organizations to do something with all the data. Organizations have always faced challenges acquiring, analyzing, and acting on data. Third-party data processing services have been a lucrative business for many years because most organizations haven’t had, and didn’t want to invest in, the internal systems and staff to collect, collate, and interpret the data themselves.

Read More.

Network security has always been a near-impossible task, but the cloud era is ushering in a fundamentally new model that truly renders network security an oxymoron. How so, you ask?

In the past, organizations built and controlled their own networks. Because IT could control the flow of traffic inbound and outbound, the nodes on the network, and the users, they also controlled the network security architecture. IT was responsible for where and how to place firewalls, VPNs, IDS and IPS, load balancers, web application firewalls, and other security devices. In short, when you owned the network, you also owned securing the network.

Today, with more organizations moving to the cloud, a new approach is necessary. Three fundamental differences are driving this change:

  • Cloud providers own the network.
  • Traffic flows much differently in the cloud, and interdependencies between applications and services, both internal and external, are exploding.
  • Network security has historically been delivered through appliances.

Read More.

SMBs Tie Cloud Computing To Increased Revenue

06 Aug 2014, Posted by admin in Blog

Research by Oxford Economics and Windstream Communications has found that many small and midsized businesses have a strong appetite for cloud computing and tie it to increased revenues, even though they often don’t have large IT departments to support their move into the cloud.

Oxford Economics is the research outfit frequently tapped by members of the Fortune 500 or Federal Reserve Board to draw a picture of what’s happening in different parts of the world economy. At one time, it was made up primarily of economists from Oxford University in the UK, although time has diminished that link. To conduct its “Path To Value” cloud survey, it solicited feedback in May from 350 business executives in all regions of the US; 33% were CEOs, CTOs, or COOs, while the other 67% held other executive positions.

Read More.

If government IT professionals aren’t getting much sleep these days, it’s likely because they’re more worried than ever about catastrophic cyber-security breaches.

In InformationWeek’s 2014 Federal Government IT Priorities Survey, 70% of respondents said that cyber- and information security programs are “extremely important” at their agencies, making IT security the highest government IT priority. Another 24% said IT security is at least fairly important. Only 3% said security is “not important at all.”

The survey also demonstrated that security is intensifying as the top government IT priority. In last year’s survey, 67% of respondents stated that information security is extremely important.

Read More.

Red Hat Inc. today released version 1.2 of its Inktank Ceph Enterprise software, featuring erasure coding, cache tiering, and updated tools to manage and monitor the distributed object storage cluster.

The release marks the first product update since Red Hat acquired Inktank Storage Inc. in May for about $175 million in cash. Targeted at cloud, backups and archives, Inktank Ceph Enterprise (ICE) combines open source Ceph software for object and block storage, Calamari monitoring and management tools, and product support services.

Red Hat’s ICE 1.2 software-defined storage brings the commercially supported product in line with the latest Firefly release of open source Ceph storage software, and two key new features — erasure coding and cache tiering — are already generating interest.
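Conceptually, erasure coding splits an object into k data chunks and computes m additional coded chunks, so the object survives the loss of any m chunks while consuming far less raw capacity than full replication (k=4, m=1 carries 25% overhead, versus 200% for three-way replication). The Python below is a deliberately minimal single-parity (m=1) XOR sketch of that idea; Ceph’s real implementation uses pluggable erasure-code libraries and configurable k/m profiles, which this toy does not attempt to model.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int = 4):
    """Split data into k equal chunks plus one XOR parity chunk (m=1).

    Any single lost chunk can be rebuilt from the k surviving chunks.
    """
    data = data.ljust(-(-len(data) // k) * k, b"\0")  # pad to a multiple of k
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(xor, chunks)
    return chunks + [parity]

def rebuild(chunks, lost: int) -> bytes:
    """Recover the chunk at index `lost` by XOR-ing all surviving chunks."""
    survivors = [c for i, c in enumerate(chunks) if i != lost and c is not None]
    return reduce(xor, survivors)

chunks = encode(b"object data striped across the cluster", k=4)
original = chunks[2]
chunks[2] = None                  # simulate a failed disk or OSD
assert rebuild(chunks, 2) == original
```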

Read More.

Google I/O: Hello Dataflow, Goodbye MapReduce

28 Jul 2014, Posted by admin in Blog

Google I/O this year was overwhelmingly dominated by consumer technology, the end user interface, and extension of the Android universe into a new class of mobile devices, the computer you wear on your wrist.

At the same time, there were one or two enterprise-scale data handling and cloud computing gems scattered among all the end user announcements.

One was Cloud Dataflow, introduced at the San Francisco event during a keynote presentation Wednesday. When it comes to handling large amounts of unstructured data, one of Google’s original contributions to the field was MapReduce. Combined with a distributed file system, it became the foundation of the era’s fundamental new mechanism for sorting, analyzing, and storing data: Hadoop.
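The programming model MapReduce introduced is easy to illustrate: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. The in-process Python word count below is a toy rendering of that model; Hadoop runs the same three phases in parallel across thousands of machines and a distributed file system.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc: str):
    # map: emit a (word, 1) pair for every word in the document
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # shuffle: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: aggregate the grouped values for each key
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data on big clusters", "data flows in the cloud"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'on': 1, ...}
```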

Read More.

In his keynote at Spark Summit 2014 in San Francisco today, Databricks CEO Ion Stoica unveiled Databricks Cloud, a cloud platform built around the Apache Spark open source processing engine for big data.

Spark, which saw its v1.0 release just one month ago, is a cluster computing framework designed to sit on top of the Hadoop Distributed File System (HDFS) in place of Hadoop MapReduce. With support for in-memory cluster computing, Spark can achieve performance up to 100x faster than Hadoop MapReduce when working in memory, or 10x faster on disk.
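As a rough illustration of why, consider the classic RDD word count below, written against PySpark’s public API. The call to cache() is the key line: it asks Spark to keep the computed dataset in cluster memory so that subsequent actions reuse it instead of recomputing from disk. The HDFS path is illustrative only.

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-sketch")

# Path is illustrative; any HDFS or local text file works.
lines = sc.textFile("hdfs:///data/logs.txt")

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# cache() pins the RDD in cluster memory, so the two actions below
# reuse it instead of re-reading and re-shuffling from disk; this
# in-memory reuse is the source of Spark's advantage over MapReduce.
counts.cache()
print(counts.take(5))
print(counts.count())
```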

Spark can be an excellent compute engine for data processing workflows, advanced analytics, stream processing and business intelligence/visual analytics. But Spark clusters can be difficult beasts, Stoica says. Databricks hopes to change all that with its hosted Databricks Cloud platform as a turnkey solution.

Read More.

A project launched by CloudFlare, a provider of website performance and security services, allows organizations engaged in news gathering, civil society, and political or artistic speech to use the company’s distributed denial-of-service (DDoS) protection technology for free.

The goal of the project, dubbed Galileo, is to protect freedom of expression on the Web by keeping sites that carry public-interest information from being censored through online attacks, according to the San Francisco-based company.

“If a website participating in Project Galileo comes under attack, CloudFlare will extend full protection to ensure the site stays online — no matter its location, no matter its content,” the Project Galileo website says.

Read More.

Amazon, Google Spar Over SSDs In Cloud

14 Jul 2014, Posted by admin in Blog

Amazon Web Services is making solid state disks the standard storage for its Elastic Block Store service used with running instances, and is setting the price to compete with Google Compute Engine.

Solid state was available previously on Amazon’s EC2, but it tended to be associated with specialized server types designed to provide data management and high transaction throughput. The storage Amazon announced on Tuesday consists of general-purpose volumes based on SSDs, priced at $0.10 per GB per month.

If general-purpose SSDs don’t provide a high enough input/output rate, customers can purchase additional capacity at $0.125 per GB per month for each additional 1,000 IOPS provisioned. The cost is prorated by the share of the month in which the volumes are actually provisioned: if they are used for half the month, for example, the bill would be 50% of a full month’s total.
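The proration is easy to verify with the article’s own figures. The short Python below computes monthly bills at the quoted rates; the 500GB volume size is an arbitrary example, and the rate interpretation follows the article’s wording rather than Amazon’s full price list.

```python
def prorated_cost(gb: float, rate_per_gb_month: float, fraction_of_month: float) -> float:
    """Monthly bill at a GB-month rate, prorated to the share of the
    month the volume is actually provisioned."""
    return gb * rate_per_gb_month * fraction_of_month

# Rates quoted in the article; the 500 GB volume is an arbitrary example.
print(prorated_cost(500, 0.10, 1.0))   # general-purpose SSD, full month: $50.00
print(prorated_cost(500, 0.125, 0.5))  # provisioned-IOPS rate, half month: $31.25
```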

Read More.