PhoenixNAP Brings Scalability to Metadata Search

Scale-out object storage holds data while Intel® Optane™ DC persistent memory keeps a cache for accelerated performance.

Data centers and the rising numbers of connected devices generate a huge amount of metadata. Companies need an effective way to store and search it, to help them manage their infrastructure and detect security breaches. Using phoenixNAP Elasticsearch* Service, companies can now store their data in scale-out object storage instead of memory and keep a cache in Intel® Optane™ DC persistent memory to accelerate performance.

Challenge

  • Storing and analyzing the growing volumes of metadata can be challenging, yet it is necessary for managing the IT estate and its security.
  • Logs may be distributed across different cloud platforms, making it difficult to build a holistic picture of a particular event or user journey.
  • Elasticsearch provides a solution, but the clustering requires ongoing management and data is typically kept in flash storage, which may be prohibitively expensive as data volumes rise.

Solution

  • PhoenixNAP Elasticsearch Service enables companies to easily deploy Elasticsearch to search across data in different cloud locations.
  • Using Vizion.AI’s technology, the service enables affordable object storage to be used for 90 percent of the data.
  • The other 10 percent is a cache of the hottest data, which is stored in Intel Optane DC persistent memory, providing near-memory speeds at affordable large capacities.

Results

  • Using Intel Optane DC persistent memory for the cache cut latency by 80 percent and accelerated indexing by 3x compared to hosting the solution in a hyperscale cloud environment.1
  • PhoenixNAP customers can now analyze the metadata stored in phoenixNAP’s data centers without needing to validate the data sovereignty of a new provider.
  • The cloud service provider (CSP) can also sell the service to new customers that need an easier way to make sense of their metadata.

Managing huge volumes of metadata
By 2025, the Internet of Things (IoT) will be generating 79.4 zettabytes of data a year, according to IDC.2 That’s roughly 79.4 billion terabytes. Alongside all the data people want comes metadata: data about the data, such as logs. Metadata will play its own part in this growth and will also need to be managed and analyzed. This will intensify a problem that companies already face: many digital assets, including servers and applications, generate logs that must be analyzed to manage IT performance and help protect its security.

These logs might be distributed across different cloud platforms, and finding any single data point can mean huge volumes of data need to be searched. Often multiple data points will need to be correlated. If there is a security incident, for example, companies might need to quickly trace the activities of a user across all the logs within a particular time period. The longer that takes, the greater the risk to the business.
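As a rough illustration of the correlation task described above (a toy sketch, not phoenixNAP's or Vizion.AI's actual implementation), the Python snippet below gathers one user's activity across several log sources within a time window; the source names and log fields are invented for the example:

```python
from datetime import datetime

def correlate_events(sources, user_id, start, end):
    """Collect all log entries for one user, across several log
    sources, that fall inside a given time window, sorted by time."""
    matches = []
    for source_name, entries in sources.items():
        for entry in entries:
            ts = datetime.fromisoformat(entry["timestamp"])
            if entry["user"] == user_id and start <= ts <= end:
                matches.append({**entry, "source": source_name})
    return sorted(matches, key=lambda e: e["timestamp"])

# Hypothetical logs from two different cloud platforms
logs = {
    "cloud_a": [
        {"timestamp": "2019-03-01T10:00:00", "user": "alice", "action": "login"},
        {"timestamp": "2019-03-01T10:05:00", "user": "bob",   "action": "login"},
    ],
    "cloud_b": [
        {"timestamp": "2019-03-01T10:02:00", "user": "alice", "action": "download"},
        {"timestamp": "2019-03-02T09:00:00", "user": "alice", "action": "login"},
    ],
}

window_start = datetime(2019, 3, 1, 9, 0)
window_end = datetime(2019, 3, 1, 11, 0)
trail = correlate_events(logs, "alice", window_start, window_end)
```

In a real incident response, the "sources" would be search queries against indexed log stores rather than in-memory lists, but the shape of the problem is the same: filter by user and time, then merge and order the results.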

Many companies are looking for a cloud-native platform that will enable them not only to store, but also to search and analyze, their growing metadata logs. Elasticsearch is an open source search engine that could be used, but it typically requires the data to be stored on flash storage media, which may be unsustainably expensive as data volumes grow.
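At its core, a full-text search engine like Elasticsearch is built around an inverted index that maps each term to the documents containing it. As a much-simplified illustration (Elasticsearch's real implementation is far more sophisticated, with analyzers, scoring, and distributed shards), a minimal inverted index might look like this:

```python
from collections import defaultdict

def build_index(documents):
    """Map each term to the set of document IDs containing it --
    the core data structure behind full-text search."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return IDs of documents containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Hypothetical log lines standing in for indexed documents
docs = {
    1: "failed login from unknown host",
    2: "successful login from trusted host",
    3: "disk usage warning",
}
index = build_index(docs)
hits = search(index, "login host")  # documents mentioning both terms
```

Because every query touches the index rather than scanning raw documents, the storage tier holding the index and the hottest data dominates search latency, which is why the choice of storage media matters so much at scale.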

Additionally, deployment and management of Elasticsearch can be demanding. Customers might need to launch six virtual machines before they can get their first data in and will need to monitor data volumes so they can delete data or expand the cluster when it fills.

Putting the data into a hyperscale cloud environment may be cheaper, but it still requires ongoing cluster management by the user and does not take advantage of low-cost object storage.

Cloud customers are looking for a solution that overcomes these limitations, to easily and quickly analyze their metadata, while taking advantage of the economics of open source software.

Introducing phoenixNAP Elasticsearch* Service
CSP phoenixNAP worked with Vizion.AI and Intel to launch the phoenixNAP Elasticsearch Service. This service enables companies to easily deploy Elasticsearch to analyze data across their multi-cloud environment. Only the hottest 10 percent of data needs to be stored in fast storage: the rest can be kept in the cloud using object storage, with Vizion.AI’s solution taking care of compression, deduplication, encryption, and transport to and from the cloud. Using the cloud for the bulk of the metadata can dramatically cut the cost, compared to keeping it in flash storage.
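The hot/cold split described above can be sketched in miniature. The toy Python class below is not Vizion.AI's actual code; it simply illustrates the pattern of keeping recently used entries in a small fast tier and evicting older ones, compressed, to a stand-in object store:

```python
import zlib
from collections import OrderedDict

class TieredStore:
    """Keep the most recently used entries in a small 'hot' tier;
    evict older entries, compressed, to a 'cold' object store."""

    def __init__(self, hot_capacity):
        self.hot_capacity = hot_capacity
        self.hot = OrderedDict()  # stands in for the persistent-memory cache
        self.cold = {}            # stands in for cloud object storage

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        while len(self.hot) > self.hot_capacity:
            old_key, old_value = self.hot.popitem(last=False)
            self.cold[old_key] = zlib.compress(old_value)

    def get(self, key):
        if key in self.hot:  # fast path: cache hit
            self.hot.move_to_end(key)
            return self.hot[key]
        value = zlib.decompress(self.cold.pop(key))  # slow path: rehydrate
        self.put(key, value)  # promote back into the hot tier
        return value

store = TieredStore(hot_capacity=2)
for i in range(10):
    store.put(f"log-{i}", f"entry {i}".encode())
# Only the 2 newest entries stay hot; the rest live compressed in the cold tier.
```

The real service adds deduplication, encryption, and network transport on the cold path, but the economics follow the same logic: only the small hot tier needs expensive fast media.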

Customers order the solution through a portal, and setup and management are automated, resolving one of the pain points with Elasticsearch. Because the solution is based on a microservice rather than a managed infrastructure pool, customers no longer need to monitor their data volumes. The object storage on the back end scales to accommodate data as it comes in.

Figure 1 shows a simplified architecture of the solution. Customer workloads run in containers and are orchestrated using Kubernetes*. Because the infrastructure is shared across hundreds of clients, there are significant economies of scale, compared to a company setting up its own Elasticsearch cluster.

Figure 1. The phoenixNAP Elasticsearch* Service enables object storage to be used for scale-out storage of metadata, which can be searched and analyzed using Elasticsearch. The solution is based on the 2nd Generation Intel® Xeon® Scalable processor family with Intel® Optane™ DC persistent memory.

Vizion.AI’s parent company Panzura* provides an intermediary layer that translates between data center storage protocols and cloud-native object storage. This enables Elasticsearch (and other applications) running in the data center to access cloud storage without modification, with the in-memory cache helping to deliver high performance.
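The essence of such a translation layer is mapping a hierarchical, file-style interface onto the flat namespace of an object store. The Python sketch below is a hypothetical minimal adapter, not Panzura's product, that shows the idea: paths become object keys, and reads and writes become gets and puts:

```python
class ObjectStorageAdapter:
    """Translate file-style reads/writes into flat object-store
    get/put operations, so an application that expects a filesystem
    can sit on top of object storage unchanged."""

    def __init__(self):
        self.objects = {}  # stands in for a cloud object-store bucket

    @staticmethod
    def _key(path):
        # Object stores use a flat namespace: encode the path as a key.
        return path.strip("/").replace("/", "%2F")

    def write_file(self, path, data):
        self.objects[self._key(path)] = data

    def read_file(self, path):
        return self.objects[self._key(path)]

adapter = ObjectStorageAdapter()
adapter.write_file("/indices/logs/segment_1", b"\x00\x01")
data = adapter.read_file("/indices/logs/segment_1")
```

A production layer also has to handle caching, consistency, compression, and encryption on this path; the point here is only that the application above the adapter never needs to know it is talking to object storage.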

The underlying server hardware is powered by the 2nd Generation Intel® Xeon® Scalable processor family with Intel Optane DC persistent memory, which gives CSPs a unique combination of affordable large capacity and near-memory performance. Physically, the persistent memory modules are compatible with DRAM and plug into the same DIMM slots. In this solution, the persistent memory caches the hottest 10 percent of the data.

“The nice thing about using Intel Optane DC persistent memory is that it’s not disruptive to app deployment. We look at the world from a Kubernetes perspective, and persistent memory looks to us just like another storage resource. We get efficiencies out of the box, without having to change any code. Having our solution accelerated by Intel Optane DC persistent memory enables us to achieve cost efficiencies too, compared to using a hyperscale cloud provider.”— Geoff Tudor, vice president and general manager of Vizion.AI

The virtualization layer is based on VMware vSphere* 6.7, which supports Intel Optane DC persistent memory.

Vizion.AI measured the performance of the infrastructure using persistent memory and compared it to the speed of its software running in a public cloud service. The company found that the phoenixNAP implementation was 3x faster at indexing and digitizing documents.1 There was also an 80 percent reduction in latency, which is particularly important when identified issues are used to trigger an incident response in real time.1

With several hundred customer containers on each server, improvements in the indexing of new content and latency in real-time search help avoid contention on the server and improve the customer experience.

Intel Is an Ally
Intel has a close working relationship with phoenixNAP, including helping the company to develop and market new services. To accelerate the launch of phoenixNAP Elasticsearch Service, Intel provided phoenixNAP with early access to Intel Optane DC persistent memory, and the 2nd gen Intel Xeon Scalable processor, which is required to use it. Intel was on hand to offer support with implementing the solution, and with fine-tuning it to improve performance.

Differentiating with Elasticsearch*
The cloud market is intensely competitive, so phoenixNAP differentiates from hyperscale providers by offering a cloud environment that is optimized for particular applications and enhanced with value-added services.

The launch of the new Elasticsearch service enables phoenixNAP to provide more value to its existing customers, by giving them new ways to manage their data within the phoenixNAP cloud. For customers subject to regulation, that means there’s no need to validate the data sovereignty or security of a new solution provider.

The new service can also help to attract new business. Because the solution works across cloud environments, it does not require the bulk of the data to be stored in phoenixNAP’s data center. New customers may choose to migrate data to phoenixNAP, or may prefer to take advantage of the ease of deployment phoenixNAP offers as they access third-party cloud storage locations.

Vizion.AI and phoenixNAP are working together on marketing the solution to build their joint customer base.

Lessons Learned
There are several lessons from phoenixNAP and Vizion.AI’s experience launching the new service.

  • “Matching the workload to an optimized infrastructure can deliver huge savings,” said Geoff Tudor. “Using Intel® Optane™ DC persistent memory, I can get more processing done than I could on an equivalent hardware platform without persistent memory.”
  • Code modifications are not always required to benefit from Intel Optane DC persistent memory. VMware vSphere* 6.7 is compatible with persistent memory, for example, out of the box.
  • Intel works with cloud service providers (CSPs) to help them to create and launch new services, including the new phoenixNAP Elasticsearch* Service.

Spotlight on PhoenixNAP
Founded in 2009, phoenixNAP is a global IT services provider offering cloud, dedicated server, colocation, and infrastructure as a service (IaaS) technology solutions. PhoenixNAP is a Premier Service Provider in the VMware vCloud Air* Network Program and is a Payment Card Industry Data Security Standard (PCI DSS) Validated Service Provider. Its flagship facility in Phoenix, Arizona, is Service Organization Controls (SOC) Type 1 and SOC Type 2 audited.
 
Technical Components of the Solution
  • Elasticsearch*. Elasticsearch is open source software that provides a distributed, multitenant-capable full text search engine.
  • Panzura* Storage Layer. The Panzura Storage Layer provides an intermediary between data center storage protocols and cloud storage protocols, enabling data center applications to use cloud storage without modification.
  • 2nd Generation Intel® Xeon® Scalable processor. The 2nd gen Intel Xeon Scalable processor provides the foundation for a powerful data center platform that creates a leap in agility and scalability. Disruptive by design, this innovative processor sets a new level of platform convergence and capabilities across compute, storage, memory, network, and security. Enterprises and cloud and communications service providers can now drive forward their most ambitious digital initiatives with a feature-rich, highly versatile platform.
  • Intel® Optane™ DC persistent memory. Intel Optane DC persistent memory is a new class of memory that brings greater capacity to the cores – terabytes instead of gigabytes per platform – and is accessible over the memory bus. This revolutionary technology delivers a unique combination of affordable large capacity and support for data persistence. It is supported by 2nd gen Intel Xeon Scalable processors.

Explore Related Intel® Products

Intel® Xeon® Scalable Processors

Drive actionable insight, count on hardware-based security, and deploy dynamic service delivery with Intel® Xeon® Scalable processors.

Learn more

Intel® Optane™ DC Persistent Memory

Extract more actionable insights from data – from cloud and databases, to in-memory analytics, and content delivery networks.

Learn more

Legal Notices and Disclaimers

Intel® technologies’ features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer, or learn more at https://www.intel.co.kr. // Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information, visit https://www.intel.co.kr/benchmarks. // Performance results are based on testing as of the dates shown in the configurations and may not reflect all publicly available security updates. See the configuration disclosure for details. No product or component can be absolutely secure. // Cost reduction scenarios described are intended as examples of how a given Intel®-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. // Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web sites and confirm whether the referenced data are accurate. // Some results have been estimated using internal Intel analysis, architecture simulation, or modeling, and are provided for informational purposes only. Any differences in your system hardware, software, or configuration may affect your actual performance.

Product and Performance Information

1

Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Configuration: up to 3x faster indexing and 80 percent reduction in cache latency. Based on phoenixNAP and Panzura testing of Elasticsearch as of March 2019: Intel® Xeon® Gold 6230 processor, 256 GB total RAM, 1.5 TB of Intel® Optane™ DC persistent memory, Hyper-Threading: enabled, Turbo: enabled, ucode: 0x043, OS: centos-release-7-5.1804.el7.centos.x86_64, kernel: 3.10.0-862 vs. AWS i3.xlarge (Intel) instance, Elasticsearch, memory: 30.5 GB, hypervisor: KVM, storage type: EBS-optimized, disk volume: 160 GB, total storage: 960 GB, Elasticsearch version: 6.3.