Benchmarks and Research

In recent years, our R&D engineers have performed multiple benchmarks of emerging technologies (such as NoSQL databases, Hadoop, the Cloud Foundry PaaS, etc.). Some of our studies were published by NetworkWorld, ComputerWorld, TechWorld, and other tech magazines. Download these white papers to get the results of our findings on performance, supported features, optimization, etc.

This 11-page technical paper features a high-level description of how to manage Cloud Foundry deployments—distributed across multiple data centers—using Vault, Concourse CI, and BOSH. Providing a reference architecture, the study explores how to create repeatable and secure Cloud Foundry deployments with related services using state-of-the-art open-source tools. Highlighting each component and pipeline involved in the workflow, the document also overviews the backup and recovery procedures, as well as the rollout of updates. You will also learn about the pros and cons of some available alternatives.

This 15-page benchmark demonstrates performance results for the most popular Ruby frameworks, template engines, Rack application servers, and Ruby ORM tools. All the Ruby frameworks and tools were tested in production mode with logging disabled to ensure equal conditions.

This architect’s guide embraces the four major stages of Cloud Foundry adoption within an enterprise: CF evaluation, a proof of concept, POC assessment, and a PaaS rollout. Choosing an IaaS, configuring backups, handling integration, and getting the whole technology stack on board is only half of the journey; you’ll also have to introduce cultural changes to get the best out of the platform. Read this 60-page ultimate guide to learn what you need for a successful CF implementation—during all of the four adoption stages.

This 15-page technical research explores the major pros and cons of microservices, compares them to monoliths, and demonstrates how a PaaS can help to overcome the existing challenges. It includes 6 architecture diagrams, 3 comparative tables, and examples of running and scaling microservices-based applications on the Cloud Foundry PaaS. In addition, it provides a detailed comparison of how to operate microservices-based systems on an IaaS vs. a PaaS and what benefits are available with each of these infrastructure options. A reference architecture and the source code of a sample microservices application are also included.

This 20-page technical research presents a performance evaluation of distributed training with TensorFlow under two scenarios: multi-node and multi-GPU infrastructure configurations. The benchmark was carried out using the Inception architecture as a neural network model and the Camelyon16 data set for training. To test the training scalability of distributed TensorFlow running on an Amazon EC2 cluster, the g2.2xlarge and g2.8xlarge instance types were employed.

This 25-page technical report evaluates the performance of a sharded MongoDB cluster deployed over NetApp E-Series data storage. With 16 diagrams, the paper provides performance test results for the SSD cache enabled/disabled and all-SSD scenarios, as well as analyzes recovery behavior after a disk failure.

This 15-page research compares five popular deep learning tools—Caffe, Deeplearning4j, TensorFlow, Theano, and Torch—in terms of training performance and accuracy. When testing the frameworks, a fully connected neural network architecture was used for classifying digits from the MNIST data set. The paper also provides information on how speed and accuracy are affected by changes in the network “depth” and “width” (the data is available for the Tanh and ReLU activation functions).
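For reference, the two activation functions examined in the paper can be sketched in a few lines of NumPy (a minimal illustration of the math itself, not the benchmark code from the study):

```python
import numpy as np

def tanh(x):
    # Smooth, zero-centered activation that saturates for large |x|
    return np.tanh(x)

def relu(x):
    # Piecewise-linear activation that avoids saturation for x > 0
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(tanh(x))  # outputs stay within (-1, 1)
print(relu(x))  # negative inputs are clipped to zero
```

The saturation of Tanh for deep ("deeper") networks versus the unbounded positive range of ReLU is one reason the paper reports different speed/accuracy trade-offs for the two functions.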

This 20-page guide provides step-by-step instructions on how to use Kibana—an open-source analytics and visualization framework—on the IBM Bluemix platform. In detail, the technical paper describes how to bind the Elasticsearch service to your app, configure Kibana, deploy it to Bluemix, and use it for visualizing Elasticsearch data.

This 45-page technical paper compares two popular NoSQL offerings—MongoDB and Couchbase Server—exploring how architectural differences of these systems affect availability, scalability, and performance. The report also provides a detailed overview of topology, replication, caching, partitioning, memory, etc.

This 11-page research overviews NetApp E-Series storage systems in terms of performance, scalability, and availability—while running Couchbase Server. The paper compares NetApp E-Series to commodity servers with direct-attached storage (DAS), including performance and reliability characteristics under a number of workloads generated by a client-like benchmarking app. The paper also features a comparative table on DAS and E-Series performance, scalability, and availability.

This 11-page benchmark compares 6 major Redis-as-a-Service products available on AWS: Redis Cloud Standard, Redis Cloud Cluster, ElastiCache, openredis, RedisGreen, and Redis To Go. While most performance tests focus on single get/set operations, this technical research demonstrates Redis behavior under 3 workload scenarios (simple, complex, and combined). The paper contains 3 comparative tables and 8 diagrams that show performance of each provider in close to real-life use cases.

This 15-page technical study compares five popular Ruby frameworks in terms of performance. The research paper includes performance results under four workloads: views (Slim), MySQL (Sequel), views and MySQL, and no views/DB. For each framework, similar test applications were built to find out how much time it takes to process 1,000 requests.
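The measurement approach described, timing how long it takes to process 1,000 requests, can be sketched as follows (in Python for brevity; `handle_request` is a hypothetical stand-in for a real framework's request handler):

```python
import time

def handle_request(i):
    # Hypothetical stand-in for a framework's request handler
    return {"status": 200, "body": "response %d" % i}

def benchmark(n=1000):
    # Measure the wall-clock time needed to serve n requests
    start = time.perf_counter()
    for i in range(n):
        handle_request(i)
    return time.perf_counter() - start

print("1,000 requests took %.4f s" % benchmark())
```

In the actual study, the handler is replaced by each Ruby framework's full request path, so the measured time reflects routing, rendering, and (where applicable) database access.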

This report provides an in-depth analysis of the leading NoSQL systems: Cassandra, MongoDB, and Couchbase. Unlike other comparisons that focus only on one or two dimensions, this research approaches the databases from 20+ angles, including performance, scalability, availability, ease of installation, maintenance, data consistency, fault tolerance, replication, recovery, etc. With 29 diagrams, this paper also features a scoring framework/template for evaluating and comparing NoSQL data stores for your particular use case—depending on the weight of each criterion.
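A scoring framework of this kind boils down to weighting each criterion and averaging the weighted scores; a minimal sketch (the criteria, weights, and scores below are made up for illustration):

```python
def weighted_score(scores, weights):
    # Weighted average: each criterion's score times its weight,
    # normalized by the total weight
    total = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total

# Hypothetical weights and per-database scores on a 1-5 scale
weights = {"performance": 5, "scalability": 4, "ease_of_installation": 2}
candidate = {"performance": 4, "scalability": 5, "ease_of_installation": 3}
print(round(weighted_score(candidate, weights), 2))  # 4.18
```

Adjusting the weights to match your use case changes which data store comes out on top, which is exactly what the paper's scoring template is for.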

Nebula is a turnkey solution for distributed data center infrastructures that can help you shorten the deployment process to hours instead of days. This 8-page tech study features a step-by-step guide on how to deploy the Cloud Foundry PaaS on Nebula. It includes a detailed reference architecture for CF on Nebula, as well as infrastructure configuration, VM sizing recommendations, and a list of components necessary for a successful installation.

HP Moonshot is a low-power server designed for distributed computing and big data processing. This 12-page guide will help you reap all the benefits of the Cloud Foundry PaaS—a platform for delivering cloud apps—deployed on top of Moonshot. You will learn how to install MaaS, assign roles to servers, deploy OpenStack, configure a new environment, set up routing, and ensure high availability of apps on Cloud Foundry.

This 12-page research paper provides a scoring framework for evaluating API automation tools across 19+ technical criteria. It also includes a table that compares the functionality available in five popular API platforms: Apache UserGrid, WSO2 API Manager, Cumulocity, MuleSoft API Gateway, and StrongLoop Server. In addition, the study features step-by-step deployment guides for UserGrid and WSO2.

Among all open source projects in the category known as Platform-as-a-Service, OpenShift and Cloud Foundry have amassed the strongest development communities. Having similar functionality and goals, both make it possible to write code in a variety of languages and deploy applications to public or private clouds. Both are evolving extremely fast. Still, few, if any, in-depth comparisons of OpenShift and Cloud Foundry exist. The purpose of this research is to provide a high-level overview of Cloud Foundry and OpenShift, side by side. With 2 tables and 2 diagrams, it describes their features, supported languages, frameworks, tools, architecture, history, etc.

This paper questions whether it is possible to boost the performance of Hadoop calculations—using graphics processing units (GPUs)—by up to 200 times. Exploring the bottlenecks of data flow between the CPU, GPU, HDD, and memory, the study investigates the actual performance improvement that can be achieved. The article also reveals real-life performance results achieved by different projects and suggests a list of tools and libraries to use.

This 65-page benchmark compares the major Hadoop distributions: Cloudera, MapR, and Hortonworks. With 83 diagrams, it demonstrates how clusters of different size perform under 7 types of workloads. Download this study to learn which of the Hadoop distributions better suits your project needs and how to overcome limitations that may slow down your deployments.

Building a recommendation engine for a large online store involves a number of challenges, especially if you have to deal with huge data sets. In this white paper, our data scientist compares two approaches for implementing a Hadoop-based movie recommendation engine. Download the document to learn how generating association rules differs from clustering data, view the results of both implementations, and explore three ways to optimize the quality of movie recommendations.
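At its core, the association-rules approach counts which items co-occur in user histories; a toy sketch with made-up viewing data (not the paper's actual Hadoop implementation):

```python
from collections import Counter
from itertools import combinations

# Hypothetical viewing histories: each set is the movies one user watched
histories = [
    {"Alien", "Blade Runner", "Dune"},
    {"Alien", "Blade Runner"},
    {"Blade Runner", "Dune"},
]

# Count how often each pair of movies was watched together
pair_counts = Counter()
for h in histories:
    for pair in combinations(sorted(h), 2):
        pair_counts[pair] += 1

# A frequent pair suggests a simple "watched X, recommend Y" rule
print(pair_counts.most_common(1))
```

Clustering, the alternative the paper compares against, instead groups similar users or movies and recommends within a cluster; the white paper weighs the two approaches on real-scale data.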

Today’s Web applications deal with massive data sets that require high-performance systems for processing and analysis. However, your information becomes even more valuable if you can efficiently visualize it. This research compares three popular but significantly different JavaScript libraries and evaluates how they handle massive data sets and real-time visualization.

For some time, there were no solutions from Microsoft for processing big data in cloud environments. Recently, Microsoft launched the Hadoop on Windows Azure service to make it possible to distribute the load and speed up computations using Windows Azure. This article gives an overview of two out-of-the-box ways of processing big data with Hadoop on Windows Azure—Hive querying and JavaScript implementations—and compares their performance.

The variety of NoSQL databases makes it difficult to select the best tool for a particular case. Unbiased investigations are rare—tests are often conducted by the NoSQL vendors themselves. This benchmarking research is our vendor-independent analysis of four major NoSQL databases, based on performance measured under six different workloads.

Cloud computing remains one of the hottest topics in IT today given the promise of greatly improved efficiencies, significant cost savings, scalable infrastructure, high performance, and secured data storage. However, information on popular cloud systems is usually biased and full of advertising buzz, making it difficult to choose a cloud platform. This vendor-independent white paper contains a product-by-product comparison of the most popular cloud solutions (along with tips on bug-fixing) to help you select the best-fit product.

Customers Speak

  • "We highly recommend Altoros to rapidly build complex applications using cutting edge technologies. Again, great job!"

    Christopher Adorna, Sony

© 2001 - 2017 Altoros