
Disco MapReduce

Disco is an open-source MapReduce framework developed by Nokia for distributed computing of large data sets on clusters of commodity hardware. It includes features like fault tolerance, automatic parallelization, and job monitoring.

What is Disco MapReduce?

Disco is an open-source MapReduce framework originally developed by Nokia for distributing the computation of extremely large data sets across clusters of commodity hardware. It is designed to be scalable, fault-tolerant, and easy to use.

Some key features of Disco MapReduce include:

  • Automatic parallelization and distribution of MapReduce jobs
  • Fault tolerance - automatic retry of failed jobs
  • Support for different storage systems like HDFS, Amazon S3
  • Web-based job monitoring and control interface
  • Lightweight Python programming interface (see the sketch after this list)
  • Batch-oriented and stream-oriented MapReduce interfaces
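
To give a feel for that Python interface, below is a minimal word-count sketch in the style of Disco's classic example. It is illustrative only: module paths and helpers such as disco.core.Job, result_iterator, and kvgroup may differ between Disco versions, and "input.txt" is a placeholder input.

  from disco.core import Job, result_iterator

  def map(line, params):
      # Emit (word, 1) for every word on an input line.
      for word in line.split():
          yield word, 1

  def reduce(iter, params):
      # Group the sorted (word, count) pairs and sum counts per word.
      from disco.util import kvgroup
      for word, counts in kvgroup(sorted(iter)):
          yield word, sum(counts)

  if __name__ == '__main__':
      # Disco accepts local paths, http:// URLs, and DDFS tags as inputs.
      job = Job().run(input=['input.txt'], map=map, reduce=reduce)
      for word, count in result_iterator(job.wait(show=True)):
          print(word, count)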

Disco can handle very large data sets, on the order of petabytes, and scale to thousands of nodes. It has been used at Nokia for data-intensive use cases such as clickstream analysis, data mining, and machine learning.

Overall, Disco MapReduce is a solid open-source alternative to commercial solutions like Amazon EMR, with the added flexibility of running on private cloud infrastructure.

The Best Disco MapReduce Alternatives

Top Apps like Disco MapReduce

Amazon Kinesis, Apache Hadoop, Apache Flink, Apache Spark, and dispy are some alternatives to Disco MapReduce.

Amazon Kinesis

Amazon Kinesis is a managed service from Amazon Web Services (AWS) for real-time streaming data ingestion and processing. It is designed to ingest and process high volumes of streaming data from multiple sources simultaneously, making it well suited for real-time analytics and big data workloads.
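
As a rough sketch of how records reach Kinesis, the snippet below writes one event to a stream with boto3; the stream name "clickstream" and the event fields are made-up placeholders, and the stream must already exist in your AWS account with credentials configured.

  import json
  import boto3

  kinesis = boto3.client('kinesis', region_name='us-east-1')

  event = {'user': 'u123', 'page': '/home', 'ts': 1700000000}
  response = kinesis.put_record(
      StreamName='clickstream',
      Data=json.dumps(event).encode('utf-8'),
      PartitionKey=event['user'],  # records with the same key land on the same shard
  )
  print(response['ShardId'], response['SequenceNumber'])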

Apache Hadoop

Apache Hadoop is an open-source software framework for distributed storage and distributed processing of very large data sets on computer clusters. Hadoop is a top-level project of the Apache Software Foundation and is written in Java. Its key strength is massive scale: it distributes both storage and processing across large clusters of commodity machines.
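
For comparison with Disco's Python jobs, the same word count can run on Hadoop through Hadoop Streaming using two small Python scripts. This is only a sketch; the file names mapper.py and reducer.py are arbitrary, and both scripts are shown together here even though they would live in separate files.

  # mapper.py - Hadoop Streaming mapper: emit "word<TAB>1" per word on stdin.
  import sys

  for line in sys.stdin:
      for word in line.split():
          print(word + '\t1')

  # reducer.py - Hadoop Streaming reducer: input arrives sorted by key, so
  # consecutive lines with the same word can be summed in one pass.
  import sys

  current, total = None, 0
  for line in sys.stdin:
      word, count = line.rstrip('\n').split('\t')
      if word != current:
          if current is not None:
              print(current + '\t' + str(total))
          current, total = word, 0
      total += int(count)
  if current is not None:
      print(current + '\t' + str(total))

They would then be wired together with something like hadoop jar hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input <in> -output <out>.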

Apache Flink

Apache Flink is an open-source stream processing framework developed by the Apache Software Foundation. It is designed for high-throughput, low-latency processing of both unbounded and bounded data streams. A key capability is stateful stream processing: Flink maintains state across stream events, which enables aggregations and windowed computations over continuous data.
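
As a rough illustration, here is a PyFlink DataStream sketch that keys a stream of words and keeps a running count per key. The calls shown (from_collection, key_by, reduce) belong to PyFlink's DataStream API, but exact signatures and serialization details vary between Flink versions, and the small bounded input is only a stand-in for a real unbounded stream.

  from pyflink.datastream import StreamExecutionEnvironment

  env = StreamExecutionEnvironment.get_execution_environment()

  # A small bounded collection standing in for an unbounded source.
  words = env.from_collection(['disco', 'flink', 'disco', 'spark'])

  counts = (words
            .map(lambda w: (w, 1))
            .key_by(lambda pair: pair[0])
            .reduce(lambda a, b: (a[0], a[1] + b[1])))  # running count per key

  counts.print()
  env.execute('word_count_sketch')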

Apache Spark

Apache Spark is an open-source, general-purpose distributed cluster-computing framework designed for large-scale data processing and analytics. It provides a fast and general engine for large-scale data processing, with workloads running up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. It supports batch processing, SQL queries, streaming, machine learning, and graph processing through a unified API.
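
To show the contrast in programming style, here is a rough PySpark equivalent of the word-count examples above; "input.txt" is again a placeholder path, and cluster configuration is left at its defaults.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName('wordcount-sketch').getOrCreate()

  counts = (spark.sparkContext.textFile('input.txt')
            .flatMap(lambda line: line.split())   # split lines into words
            .map(lambda word: (word, 1))          # pair each word with 1
            .reduceByKey(lambda a, b: a + b))     # sum counts per word

  for word, count in counts.collect():
      print(word, count)

  spark.stop()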

Dispy

Dispy is an open-source distributed and parallel computing framework for Python. It makes it easy to distribute Python computations across multiple processors and machines. It can distribute Python functions and scripts across nodes for parallel execution, and it supports both computation-intensive jobs and interactive control.
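
A minimal dispy sketch follows, assuming dispynode is running on the local machine or on other reachable nodes; the compute function and its inputs are arbitrary examples.

  import dispy

  def compute(n):
      # Runs on whichever node dispy schedules it to.
      import time
      time.sleep(1)
      return n * n

  if __name__ == '__main__':
      cluster = dispy.JobCluster(compute)
      jobs = []
      for i in range(10):
          job = cluster.submit(i)
          job.id = i          # user-assigned id for bookkeeping
          jobs.append(job)
      for job in jobs:
          result = job()      # blocks until the job finishes, then returns its result
          print(job.id, result)
      cluster.close()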