
Quick-Start MMCloud Nextflow JuiceFS

Sateesh Peri

Quick-Start Guide: Deploying Nextflow with JuiceFS on MMCloud

Introduction to JuiceFS

JuiceFS is an open-source, high-performance distributed file system designed specifically for cloud environments. It offers unique features, such as:

  • Separation of Data and Metadata: JuiceFS stores files in chunks within object storage like Amazon S3, while metadata can be stored in various databases, including Redis.
  • Performance: Achieves millisecond-level latency and nearly unlimited throughput, depending on the object storage scale.
  • Easy Integration with MMCloud: MMCloud provides pre-configured Nextflow head-node templates with JuiceFS already set up, simplifying deployment.
  • Comparison with S3FS: For a detailed comparison between JuiceFS and S3FS, see JuiceFS vs. S3FS. JuiceFS typically offers better performance and scalability.
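To illustrate the data/metadata split described above, this is roughly how a JuiceFS volume is created by hand with the juicefs CLI. The nextflow:jfs template does all of this for you, so the bucket URL, Redis address, and volume name below are illustrative placeholders, not values used elsewhere in this guide:

```shell
# Sketch only: the MMCloud nextflow:jfs template performs this setup automatically.
# Format a JuiceFS volume: metadata goes to Redis, file chunks go to S3.
juicefs format \
    --storage s3 \
    --bucket https://<bucket-name>.s3.<bucket-region>.amazonaws.com \
    redis://<redis-host>:6868/1 \
    myjfs

# Mount the volume so it appears as a regular POSIX directory.
juicefs mount redis://<redis-host>:6868/1 /mnt/jfs
```

Note how the Redis metadata URL (`<host>:6868/1`) mirrors the `jfs://<head-node-private-ip>:6868/1` address used later in the Nextflow configuration.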

Pre-requisites for Using JuiceFS with Nextflow

Before you begin, ensure you meet the following prerequisites:

  1. Setup a Security Group:

    • A security group is essential for controlling traffic to and from your resources.
    • Inbound rules should include:
      • SSH over TCP on port 22 for secure shell access.
      • HTTPS over TCP on port 443 for secure web traffic.
      • Custom-TCP over TCP on port 6868, which is used by the Redis server in this setup.
    • Path: AWS EC2 console -> Network & Security -> Security Groups
    • Security Group Setup
  2. Create a New S3 Bucket:

    • Cloud storage is required for JuiceFS's backend storage.
    • Create a new S3 bucket and note its region, as this will be needed later.
    • Path: AWS S3 console -> Create Bucket
    • S3 Bucket Creation
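If you prefer the AWS CLI to the console, the two prerequisites above can be sketched as follows. The group name, bucket name, and region are placeholders, and the open `0.0.0.0/0` CIDR is for illustration only; restrict it to your own IP range in practice:

```shell
# Create the security group for the head node (sketch; names are placeholders).
aws ec2 create-security-group \
    --group-name mmcloud-jfs-sg \
    --description "MMCloud Nextflow JuiceFS head node"

# Open the three required inbound TCP ports: SSH (22), HTTPS (443), Redis (6868).
for port in 22 443 6868; do
    aws ec2 authorize-security-group-ingress \
        --group-name mmcloud-jfs-sg \
        --protocol tcp --port "$port" --cidr 0.0.0.0/0
done

# Create the S3 bucket for JuiceFS backend storage; note the region for later.
aws s3 mb s3://<bucket-name> --region <bucket-region>
```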

Deployment Steps for Individual Users on MMCloud

Note: The following steps are for users managing their own head node. For instructions on setting up a shared head node, refer to the Multi-user Head Node Deployment Guide.

Float Login

Ensure you are using the latest version of the float CLI:

sudo float release sync

Login to your MMCloud opcenter:

float login -a <opcenter-ip-address> -u <user>

After entering your password, verify that you see Login succeeded!

Float Secret

Set your AWS credentials as secrets in float:
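A minimal sketch of the commands that belong here, assuming the template expects secrets named BUCKET_ACCESS_KEY and BUCKET_SECRET_KEY (confirm the exact secret names against your OpCenter documentation):

```shell
# Store the AWS credentials JuiceFS will use to reach the S3 bucket.
# The secret names are assumptions; check your template's documentation.
float secret set BUCKET_ACCESS_KEY <your-aws-access-key>
float secret set BUCKET_SECRET_KEY <your-aws-secret-key>
```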


To verify the secrets:

float secret ls

Expected output: a table with a NAME column listing the secrets you stored.

Deploy Nextflow Head Node

Deploy the Nextflow head node using the nextflow:jfs template:

float submit -n <head-node-name> \
--template nextflow:jfs \
--securityGroup <security-group> \
-e BUCKET=https://<bucket-name>.s3.<bucket-region>

Note: Replace <head-node-name>, <security-group>, <bucket-name>, and <bucket-region> with your specific details. The nextflow:jfs template comes pre-configured with JFS setup.

Overriding Template Defaults

Customizing CPU and Memory

To override default CPU and memory settings:

--overwriteTemplate "*" -c <number-of-cpus> -m <memory-in-gb>

Example: to set 8 CPUs and 32 GB of memory:

--overwriteTemplate "*" -c 8 -m 32

Specifying a Subnet

For deploying in a specific AWS subnet:

--overwriteTemplate "*" --subnet <SUBNET-ID>

Incremental Snapshot Feature (From v2.4)

Enables faster checkpointing at the cost of larger storage use:

--overwriteTemplate "*" --dumpMode incremental

Advantages:

  • Supports larger workloads
  • Lower impact on job running time
  • No need to configure a periodic snapshot interval

Trade-offs:

  • Requires larger storage for delta saves
  • A final snapshot is necessary for restore

Checking Head Node Deployment Status

float list

Example Output:

|          ID           |      NAME      |            WORKING HOST            |  USER   |  STATUS   |     SUBMIT TIME      |  DURATION  |    COST     |
| NlygkM1dA0qIucncPwjgD | jfs-head-1     | (2Core4GB/OnDemand) | sateesh | Executing | 2023-11-02T17:40:06Z | 6m54s      | 0.0049 USD  |

SSH into Head Node

  • Locate the public IP address of the head node in the Working Host column.
  • Retrieve the SSH key from Float's secret manager:
float secret get <job-id>_SSHKEY > <head-node-name>-ssh.key

Note: If you encounter a Resource not found error, wait a few more minutes for the head node and SSH key to initialize.

  • Set the appropriate permissions for the SSH key:
chmod 600 <head-node-name>-ssh.key

SSH to Nextflow JFS Head Node

SSH into the Nextflow head node using the provided SSH key, username, and the head node's public IP address:

ssh -i <head-node-name>-ssh.key <user>@<head-node-public-ip-address>

Note: Use the username root to log in as the admin user.

MMC nf-float configuration

Editing the configuration file

  1. Copy the template and edit the configuration file:
cp mmcloud.config.template mmc-jfs.config
vi mmc-jfs.config

Note: If you're new to using vi, check out this Beginner's Guide to Vi for basic instructions.

  2. Modify the mmc-jfs.config file to include the OpCenter IP address, credentials, and the PRIVATE IP address of the Nextflow head node (<head-node-private-ip>). You can find the private IP address on the OpCenter GUI.

    Head Node Private IP Address
plugins {
    id 'nf-float'
}

workDir = '/mnt/jfs/nextflow/'

process {
    executor      = 'float'
    errorStrategy = 'retry'
}

float {
    address     = '<opcenter-ip-address>'
    username    = '<user>'
    password    = '<password>'
    commonExtra = '--dataVolume [opts=" --cache-dir /mnt/jfs_cache "]jfs://<head-node-private-ip>:6868/1:/mnt/jfs --dataVolume [size=120]:/mnt/jfs_cache'
}

aws {
    client {
        maxConnections    = 20
        connectionTimeout = 300000
    }
    accessKey = '<bucket_access_key>'
    secretKey = '<bucket_secret_key>'
}

Using Tmux

Start a tmux session named nextflow:

tmux new -s nextflow

To attach to an existing tmux session:

tmux attach -t nextflow

Tip: If you're new to tmux, here's a handy Tmux Cheat Sheet.

Nextflow Version Check

Check the Nextflow version and update if necessary:

nextflow -v

Example Output:

nextflow version <version>

Optional: Export Tower Token

If needed, export your Tower access token:

export TOWER_ACCESS_TOKEN=<token>

Launch Nextflow

Launch a Nextflow pipeline (for example, an nf-core pipeline) by providing the MMC config file:

nextflow run nf-core/<pipeline> \
-profile test_full \
-c mmc-jfs.config \
--outdir s3://nextflow-work-dir/<pipeline>

Float Summary

Users can retrieve a summary report for a specific Nextflow run by using the unique workflow name generated for each run:

float costs -f tags=nextflow-io-run-name:stupefied-jones

Example output:

Job Number: 409
  Succeeded:    400 ( 97.80%)
  Failed:         9 (  2.20%)
  Running:        0 (  0.00%)
Cloud Resource:
  vCPU:            2922 Core(s)
  Core Hours:       708.97
  Memory:         16031 GB
  Storage:        51408 GB
  Extra Storage:   3850 GB
Host Count:       421
Floating Count:    13
Costs:       16.1840 USD
  Compute:       14.6383 USD
  Storage:       0.7978 USD
  Extra Storage: 0.7479 USD
Savings:     17.4906 USD

This summary provides an overview of resource utilization and costs associated with a particular Nextflow run, aiding in budget management and efficiency analysis for large-scale computational workflows.
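Because the report is plain text, headline numbers can be pulled out with standard tools for quick run-to-run comparison. A small sketch using awk, with an excerpt of the example report above standing in for a freshly saved one:

```shell
# In practice, save a real report first, e.g.:
#   float costs -f tags=nextflow-io-run-name:<run-name> > report.txt
# Here an excerpt of the example report above stands in for a real one.
report='Job Number: 409
  Succeeded:    400 ( 97.80%)
  Failed:         9 (  2.20%)
Costs:       16.1840 USD
Savings:     17.4906 USD'

# Extract headline figures by matching each report line.
total=$(echo "$report"   | awk '/^Job Number:/ {print $3}')
ok=$(echo "$report"      | awk '/Succeeded:/ {print $2}')
cost=$(echo "$report"    | awk '/^Costs:/ {print $2}')
savings=$(echo "$report" | awk '/^Savings:/ {print $2}')

echo "jobs=$total succeeded=$ok cost_usd=$cost savings_usd=$savings"
# → jobs=409 succeeded=400 cost_usd=16.1840 savings_usd=17.4906
```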


FAQ

Q: Why does nf-core/rnaseq fail at the QUALIMAP step?

A: The QUALIMAP process in nf-core/rnaseq often fails due to its high-frequency, small-size write requests, leading to timeouts. Enabling -o writeback_cache consolidates these requests and improves performance significantly. However, it turns sequential writes into random writes, affecting sequential write performance. Use this setting only in scenarios with intensive random writes.

Add the following in the process {} scope of your config:

extra = '--dataVolume [opts=" --cache-dir /mnt/jfs_cache -o writeback_cache"]jfs://<head-node-private-ip>:6868/1:/mnt/jfs --dataVolume [size=120]:/mnt/jfs_cache'

Additional Reading

Data Volumes

For jobs that generate file system I/O, specifying data volumes is essential. The OpCenter supports a variety of data volume types. Learn more about configuring data volumes in MMCloud in the MMCloud Data Volumes Guide.

Allow / Deny Instance Types

Configure which instance types are allowed or denied in your setup. For details, see the Allow/Deny Instance Types guide.