Multi-Instance
CloudShark Enterprise supports a flexible deployment architecture that lets customers deploy any number of instances. With this unlimited model, you can run multiple CloudShark nodes to achieve high availability, load balancing, and simplified upgrade workflows.
This guide describes a clustered multi-instance deployment configuration providing:
- High Availability: Eliminate single points of failure with redundant instances
- Scalability: Add nodes to handle increased demand
- Flexible Maintenance: Upgrade CloudShark with minimal downtime
Architecture
A multi-instance CloudShark deployment for clustering or disaster recovery requires sharing specific storage locations and network services across all nodes, depending on the deployment type.
Storage
| Location | Purpose | NFS | S3 |
|---|---|---|---|
| /usr/cloudshark/data | Capture files | ✔ | ✔ |
| /usr/cloudshark/etc | Config files | ✔ | ✖ |
Data in the above locations of the CloudShark file system must be accessible to all instances using shared storage.
NFS
Network File System (NFS) allows multiple CloudShark instances to share the same capture files and configuration. This is the recommended approach for multi-instance deployments.
See the NFS Storage documentation for more information and configuration details.
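As a sketch, assuming an NFS server reachable at nfs.internal that exports /export/cloudshark (both names are examples; see the NFS Storage documentation for the recommended mount options), each node could mount the shared locations with /etc/fstab entries like:

```
# /etc/fstab on every CloudShark node (server name and export paths are examples)
nfs.internal:/export/cloudshark/data  /usr/cloudshark/data  nfs  defaults,_netdev  0 0
nfs.internal:/export/cloudshark/etc   /usr/cloudshark/etc   nfs  defaults,_netdev  0 0
```

The _netdev option ensures the mounts wait for networking at boot.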
S3-Compatible
S3 storage can only be used for capture files (/usr/cloudshark/data). The
config file location (/usr/cloudshark/etc) must use NFS or local storage due
to S3 filesystem limitations.
CloudShark can store capture files in Amazon S3 or S3-compatible storage services using Mountpoint for Amazon S3. This provides virtually unlimited storage capacity and built-in redundancy.
See the S3 Storage documentation for detailed configuration instructions.
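As an illustrative sketch (the bucket name is an example; see the S3 Storage documentation for the supported configuration), Mountpoint for Amazon S3 mounts a bucket onto the capture file location like so:

```
# Mount an S3 bucket at the capture file location (bucket name is an example)
mount-s3 my-cloudshark-captures /usr/cloudshark/data
```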
Network Services
| Service | Port/Protocol | Purpose |
|---|---|---|
| MariaDB/MySQL | 3306/tcp | User accounts, capture metadata, and settings |
| Redis | 6379/tcp | Distributed caching |
| Memcached | 11211/tcp | Session storage |
These services can be self-hosted on a dedicated server or provided by cloud services such as Amazon RDS, ElastiCache, or equivalent offerings from other providers.
External service connections are configured in
/usr/cloudshark/etc/services.conf. In a multi-instance deployment using
shared NFS storage, this file is shared across all nodes and only needs to be
configured once, after shared storage is configured.
The packetviewer and suricata services run locally on each CloudShark
instance and should remain set to localhost.
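Because services.conf lives on shared NFS storage, every node should see an identical copy. One quick sanity check, sketched here with example hostnames, is to compare checksums from each node:

```
# All checksums should match (node hostnames are examples)
for node in cs-node1 cs-node2 cs-node3; do
  ssh "$node" sha256sum /usr/cloudshark/etc/services.conf
done
```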
Deployment Types
CloudShark supports several deployment types that use more than one CloudShark instance. The best approach depends on whether your goal is high availability, horizontal scaling, or simply having a backup ready.
Choose the multi-instance deployment that best fits your availability requirements and infrastructure complexity.
Cluster
A clustered deployment connects multiple CloudShark instances to shared network storage for capture files and configuration, with all nodes connecting to shared database and caching services. This is the most complex deployment type, but it is essential for production environments that require high availability and horizontal scaling.
Once storage, external services, and the load balancer are created and configured, create as many nodes as you require and add them to the load balancer.
┌─────────────────┐
│ Load Balancer │
└────────┬────────┘
│
┌─────────────────┼─────────────────┐
│ │ │
┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
│ CloudShark │ │ CloudShark │ │ CloudShark │
│ Node 1 │ │ Node 2 │ │ Node 3 │
└──────┬──────┘ └──────┬──────┘ └──────┬──────┘
│ │ │
└─────────────────┼─────────────────┘
│
┌──────────────┬──────────────┼──────────┬──────────┐
│ │ │ │ │
┌────┴───────┐ ┌────┴───────┐ ┌────┴────┐ ┌───┴───┐ ┌────┴─────┐
│ S3 │ │ NFS │ │ MariaDB │ │ Redis │ │Memcached │
│ (captures) │ │ (config) │ │ │ │ │ │ │
│ │ │ (captures) │ │ │ │ │ │ │
└────────────┘ └────────────┘ └─────────┘ └───────┘ └──────────┘
Network Service Configuration
To configure CloudShark to use the required external services for a clustered
deployment, edit the /usr/cloudshark/etc/services.conf configuration file and
update the memcache, database and redis sections to point to your
external services. For example:
[memcache]
host = "memcached.internal:11211"
[database]
adapter = "mysql"
database = "cloudshark"
user = "cloudshark"
password = "your-secure-password"
host = "database.internal"
port = 3306
[redis]
host = "redis.internal"
After modifying /usr/cloudshark/etc/services.conf, restart CloudShark:
systemctl reset-failed cloudshark-* && systemctl restart cloudshark-full
Load Balancer Configuration
CloudShark should work with any HTTP/HTTPS load balancer. Configure your load balancer with the following settings:
| Setting | Value |
|---|---|
| Protocol | HTTPS (TCP port 443) |
| Health Check | GET /monitor returns 200 OK |
| Session persistence | Required |
| Backend protocol | HTTPS |
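The settings above map directly onto most load balancers. As one hedged example (backend addresses and the certificate path are placeholders, not values from this guide), an HAProxy configuration implementing the health check, HTTPS backends, and source-IP session persistence might look like:

```
frontend cloudshark_front
    bind *:443 ssl crt /etc/haproxy/certs/cloudshark.pem
    default_backend cloudshark_nodes

backend cloudshark_nodes
    balance source                    # session persistence by client IP
    option httpchk GET /monitor      # health check endpoint
    http-check expect status 200
    server node1 10.0.0.11:443 check ssl verify none
    server node2 10.0.0.12:443 check ssl verify none
    server node3 10.0.0.13:443 check ssl verify none
```

Cookie-based persistence (HAProxy's `cookie` directive) is an alternative to source-IP hashing when many clients share a NAT address.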
Disaster Recovery
To maintain a standby instance that can take over if the primary fails, an active-passive failover model can be deployed. This model requires a shared database and shared storage for capture files and configuration.
┌────────────────┐
│ Floating │
│ IP/DNS address │
└────────┬───────┘
│
┌───────────┴───────────┐
│ │
┌──────┴──────┐ ┌──────┴──────┐
│ CloudShark │ │ CloudShark │
│ Primary │ │ Standby │
└──────┬──────┘ └──────┬──────┘
│ │
└───────────┼───────────┘
│
┌───────────────┼───────────────┐
│ │ │
┌────┴───────┐ ┌────┴───────┐ ┌────┴────┐
│ S3 │ │ NFS │ │ MariaDB │
│ (captures) │ │ (captures) │ │ │
│ │ │ (config) │ │ │
└────────────┘ └────────────┘ └─────────┘
The disaster recovery model does not require shared Redis or Memcached instances and can instead use the local services that are installed with CloudShark. One side effect of this model is that users will be logged out and will have to log back in once the standby starts servicing traffic.
Network Service Configuration
To configure CloudShark to use the required shared database service for a
disaster recovery deployment, edit the /usr/cloudshark/etc/services.conf
configuration file and update the database section to point to your external
database. For example:
[database]
adapter = "mysql"
database = "cloudshark"
user = "cloudshark"
password = "your-secure-password"
host = "database.internal"
port = 3306
After modifying /usr/cloudshark/etc/services.conf, restart CloudShark:
systemctl reset-failed cloudshark-* && systemctl restart cloudshark-full
Failover Procedure
If the primary node fails, redirect traffic to the standby node:
- Update the IP address or DNS record to point to the standby node
- Users will need to log in again since session data is not shared
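If the record is hosted on a dynamic-DNS-capable server, the DNS update can be scripted. A hedged sketch using BIND's nsupdate (the server, record name, and address are all examples):

```
# Repoint the service record at the standby node (names and address are examples)
nsupdate <<'EOF'
server ns1.internal
update delete cloudshark.example.com. A
update add cloudshark.example.com. 60 A 10.0.0.12
send
EOF
```

Using a short TTL (60 seconds here) keeps client failover time low.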
Adding a Node
Once storage, external services, and the load balancer or floating IP/DNS are configured, perform the following steps to create a new CloudShark node that can be added to a cluster or used as a primary or standby node in a disaster recovery configuration:
- Install CloudShark
- Configure shared storage
- Configure external services
- Install CloudShark license
- Verify that the new node is operating
Validation
While not exhaustive, the following validations are recommended before putting a new node into production.
- Login to CloudShark as a non-admin user
- Verify any existing captures are displayed in the capture index
- Upload a sample .pcapng file
- Open the capture and verify that it opens
- Run the Zeek Logs analysis tool
- Run the Threat Assessment analysis tool
- From the capture index, perform a Deep Search for "frame"
If any issues arise during verification, please contact QA Cafe Support for assistance.
Upgrades
With multiple CloudShark instances, most upgrades can be performed by replacing nodes rather than upgrading them in place. This avoids in-place upgrade complexity while the remaining nodes continue serving traffic. To upgrade a multi-instance deployment of CloudShark, perform the following steps:
- Deploy new instances running the latest version
- Configure them to use the same shared storage and services
- Verify that new instances work correctly
- Transition traffic to the new instances:
- For load balanced deployments: Add new instances to the load balancer
- For active-passive deployments: Update your floating IP or DNS information
- Decommission old instances
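When transitioning traffic on a load-balanced deployment, old nodes can be drained gracefully before decommissioning. A hedged sketch using HAProxy's runtime API over its admin socket (the socket path and backend/server names are examples):

```
# Stop sending new sessions to an old node; remove it once existing sessions finish
echo "set server cloudshark_nodes/node1 state drain" | socat stdio /var/run/haproxy.sock
```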
Considerations
- License Management: Your CloudShark license works across all instances without registration or host ID restrictions. Download the license from the QA Cafe Customer Lounge and install it on each instance.
- External Authentication: When using SAML or OAuth authentication with multiple instances, ensure your identity provider is configured to accept redirect URLs from all CloudShark instances.
- Database Consistency: When using shared storage or database replication, ensure all nodes have a consistent view of the data to prevent conflicts.
- Network Bandwidth: Multi-instance deployments that share storage require adequate network bandwidth between CloudShark nodes and storage systems, especially for large capture files.
Getting Help
For assistance planning or implementing a multi-instance CloudShark deployment, contact QA Cafe Support. We can help you design an architecture that meets your organization’s specific requirements for availability, scalability, and disaster recovery.