Changes to the default storage directory should be made before any captures are uploaded to your CloudShark system. Existing capture files will need to be imported into CloudShark again following any storage path changes. This procedure should only be done during your initial configuration.

Updating the Storage Directory

Create the new storage directory (we’ll call ours /data) and grant ownership to the cloudshark user and group:

mkdir /data
chown cloudshark:cloudshark /data

Do not adjust the directory location by manually editing the cloudshark.conf file. This is not supported.

Next, replace the /usr/cloudshark/data directory with a symbolic link that points to your new storage directory:

mv /usr/cloudshark/data /usr/cloudshark/data.original
ln -s /data /usr/cloudshark/data
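If you want to rehearse this move-and-symlink swap before touching the live /usr/cloudshark/data directory, the same two steps can be exercised in a throwaway scratch area (all paths below are placeholders, not CloudShark paths):

```shell
# Create a scratch layout standing in for /usr/cloudshark/data and /data.
tmp=$(mktemp -d)
mkdir -p "$tmp/app/data" "$tmp/storage"

# Same two steps as above: preserve the original directory, then
# replace it with a symlink to the new storage location.
mv "$tmp/app/data" "$tmp/app/data.original"
ln -s "$tmp/storage" "$tmp/app/data"

# The link now resolves to the new storage directory, and files
# written through the old path land in the new location.
readlink "$tmp/app/data"
touch "$tmp/app/data/example.cap"
ls "$tmp/storage"
```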

This replaces the CloudShark uploads directory immediately. No restart is required.

To make sure that this storage location is available before the CloudShark service starts, you can add it as a requirement of the service. First run systemctl edit cloudshark.service and add a line similar to the one below:

[Unit]
RequiresMountsFor=/data
Replace /data in the line above with the directory where your CloudShark instance stores its captures. Then run systemctl daemon-reload to apply the change, and systemctl cat cloudshark.service to verify. The output of the cat command should look similar to:

# /usr/lib/systemd/system/cloudshark.service
Description=Top level CloudShark process monitor
After=mariadb.service memcached.service

ExecStartPre=/usr/bin/rm -rf /run/blkid/
ExecStartPre=/bin/sh -c "/usr/sbin/blkid $(cat /usr/cloudshark/etc/blkid_dev)"
ExecStart=/usr/cloudshark/ruby/bin/ruby /usr/cloudshark/ruby/bin/god -c


# /etc/systemd/system/cloudshark.service.d/override.conf
[Unit]
RequiresMountsFor=/data

External Storage

A common scenario is to mount a remote file system on the CloudShark host to increase the capacity, speed, or robustness of the system storage. The section above details how to change the default storage directory for the system’s capture files.
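As a sketch, mounting such a remote file system at the storage directory might look like the /etc/fstab entry below (the server name and export path are placeholders; the _netdev option keeps the mount from being attempted before the network is up):

```
# /etc/fstab -- hypothetical NFS server and export path
nfs.example.com:/export/captures  /data  nfs  defaults,_netdev  0 0
```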

NFS does not support the inotify API, which notifies a system of file system changes. Because the CloudShark Autoimporter relies on these notifications to operate, it is not possible to use a remote NFS share as an AutoImport target: files written to the remote NFS share never trigger the event, and thus no files are imported.

An NFS file system that is exported from the CloudShark host does not have this limitation, because the file events are still generated and processed on the same system.
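For that reverse case, exporting the capture storage directory from the CloudShark host might look like the /etc/exports entry below (the client subnet and options here are placeholders to adapt to your environment):

```
# /etc/exports -- hypothetical client subnet, read-only export
/data  192.168.1.0/24(ro,sync,no_subtree_check)
```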