Archive for the ‘Admin Tools and Tips’ Category

FileCloud Best Practices: How to Use Two-Factor Authentication with External Accounts – Automatically

This is the next blog post in our FileCloud Best Practices series. We previously covered how to use external accounts for private, secure sharing. We have also reviewed how to automate the maintenance of external accounts.

Depending on your FileCloud license, your system can enable multi-factor authentication (MFA) with External Accounts. To do this automatically, each new account created needs to be added to a particular External Account policy group.

We will go through all the steps needed to configure this.

Create an External User Policy Group

The first thing we need to do is create a new User Policy group that will be used to configure MFA for all External Users.

Go to: Admin UI > Settings > Policies > New Policy

We will name this group “External Users” for this example, but you can use any other name.

Once created, edit the policy group and change the MFA settings to your preference (for example, Email).

Create a Workflow to Add New External User Accounts to the “External Users” Policy Group

Now we will create a workflow that adds every new External User Account to the policy group we just created.

Go to: Admin UI > Workflows > New Workflow and select “Add Workflow.” We will create a new workflow with the condition being “If a new user is created.”

Required parameters:

"user_access_level": "USER_ACCOUNT_LIMITED_ACCESS"

In the dropdown for the “THEN” Action, select “Set user policy”

Required Parameters:

"policy_name": "External Users"

Finally, name the new workflow. This can be whatever helps you remember the purpose of the workflow.

Now the system is configured to automatically add new external users to the External Users Policy, which applies two-factor authentication.
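Conceptually, the workflow is a single conditional applied at account creation time; below is a minimal sketch in Python with hypothetical function and field names (FileCloud's actual engine is configured through the UI as shown above, not through code like this):

```python
# Sketch of the workflow rule configured above: every newly created
# limited-access (external) account is assigned to the "External Users"
# policy, which has MFA enabled. Function and field names here are
# illustrative, not FileCloud's actual API.

def on_user_created(user, policies):
    """Apply the 'Set user policy' action when the condition matches."""
    if user.get("user_access_level") == "USER_ACCOUNT_LIMITED_ACCESS":
        user["policy_name"] = "External Users"  # policy with MFA enabled
        policies.setdefault("External Users", []).append(user["name"])
    return user

policies = {}
external = on_user_created(
    {"name": "", "user_access_level": "USER_ACCOUNT_LIMITED_ACCESS"},
    policies,
)
internal = on_user_created(
    {"name": "employee1", "user_access_level": "USER_ACCOUNT_FULL_ACCESS"},
    policies,
)
```

Internal (full-access) accounts fall through the condition untouched; only external accounts land in the MFA policy group.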

See It In Action

Log in to the user UI, create a new share, and add a new user account.

Note: If you want to learn how to enable the feature to automatically create new accounts when creating a new share, please visit our previous blog post, FileCloud Best Practices: How to Use Private Shares and External User Accounts.

Once completed, you can log in to your admin portal and check the new user account. Go to Settings > Policies and view the policy users section; you will see the recently created account added to the policy group:

You can also verify that your workflow is working by reviewing the audit records and filtering by the workflow name:

This completes our three-part series on sharing files privately with External Users.

Note: MFA for External Users is part of the Advanced License; if you don’t have this feature and are interested in trying it or seeing pricing, please reach out to your CSM representative or contact us.


Article written by Daniel Alarcon, Technical Support Manager

Edited by Katie Gerhardt, Junior Product Marketing Manager


Use FileCloud ServerSync to Migrate Local File Server Data (NFS/SMB) to S3 Cloud Storage

Public and private clouds are great tools to enable anywhere, anytime access to files and records. However, many organizations and businesses still need their on-premises network storage, which provides more options for admin security, control, and data sovereignty.

Across the public and private sector, these organizations are turning to hybrid solutions to leverage the benefits of both cloud and on-prem infrastructure. Not only do these hybrid solutions provide more flexibility for remote employees, they also ensure organizations are able to meet privacy and security requirements, while facilitating collaboration between internal and external partners and teams.

However, every organization has different requirements when it comes to divvying up the data over a hybrid environment and may also be using different tools and technologies to host their data.

Common Infrastructure Components for a Hybrid Cloud Environment

The IT infrastructure involved will inform the constraints and possibilities of a hybrid environment. Different cloud services provide integrations with various on-prem tools or technologies, and the specific organization and management of data will also influence which solutions are deployed.

Linux NFS Server

Network File System (NFS) is a protocol that allows you to share directories and files with other Linux clients over a network. Shared directories are typically created on a file server running the NFS server component. Users add files to them, which are then shared with other users who have access to the folder.

An NFS file share is mounted on a client machine, making it available just like folders the user created locally. NFS is particularly useful when disk space is limited, and users need to exchange public data between client computers.
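As a concrete illustration, an NFS export can be mounted ad hoc with `mount -t nfs` or persistently via `/etc/fstab`; the server name and paths below are placeholders, not values from this article:

```
# /etc/fstab entry (illustrative): mount the export /srv/share from
# host nfs-server at the local mount point /mnt/share
nfs-server:/srv/share   /mnt/share   nfs   defaults,_netdev   0 0
```

The `_netdev` option tells the system to wait for networking before attempting the mount.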

SAMBA (SMB) Server

Samba is an open-source implementation of the Server Message Block (SMB) protocol. It enables network data access between Windows, Linux, UNIX, and other operating systems by providing access to Windows-based file and printer shares. Samba’s use of SMB allows it to appear as a Windows server to Windows clients, with the added advantage of being accessible by Linux, Unix, and Mac users.

S3 Storage

Amazon S3 or Amazon Simple Storage Service is a service offered by Amazon Web Services (AWS) that provides object storage through a web service interface.

Amazon S3 can store any type of object, which allows uses like storage for Internet applications, backups, disaster recovery, data archives, data lakes for analytics, and hybrid cloud storage.

FileCloud’s ServerSync Provides an Enterprise-Grade Bridge Between On-Prem and Cloud Environments

Most organizations already have data they maintain and store, which means selected data must be able to move between local file servers and the cloud when creating or maintaining a hybrid environment.

S3 storage can be set up as a local disk drive via LAN, which enables users to move data between the S3 storage and the local server. However, this alone is not an enterprise-scale solution for handling this sort of migration.

FileCloud’s ServerSync provides the answer, as an enterprise-grade, hybrid solution to help manage content and records across on-prem and cloud infrastructure.

FileCloud ServerSync

FileCloud ServerSync synchronizes files and permissions stored in on-premises Windows/Linux file servers to the cloud. It maintains copies of files and permissions in sync between the cloud and on-prem storage. This synchronization enables a hybrid cloud approach with traditional LAN access, even when users are off-site or remote.

Infographic depicting FileCloud ServerSync functionality
FileCloud ServerSync syncing SMB share data between headquarters and branch offices.

Use Cases for FileCloud ServerSync

FileCloud ServerSync can be used for multiple scenarios, including (but not limited to):

  • Sync data between headquarters and branch offices.
  • Sync data from local NFS/SMB shares to remote NFS/SMB shares.
  • Sync data from local NFS/SMB shares to S3 storage for data archival.
  • Sync data from local NFS/SMB shares to S3 storage for cloud access.
  • Sync data and NTFS permissions from a local SMB share to a remote FileCloud server.
Infographic depicting FileCloud ServerSync with NFS Server
Typical architecture to sync data between local NFS/SAMBA shares with S3 storage using FileCloud ServerSync.

How to Move Data from Local NFS/SAMBA Servers to S3 Storage with FileCloud ServerSync

In this setup, we have a local NFS/SAMBA server configured in Linux used as a local data repository. It is accessible locally inside the company network from different types of clients (Windows, Linux, and Mac) that interact with the data stored there.

The data is moved to S3 cloud storage using FileCloud ServerSync for archival. The data will sync between the local file servers and the S3 storage and is accessible externally through the FileCloud web interface.
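Conceptually, an archival sync of this kind uploads a file to the cloud tier only when it is missing there or has changed locally; the following is a simplified sketch of that decision logic (an illustration of the concept, not ServerSync's actual implementation):

```python
from typing import Optional

# Simplified one-way sync planner: a file is uploaded to S3 storage when
# it is absent there or its local copy has a newer modification time.
# Illustrative only; ServerSync's real engine also syncs permissions.

def needs_upload(local_mtime: float, remote_mtime: Optional[float]) -> bool:
    return remote_mtime is None or local_mtime > remote_mtime

def plan_sync(local, remote):
    """Return the paths that should be uploaded, in stable order."""
    return sorted(
        path for path, mtime in local.items()
        if needs_upload(mtime, remote.get(path))
    )

uploads = plan_sync(
    local={"reports/q1.pdf": 200.0, "reports/q2.pdf": 100.0},
    remote={"reports/q2.pdf": 100.0},
)
```

Here only `reports/q1.pdf` is selected, because `reports/q2.pdf` already exists in the archive with the same modification time.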

Benefits of Using FileCloud ServerSync

  • Archival of NFS/SMB data into S3 low-cost storage through FileCloud ServerSync.
  • Synchronization between local NFS/SAMBA servers and S3 storage using FileCloud ServerSync.
  • Cloud access to the data using the FileCloud interface.
  • Classification of the synchronized data within FileCloud using Smart Classification.
  • Ability to share data securely through the FileCloud interface with external parties.


Interested in trying FileCloud for your own migration to a hybrid environment? You can check out the FileCloud Tour or sign up for a free trial today!


Article written by Wail Bouziane, Solutions Architect Team Lead

Edited by Katie Gerhardt, Junior Product Marketing Manager


FileCloud Best Practices: How to Maintain External User Accounts

Following our blog post about secure file sharing with external accounts, this blog post will cover how to maintain those accounts (remove/delete after a custom number of days).

Enable Automatic Deletion/Disabling of External Accounts

In the previous “Best Practices” post, we outlined how to automatically create external accounts. We also explored the benefits of automatic account creation and how this process improves your security and your internal and external user experience.

Those accounts will stay on your FileCloud server even after the shares have expired. FileCloud offers unlimited external accounts, so you don’t need to worry about exceeding a certain limit. However, these accounts can accumulate over time and become messy to manage.

You can remove them manually, but there is a better way: configure a “Workflow” action to remove them periodically.

Create a Workflow to Disable/Delete External Accounts Automatically

Log in to your admin portal and create a “New” admin workflow:

Manage External Accounts with Workflows

Choose the condition “If a user’s last login is older than….”

Select Workflow Condition

Define the Workflow Parameters

Define workflow parameters

In this example, we are setting the following parameters:

"last_login_days_ago": "60" -> The user hasn't logged in within the last 60 days.

"day_interval": "1" -> How often we want this workflow to execute; "1" runs it every day.

"user_account_type": "USER_ACCOUNT_LIMITED_ACCESS" -> Restricts the workflow to execute only on External accounts.
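Taken together, the three parameters encode a daily check over external accounts; here is a small Python sketch of the rule they express (illustrative code, not FileCloud's workflow engine):

```python
from datetime import datetime, timedelta

# Workflow parameters as configured above; values mirror the example.
PARAMS = {
    "last_login_days_ago": 60,
    "day_interval": 1,  # run the check once per day
    "user_account_type": "USER_ACCOUNT_LIMITED_ACCESS",
}

def accounts_to_delete(users, now):
    """Select external accounts whose last login is older than the cutoff."""
    cutoff = now - timedelta(days=PARAMS["last_login_days_ago"])
    return [
        u["name"]
        for u in users
        if u["type"] == PARAMS["user_account_type"] and u["last_login"] < cutoff
    ]

stale = accounts_to_delete(
    [
        {"name": "ext-old", "type": "USER_ACCOUNT_LIMITED_ACCESS",
         "last_login": datetime(2024, 1, 1)},
        {"name": "ext-recent", "type": "USER_ACCOUNT_LIMITED_ACCESS",
         "last_login": datetime(2024, 5, 20)},
        {"name": "staff", "type": "USER_ACCOUNT_FULL_ACCESS",
         "last_login": datetime(2024, 1, 1)},
    ],
    now=datetime(2024, 6, 1),
)
```

Only the external account idle for more than 60 days is selected; recently active external accounts and internal accounts are left alone.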

Set the Automated Action: Delete the Account

In this next step, set the action that will be executed when the parameters are met. To delete the account, simply select “Delete user account.” However, as you can see from the screenshot below, there are a variety of options you can take to suit your operational needs.

Select workflow action

Define Notification Rules

Now you can define the notification rules, which can include sending an email to an admin and/or informing the user that the account is being deleted (set the option to “1” if desired).

Define notification rules in workflow

Name the Workflow

Name workflow

This workflow will run daily and remove external accounts that haven’t logged in within the last 60 days. You can set the number of days you prefer and the action you want; for example, disable the account, change user status, notify the user that their account will expire soon, etc.


With this automation in place, you can maximize the benefits of secure file sharing by automating the creation and maintenance of external user accounts. It’s one solution within FileCloud among many that contribute to a more efficient and secure content collaboration platform. In the next blog post, we’ll cover how to set up external accounts with two-factor authentication (2FA) to maximize security and prevent unauthorized access to your FileCloud environment.


Article written by Daniel Alarcon, Technical Support Manager | Edited by Katie Gerhardt, Junior Product Marketing Manager



Configure Solr HA with Pacemaker and Corosync in FileCloud

FileCloud is a hyper-secure file storage, sharing and collaboration platform that provides a powerful set of tools for admins and users to manage their data. This includes High Availability (HA) architecture support and content management functionalities, specifically content search via FileCloud’s Solr integration.

Solr is an open-source content indexing and search application developed and distributed by Apache. This application is included with FileCloud installations.

Pacemaker and Corosync are open-source software solutions maintained by ClusterLabs. These solutions provide cluster management capabilities to client servers. Specifically, Pacemaker is a resource manager tool used on computer clusters for HA architecture, whereas Corosync supports cluster membership and messaging.

By configuring Solr HA in FileCloud with Pacemaker and Corosync, the admin can strengthen redundancy configurations, improve overall resiliency of backend software components, including quorate and resource-driven clusters, and provide fine-tuned management capabilities within and between nodes.

This step-by-step guide will outline how to manually configure Solr HA with Pacemaker and Corosync in FileCloud.

Software Components

solr01 – Solr host – cluster member

solr02 – Solr host – cluster member

solr03 – quorum-device – quorum for cluster

solr-ha – proxy-ha host

NFSShare – NFS resource which can be mounted on solr01 and solr02

The example laid out in this blog post uses CentOS 7 (CentOS Linux release 7.9.2009 (Core)).

The installation instructions for Pacemaker and Corosync clusters remain the same regardless of the Linux distribution (Ubuntu, Fedora, RedHat, or Debian).

Installation and Configuration Instructions

Step 1: Prepare the Cluster

Install all available patches using the following command:

Command(as root):

yum update

After installing the necessary patches, reboot the system. This step must be completed for all three hosts: solr01, solr02, and solr03.

Then, install the package that provides the necessary NFS client subsystems.

command(as root):

yum install -y nfs-utils

Next, wget must be installed.

command(as root):

yum install -y wget

Step 2: Install Solr and Prepare the Cluster Environment

Installing Solr in your FileCloud instance is (naturally) a critical part of configuring Solr HA. As indicated above, Solr can be broken down into specific Solr hosts that are members of a cluster. These hosts must be individually configured.

Prepare Clean OS

Beginning with solr01, prepare a clean Linux-based OS (such as CentOS 7, the example we are using). You may also use other operating systems according to your preference.

Download FileCloud

On the clean OS, download the FileCloud installation script: (official installation script).

If any issues arise related to the REMI repo, an alternative can be used.

Create a Folder

Create the following folder:  /opt/solrfcdata

Run the Command

Command(as root):

mkdir /opt/solrfcdata

Mount the NFS Filesystem

The NFS filesystem should be mounted under the following:

Command(as root):

mount -t nfs ip_nfs_server:/path/to/nfs_resource /opt/solrfcdata

Start Solr Installation

Next, start the Solr component installation using the FileCloud installation script:

command(as root):

sh ./

Follow the instructions until reaching the selection screen.

Select the “solr” option and press “Enter.” The installation process may take a few minutes. Wait for confirmation that the installation has completed.

Bind Solr to External Interface

Host: solr01, solr02

Solr will, by default, try to bind to localhost only. Modify the file so that Solr binds to the external interface.

Modify the following file: /opt/solr/server/etc/jetty-http.xml

Change the following line in the file so that the default host value changes from (localhost only) to (all interfaces):

Original Line:

<Set name="host"><Property name="" default="" /></Set>

New Line:

<Set name="host"><Property name="" default="" /></Set>

Change System Daemon Control to SystemD

Solr was started with the FileCloud installation. Before proceeding, stop the Solr service.

Host: solr01, solr02

command(as root):

/etc/init.d/solr stop

Remove the following file: /etc/init.d/solr

command(as root):

rm /etc/init.d/solr

Create a new file:

command(as root):

touch /etc/systemd/system/solrd.service

Edit this new file and copy the contents specified below to this file:

command(as root):

vi /etc/systemd/system/solrd.service

Copied Content:

### Beginning of File ###
[Unit]
Description=Apache SOLR

[Service]
Type=forking
ExecStart=/opt/solr/bin/solr start
ExecStop=/opt/solr/bin/solr stop
### End of File ###

Save the file before continuing.

Verify New Service Definition is Working

Host: solr01, solr02

command(as root):

systemctl daemon-reload
systemctl stop solrd

It should not return any errors. Start the service:

command(as root):

systemctl start solrd
systemctl status solrd

Expected Output: the solrd service should be reported as active (running).

Remove Folder Contents

Folder: /opt/solrfcdata

Host: solr02


 command(as root):

systemctl stop solrd
rm -rf /opt/solrfcdata/*

Update Firewall Rules

Complete this step if needed, as in the example below for CentOS.

Host: solr01, solr02

command(as root):

firewall-cmd --permanent --add-port 8983/tcp
firewall-cmd --reload

With these steps completed, the Solr installation has been carried out to successfully prepare the environment for HA clusters.

Step 3: Set Up Pacemaker

Host: solr01, solr02, solr03

Edit /etc/hosts File

Add the entries for all 3 cluster nodes, so that the file reads as follows:

corresponding_ip    solr01
corresponding_ip    solr02
corresponding_ip    solr03


File: /etc/hosts   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
corresponding_ip    solr01
corresponding_ip    solr02
corresponding_ip    solr03

Install Cluster Packages

hosts: solr01 and solr02

command(as root):

yum -y install pacemaker pcs corosync-qdevice sbd

Enable and Start the Main Cluster Daemon

hosts: solr01 and solr02

command(as root):

systemctl start pcsd
systemctl enable pcsd

Update Passwords for the Cluster User

hosts: solr01, solr02

Set the same password for all hosts for the hacluster user.

command(as root):

passwd hacluster

Provide the hacluster user with the login credentials, as these will be necessary in later steps.

Open Network Traffic on Firewall

hosts: solr01 and solr02

command(as root):

firewall-cmd --add-service=high-availability --permanent
firewall-cmd --reload

Authorize Cluster Nodes

hosts: solr01

command(as root):

pcs cluster auth solr01 solr02

Username: hacluster

Password: “secret_password” set in the previous step.

Expected Output:

solr01          Authorized
solr02          Authorized

Create Initial Cluster Instance

hosts: solr01

command(as root):

pcs cluster setup --name solr_cluster solr01 solr02

Start and Enable Cluster Instance

hosts: solr01

command(as root):

pcs cluster start --all
pcs cluster enable --all

Step 4: Set Up QDevice – Quorum Node

Install Software Required for Quorum-only Cluster Node

Install the required software on solr03 (quorum-only cluster node).

Host: solr03

command(as root):

yum install pcs corosync-qnetd

Start and Enable the PCSD Daemon

Host: solr03

command(as root):

systemctl enable pcsd.service
systemctl start pcsd.service

Configure QDevice (Quorum Mechanism)

Host: solr03

command(as root):

pcs qdevice setup model net --enable --start

Open Firewall Traffic

Open the firewall traffic (if required – below example on CentOS)

Host: solr03

command(as root):

firewall-cmd --permanent --add-service=high-availability
firewall-cmd --add-service=high-availability

Set the Password for HA Cluster User

Set the password for the hacluster user on solr03.

Host: solr03

command(as root):

passwd hacluster

Provide the password to the HA cluster user. This password should be the same password used for solr01 and solr02.

Authenticate QDevice Host in the Cluster

Host: solr01

command(as root):

pcs cluster auth solr03

Username: hacluster


Add Quorum Device to the Cluster and Verify

Host: solr01

command(as root):

pcs quorum device add model net host=solr03 algorithm=lms


Host: solr01

command(as root):

pcs quorum status

Expected Output:

Quorum information
Date:             Wed Aug  3 10:27:26 2022
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          1
Ring ID:          2/9
Quorate:          Yes

Votequorum information
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2 
Flags:            Quorate Qdevice

Membership information
    Nodeid      Votes    Qdevice Name
         2          1    A,V,NMW solr02
         1          1    A,V,NMW solr01 (local)
         0          1            Qdevice
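The vote arithmetic in this output is simple majority math: two nodes plus the QDevice give three expected votes, and the quorum threshold is two. The calculation can be sketched as:

```python
# Majority-quorum arithmetic, matching the `pcs quorum status` output:
# threshold = floor(expected_votes / 2) + 1.

def quorum_threshold(expected_votes: int) -> int:
    return expected_votes // 2 + 1

def is_quorate(total_votes: int, expected_votes: int) -> bool:
    return total_votes >= quorum_threshold(expected_votes)

# Two cluster nodes + one QDevice vote, as shown above.
threshold = quorum_threshold(3)      # matches "Quorum: 2"
all_up = is_quorate(3, 3)            # Quorate: Yes
one_node_down = is_quorate(2, 3)     # QDevice keeps the survivor quorate
isolated = is_quorate(1, 3)          # an isolated node loses quorum
```

This is why the QDevice matters: with only two nodes and no third vote, losing either node would drop the survivor below the threshold.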

Step 5: Install Soft-Watchdog

The softdog module should load automatically after rebooting the system.

Host: solr01, solr02

command(as root):

echo softdog > /etc/modules-load.d/watchdog.conf

Reboot solr01 and solr02 to Activate Watchdog

Host: solr01, solr02

command(as root):

reboot

Carry out the reboots in sequence:

  • reboot solr01 and wait until it comes back
  • reboot solr02

Step 6: Enable SBD Mechanism in the Cluster

Enable sbd

Host: solr01, solr02

command(as root):

pcs stonith sbd enable

Restart the Cluster so the SBD Setting Takes Effect

Host: solr01

command(as root):

pcs cluster stop --all
pcs cluster start --all

Verify the SBD Mechanism

Host: solr01

command(as root):

pcs stonith sbd status

Expected Output:

<node name>: <installed> | <enabled> | <running>
solr01: YES | YES | YES
solr02: YES | YES | YES

Step 7: Create Cluster Resources

Create Cluster Resource with NFSMount

Host: solr01

command(as root):

pcs resource create NFSMount Filesystem device= directory=/opt/solrfcdata fstype=nfs --group solr


The parameter device should point to the nfs server and nfs share being used in the configuration.


Host: solr01

command(as root):

pcs status

Expected Output:

Cluster name: solr_cluster
Stack: corosync
Current DC: solr01 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Wed Aug  3 12:22:36 2022
Last change: Wed Aug  3 12:20:35 2022 by root via cibadmin on solr01

2 nodes configured
1 resource instance configured

Online: [ solr01 solr02 ]

Full list of resources:
Resource Group: solr
     NFSMount   (ocf::heartbeat:Filesystem):    Started solr01

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
  sbd: active/enabled

Change the Recovery Strategy for the NFSMount Resource

Host: solr01

command(as root):

pcs resource update NFSMount meta on-fail=fence

Create Cluster Resource – solrd

Host: solr01

command(as root):

pcs resource create solrd systemd:solrd --group solr


Host: solr01

command(as root):

pcs status

Expected Output:

Cluster name: solr_cluster
Stack: corosync
Current DC: solr01 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
Last updated: Wed Aug  3 12:25:45 2022
Last change: Wed Aug  3 12:25:22 2022 by root via cibadmin on solr01

2 nodes configured
2 resource instances configured

Online: [ solr01 solr02 ]

Full list of resources:

 Resource Group: solr
     NFSMount   (ocf::heartbeat:Filesystem):    Started solr01
     solrd      (systemd:solrd):        Started solr02

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
  sbd: active/enabled

Set Additional Cluster Parameters

Host: solr01

command(as root):

pcs property set stonith-watchdog-timeout=36
pcs property set no-quorum-policy=suicide
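The `no-quorum-policy=suicide` property determines what a node does when it becomes inquorate: it fences itself (via the SBD watchdog) rather than continue running resources. A conceptual sketch of the available policy choices follows; this illustrates the behavior, it is not Pacemaker's actual code:

```python
# Conceptual mapping of Pacemaker's no-quorum-policy values to node
# behavior on quorum loss. "suicide" plus the watchdog prevents an
# isolated node from serving stale data (the split brain scenario).

NO_QUORUM_ACTIONS = {
    "ignore": "keep running all resources",
    "stop": "stop all resources",
    "freeze": "keep current resources, start nothing new",
    "suicide": "fence the local node (watchdog reset)",
}

def on_quorum_lost(policy: str) -> str:
    return NO_QUORUM_ACTIONS[policy]

action = on_quorum_lost("suicide")
```

The `stonith-watchdog-timeout` value set above (36 seconds) gives the watchdog time to complete the reset before the rest of the cluster assumes the node is gone.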

Step 8: Configure haproxy on Dedicated Host

Install haproxy on Clean OS

Our example uses CentOS.

Host: solr-ha

command(as root):

yum install -y haproxy

Configure the haproxy

Configure the haproxy to redirect to the active solr node.

Host: solr-ha

Back up the file /etc/haproxy/haproxy.cfg:

command(as root):

mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bck

Create an Empty File

File: /etc/haproxy/haproxy.cfg

Add Content

Add the content below into the empty file.

#### beginning of /etc/haproxy/haproxy.cfg ###
global
    log         local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/
    maxconn     4000
    user        haproxy
    group       haproxy
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend solr_front *:8983
        default_backend solr_back

backend static
    balance     roundrobin
    server      static check

backend solr_back
        server solr01   solr01:8983 check
        server solr02   solr02:8983 check
#### end of /etc/haproxy/haproxy.cfg ###

Ensure that parameters solr01/solr02 point to the full DNS name or to the IP of the cluster nodes.
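The `solr_back` backend performs health-checked failover: traffic goes to a Solr node whose check passes. A minimal sketch of that failover behavior (illustrative, not haproxy's implementation):

```python
# Illustrative failover selection mirroring the solr_back backend:
# route to the first server whose health check succeeds.

def pick_server(servers):
    """servers: ordered mapping of name -> health-check result (True = up)."""
    for name, healthy in servers.items():
        if healthy:
            return name
    raise RuntimeError("no healthy Solr node available")

primary = pick_server({"solr01": True, "solr02": True})    # normal operation
failover = pick_server({"solr01": False, "solr02": True})  # solr01 down
```

Because the cluster only ever runs Solr on one node at a time, at most one `check` succeeds, and haproxy transparently follows the active node.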

Start haproxy

Host: solr-ha

command(as root):

systemctl enable haproxy
systemctl start haproxy

The Solr service will be available on host solr-ha on port 8983, independent of where it is actually running: solr01 or solr02.


Congratulations! If you followed these step-by-step instructions, you will have successfully configured Solr with high availability along with Pacemaker and Corosync. This configuration will serve to improve redundancy and security for your critical data.

For any questions on Solr or High-Availability architecture, schedule a consultation or configuration support session.


Article written by Marek Frueauff, Solutions Architect

Edited by Katie Gerhardt, Junior Product Marketing Manager


Appendix – Glossary of Terms

Below are the key terms used in this article, listed in alphabetical order.

Term Definition
Cluster A group of servers or other IT systems whose primary purpose is to perform the same or similar functions in order to achieve one or both of these outcomes: High Availability or Load Balancing.
Cluster Quorum A server or other system that is part of the cluster and performs a particular role: verifying which production cluster nodes (servers) are reachable and checking their health status. If cluster members go missing, the cluster quorum system decides whether the remaining servers can continue providing services or should be treated as unhealthy. The main purpose of the cluster quorum system is to avoid the split brain scenario.
Corosync A typical part of a High Availability architecture on Linux or Unix systems, usually deployed alongside Pacemaker. Corosync is the communication engine responsible for keeping cluster nodes (servers) in a synchronized state.
Firewall Software or hardware that can inspect and manipulate network traffic based on multiple rules. Modern firewall implementations can operate on multiple network layers (usually 3 through 7), including inspection of network frame content.
Firewall-cmd The command-line tool for the modern built-in Linux firewall implementation.
nfs Network File System: a filesystem that is network-based by design. It is a common method for sharing file resources in Unix environments. Thanks to the very long history of this technology, it has been implemented on almost all operating systems and is very popular and commonly used.
Pacemaker Open-source software involved in cluster resource management and part of a typical High Availability setup on Linux systems to provide modern functions and cluster management resources.
Proxy Software or hardware solution that provides a gateway between two networks separated by design. A proxy is usually installed between the public Internet and a local network and allows some communications between those network segments based on predefined rules. A proxy can also be used for other purposes, like load balancing: for example redirecting incoming connections from one network to multiple hosts in another network segment.
Proxy-HA The specific implementation of the proxy mechanism to provide High Availability service, which is usually correlated with a single host (server). In our example proxy-ha is used to verify where services are currently running (on which cluster servers) and redirect all incoming requests to the active node.
Resource Group A logical organization unit within the Pacemaker cluster implementation that enables control of the dependencies between particular resources managed by the cluster. For example, an NFS server that shares files must be started after the filesystem where the files reside, and on the same cluster node (server); this control can easily be achieved using Resource Groups.
QDevice The software implementation of the quorum functionality in the Pacemaker cluster setup. This functionality is installed on a cluster host that performs the quorum role only and never provides any other services.
SBD Stonith Block Device: by design, an implementation of an additional communication and stonith mechanism on top of a block device shared between cluster nodes (servers). In some cases, SBD can be used in diskless mode (as in our example); to operate in this mode, the watchdog mechanism needs to be enabled/installed.
Solr Advanced and open-source search and indexing system maintained and developed by Apache. This mechanism is a part of the standard FileCloud installation.
Split Brain A very dangerous scenario in any cluster environment, in which a node or nodes lose the ability to communicate with the rest of the node population due to an environment malfunction (most often lost network connectivity). In this situation, a separated node may “think” that it is the “last man standing” and claim all cluster resources to begin providing all services. If this claim is repeated by all separated cluster nodes, it leads to disagreement over which node should remain active and which services the cluster should provide. Each cluster implementation has multiple built-in mechanisms to prevent this situation, which can easily lead to data corruption. One such mechanism is stonith, which is activated as soon as a node loses its “quorate” status, indicating a high probability that the node is no longer visible to the rest of the environment.
Stonith Shoot The Other Node In The Head: a mechanism that allows an immediate restart (without any shutdown procedure) of any node in the cluster. This mechanism is extremely important for preventing potential data corruption caused by wrong cluster node behavior.
SystemV The name of the former Linux approach to starting and stopping system services (daemons).
SystemD The name of the modern Linux approach to starting and stopping system services (daemons) and much more. Each modern Linux distribution now uses systemd as the main mechanism to manage system services.
Watchdog The software or hardware mechanism that works like a delayed bomb detonator. The watchdog is periodically pinged by the system (approximately every 5 seconds) to reset the countdown procedure. If the countdown reaches 0, watchdog will reset the operating system immediately. Watchdog is used with Pacemaker in clusters to ensure that nodes remain recognized within the cluster community. In the event of a lost connection (which is the typical reason behind the Split Brain scenario), Watchdog enables an immediate reboot of the node.



Create an SSL Certificate in 5 Easy Steps

SSL certificates are a routine security recommendation when it comes to hosting data on a server. Specifically, SSL certificates enable end-to-end encryption for web servers when it comes to data transfers with HTTP protocol. This security is typically displayed by changing a URL from http to https. An icon such as a padlock may also be used to visually indicate that the site or server is secure.

FileCloud is a content collaboration solution that can either be self-hosted on private infrastructure or hosted by us. For self-hosted instances, FileCloud recommends installing and maintaining an active SSL certificate. This is a significant measure you can take to provide greater security for your data.

This blog post will cover how to purchase, configure, and verify an SSL from a trusted third-party provider in five easy steps.

Step 1: Generate the CSR for the SSL Certificate

A CSR, or certificate signing request, is generated on the server where the SSL certificate will be installed. The CSR is submitted to the Certificate Authority and contains the following information:

  • Legal name of the business or organization
  • Domain name
  • Identification for the person or unit responsible for managing the certificate
  • Geographic location (city, state, and country)
  • Email address

For these step-by-step instructions, we are using an example domain name for demonstration purposes.

To generate a CSR, run the command below in the Windows CMD or Linux shell:

  openssl req -new -newkey rsa:4096 -nodes -keyout example.key -out example.csr

Enter the required information to generate the CSR for the SSL:

  • Country Name (2 letter code): [AU]
  • State or Province Name (full name): [Some-State]
  • Locality Name (e.g., city, county): []
  • Organization Name (e.g., company): [Internet Widgits Pty Ltd]
  • Organizational Unit Name (e.g., section, division, department): []
  • Common Name (e.g., server FQDN or YOUR name): []

If you are generating a CSR for a wildcard certificate, the common name should start with an asterisk (for example, *.yourdomain.com).
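The interactive prompts above can also be skipped entirely. Below is a sketch of non-interactive CSR generation using the `-subj` flag; the subject fields and the `example.com` domain are placeholders for your own values:

```shell
# Generate a 4096-bit key and CSR in one non-interactive step.
# All subject fields below are placeholders -- substitute real values.
openssl req -new -newkey rsa:4096 -nodes \
  -keyout example.key -out example.csr \
  -subj "/C=US/ST=Texas/L=Austin/O=Example Org/OU=IT/CN=*.example.com"

# Inspect the CSR and confirm its self-signature before submitting it
# to the Certificate Authority.
openssl req -in example.csr -noout -verify -subject
```

This is handy when generating CSRs from automation, since no prompt answers are required.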

Step 2: Purchase an SSL Certificate from the Desired Vendor

In this tutorial, we are purchasing a wildcard COMODO SSL certificate; you can apply the same steps to any SSL vendor.

Log in to the platform of your selected vendor, then purchase the SSL.

Step 3: Configure and Verify the SSL

Click on “Configure SSL” and submit the CSR generated earlier.

Click on “Continue.”

After this step, you will see the information from the CSR; verify that it is correct.

Choose one of the two SSL approval methods:

  1. Add CNAME in the DNS record of the domain that requires an SSL
  2. Email approval

In this case, we are choosing email approval, with Apache as the web server.

Enter the admin email and confirm it by entering it again, as in the screenshot below:

Step 4: Complete Verification

If you chose email verification instead of DNS verification, you will be redirected to the SSL provider site to enter the confirmation email address.

Once you have completed the verification steps with the SSL vendor, you will receive an email confirmation for the SSL. This email serves as the verification and confirms the domain is under your control. There will be a link for the verification in the email; click the link and enter the verification code in the space provided.

Step 5: Download the SSL Certificate

After verification is complete, download the SSL certificate from the SSL vendor. Alternatively, an email may be sent to the admin email address with SSL certificates attached.
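Before installing the downloaded certificate, it is worth confirming that it matches the private key generated in Step 1. Below is a sketch of that check; the self-signed pair generated in the first command is only a stand-in so the example is self-contained — with a real vendor-issued certificate, skip that line and compare your downloaded `.crt` file against your existing key:

```shell
# Stand-in only: create a matching certificate/key pair so the check
# below can be demonstrated. With real files, use the downloaded
# certificate and the key from Step 1 instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout example.key -out example.crt -subj "/CN=example.com"

# The two digests must be identical; if they differ, the certificate
# does not belong to this private key.
openssl x509 -in example.crt -noout -modulus | openssl md5
openssl rsa  -in example.key -noout -modulus | openssl md5
```

A mismatch here usually means the CSR was regenerated after purchase, in which case the certificate must be re-issued.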


Once the SSL has been downloaded, install the SSL certificates on your FileCloud server by following our documentation, depending on your operating system: Windows | Linux. You can also check out this blog post, which provides a specific step-by-step example of how to configure and install a wildcard “Let's Encrypt” SSL Certificate with Ubuntu 20.04 LTS on a multi-tenant site.

With an SSL certificate in place, you can rest assured knowing your data in transit is encrypted, which creates another layer of protection for your important files and processes.


Article written by Nandakumar Chitra Suresh, Technical Support Lead | Edited by Katie Gerhardt, Junior Product Marketing Manager


Connect Your SFTP to FileCloud

What is SFTP?

SFTP stands for SSH File Transfer Protocol (often called Secure File Transfer Protocol); it is a secure alternative to the File Transfer Protocol (FTP) and runs as part of the Secure Shell (SSH) protocol. As the names imply, these protocols are used to easily transfer data and access permissions over an SSH data stream.

As vulnerabilities in FTP were discovered and its access points exploited, SFTP was developed to ensure the availability of a secure, encrypted connection for transferring files within and between local and remote systems. Files can be transferred using SFTP clients such as WinSCP.

FileCloud is a fine-tuned, enterprise-grade file sharing, sync, and storage solution. Admins and users can leverage granular sharing permissions and user/group policies to protect their data and efficiently collaborate on files.

Considering the existing file sharing solutions within FileCloud and the hyper-secure features that are built into the platform, SFTP/SCP protocols are not directly supported by the FileCloud platform.

However, for clients and consumers who wish to use SFTP with FileCloud, the Solution Experts team has prepared instructions on how to access and leverage SFTP resources using a Linux-based FileCloud on-prem server.

Step 1: Set Up the Connection

Host Name (IP address): The Fully Qualified Domain Name or IP address of the SFTP server you are going to connect to.

Username: used to access the SFTP resources

Password: used to access the SFTP resources

The user used for mounting the SFTP resource must have Read/Write permission to the resource.

Step 2: Verify Your Information

It is important to verify the details of your software so that you can choose the appropriate installation. Install the relevant SFTP client for your operating system. Windows, Mac, and Linux users can use the following solutions or another of their choice.

Connect to the SFTP server using the client and your collected credentials. The example below is using the WinSCP solution:

Press the “Login” button:

If your login process is successful, switch to the Linux server where FileCloud is installed.

Step 3: Prepare the Server

Ensure that the following packages are installed on your server. All operations are performed as the root user.

On RHEL-family distributions (CentOS, Rocky Linux, etc.):

[root@server01 ~]# yum install -y fuse-sshfs sshpass

On Debian-family distributions (Ubuntu, etc.):

[root@server02 ~]# apt install -y sshfs sshpass

Step 4: Prepare the Folder Structure

Create a folder: /NetworkShares


[root@server02 ~]# mkdir /NetworkShares

Then create a folder for the SFTP mount point:


[root@server02 ~]# mkdir /NetworkShares/sftp

Check the owner of the newly created folders to ensure they are owned by the user Apache runs as (apache on RHEL-family systems, www-data on Debian-family systems):


[root@server01 ~]# chown apache /NetworkShares -R


[root@server02 ~]# chown www-data /NetworkShares -R
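The folder preparation above can be condensed into one sketch. On the real FileCloud server the base path would be `/NetworkShares` and the owner `apache` or `www-data`; the version below falls back to the current user so it also runs on a machine without a web-server account:

```shell
# Create the share base and SFTP mount point, then hand ownership to
# the web-server account. BASE is relative here only for illustration;
# use /NetworkShares on the actual server.
BASE=./NetworkShares
mkdir -p "$BASE/sftp"

# Detect the Apache user: apache (RHEL family), www-data (Debian
# family), else fall back to the current user.
WEB_USER=$(id -un apache 2>/dev/null || id -un www-data 2>/dev/null || id -un)
chown -R "$WEB_USER" "$BASE"
ls -ld "$BASE/sftp"
```

Detecting the user this way avoids hard-coding a distribution-specific account name.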

Step 5: Perform a Manual Mount

Acquire the Apache UID:


[root@server01 ~]# id apache

uid=48(apache) gid=48(apache) groups=48(apache)


[root@s02 ~]# id www-data
uid=33(www-data) gid=33(www-data) groups=33(www-data)

Establish the manual test mount:


[root@s01 ~]# sshfs -o allow_other,idmap=user,uid=48  testsftp@  /NetworkShares/sftp/

Enter the password for testsftp@

The UID value here should be the UID of the apache/www-data user, though this depends on the Linux distribution.
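Since the UID differs by distribution, it can be derived instead of hard-coded. A sketch — `testsftp` and `sftp.example.com` are placeholders, and the mount command is only printed here because `sshfs` must be run on the FileCloud host itself:

```shell
# Derive the web-server UID: apache (RHEL family, typically 48),
# www-data (Debian family, typically 33), else fall back to 48.
WEB_UID=$(id -u apache 2>/dev/null || id -u www-data 2>/dev/null || echo 48)

# Build the mount command with the detected UID; run the printed
# command on the FileCloud server (host and user are placeholders).
echo "sshfs -o allow_other,idmap=user,uid=${WEB_UID} testsftp@sftp.example.com:/ /NetworkShares/sftp/"
```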

Ensure the mount has been established:


[root@s01 /]# mount |grep sftp

The output should be similar to this result:

testsftp@ on /NetworkShares/sftp type fuse.sshfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

Check if resources are accessible:


[root@s01 /]# ls -al /NetworkShares/sftp

The file listing should be presented as seen below. All file owners should be Apache or www-data user (depending on the Linux distribution).

Step 6: Set Up Automatic SFTP Resource Mount After Server Reboot

Create a file containing the secret SFTP user password, and restrict its permissions so that only root can read it:


[root@s01 /]# echo 'Your_Super_Secret_Password' > /root/sftp.pass
[root@s01 /]# chmod 600 /root/sftp.pass

Create the /etc/fstab entry:

sshfs#user@sftp_server:/ /NetworkShares/sftp fuse ssh_command=sshpass\040-f\040/root/sftp.pass\040ssh,_netdev,rw,allow_other,reconnect,user,kernel_cache,auto_cache,uid=48 0 0

The UID value here should match the UID of the Apache/www-data user, depending on the Linux distribution. (This should be one line in the fstab file, though it may be wrapped due to terminal settings.)
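The `\040` sequences in the fstab entry are worth a note: fstab fields are separated by whitespace, so any space embedded in the `ssh_command` option must be written as the octal escape `\040`, which mount decodes back into a space. A quick demonstration of the decoding:

```shell
# Simulate how mount decodes \040 escapes inside an fstab option.
printf 'sshpass\\040-f\\040/root/sftp.pass\n' | sed 's/\\040/ /g'
# -> sshpass -f /root/sftp.pass
```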

Perform a test command:


[root@s01 /]# mount -a



[root@s01 /]# mount |grep sftp

On the output, you should see your mounted SFTP resource.

Step 7: Expose the Resource in FileCloud

Log in to the FileCloud admin panel.

Go to the “Network Folders” option and click the “Add” button.

Choose “Local Area Network”, then “Next.”

Enter the name of the Network Folder and click “Next.”

Select “Normal mount” and click “Next.”

Enter the path to the mounted SFTP resource (/NetworkShares/sftp) and click “Next.” The path is case-sensitive!

Select “Use assigned permissions” then “Create share.”

Assign a user or group to this share, and click “Finish.”

The shared path will be displayed in the list. You can always manage permissions by clicking on the edit icon:

When users assigned to this share path log in to their FileCloud, they will be able to see and access the Network Folder.


FileCloud is a powerful, hyper-secure content collaboration platform (CCP) with a wide range of features, integrations, and customization options. FileCloud’s mission revolves around creating software that customers love to use, which means supporting the tools and protocols customers prefer, including SFTP.

With these step-by-step instructions, you can integrate your FileCloud environment with your SFTP shares, so you can collaborate with internal and external teams. Use your established folders, permissions, and processes while discovering how FileCloud can support your security, governance, and collaboration goals.


Article written by Marek Frueauff (Solutions Architect) and Katie Gerhardt (Junior Product Marketing Manager)


FileCloud Best Practices: How to Use Private Shares and External User Accounts

One of the most frequent use cases of FileCloud is “sharing files securely with an external user.” By default, FileCloud enables all types of shares (public, public with password-protection, and private shares), with a focus on security and collaboration.

In this article, we will review the recommended configuration to securely share files to external users and use External Accounts (free user accounts) to improve traceability and auditing.

Types of Shares in FileCloud

FileCloud gives you the option to use public and private shares; in essence, you can do the following:

  1. Share a public link.
  2. Share a public link with a password.
  3. Allow selected users or groups to access the link (private).

Share a Public Link

The default share option in FileCloud is to share a public link. This will allow anyone with the link to view, download, or upload (depending on your choice).

Allow Selected Users or Groups

The third option is to share a link to a selected list of users or groups. These users can be external users, and their accounts can be created while creating the share.

You can “Invite users” with this type of share and create their accounts on the fly in the background. First though, you need to configure some settings to enable the account creation option.

Configure FileCloud to Create User Accounts with Shares

To create accounts when creating a new share, the following settings need to be applied in the admin portal:

Adjust the Admin settings to allow the creation of new accounts for external users.

Log in to your admin portal and go to Settings > Admin, and set the following values:

Allow Account Signups -> True

This will allow accounts to be created automatically in the background.

Automatic Account Approval -> 3

This configures the system so that “Limited” or external accounts are the default account to be created in the background.

Note: External User Accounts don’t count towards your license; you can create as many as you need. These accounts have a few limitations: they can only be accessed via the web browser (no applications), and you can only share files with external user accounts from the User UI, not from the Admin UI (for example, Team Folders cannot be shared).

Allow accounts to be created when creating a new share.

In your admin portal, go to Settings > Policies and edit the “Global Default Policy.” Then go to “User Policy” and change the following values:

Disable Invitations to New Users -> No

This configures the system to “send” invitations to new users. (Default Value: No)

Create account on new user shares -> Yes

This configures the system to “allow” the creation of new accounts “when” creating a new share. (Default Value: No).

Changing these settings will allow accounts to be created in the background when creating the share.

Configure FileCloud to Only Create Private Shares

Now that external accounts are allowed to be created in the background, the next step is to restrict the type of shares that can be made. For that, go to Settings > Policies > Edit the Global Default Policy. Then select the “General” tab.

Share mode -> Allow Private Shares Only

This configures the system to only allow the creation of private shares.

How Sharing Works After Configuration Changes

After the configuration changes are made, when you create a new share, this is the result:

The option to “Allow selected users or groups” is selected by default, with the options to “Allow anyone with the link” and “Allow anyone with link and password” disabled.

Note: The ability to invite users and enable “Private Shares Only” is a setting based on Policy Group. This means that you can apply these restrictions to a subset of your users and still allow other groups of users to create different types of shares.

Add an External Account to the Share

To invite a new user, click on the “Invite Users” button; this opens the invite window. Write the email address of the external user you want to add, then click on the “Invite” button below the email address. You can add multiple new users in the same way; once completed, click on the “Add Users to this Share” button.

Once you have added all the emails necessary to your share, you can check the sharing permissions desired for the users in the original share link box.

Now, those two accounts have been created as “Limited User Accounts” in the background; you can confirm these external accounts have been created by visiting the Admin UI > Users section.

The External User Experience

After adding the account to the share, the External User will receive two emails. If you checked the “send email” box when adding them, they only receive one.

Welcome to FileCloud Email

The first email they receive is the Welcome to FileCloud! Email. This email includes the Server URL, user email, and login password.

Shared Files Notification Email

This email includes the name of the “Full User” that has shared files with the “External User.” It also includes the folder name (if you share a single file, they will see the file name instead) and the share link URL, which they can click directly.

Once the external user logs in, they will gain access to the shared content.

Improve Traceability and Auditing with External Accounts

Following our example, the external user uploads a PDF file.

The Full User can view shared document and folder details, including “Activity,” which shows who uploaded the file, to which folder, and when. Without an External User account, this file information would show as uploaded by “ANONYMOUS.” With an external account, the file information includes the user’s information.

If you click on the “i” icon to the right of the username, you can view details like the IP address, date, and time of when the file was uploaded.

Collectively, external user accounts provide more information about your external shares and help you identify when a user uploads/downloads or takes any action on shared content.

In the following blog posts, we will discuss how to maintain these External User accounts automatically and enable 2FA.


Article written by Daniel Alarcon and Katie Gerhardt



Continuously Improving FileCloud – 21.3.6 Release

FileCloud’s Commitment

FileCloud’s mission is “to build a hyper-secure content collaboration and processes platform that customers love to use.”

Part of making software that customers love is investing in quality assessment and continuous improvement. It’s a cohesive and collaborative process, roping in engineering, QA, sales, marketing, and leadership teams.

We also depend on our clients and users, who provide amazing feedback not only on opportunities for improvement but also desired features and functionalities.

These elements of the software journey are captured in our stated values:

  • Be Customer Centric – Without our customers, FileCloud wouldn’t exist. That’s why they’re always our top priority.
  • Get Work Done – We achieve great results through our resourcefulness, hard work, and drive for perfection.
  • Innovate with Global Mindset – We have a vibrant mix of cultures and ideas that constantly encourage growth and innovation.

Release Details

There are a few exciting developments in the pipeline for our upcoming 22.1 release, including highly requested functionalities.

In the meantime, FileCloud has been putting in a lot of work behind the scenes to harden security and functionality across the server, Sync and Drive clients, and ServerSync.

The 21.3.6 release in July included many improvements for the FileCloud server, including streamlining recycle bin deletion, optimizing processing by cutting out feedback loops, removing visibility on password entries, and ensuring the functionality of user workflows.

The Sync and Drive apps have also been improved. Issues with login and password processes in FileCloud Sync were resolved, and the centralized configuration option for selective sync was reinforced. In the Drive app, the file locking function was optimized.

You can review all the improvements we’ve made by visiting the 21.3.6 Release Notes.



Migrating VMs Between ESXI Servers Using SCP Command

FileCloud customers may choose to use a virtual machine (VM) in an ESXI server. At times, ESXI servers may be decommissioned, requiring a migration. When FileCloud is hosted on one ESXI server, it can be moved to another using this method. This is generally a bare metal migration.

Yet migrating VMs between VMware ESXI servers has always been difficult, at times even requiring a paid third-party application. In this blog, we discuss a simple method to transfer VMs using the basic SCP command. We also ensure that the transferred VM disks are configured with thin provisioning.

Follow the steps below to migrate the ESXi servers:

Enable SSH Service on Source and Destination ESXI Servers

To enable the SSH service, log in to the web interfaces for your ESXI servers. Then click on Host at the top right. Click Actions -> Services -> Enable Secure Shell (SSH) (if it is not already enabled).

Enable SSH Client Service on Source ESXI Server.

Log in to the SSH of the source ESXI server using the putty tool. You may need to run the below commands:

esxcli network firewall ruleset list --ruleset-id sshClient

Check whether the SSH client service is enabled: the command returns ‘False’ if it is disabled. If ‘False’ is returned, run the next command; otherwise, proceed to the next step!

esxcli network firewall ruleset set --ruleset-id sshClient --enabled=true

Copy the VM from Source to Destination

Before running the below commands, make sure the VM that will be migrated is turned off in the source ESXI server.

Connect to your source ESXI server using putty or your favorite SSH client (depending on Windows or Mac OS).

Navigate to your datastore where your guest VM resides. By default, it will show as below.

cd /vmfs/volumes/datastore1/

Next, identify the proper datastore path on the destination ESXI server that will receive the data.

Afterward, execute the below command in the source ESXI server:

scp -rv /vmfs/volumes/datastore1/VM_NAME root@xx.xx.xx.xx:/vmfs/volumes/datastore1/

Press ‘Enter.’ You should be prompted for a password – then the migration process will begin. The time to complete the transfer depends on the network speed between the ESXI servers.

Convert Thick Provisioning to Thin Provisioning

Log in to your SSH console of the destination server. Then, navigate to the datastore path where the new VM data will be migrated from the old server.

cd /vmfs/volumes/datastore1/VM_NAME

Run the below command to clone the VMDK to a thin provisioned disk using vmkfstools:

vmkfstools -i VM_NAME.vmdk -d thin VM_NAME-thin.vmdk

After the cloning is complete, list the files in the directory and verify that two new files were created:

VM_NAME-thin.vmdk and VM_NAME-thin-flat.vmdk

Rename the old flat file out of the way (e.g., mv VM_NAME-flat.vmdk VM_NAME-flat.vmdk.old)

Rename the new thin flat file to the original flat file name (e.g., mv VM_NAME-thin-flat.vmdk VM_NAME-flat.vmdk)
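The rename sequence can be sketched with empty placeholder files so the ordering is clear (`vmkfstools` itself only exists on ESXi hosts, so this sketch runs on any machine):

```shell
# Stand-ins for the thick flat disk and the freshly cloned thin flat disk.
touch VM_NAME-flat.vmdk VM_NAME-thin-flat.vmdk

mv VM_NAME-flat.vmdk VM_NAME-flat.vmdk.old    # keep the thick disk as a backup
mv VM_NAME-thin-flat.vmdk VM_NAME-flat.vmdk   # promote the thin clone to the live name

ls -1 VM_NAME-flat.vmdk VM_NAME-flat.vmdk.old
```

The descriptor (VM_NAME.vmdk) keeps referring to VM_NAME-flat.vmdk throughout, which is why the thin disk takes over the old name rather than the other way around.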

Register the Migrated VM on the ESXI Host

Log in to the web interface of the destination ESXI server where the VM was migrated from the source server.

Click on Virtual Machines –> Create/Register VM

Select ‘Register an Existing Virtual Machine.’ Then select one or more virtual machines, a datastore, or a directory. Select the folder of the VM Guest you moved to the new server. Click: Select –> Next –> Finish

Once you turn on the migrated VM in the destination ESXI server for the first time, you will be prompted to answer if you moved or copied the guest machine. Leave the default “I Copied It” and click “Answer.”

If the migration was completed without any errors, the VMs should start in the new host.


Article written by Nandakumar Chitra Suresh and Katie Gerhardt



Installing an SSL Certificate on an ESXI Server

In the latest versions of the ESXI server, the web UI is the only interface available for managing existing virtual machines (VMs) or creating new VMs. By default, the SSL certificate that comes with ESXI is a self-signed certificate, which is not accepted by most browsers. In this case, we are using ESXI version 6.7 with an expired SSL certificate, which we are going to replace with a new SSL certificate.

Login to the ESXI Web UI

To install the new SSL certificate, we will need to log in to the ESXI web UI and enable SSH access. We can use Mozilla Firefox, which lets us log in to the UI by accepting the risk associated with an expired SSL certificate.


Start the SSH Service

To start the SSH service, log in to the ESXI server with root credentials, then click on Manage –> Services –> Start TSM-SSH service.


Locate Your Certificates

Navigate to the directory /etc/vmware/ssl and confirm your location:

[root@vmxi:/etc/vmware/ssl] pwd

We will need to update the rui.crt and rui.key files: add your new SSL certificate and chain certificate to rui.crt (SSL certificate first, then the chain certificate), and add your SSL private key to rui.key.

Safety First

Before making any changes though, make a backup of the existing certificate and key.

cp /etc/vmware/ssl/rui.crt /etc/vmware/ssl/rui.crt_old
cp /etc/vmware/ssl/rui.key /etc/vmware/ssl/rui.key_old

Update Certificates and Restart

Then, using the vi editor, replace the SSL certificate and the key:

cat /dev/null > /etc/vmware/ssl/rui.crt
vi /etc/vmware/ssl/rui.crt
cat /dev/null > /etc/vmware/ssl/rui.key
vi /etc/vmware/ssl/rui.key
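These edits can also be made non-interactively with `cat` instead of pasting into vi. Below is a sketch: the two self-signed certificates are stand-ins generated locally so the example is self-contained, and `new.crt`, `chain.crt`, and `new.key` are placeholders for your CA-issued files. On the ESXI host the outputs would be written to /etc/vmware/ssl/rui.crt and /etc/vmware/ssl/rui.key:

```shell
# Stand-ins only: generate a fake server cert/key and a fake chain
# cert. With real files, skip these two commands.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout new.key -out new.crt -subj "/CN=example.com"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout chain.key -out chain.crt -subj "/CN=Example CA"

cat new.crt chain.crt > rui.crt   # server certificate first, then the chain
cp new.key rui.key

# Sanity check: the combined file should contain two PEM certificates.
grep -c 'BEGIN CERTIFICATE' rui.crt
# -> 2
```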

After making the changes, you will need to restart the hostd service using the below commands:

[root@vmxi:/etc/vmware/ssl]  /etc/init.d/hostd restart
watchdog-hostd: Terminating watchdog process with PID 5528316
hostd stopped.
hostd started.
[root@vmxi:/etc/vmware/ssl]  /etc/init.d/hostd status
hostd is running.

Now if we look at the browser, we can see the new SSL certificate is in effect.



FileCloud is a powerful content collaboration platform that integrates with your favorite tools and programs. That includes cloud storage services, Microsoft and Google apps, online editing tools like OnlyOffice and Collabora, Zapier, Salesforce, and more. Set up APIs to fine-tune file and user operations and learn more about available features in FileCloud University. You can also reach out to our best-in-class support team through the customer portal for any questions regarding your FileCloud environment.


Article written by Nandakumar Chitra Suresh and edited by Katie Gerhardt