Archive for the ‘Admin Tools and Tips’ Category

Installing an SSL Certificate on an ESXi Server

In recent versions of ESXi, the web UI is the only interface available for managing existing virtual machines (VMs) or creating new ones. By default, the SSL certificate that comes with ESXi is a self-signed certificate, which is not accepted by most browsers. In this case, we are using ESXi version 6.7 with an expired SSL certificate (the server URL has been masked). We are going to replace it with a new SSL certificate.

Log In to the ESXi Web UI

To install the new SSL certificate, we will need to log in to the ESXi web UI and enable SSH access. We can use the Mozilla Firefox browser, which lets us log in to the UI after accepting the risk associated with the expired certificate.

Install SSL Certificate-ESXI Server

Start the SSH Service

To start the SSH service, log in to the ESXi server with root credentials, then click Manage > Services and start the TSM-SSH service.

Install SSL Certificate-ESXI Server

Locate Your Certificates

Navigate to the directory /etc/vmware/ssl:

[root@vmxi:/etc/vmware/ssl] pwd

We will need to update the rui.crt and rui.key files: add your new SSL certificate and chain certificate to rui.crt (SSL certificate first, then the chain certificate, in that order), and add your SSL private key to rui.key.
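The concatenation order is the important part. Below is a local sketch with hypothetical file names (server.crt, chain.crt, and server.key stand in for the files issued by your CA); on the ESXi host the targets would be /etc/vmware/ssl/rui.crt and /etc/vmware/ssl/rui.key.

```shell
# Work in a throwaway directory with placeholder certificate material
cd "$(mktemp -d)"
printf -- '-----BEGIN CERTIFICATE-----\nserver-cert\n-----END CERTIFICATE-----\n' > server.crt
printf -- '-----BEGIN CERTIFICATE-----\nchain-cert\n-----END CERTIFICATE-----\n'  > chain.crt
printf -- '-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n' > server.key

# Build rui.crt with the server certificate first, then the chain
cat server.crt chain.crt > rui.crt
# rui.key holds only the private key
cat server.key > rui.key
```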

Safety First

Before making any changes though, make a backup of the existing certificate and key.

cp /etc/vmware/ssl/rui.crt /etc/vmware/ssl/rui.crt_old
cp /etc/vmware/ssl/rui.key /etc/vmware/ssl/rui.key_old

Update Certificates and Restart

Then, using the vi editor, replace the certificate and key contents.

cat /dev/null > /etc/vmware/ssl/rui.crt
vi /etc/vmware/ssl/rui.crt
cat /dev/null > /etc/vmware/ssl/rui.key
vi /etc/vmware/ssl/rui.key

After making the changes, you will need to restart the hostd service using the below commands:

[root@vmxi:/etc/vmware/ssl]  /etc/init.d/hostd restart
watchdog-hostd: Terminating watchdog process with PID 5528316
hostd stopped.
hostd started.
[root@vmxi:/etc/vmware/ssl]  /etc/init.d/hostd status
hostd is running.

Now if we look at the browser, we can see the new SSL certificate is in effect.
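Besides the browser, you can inspect a certificate's subject and validity dates with openssl. The sketch below is self-contained: it generates a throwaway self-signed pair (the CN esxi.local is hypothetical) and inspects it; on the ESXi host you would point `openssl x509` at /etc/vmware/ssl/rui.crt instead.

```shell
cd "$(mktemp -d)"
# Throwaway key/cert pair standing in for the real rui.key / rui.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout rui.key -out rui.crt \
  -days 365 -subj "/CN=esxi.local" 2>/dev/null
# Show who the certificate is issued to and when it expires
openssl x509 -in rui.crt -noout -subject -dates
```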

Install SSL Certificate - ESXI Server


FileCloud is a powerful content collaboration platform that integrates with your favorite tools and programs. That includes cloud storage services, Microsoft and Google apps, online editing tools like OnlyOffice and Collabora, Zapier, Salesforce, and more. Set up APIs to fine-tune file and user operations and learn more about available features in FileCloud University. You can also reach out to our best-in-class support team through the customer portal for any questions regarding your FileCloud environment.


Article written by Nandakumar Chitra Suresh and edited by Katie Gerhardt


Enable FIPS Encryption in FileCloud

enable FIPS in FileCloud

FileCloud officially supports FIPS mode on CentOS 7.x. This post explains how to enable FIPS encryption in your FileCloud installation.

Important Note:

Please make sure you have the FIPS component enabled in your FileCloud license. If you do not have the component, please contact our sales team for help in adding the component to your license.

Step 1: Enable Dracut Modules

To enable FIPS encryption, you must first enable the Dracut FIPS modules in CentOS; these can be installed by running the below commands:

yum install dracut-fips
yum install dracut-fips-aesni
dracut -v -f

It should yield the following results:

FIPS certification - enable dracut modules in CentOS

Step 2: Add the FIPS flag to the Grub Configuration

Once the Dracut module is configured, the next step is to add the FIPS flag to the grub configuration. To make the necessary changes, modify this file /etc/default/grub by adding fips=1 to GRUB_CMDLINE_LINUX.

GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet fips=1"

GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet fips=1 boot=UUID=34c96d6b-a43c-fec3-a2a6-e6593c977550" # if /boot is on a separate partition, use the blkid of the boot partition
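Instead of editing the file by hand, the flag can be appended with a one-line sed. The sketch below operates on a simulated copy of /etc/default/grub (your kernel arguments may differ); on a real server you would target the actual file, after backing it up.

```shell
cd "$(mktemp -d)"
# Simulated /etc/default/grub line standing in for the real file
printf 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > grub
# Append fips=1 inside the existing quotes of GRUB_CMDLINE_LINUX
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 fips=1"/' grub
cat grub
```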

Step 3: Regenerate the Grub Configuration

After modifying the grub configuration, we will need to regenerate the grub configuration using the below command:

grub2-mkconfig -o /etc/grub2.cfg

If prelinking is installed in the server, you must first disable prelinking by modifying this file – /etc/sysconfig/prelink – and setting PRELINKING=no

Step 4: Reboot the Server

After the above changes are made, reboot the server and check the file /proc/sys/crypto/fips_enabled to ensure FIPS is enabled (it should print 1).

[root@cnfc ~]# cat /proc/sys/crypto/fips_enabled
1
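In a provisioning script, a small guard can make this check explicit. This is a sketch; the file only exists on kernels with FIPS support, hence the fallback to 0.

```shell
# Read the kernel FIPS flag, defaulting to 0 if the file is absent
fips=$(cat /proc/sys/crypto/fips_enabled 2>/dev/null || echo 0)
if [ "$fips" = "1" ]; then
    echo "FIPS mode is enabled"
else
    echo "FIPS mode is NOT enabled"
fi
```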

Step 5: Install FileCloud

The next step is to install FileCloud.

yum install wget
wget && bash

Install FileCloud with the above script and configure the components required for your use case. Once completed, your FileCloud server will run in FIPS mode.

Alternative Options

You can also download and install a FIPS-enabled OpenSSL.
NOTE: This is only needed if SafeLogic modules are required. Once FIPS mode is enabled, CentOS installs FIPS-enabled packages by default.

yum install unzip
unzip -q -d /root/fipsopenssl
rpm -Uvh --nodeps /root/fipsopenssl/*.rpm

We also recommend enabling strong ciphers and TLS 1.2/TLS 1.3 in your Apache SSL configuration:

#SSLProtocol all -SSLv2 -SSLv3
SSLProtocol -all +TLSv1.2 +TLSv1.3
#SSLCipherSuite HIGH:!aNULL:!MD5


For greater security and governance over your data, FileCloud supports FIPS encryption. With this step-by-step process, you can now enable FIPS on your own FileCloud installation (provided it is available with your license). For additional support or clarification, please get in touch with our support team.


Article written by Nandakumar Chitra Suresh



Import Users to AD via PowerShell

Integrating FileCloud with your existing Active Directory (AD) can make setup much easier, faster, and more secure. Users don’t need to worry about creating new accounts or credentials, and IT admins can efficiently manage assets across networks and monitor security.

Maybe you’re ready to go with FileCloud, but you don’t have an Active Directory set up yet. If your user base is large enough, if you have certain security thresholds, or if your organization uses a wide variety of applications, it makes sense to establish your AD first. Then you will have a single database to manage user access across your network.

Here we describe how to import users into an AD using PowerShell:

Single User Import

SamAccountName: jdoe2
Name: John2 Doe
DisplayName: John2 Doe
Surname: john2
GivenName: John2
Email: fc@company.ur1
Password: test@1234562

To import a user with the above details into AD, use the below command:

New-ADUser -PassThru -Path "OU=Users,OU=US,DC=ns,DC=fctestin,DC=com" -AccountPassword (ConvertTo-SecureString "test@1234562" -AsPlainText -Force) -CannotChangePassword $False -DisplayName "John2 Doe" -GivenName John2 -Name "John2 Doe" -SamAccountName jdoe2 -Surname john2 -EmailAddress fc@company.ur1 -UserPrincipalName

Bulk User Import

To bulk import users, first add the users and their details to a CSV file. Then use a PowerShell script to read those values from the CSV file and import them into AD.

Add user details to a CSV file as shown in the screenshot below:
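As an illustration, a minimal aduser.csv might look like the following (the column names here are hypothetical and must match whatever properties your script reads from each row):

```csv
samaccountname,fullname,givenname,surname,email,userprincipalname
jdoe2,John2 Doe,John2,john2,fc@company.ur1,jdoe2@ns.fctestin.com
```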

PowerShell Script

In the script below, values from the CSV file are assigned to variables. We then use these variables in the New-ADUser command to import each user.

Import-Module ActiveDirectory

$NewUsersList = Import-CSV "aduser.csv"

ForEach ($User in $NewUsersList) {

    # Assign values from the CSV row to variables.
    # (The column names below are assumed; they must match your CSV header.)
    $samaccountname    = $User.samaccountname
    $fullname          = $User.fullname
    $givenname         = $User.givenname
    $sn                = $User.surname
    $useremail         = $User.email
    $userprincipalname = $User.userprincipalname

    New-ADUser -PassThru -Path "OU=Users,OU=US,DC=ns,DC=fctestin,DC=com" -AccountPassword (ConvertTo-SecureString "test@1234562" -AsPlainText -Force) -CannotChangePassword $False -DisplayName $fullname -GivenName $givenname -Name $fullname -SamAccountName $samaccountname -Surname $sn -EmailAddress $useremail -UserPrincipalName $userprincipalname
}


NOTE: In the CSV file, you can add more columns such as Company, Department, telephone number, etc. You can then assign those values to variables and use them with the New-ADUser command.

Executing the Script

  • Save the script in a text editor as “AD import.ps1”
  • Open PowerShell, change the directory to the location of the script, and execute the below command:
& '.\AD import.ps1' -delimiter ","

Here, the delimiter is a comma. If you open the CSV file in Notepad++, you can see that the fields are separated by commas.

Other Useful Commands

  1. To get the total number of users in a group:
(Get-ADGroup "Test import" -Properties *).Member.Count

Here, Test import is the group name. If the group name contains a space, it must be enclosed in quotes.

  2. To add all users from an OU to a group:
Get-ADUser -SearchBase 'OU=Users,OU=US,DC=ns,DC=fctestin,DC=com' -Filter * | ForEach-Object { Add-ADGroupMember -Identity 'Test import' -Members $_ }


Now that you have an AD set up, you can explore all the exciting integrations and security benefits. For more information on how you can integrate FileCloud within your existing IT infrastructure, check out FileCloud’s Extensibility. You can also reach out to the Support Team through your Admin dashboard or explore other tools and features in FileCloud University.


Article written by Sanu Varkey


Migrating Storage Between Regions

Migrating Storage: AWS S3 vs Wasabi

FileCloud supports S3-compatible storage such as Wasabi; however, unlike with AWS S3 storage, migrating from one Wasabi bucket to another in a different region is not directly possible. This blog will help you migrate the managed storage in your FileCloud system from one location to another.

Usually, the best method to perform an S3-to-S3 migration is with the help of the AWS CLI tool. However, Wasabi restricts the use of the AWS CLI tool migration if both the buckets are in different regions due to architecture issues within Wasabi.

In this post, we will review how to migrate a FileCloud server running on Ubuntu 18.04 LTS, with the server and Wasabi storage in Amsterdam, to London.

Transfer storage from different buckets across regions

Step 1: Setting up the Environment

Set up the new server and install the latest version of FileCloud on it. In our case, we are installing a new FileCloud instance on Ubuntu 20.04 LTS.

Step 2: Running the Required Services

Stop all the services in Region 1 except MongoDB.

Step 3: Exporting Data

Mount additional disk space to export the data in Region 1.

In our test case, the servers are hosted on Linode. We created a temporary disk of 1 TB and mounted it on the Region 1 server. Using the export method described in the documentation below, we can export all the data into the temp disk created for Region 1.

From WWWROOT/resources/tools/fileutils under the FileCloud web root, run:

sudo php ./exportfs.php -d /cloudexport/ -u all -p / -r realRun

The temporary storage is mounted to /cloudexport

Step 4: Transferring the Exported Data

In Region 2, ensure that a temporary disk with specs similar to Region 1's is attached and mounted to /cloudexport.

To transfer data between the two regions, we prefer to use the rsync client over SSH. Run the below command on the Region 1 server:

rsync -avz /cloudexport root@

Replace the IP with the public IP of Region 2. Then wait until the rsync is completed.
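Before moving on, it is worth verifying the transfer. One approach is to compare recursive checksums of the export tree on both sides; the sketch below demonstrates the idea locally, with two temp directories standing in for /cloudexport in Region 1 and Region 2.

```shell
r1=$(mktemp -d); r2=$(mktemp -d)   # stand-ins for /cloudexport on each region
echo "exported file data" > "$r1/chunk-0001"
cp -a "$r1/." "$r2/"               # stands in for the rsync transfer

# Run the same command on both sides; identical output means the trees match
sum1=$(cd "$r1" && find . -type f -exec cksum {} + | sort)
sum2=$(cd "$r2" && find . -type f -exec cksum {} + | sort)
[ "$sum1" = "$sum2" ] && echo "checksums match"
```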

Step 5: Transferring the Database from Region 1 to Region 2

To transfer the MongoDB data, we can take a mongodump from Region 1, transfer it using rsync (as in Step 4), and then perform mongorestore in Region 2.

The below commands should be executed in the same order to complete the DB migration:

mongodump --out /root/db-dumps

rsync -avz /root/db-dumps root@

mongorestore --noIndexRestore /root/db-dumps

Step 6: Seeding the Exported Data into a New Server

To seed the exported data, we can use the documentation here:

sudo php ./seed.php -h default -p /cloudexport -i -r

After the data seeding is complete, restart all services and make sure the data has been copied across properly before making the DNS switch to the new server.


The above procedure was tested on a standard FileCloud installation with the default site. For multi-tenant setups, the commands must be changed accordingly. We recommend getting in touch with our support team for any clarifications.


Article written by Nandakumar Chitra Suresh



Upgrade Your FileCloud Cluster and MongoDB with Offline Upgrade Tool

This blog post explains how to upgrade a FileCloud High Availability cluster using the FileCloud Offline Upgrade tool for Linux. At the moment, the FileCloud Offline Upgrade tool only supports CentOS 7 and RHEL 7 machines.

Offline Upgrade Tool download links:





Reviewing the Architecture

In this scenario, the FileCloud architecture consists of:

  • 2 x web servers
  • 3 x MongoDB servers
  • 1 x Solr server

Update FileCloud Cluster - 9 Server Cluster Example

The example used throughout this how-to blog post is based on FileCloud 20.1, where MongoDB runs on 3.6. Starting from 21.1, the MongoDB clusters must be upgraded manually, prior to the web node upgrades.


Upgrading FileCloud’s MongoDB Servers

We described how to upgrade MongoDB servers on Windows and Linux in a previous blog post. Here, we describe the steps to upgrade MongoDB with the FileCloud offline upgrade tool.

Step 1: Download the Upgrade Tool and Create a Path

First, download mongodb_upgrader_40_rpm.tgz and mongodb_upgrader_42_rpm.tgz onto the MongoDB servers. You will need to apply these upgrades step by step.

mongodb_upgrader_40_rpm.tgz contains MongoDB 4.0
mongodb_upgrader_42_rpm.tgz contains MongoDB 4.2

Step 2: Create a Directory and Path

Create the directories as below; $path can be any location:

mkdir -p $path/mongo40
mkdir -p $path/mongo42

tar -xzvf mongodb_upgrader_40_rpm.tgz -C $path/mongo40
tar -xzvf mongodb_upgrader_42_rpm.tgz -C $path/mongo42

Step 3: Set Feature Compatibility to 3.6

mongo --host {IP address of Primary}  --eval "db.adminCommand( { setFeatureCompatibilityVersion: '3.6' } )"

Step 4: Upgrade Secondary Nodes to 4.0

Run the below commands on each secondary node, one at a time:

service mongod stop
cd $path/mongo40
rpm -Uvh *.rpm

Step 5: Step Down the Current Primary

Connect to the current primary and step it down so that one of the upgraded secondaries is elected primary.
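The step-down itself is a single command run against the current primary; rs.stepDown() asks the primary to relinquish its role so that a secondary can be elected. A sketch of the invocation:

```
mongo --host {IP address of Primary} --eval "rs.stepDown()"
```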


Step 6: Upgrade the Last Server to 4.0

Run the commands from Step 4 on the remaining server (the former primary).

Step 7: Set Feature Compatibility to 4.0 in the current Primary Server

mongo --host {IP address of Primary} --eval "db.adminCommand( { setFeatureCompatibilityVersion: '4.0' } )"

Step 8: Upgrade Secondary Nodes

Upgrade secondary nodes from 4.0 to 4.2, one by one, using the below commands or by running as a script

cd $path/mongo42
service mongod stop
rpm -Uvh *.rpm

Step 9: Step Down the Current Primary

As in Step 5, step down the current primary so that an upgraded secondary takes over.


Step 10: Upgrade Server to 4.2

Run the commands from Step 8 on the last server to upgrade it to 4.2.

Step 11: Set Feature Compatibility to 4.2

In the current Primary Server, apply the following to update the feature compatibility to 4.2:

mongo --host {IP address of Primary} --eval "db.adminCommand( { setFeatureCompatibilityVersion: '4.2' } )"


Upgrading FileCloud’s Web and Solr Servers

Download offline_rpm_upgrader.tgz to both the web and Solr servers and extract it:

tar -xzvf offline_rpm_upgrader.tgz

Run the upgrade script on the web nodes (you can skip the MongoDB upgrade option, as we upgrade the MongoDB servers manually prior to the web nodes).

For Solr nodes, select the Solr server option and skip the web server and MongoDB options.



Please note that this blog post is written based on the sample architecture mentioned at the start of the post. If you have a different architecture, please feel free to reach out to our support team for any clarifications.


Article written by Nandakumar Chitra Suresh


Securing Your FileCloud Installation with a Wildcard Let’s Encrypt SSL Certificate

For this blog post, we will delve into the steps necessary to secure a FileCloud installation with a wildcard Let’s Encrypt SSL certificate on Ubuntu 20.04 LTS, on a multi-tenant site.

Install Certbot Package

To obtain the Let’s Encrypt SSL certificate, we will need to install the Certbot package on the Ubuntu 20.04 LTS machine. This package can be installed from the default Ubuntu package repositories. The below command installs the necessary packages:

apt install certbot python3-certbot-apache -y

Generate SSL Certificate

After the installation is complete, run the below command to generate the SSL certificate. This process is managed by the Apache plugin that comes with Certbot. In this case, we are going to install a wildcard certificate for the domain. Since this is a wildcard certificate, we will need to generate it manually using the certbot command below:

root@fcsrv:~# certbot certonly --server --manual --preferred-challenges dns -d '*'

Confirm (or Deny) Logging of IP Address

After running this command, it will ask to confirm if the machine IP can be logged for the SSL generation purpose. In this demo, we have selected Yes.

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

NOTE: The IP of this machine will be publicly logged as having requested this

certificate. If you’re running certbot in manual mode on a machine that is not

your server, please ensure you’re okay with that.


Are you OK with your IP being logged?

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

(Y)es/(N)o: Y


Then it will ask us to create a TXT record against the domain for which we need to have the SSL issued:

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

Please deploy a DNS TXT record under the name with the following value:




Before continuing, verify the record is deployed.

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

For security reasons, we have masked the record. After the verification is completed, the SSL can be found at


Configure Changes and Create Virtual Host Entry

The next step is to make the required changes in /etc/apache2/sites-available/default-ssl.conf. Since this is a multi-tenant installation, we must first create a separate virtual host entry. Below is the virtual host entry we created in the file default-ssl.conf:

<VirtualHost *:443>

# Admin email, Server Name (domain name) and any aliases
ServerAdmin xxx@xxxxxx

# Index file and Document Root (where the public files are located)
DirectoryIndex index.php

DocumentRoot /var/www/html
<Directory /var/www/html>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>

ErrorLog ${APACHE_LOG_DIR}/error.log

# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/access.log combined

SSLEngine On
SSLCertificateFile /etc/letsencrypt/live/
SSLCertificateChainFile /etc/letsencrypt/live/
SSLCertificateKeyFile /etc/letsencrypt/live/
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLHonorCipherOrder on
</VirtualHost>
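If the default-ssl site has not been enabled yet, it can be enabled with the usual Ubuntu/Debian Apache commands (a sketch; this assumes the stock Apache layout):

```
a2enmod ssl
a2ensite default-ssl
apachectl -t && systemctl reload apache2
```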


Run Configuration Test

After making the changes, it is advised to run an Apache config test to make sure everything is configured correctly. The expected output is:

root@fcsrv:/etc/apache2/sites-enabled# apachectl -t

Syntax OK

Restart the apache service and use any SSL verification site to make sure your SSL certificate has been installed correctly. For additional support, please contact our FileCloud Support Team.


Article written by Nandakumar Chitra Suresh


Access your FileCloud Community Edition Server using NoIP

If you’re currently using Tonido and considering switching to FileCloud Community Edition, there is one key difference to note: Tonido uses a relay server to access your local server.

For example, <username> is your external URL; this means that you will need to use a third-party service to redirect from a custom URL to your local IP address. You can use commercial or free services to accomplish this.

Some available options include:

Redirect Your Tonido URL

This article will explain how to accomplish this using a Freemium service like NoIP.


  • Server Computer Running FileCloud Community Server
  • NoIP.com Account
  • Your Public IP Address
  • Port Forwarding in Your Router
  • External URL Access
  • Set Up Automatic Updates for Local IP Changes

Install FileCloud Server

FileCloud provides installation guides for Windows and Linux operating systems. Select the right one for your computer below.



NoIP account

If you don’t have a NoIP account, please create one here.

Public IP address

Several options can help you identify your public IP address. For our tutorial, we will use this website:

When you open this website from your local home network, you will see something like this:

screenshot of website WhatIsMyIPAddress

Please take note of your IPv4 address, as we will use it shortly.

Set Up Port Forwarding in Your Router

To access your FileCloud Community Edition server outside of your local network, you will need to create a rule in your router to redirect traffic from your public IP address to the local IP address of your server.

The instructions may vary depending on your router brand. You can check the port forwarding guide from NoIP, which covers a comprehensive list of router brands (D-Link, Netgear, Linksys, Asus, TP-Link, etc.).

Create and Configure External URL Access

Once you have completed the above steps, it is time to create your hostname in NoIP. Go to dynamic DNS and create a new hostname.

screenshot of website NoIP to create a hostname

You can choose your preferred hostname and point a DNS Host (A) record to your IPv4 address.

Screenshot of NoIP - Direct Your DNS

After waiting an average of 30 minutes, your DNS entry should be ready. Now you can access your FileCloud Community Edition Server from anywhere using the URL you chose, including via mobile application and web access.

Set Up Automatic Updates for the IP Address

To make updating the IP address simple, NoIP offers an application you can install on your server or any computer that runs from your local network. The application will monitor if the public IP is updated; whenever the IP address changes, the app will automatically update your DNS entry in your account. You can find and download the application from NoIP.

Once you install the application, log in to your NoIP account and select the hostname. You will see the following:

screenshot of NoIP website, configuring automatic updates for DNS changes

Now, whenever your local IP address is updated, your DNS entry will also be updated, ensuring you never lose access to your FileCloud Community Edition Server.

Article written by Daniel Alarcon

Moving from Tonido to FileCloud Community Edition

Tonido has been in maintenance mode for the last few years, which means only bug fixes have been delivered and no new features are available.

Since FileCloud 19.3, a new license model is available: FileCloud Community Edition. If you’re looking for a secure alternative to access your files, this edition is your best option.

6 Key Differences to Note Before Moving:

  1. FileCloud Community Edition has a license cost of $10/year (proceeds donated to charity).
  2. There is no relay server for Community Edition, which means that <yourserver> address has no similar option in Community Edition.
  3. FileCloud desktop and mobile applications are more advanced and feature-rich than Tonido’s; syncing files across devices is a better experience.
  4. Significant features from FileCloud (Business-oriented licenses) are available in the Community Edition, like:
    1. Storage from Disk, Network Shares, Amazon S3, Azure, Alibaba Cloud, Wasabi, EMC ECS
    2. Access to NTFS permissions storage and the Drive desktop application
    3. Unlimited versioning and recycle bin support, advanced sharing options for external users, file change notification
    4. And much more!
  5. Some features from Tonido are not available in FileCloud, like remote desktop, torrent, DLNA media server, money manager, thots, and other user-specific apps.
  6. Most importantly, active development is present on FileCloud Community Edition, including Forum support by FileCloud support engineers, bug fixes, and new features.


We recommend a dedicated machine for running FileCloud Community Edition, either a VM or an old server in your home network.

  • OS Specs:
    1. Windows Server 2012 R2, 2016, 2019
    2. Ubuntu 18.04, 20.04, CentOS 7, RHEL 7
  • HW Specs:
    1. Intel quad-core CPU
    2. 16 GB of RAM or higher
    3. storage (physical disk, SAN, NAS, etc.)

What is Missing From a Regular FileCloud Server?

Most of the features listed below are enterprise-grade/business-oriented and are not often required by a personal or small business installation.

  • Data Governance (including Smart DLP, Smart Classification, Retention Policies, and the Governance Dashboard)
  • Single Sign-On (SSO)
  • Solr Search
  • Server Sync
  • Third-party integrations (AV, Salesforce, SIEM, etc.)
  • SMS Authentication
  • Multi-tenancy

Should You Switch from Tonido to FileCloud Community Edition?

See the comparison table below to help you decide if you should switch from Tonido to FileCloud CE:

comparison chart between Tonido and FileCloud Community Edition
General Recommendations:

If you currently use Tonido for media consumption, there are alternatives available that will help you achieve this in a better way (Kodi, Emby, Plex, etc.)

On the other hand, if you use Tonido to sync across devices, share files, and use it for more work-oriented tasks, consider switching to FileCloud Community Edition. It’s only $10/year for a license and gives you access to a robust file-sharing system with a strong Return-on-Investment.

Article written by Daniel Alarcon

FileCloud Ubuntu OS Upgraded!

Steps to Upgrade Ubuntu 16.04 to 18.04 LTS

Ubuntu 16.04 recently reached EOL, and some of the packages are no longer available in the repository. This creates an interesting challenge because those packages are necessary to run the upgrade.

To upgrade an Ubuntu instance from 16.04 LTS to 18.04 LTS (where the FileCloud server version is below 21.1.x), follow the steps below:

Before upgrading from Ubuntu 16.04 LTS, back up the FileCloud server, including the /var/www/html and /var/lib/mongodb paths. The chance of losing this data during the OS upgrade is very high.

Run Backups:

  1. cp -rvf /var/www/html /var/www/html_backup
  2. cp -rvf /var/lib/mongodb /var/lib/mongodb_bkup

Perform Ubuntu Package Update:

Once the backups are complete, the next step is to perform the Ubuntu package update.

  1. apt-get update -y && apt-get upgrade -y
  2. apt-get dist-upgrade -y
  3. apt-get autoremove -y
  4. sudo reboot
  5. do-release-upgrade

NOTE: Select all the default options when prompted. Toward the end of the upgrade, you will need to restart your computer.

After Updating, Reinstall Packages:

After the upgrade is complete, you will need to reinstall certain packages, as the upgrade will have deleted them. To reinstall Apache and PHP, please follow the steps below:

  1. LC_ALL=C.UTF-8 sudo add-apt-repository ppa:ondrej/php -y
  2. add-apt-repository ppa:ondrej/apache2 -y
  3. apt-get install unzip curl rsync python -y
  4. apt-get install apache2 build-essential libssl-dev pkg-config memcached -y
  5. apt-get install php7.2 php7.2-cli php7.2-common php7.2-dev php-pear php-dev php-zmq php7.2-zmq php7.2-json php7.2-opcache php7.2-mbstring php7.2-zip php7.2-memcache php7.2-xml php7.2-bcmath libapache2-mod-php7.2 php7.2-gd php7.2-curl php7.2-ldap php7.2-gmp php7.2-intl libreadline-dev php-pecl-http memcached php7.2-raphf php7.2-propro php7.2-mongodb php7.2-zmq -y
  6. a2enmod php7.2
  7. a2enmod headers
  8. a2enmod ssl
  9. apt-get -y install libmcrypt-dev
  10. cat <(echo "") | pecl install mcrypt-1.0.2 2>&1
  11. service apache2 status
  12. Retrieve files from the backup (the OS Ubuntu upgrade from v16 to v18 will have removed them)
  13. rsync -avz /var/www/html_backup/ /var/www/html/
  14. chown www-data:www-data /var/www/html -Rf


Install the cronjob using the command that matches your FileCloud version:
Greater than 20.2: echo "*/5 * * * * php /var/www/html/src/Scripts/cron.php" | crontab -u www-data -
20.2 or less: echo "*/5 * * * * php /var/www/html/core/framework/cron.php" | crontab -u www-data -


If the above cronjob command fails, please follow the below method to troubleshoot the cronjob. Then run the command again.

  1. Check whether www-data is present in /etc/cron.allow:
  2. vim /etc/cron.allow // Add "www-data" if not present.
  3. crontab -e -u www-data // Make sure the crontab editor opens; if it does, cron will work. Exit the editor.
  4. crontab -u www-data -l


Now we will set up the PHP CLI.

  1. sudo update-alternatives --set php /usr/bin/php7.2
  2. sudo update-alternatives --set phar /usr/bin/phar7.2
  3. sudo update-alternatives --set phar.phar /usr/bin/phar.phar7.2
  4. sudo update-alternatives --set phpize /usr/bin/phpize7.2
  5. sudo update-alternatives --set php-config /usr/bin/php-config7.2

Run the below command:

php -v // Make sure it shows the version to confirm it is working.
php -m // Make sure it shows the modules to confirm it is working

The expected output should be:

php -v

PHP (cli) (built: Jul  1 2021 16:06:47) ( NTS )

Copyright (c) 1997-2018 The PHP Group

Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies with the ionCube PHP Loader (enabled) + Intrusion Protection from (unconfigured) v10.3.2, Copyright (c) 2002-2018, by ionCube Ltd.
with Zend OPcache, Copyright (c) 1999-2018, by Zend Technologies

php -m

[PHP Modules]
...
ionCube Loader
...
Zend OPcache





[Zend Modules]

Zend OPcache

the ionCube PHP Loader (enabled) + Intrusion Protection from (unconfigured)


Upgrade FileCloud:

Once the server is back up, you can run the FileCloud upgrade with this command:

  • Enter: filecloudcp -u

If this command does not work, use:

  • wget && bash



With the Ubuntu OS updated, FileCloud can work even better than before! The FileCloud support team is also available to provide assistance or answer questions.


Article written by Nandakumar Chitra Suresh

User-Based Management of Team Folder Permissions

Last month, we discussed how Folder Permissions work and how admins can grant or deny special permissions without needing to share multiple subfolders.

Today, we will discuss how an admin can grant “Manage” permissions to a user. This feature enables users to directly adjust folder permissions in Team Folders.

Today we will cover the following:

  1. Grant “Manage” permissions to a user from the Team Folder section.
  2. User-based management of permissions from the Front-End UI.

Grant “Manage” permissions to a user from the Team Folder section.

For our demo today, we will focus on a Team Folder called “Finance.”

We will now share the Finance folder with the Finance User Group:

In our Finance Group, we have a demo user called “damonphillips,” to whom we will grant “Manage” permissions:

To give security permissions to the user in the Finance Team Folder, grant all permissions, including “Manage,” in the Permissions section:

Add the user with all permissions:

Optional – enable user to view/edit shares.

If you want the new Folder Manager to be able to view/edit the “shares” from that Team Folder, you must grant the “Manage” option for the Team Folder share options. Go to “Misc” and enable the “Allow Manage” option:

User-based management of permissions from the Front-End UI.

When our user now logs in to the Front-End UI and navigates to the Finance Team Folder, they will see a “Security” tab in the sidebar on the right:

Since the Finance Group is already part of the Team Folder, the Team Folder manager can now change permissions for the rest of the team for a folder or a file.

For example, suppose the manager wants a specific member of the Finance group to have read-only access to “Test Folder 2”. In that case, the user’s permissions can be edited by selecting “Test Folder 2” and then clicking the “Manage Security” button in the “Security” tab.

The manager would then see the following screen:

From here, the manager can add users and edit permissions; for example, we will grant read-only permissions to a user:

This permissions update can be verified by going to the “Check Access” tab and running “Check user access” with the user’s email account.

Through this feature, admins can delegate Team Folder “Manage” permissions to managers. Managers can then create custom permissions for the contents of a Team Folder.

More information on managing team folder permissions can be found in the FileCloud documentation “Set Granular Permissions on Team Folders.” If you have any questions about Folder Permissions or any other FileCloud functionalities, please reach out to CodeLathe Support.

Article written by Daniel Alarcon