Archive for the ‘Security’ Category

Cybersecurity Trends in 2022

In an increasingly online world, cybersecurity has become more critical than ever. This is particularly true for companies and organizations that handle personal or sensitive data of consumers and citizens.

Why is Cybersecurity so Important?

With the huge prevalence of remote work during the COVID-19 pandemic, businesses and organizations are increasingly doing their work online. Whether all business is done in the cloud or over a company’s VPN, this way of working demands a whole new set of cybersecurity considerations. Is your clients’ personal information secure? Have your employees been trained to recognize common phishing and social engineering attacks?

man in front of screen with cybersecurity icons

Increasingly, clients and organizations look into a company’s cybersecurity protections to determine if they want to give them their business. In fact, Gartner reports that, “By 2025, 60% of organizations will use cybersecurity risk as a primary determinant in conducting third-party transactions and business engagements.”

Not only do businesses need top cybersecurity strategies to keep their own organizations secure; they also need them to attract and retain clients.

Of course, cybersecurity changes year by year, so it’s important that companies focus on it and make sure the tools and software they use have top security features and options. To that end, let’s look at some top cybersecurity trends for this year.

Top Five Cybersecurity Trends for 2022

Ransomware

Ransoms have less to do with kidnapping now and more to do with cybersecurity. Hackers are creating malware that threatens to publish private information or permanently encrypt important data unless they’re paid a ransom to remove the malware.

Many hackers now use RaaS (Ransomware as a Service)—ransomware that’s already been created—to perpetrate attacks more easily.

Ransomware is being used in large attacks too, like the Colonial Pipeline attack in 2021. The pipeline supplies gas to about 50% of the East Coast of the US, and the resulting shutdown caused panic buying along with spikes in gas prices. Colonial paid $4.4 million to have the ransomware removed. Attacks like this will only become more prevalent as hackers grow more sophisticated and go after bigger and bigger targets.

Internet of Things

The IoT (Internet of Things) is an aspect of cybersecurity many people don’t consider. In our increasingly tech-focused world, the IoT encompasses the physical “things” in our lives filled with sensors and software that communicate and send data online—anything from the smart devices that turn on your lights and music to self-driving cars. The IoT will only grow more present in daily life and make us more reliant on the internet and our devices. What many people don’t realize is that all of these devices can be hacked. The devices, and the companies that create them, need to focus on increasing their cybersecurity as well.

Attacks on the Cloud

Increasingly, companies are using the cloud to store their data and files. At a time when WFH is here to stay, the cloud is an important tool that allows employees to access data and files from anywhere at any time. However, hackers are also taking note of the increased reliance on the cloud, which means they’re increasing their attacks on it as well.

Phishing/Social Engineering

Phishing and social engineering turn employees against their own companies: attackers send malicious links and messages to try to gain access to employees’ passwords or devices. These techniques have been around for years, but they remain one of the top ways hackers gain access. Many believe these schemes will only become more targeted and sophisticated, so it’s important that companies have training in place to teach their employees what to look for.

Increasing Regulations

We’ve talked a lot about the ways in which hackers are becoming more sophisticated and problematic. Countries are trying to tackle these emerging issues by enacting laws to increase cybersecurity protections. Regulations like the GDPR (a data protection law) require certain security protections for EU residents’ and citizens’ information. Companies that fail to comply with these regulations, and the many others like them, face huge fines and possibly even civil or criminal charges.

Thankfully, companies and organizations are not alone when it comes to protecting their data.

FileCloud as a Hyper-Secure Solution

FileCloud is a file storage and sharing tool that allows companies to keep track of and protect their data.

Security has always been a top priority for FileCloud, and with the increase in hackers and malicious software, FileCloud understands that now is the time for a hyper-secure file sharing and storage tool that companies can still use with ease.

FileCloud’s Compliance Center helps organizations achieve and maintain compliance with ITAR, HIPAA, and GDPR tabs that provide best practices and easy-to-enact rules.

In addition, FileCloud has many excellent security and compliance options like:

  • Robust DLP, content governance, and permissions
  • Content Classification Engine (CCE) and custom metadata
  • Antivirus and ransomware protection (along with the option to enable detection of files with encrypted payloads to block and warn when ransomware enters the system)
  • Digital rights management
  • Granular folder permissions
  • 256-bit AES encryption for data at rest
  • SSL/TLS protocols for data in transit
  • Active Directory integration
  • Two-factor authentication

Cybersecurity is not something companies and organizations can ignore or put on the back burner anymore. The trends show that hackers are only getting more sophisticated and malicious.

However, it is possible to keep your company or organization secure and compliant by using a hyper-secure file sharing and storage solution like FileCloud. FileCloud helps protect your data so that you can continue focusing on important work, knowing that you (and your clients) are secure and compliant.

3-2-1 Backup Strategy – Part 3: External Backup via Cloud Service

Now that we have covered backing up your computer and mobile devices locally using an external hard drive or a NAS, we can set up our final security measure: backing up data via a third-party cloud service.

There are many options available:

  • Google Drive
  • OneDrive
  • iCloud
  • Box
  • FileCloud
  • NextCloud
  • AWS

And many more…

Choosing the Best Fit

While you have many options available with a wide variety of prices, I recommend carefully evaluating your storage needs and privacy concerns to determine the best service for you.

Google Drive, OneDrive, iCloud, and Box are among the least expensive options. However, they don’t offer precise user control over where files are stored or granular settings for privacy and reliability. For those who prioritize autonomy, security, and flexibility, a service like FileCloud or NextCloud may be more suitable.

To maximize privacy, running your own server, whether on-premises or on cloud infrastructure (DigitalOcean, AWS, Google Cloud, Azure, etc.), is an ideal solution. You gain full control over your data storage, but this control comes at a price: running your own server is often the most expensive option. With that expense comes the ability to configure several layers of access barriers and encryption standards.

For example, you can run FileCloud on AWS, set up your S3 bucket, and apply asymmetric encryption to your saved files. With these layers in place, if someone gains access to your S3 bucket for any reason, your files will not be readily available; the intruder would need the encryption keys or direct access to your FileCloud server to decrypt the data.
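
To make that last point concrete, here is a minimal Python sketch (assuming the boto3 and cryptography packages; the bucket name and key paths are hypothetical) of encrypting a file client-side before it lands in S3. It illustrates the layered approach described above, not FileCloud’s actual implementation:

    # Encrypt a backup file locally before uploading, so the S3 object is
    # useless without the RSA private key kept on your own server.
    import os
    import boto3
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    BUCKET = "my-filecloud-backups"  # hypothetical bucket name

    def encrypt_and_upload(path, public_key_pem):
        # 1. Encrypt the file with a fresh symmetric key (fast for large files).
        data_key = Fernet.generate_key()
        with open(path, "rb") as f:
            ciphertext = Fernet(data_key).encrypt(f.read())

        # 2. Wrap the symmetric key with the RSA public key (the asymmetric layer).
        public_key = serialization.load_pem_public_key(public_key_pem)
        wrapped_key = public_key.encrypt(
            data_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )

        # 3. Upload both objects; neither is readable without the private key.
        s3 = boto3.client("s3")
        name = os.path.basename(path)
        s3.put_object(Bucket=BUCKET, Key="backups/" + name + ".enc", Body=ciphertext)
        s3.put_object(Bucket=BUCKET, Key="backups/" + name + ".key", Body=wrapped_key)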

The Cost of Running a Server

For users running their own FileCloud Server, we can apply some basic parameters to compare pricing, using calculators provided by cloud service platforms.

  1. Medium-sized compute instance
  2. Linux OS (Ubuntu)
  3. Additional 500 GB disk
  4. FileCloud Community Edition (sold separately)

For this example, we used the Google Cloud calculator:

Screenshot of Google Cloud calculator

With our initial parameters, the calculator has estimated that it would cost around $45 per month to run an independent server.

However, if your company already runs a FileCloud server or pays for FileCloud Online, you can take advantage of FileCloud’s backup option at no additional cost. This feature uses your existing cloud space; you can rest easy knowing your critical data is backed up in the event of a disaster (mobile device gone, computer gone, hard drive gone).

Backing Up Hard Drive Data to FileCloud

FileCloud includes an endpoint backup solution that can help you back up specific folders from your local computer (including your connected hard drive) to your FileCloud account.

To add your local hard drive to FileCloud, open the Sync application and select “Backups” under the Configuration menu:

Screenshot of Backups Tab in FileCloud

From the “Backups” tab, you can select a folder, including that of your connected hard drive device:

screenshot of FileCloud Sync app, add a folder

FileCloud will then sync content found in the designated folder to your user account. I recommend enabling the “Email notification after the backup completes” option. This way, you can track when and how often your files are backed up.

screenshot of FileCloud Sync - backed up folders

Once configured, the FileCloud Sync application will back up your hard drive data to your FileCloud backups folder. This completes your external backup source, covering the basics of your 3-2-1 backup strategy.

Article written by Daniel Alarcon

Security Monthly: Company Data in the Cloud

This article is the first entry in the Security Monthly series, where we will discuss recent and important events regarding security incidents, data protection, notable attacks, and related topics. To kick off our series, we will cover four attacks that demonstrate different aspects of how modern security breaches are operated.

Critical Infrastructure Needs to be Cyber-proof

There has been an increasing trend of critical infrastructure (emergency call centers, power grid controls, power plants, etc.) migrating service operations to the cloud. This migration leaves certain infrastructure vulnerable to cyberattacks. A European Union study highlights the need for a more organized approach to securing critical infrastructure, similar to what is seen in technology companies. The report shows that a systemic approach to protecting institutions and organizations critical to a larger population must be considered from the ideation phase. Cybersecurity considerations thus become operational requirements; they are a crucial part of any business or endeavor.

With cloud adoption rising, the associated risk of being attacked is also increasing. Software contains many types of issues that hackers can exploit, and as developer tooling and experience improve, the number of new developers (and hackers) grows with them. Armed with knowledge of which attacks are most popular, we can better prepare for a security incident.

The list of top ten important vulnerabilities for 2021 is available on the OWASP website, along with in-depth analysis and context behind each of the vulnerabilities depicted below and the methodology behind how this list was calculated.

Fig 1. OWASP Top 10 Vulnerabilities Shift 2017 to 2021

Consequences for Poor Cybersecurity

With the need to protect critical infrastructure comes the need to immunize it (or at least have a backup plan) against the most typical vulnerabilities. Broken access control can lead to disaster scenarios such as losing control over nuclear reactors, leaking millions of credit card records, or exposing a billion users’ passwords online. All of these attacks exploited one or more of the known, popular vulnerabilities.

In this introductory article, we will take a look at some of the more popular and recently talked about attacks from around the world. First, we will review the recent attack mitigated by Azure Cloud. We’ll follow with another Microsoft company, LinkedIn, which fell victim to an attack that leaked 700 million users’ data, only two months after a breach that exposed 500 million users’ records.

We will then examine a leak of 1.1 billion users’ information from Alibaba, where a malicious actor scraped sensitive data from the platform over a period of eight months. The last piece covers an infrastructure attack on npm (Node Package Manager), in which a package carrying crypto-mining malware was published to the registry.

The need to protect critical systems will only become more prevalent in the systems engineers create. Consider a present-day possibility: an attack on the local home server running your IoT doorbell could lock you out of your home. Now imagine what could happen if a nuclear power plant were hacked.

We hope to never know.

Azure Cloud Mitigates 2.4 Tbps DDoS Attack

Graph showing bandwidth spikes registered by Azure during 2021 DDoS Attack

Fig 2. UDP bandwidth mitigation timeframe by Azure

In the last weeks of August, Microsoft’s Azure service saved a customer hosting data in Europe from the biggest attack to date in terms of volume, with over 70 thousand hosts sending requests. The inbound traffic was 140% larger than the previous record attack from 2020, also mitigated by Azure.

Though the blog post covering the incident does not share details, other news outlets state the attack was a type of DDoS known as UDP reflection.

“Reflected amplification attacks are a type of denial of service attacks wherein a threat actor takes advantage of the connectionless nature of UDP protocol with spoofed requests so as to overwhelm a target server or network with a flood of packets, causing disruption or rendering the server and its surrounding infrastructure unavailable.” (The Hacker News)

Azure was able to fend off this attack thanks to the massive scale of the cloud, applying specific logic that siphoned off the huge data wave before it ever arrived at the customer’s service. The solution was implemented behind the scenes, with customers experiencing no issues during the attack.

With services delivered over the internet, the risk of disruption is high, especially for high-risk targets. Given the abundance of IoT devices that form new botnets, protection against denial of service attacks must be considered when working on a critical system.

It is not an easy task, as DDoS mitigation happens at a very low level – not every company is able to invest in precautions. Even fewer companies are able to build in-house solutions to handle data floods of such volume.

Slow Yet Thorough – How to Scrape a LinkedIn Profile

The news of the attack came via email from a concerned author at PrivacyShark, who saw a list of LinkedIn user data for sale on a hacker forum. Due to the hack, private emails and phone numbers were hosted online, available to malicious actors for spam and identity theft.

The issue of identity theft is serious: it leads to losses on the order of 56 billion USD, as reported by CNBC, and the total number of US citizens hit by an identity fraud attempt is on the order of 45 million. If there is one thing we can take as certain, it is that data in circulation is being put to use by criminals at ever faster rates. Furthermore, attackers are using new approaches to access user data that may occupy a legal grey area, such as automated scraping.

This activity poses some interesting legal questions. LinkedIn is currently involved in a Supreme Court case that seeks to define online scraping as illegal. If the ruling is in LinkedIn’s favor, scraping their platform or other social websites could be deemed criminal activity.


An Alibaba Hack Leads to New Laws in China

The attack on Taobao, part of Alibaba, led to criminal prosecution and jail time for the attacker as well as his employer. Personal data was siphoned out of the system over eight months by an employee of a consultancy firm.

The data was supposedly not sold online. The judge handed down jail terms of three years, with fines totaling 70K USD. In the aftermath of this case, China introduced new data protection laws, granting the state the ability to shut down services at will or fine companies found mishandling core state data.

Subsequently, a personal information protection policy is also in the works as the government is heavily invested in IT infrastructure. This law will give immense power to officials running the country.

It is worth noting that security issues can lead to significant changes in federal and global laws. With IT security being considered at legislative levels, cybersecurity is an increasingly important subject for lawmakers to understand. After all, if those crafting and implementing new laws do not understand what they are doing, how can they make an informed decision on the matter?

npm Hosting Crypto Mining Malware

With over six million weekly downloads, UAParser.js is a popular package used by developers all around the world. However, malicious versions of this package entered the registry, likely through a hijacked account.

All computers running the compromised versions served as open hosts to malware and trojans, starting a vicious cycle of infestation; this was an attack placed deep in the supply chain.

“The malicious versions were found to steal data (including passwords and Chrome cookies, perhaps much more) from computers or run a crypto-currency miner.” (Hackaday)

The attack was brought to public attention immediately, and users could mitigate the issue once it was identified. It’s not yet clear how big an impact this caused in the real world.

The important takeaway from this story is that supply chain attacks that lead to ransomware are easier than ever (remember Kaseya?) and do real harm. It only shows that even developers, who supposedly know a thing or two about security, can be vulnerable too.

An important element of this story is that once the attack was confirmed, the npm registry pulled all infected packages. Swift action can be a deciding factor in how well cybersecurity issues are resolved and how companies recover.

Conclusions

Data safety, compliance, and security for sensitive information are prime topics for every industry touched by digital transformation. To create a secure ecosystem, it is important to know not only the systems we create but also the attacks and their outcomes for end users. It’s crucial for users and designers to tread carefully when securing a system.

A leaked email may be relatively mild on the scale of hacking worries. Leaking credit card data or social security numbers, on the other hand, has real-world implications. Since the pandemic and the global drift toward remote work, hackers have developed new methods of stealing user data and money with each passing month.

Several organizations were not prepared for the move toward digitized platforms or for the predators lurking in the network. With cyberspace full of technologically advanced attackers, it is ever more important to stay on the safe side, with multiple layers of protection and strong IT practices.

The next entry in the Security Monthly series will describe ransomware attacks, as well as new attacks that use AI – stay tuned!

Article written by Piotr Słupski

3-2-1 Backup Strategy – Part 2: Mobile Devices

Banner for part 2 of 3-2-1 Backup Strategy series (Mobile Backup)

Back Up Your Mobile Device

Following our first article on the 3-2-1 Backup Strategy, we are now going to discuss backing up your mobile devices (smartphones, tablets, etc.).

While both Android and iOS devices have their own cloud storage backup solutions (Google Drive and iCloud, respectively), we will focus on local backups. These services can act as your third device/location backup solution. I normally prefer not to depend on third-party services, since they typically offer less user control over data and may pose privacy concerns.

Android Device

With Android devices, we have a few options to back up files with a hard drive:

  • If your phone supports USB OTG, you can connect your hard drive directly to the mobile device.
  • Connect your mobile device and your hard drive to your computer, creating a bridge between your devices.
  • Wirelessly connect your hard drive or NAS to your local network (preferred option).

While the first two options are simple enough, they do require you to manually copy your data to your hard drive on a regular (daily or weekly) basis.

The third option, wirelessly connecting or “syncing” your hard drive or NAS, enables you to implement an automatic backup process; this process can grant you peace of mind without needing to rely on manual backups. With a NAS, the storage device is readily available. However, if you only have your desktop/laptop and an external hard drive, you can still automate backups.

How to Wirelessly Sync Android Device Files

The first thing you need to do is make your hard drive accessible on your local network. First, connect your hard drive to your desktop/laptop computer. Next, go to Windows Explorer/Finder (macOS) and “share” your hard drive over your local network. (This process may vary depending on your operating system version and type.)

Screenshot of Microsoft Advanced Sharing Dialog Box

Then sync your files from your Android device. I recommend installing FolderSync on your device to initiate the sync process. There are Free and Pro versions, with the Pro version offering more control over synchronization. Add your hard drive as a device (while connected to the same network as the desktop/laptop hard drive). Usually, you can do this by adding a new SMB (Server Message Block) device:

Screenshot of Dialog Box to add new SMB device

Simply fill in your computer username/password and enter the computer IP address (local network):

Screenshot of dialog box to Identify/name an SMB device

You can test the connection to verify and save your information. That’s all you need to do to connect the hard drive. Now, let’s create a sync pair:

Screenshot of dialog box - create a sync pair between hard drive and local network

Here is where “Sync in the background” can be configured. I recommend the following settings:

  • Sync Type: Remote folder. (If you change anything on your backup device, it will not delete the file on your mobile device.)
  • Remote Folder: Choose a path on your hard drive that will store your backup files.
  • Local Folder: In the screenshot, I chose the default DCIM folder to back up my pictures. I recommend adding separate sync pairs for other folders (e.g., Downloads, Screenshots, Pictures, etc.).
  • Use Scheduled Sync: Yes/Daily. This ensures the sync operation happens every day.
  • Sync Subfolders/Hidden Files: Yes. This makes sure you back up everything in the folder.
  • Sync Deletions: No. This is very important: if you create large files (like 4K videos) on your phone and sync them to your hard drive, you can then delete them from your phone to save space, knowing you have a backup on your hard drive.
  • Connection Settings: Sync only over Wi-Fi. To be more specific, you can enter your home Wi-Fi SSID to ensure the sync only runs over your home network.

Screenshot of network settings to enforce syncing only over same wi-fi

Once you complete the configuration, your mobile device will sync to your hard drive whenever both are connected to the same home network and Wi-Fi.

iOS Device

For iOS devices, your options are more limited for local backups:

  1. Connect your device to your computer and back up using iTunes (over USB or wirelessly).
  2. Use third-party applications or services to connect to local NAS.

iOS makes it simple to back up your device to your local computer using iTunes; connecting your device and choosing “back up to local computer” will copy your entire device.

Screenshot of iTunes Backup menu

Unfortunately, this syncs your files to the local computer, not the external hard drive. You will need to manually back up your data to the hard drive by copying/syncing the backup folder from the AppData directory “%appdata%\Apple Computer\MobileSync\Backup”. On a macOS computer, you can retrieve the data from “~/Library/Application Support/MobileSync”.
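
If you would rather script that manual copy, here is a minimal Python sketch (the source path is the Windows default quoted above; the destination is a hypothetical external drive letter, so adjust both for your setup):

    # Copy the iTunes MobileSync backup folder to an external hard drive.
    import os
    import shutil

    SOURCE = os.path.expandvars(r"%APPDATA%\Apple Computer\MobileSync\Backup")
    DEST = r"E:\Backups\MobileSync"  # hypothetical external drive path

    # dirs_exist_ok (Python 3.8+) lets repeated runs refresh an existing backup.
    shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)
    print("Copied", SOURCE, "->", DEST)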

Syncing with a third-party application varies widely since different NAS brands often use proprietary software included on the drive. For example, if you have a Synology NAS, you can use the Synology Moments application to back up your pictures (though not other files). As a result, we will not cover this topic in this article. In our next and final article related to the 3-2-1 backup strategy, we will cover syncing backups with third-party cloud storage.

Article written by Daniel Alarcon

3-2-1 Backup Strategy – Part 1: Desktop/Laptop Computers

Many people know that backups are a good idea, yet not everyone acts to ensure their data is backed up in case a device is stolen, lost, or damaged. By device, we’re not specifically talking about your smartphone; this term includes your laptop, tablet, video camera, etc. — basically any piece of technology that stores data.

Backups are not generally enjoyable. However, once you configure a backup strategy and automate the process as much as possible, you can attain peace of mind as well as effective protection for when the unexpected happens.

This is going to be a three-part article. In this first installment, we will go through the 3-2-1 strategy and explore how to back up your desktop and laptop files. In the second and third parts, we will explain how to back up your mobile devices and how to use online backup services.

What is the 3-2-1 Backup Strategy?

The 3-2-1 strategy means that your data resides on three different devices. For example, the pictures stored on your mobile device count as one copy. If you duplicate those pictures to your home desktop computer, that makes two copies across two different devices. The third copy should be “off-site”: if your pictures are synchronized with an online service, that counts as your third copy.

In summary:

  1. The file stored in the original device is #1.
  2. The copy stored on your local computer, local server, Network Attached Storage (NAS), etc. is #2.
  3. The copy in an off-site location (via an online service or hard drive stored elsewhere) is #3.

The 3-2-1 strategy is not the only option to protect your files, nor is it the ultimate solution. This is the recommended starting point when it comes to safeguarding your valued data.

Create a Simple Process to Back Up Data

Establishing this process involves two main devices: a smartphone and a laptop or desktop computer. To back up the data on your main devices, you will need a secondary device. In this case, an external hard drive would be appropriate.

Should I Use a Hard Drive or a NAS?

This will depend mostly on preference and how much data you need to back up.

Advantages of a hard drive:

  • Simple to use — plug into the computer via USB, and sync your files.
  • Space — uses very little space in your home/office.
  • Efficient – the hard drive only uses energy when it is connected to the computer.
  • Affordable — a 2 TB hard drive can be found for around $50.

Advantages of a NAS:

  • Network available — no need to connect via USB; you can connect over your local network.
  • Redundancy — set up a second drive to serve as a copy of the first one in case the first fails.
  • Larger capacity and expandability — use disks up to 14 TB and can upgrade these disks at any time.
  • Accessible – connect the NAS to the internet to store and access data from outside your home/office.
  • Synchronization — the NAS can be synchronized with a cloud service storage provider.

On the other hand, using a NAS has some disadvantages:

  • Higher price — to start using a NAS, you will need the NAS enclosure and at least two hard drives, which can add up to around $150 – $200.
  • More complex setup — you need to install and configure your NAS, which can take at least one to two hours.
  • Always on – a NAS is basically another small computer connected to a power source and network.

We will review NAS storage in a future article. For now, we will focus on getting started with an external hard drive.

How Large Should an External Hard Drive Be?

This will depend on each user. For example, if you are the only person living in your household, you can do some simple math:

1 Laptop (1 TB total disk capacity) + 1 Mobile Device (256 GB) + 1 Tablet (256 GB) = 1.5 TB -> 2 TB

This means that a 2 TB hard drive should be large enough to meet your storage needs. If there is more than one person in your household, adjust this equation to reflect your requirements and invest in a larger hard drive or multiple drives.

How Do I Back Up My Files Using a Hard Drive?

There are several options to back up your data on a laptop or desktop computer (backing up a mobile device or tablet will be covered in a future article):

  1. Copy/Paste your files.
  2. Use an application to sync your files.
  3. Use your OS backup solution to copy your data.

The copy/paste option is the simplest way to back up your data. Connecting your hard drive to your computer enables you to copy and paste files directly into the hard drive folder. The drawback, however, is that this method requires user intervention and is often forgotten.

Use an Application to Sync Files

There are several applications available that can back up your data to your hard drive. Hard drives may come with an application pre-loaded, depending on the brand. Personally, I’ve used FreeFileSync to copy my computer files to the external hard drive.

FreeFileSync is a nice tool because it offers full control over how I want to back up my data, including removing files from the computer as they’re copied to free up space. (If you’d rather script a simple sync yourself, a minimal example follows the list below.) At a minimum, it’s important to back up your important data, often found in these folders:

  • My Documents
  • Desktop
  • Downloads
  • Any User-Created Folder
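
As promised, here is a minimal Python sketch of a do-it-yourself sync script for those folders. It copies only files that are missing from the backup drive or newer on the source; the destination path is a hypothetical external drive, and this is an illustration rather than a replacement for a dedicated tool:

    # Mirror a few important folders to an external drive, copying only
    # files that are new or have changed since the last run.
    import os
    import shutil

    FOLDERS = ["Documents", "Desktop", "Downloads"]
    HOME = os.path.expanduser("~")
    DEST_ROOT = r"E:\Backup"  # hypothetical external drive

    for folder in FOLDERS:
        src_root = os.path.join(HOME, folder)
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            dest_dir = os.path.join(DEST_ROOT, folder, rel)
            os.makedirs(dest_dir, exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                dest = os.path.join(dest_dir, name)
                # Copy if missing on the backup drive or newer on the source.
                if (not os.path.exists(dest)
                        or os.path.getmtime(src) > os.path.getmtime(dest)):
                    shutil.copy2(src, dest)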

Use Your OS to Run a Backup

If you are using Windows or macOS, you can enable the built-in backup function. For Windows, you can use the built-in backup solution:

If you are using BitLocker to encrypt your files, you will only be able to back up the file history versions, not the live versions.

For macOS, you can use Time Machine. The macOS solution is far more powerful than the Windows option. Time Machine allows you to back up files, applications, and application settings, among other data. If you need to restore your data on a new computer, everything will be readily available.

Whether you sync your files using FreeFileSync, Windows Backup, macOS Time Machine, or another backup tool, ensure your computer backs up every day to avoid losing relevant or frequently used files.

In the next article, we will cover backing up mobile devices (Android or iOS).

Article written by Daniel Alarcon

FileCloud Single Sign-on with YubiKey and ADFS

What is SSO?

Single sign-on (SSO) is an authentication scheme that allows a user to log in with a single ID and password to any of several related, yet independent software systems.

True single sign-on allows the user to log in once and access services without re-entering authentication factors.

What is Two-factor Authentication?

Two-factor authentication (2FA), sometimes referred to as two-step verification or dual-factor authentication, is a security process in which users provide two different authentication factors to verify themselves. This process is done to better protect both the user’s credentials and the resources the user can access.

What is ADFS?

Active Directory Federation Services (ADFS), a software component developed by Microsoft, runs on Windows Server operating systems and provides users with single sign-on access to systems and applications located across organizational boundaries.

What is a YubiKey?

Yubico offers different types of “YubiKeys”. The most recent release is the YubiKey 5 Series, which comes in USB-A, USB-C, Lightning, and NFC.

The YubiKey is a device that makes two-factor authentication as simple as possible. Instead of a code being texted to you or generated by an app on your phone, simply plug in your YubiKey and press a button. Each device has a unique code built into it, which generates additional codes that help confirm your identity.

YubiKey is used by leaders in the tech industry across widely recognized platforms and software services, including Microsoft, Google, Amazon, eBay, GitHub, Citrix, Salesforce, Dropbox, Facebook, and Twitter.

Set Up FileCloud SSO with ADFS and YubiKey as a 2FA method

  1. Add YubiKey as a two-factor authentication method to ADFS 2019 by following the steps described here.
  2. Find the GitHub Code here.
  3. Add custom attributes to Users in Active Directory by following the steps described here.
  4. Enable SAML SSO in FileCloud using the steps described here.
  5. Set Up FileCloud SSO with ADFS using the steps described here.

FileCloud SSO with ADFS and YubiKey

When the user plugs in their YubiKey and presses the button to generate a token, the first 12 characters of the code are the YubiKey ID. ADFS compares these first 12 characters with the YubiKey ID stored in the custom attribute. If they match, ADFS sends an API call to the YubiKey cloud API gateway, which confirms whether the code is valid.

Once validated, the SSO session is confirmed. The user is redirected to their FileCloud dashboard. The whole process is easy, fast, and secure.
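
For readers who want to see the shape of that check, here is a minimal Python sketch (assuming the requests package and a hypothetical YubiCloud API client ID; production code would also verify the response signature and handle errors):

    # Compare an OTP's 12-character public ID with the ID stored for the user,
    # then ask the Yubico cloud validation service whether the code is genuine.
    import secrets
    import requests

    VERIFY_URL = "https://api.yubico.com/wsapi/2.0/verify"
    CLIENT_ID = "12345"  # hypothetical YubiCloud API client ID

    def otp_is_valid(otp, registered_yubikey_id):
        # The first 12 characters of every OTP identify the physical key.
        if otp[:12] != registered_yubikey_id:
            return False

        # The validation service replies with key=value lines, e.g. status=OK.
        nonce = secrets.token_hex(16)
        resp = requests.get(VERIFY_URL,
                            params={"id": CLIENT_ID, "otp": otp, "nonce": nonce})
        fields = dict(line.split("=", 1)
                      for line in resp.text.splitlines() if "=" in line)
        return fields.get("status") == "OK" and fields.get("nonce") == nonce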

Article written by Wail Bouziane

A Brief History of Backend Data Security

Software is not like wine and cheese: it does not get better with age. On the contrary, security strength decreases over time because of software obsolescence. Data security has always been important, but with more people working remotely as a result of the current health crisis, there are more opportunities for unauthorized access to your data than ever before.

Security is a group effort, since the weakest link is the point of entry. According to a study conducted by IBM and The Ponemon Institute, the two root causes of data breaches in 2020 were compromised credentials (most often due to weak passwords) and cloud misconfigurations (leaving sensitive data accessible). According to Gartner, in 2021 exposed APIs will pose a larger threat than the UI in 90% of web-enabled applications. Organizations spend time and effort securing information on the front end, but attackers claw their way into the system anyway. Businesses need to set up another check on the way out of the network. In other words, if you miss a thief on the way in, you can still catch them on the way out: if attackers access confidential information, it has value only if they can transfer it to their own systems.

Database security is a complex process that involves all aspects of information security technologies and practices. It’s also usually at odds with database usability. The more accessible and easier it is to use the database, the more vulnerable it is; the more invulnerable the database is to threats, the more difficult it is to access and use. This paradox is called Anderson’s Rule.

Cybersecurity Evolution Over the Years

Let us take a look at how data security evolved over the decades. There are a few good stories in there you will enjoy reading.

1940s

Access to the giant electronic machines of the era was limited to a small group of people, and the machines weren’t networked. Only a few people knew how to operate them, so there was no imminent threat. The theory of computer viruses first appeared in 1949, when computer pioneer John von Neumann posited that computer programs could reproduce themselves.

1950s

The roots of hacking are as much related to telephones as they are to computers. In the late 1950s, ‘phone phreaking’ was predominant. The term encapsulates several methods that ‘phreaks’ (people with an interest in the workings of telephones) used to override the protocols that allowed telecom engineers to work on the network remotely, letting the phreaks make free calls.

1960s

Most computers in the early 1960s were still huge mainframes, put away in secure, temperature-controlled rooms. These were very costly, so accessibility, even for admins, was limited. Back then, attacks had no commercial or geopolitical purpose. Most hackers were curious people or people who wanted to improve existing systems.

1970s

Cybersecurity actually began in 1972 with a project on ARPANET (The Advanced Research Projects Agency Network), a precursor to the internet. Researcher Bob Thomas came up with a computer program called “Creeper” that could travel within ARPANET’s network, leaving breadcrumbs wherever it went with the message: ‘I’m the creeper, catch me if you can’. Ray Tomlinson (the inventor of email) wrote another program called Reaper, which chased and deleted Creeper. Reaper was the first antivirus software; it was also the first self-duplicating program, making it the first-ever computer worm.

1980s

The 1980s saw an increase in high-profile attacks, like those at National CSS, AT&T, and Los Alamos National Laboratory. The terms “Trojan horse” and “computer virus” were first used in the 1980s as well, and cybersecurity started to be taken more seriously. Tech users quickly learned to monitor file sizes, having learned that an increase in a file’s size was the first sign of potential virus infection. Cybersecurity policies incorporated this, and a reduction in free operating memory remains a sign of attack to this day. Early antivirus software incorporated simple scanners that performed context searches to detect virus code sequences. Most of the scanners also included “immunizers” that made viruses think the computer was already infected so they would not attack it (similar to vaccines).

1990s

New viruses and malware multiplied in the 1990s, from tens of thousands of samples to around 5 million appearing every year by 2007. By the mid-’90s, it was clear that cybersecurity had to reach the mass market to protect the public. One NASA researcher developed the first firewall program, modeling it on the physical structures that prevent the spread of actual fires in buildings. By the end of the 1990s, email was booming, and while it promised to revolutionize communication, it also opened up a new entry point for viruses.

2000s

With the internet becoming a household fixture in the early 2000s, cyber-criminals had more vulnerabilities to exploit than ever before. As more and more data was stored digitally, there was more to hack. In 2001, a new infection technique surfaced: people no longer needed to download anything; visiting an infected website was enough. Viruses infected clean pages or ‘hid’ malware on legitimate web pages. Messaging services were also targeted, and worms were designed to propagate via IRC (Internet Relay Chat) channels. The development of zero-day attacks, which exploit gaps in security software and applications, meant that antivirus became less effective.

2010s

Cybersecurity tailored specifically to the needs of businesses became more prevalent in 2011. As cybersecurity developed to handle a wide range of attack types, attackers responded with innovations of their own: multi-vector attacks and social engineering. Attackers were smarter, and antivirus was forced to move from signature-based detection methods to next-gen innovations.

What the Backend Looks Like

Security is something that should be included in all stages of software engineering, including architecture. Let us first understand how the back end functions. Applications and front ends never access the database directly. The architecture usually follows a master-slave approach, with an app server in between where the data is scrubbed (to protect personal data or PII) before it is sent to the front end.

Fig. Classic backend security design patterns (source: Cossack Labs on Medium)

So it is best to distribute security handling, since there is no single solution. Most applications are designed so that the people responsible for data management (application admins) are not given access to the underlying database, while the people who have data access (data scientists, info-sec personnel, etc.) are not included in the business end of operations. The primary reason for this is auditing: people who change data can do so only through the front end, which leaves an audit trail of actions taken. Having an audit trail keeps the application admins accountable, and it prevents them from looking at things they shouldn’t be looking at. Companies also prefer to keep their architecture secret, since one of the ways to discover a vulnerability in a system is to understand its underlying architecture.

Common Threats to Data Security

We will now go through some common threats to data security and how you can mitigate them.

  • Injection Flaws – These happen when you pass unfiltered data to the SQL server, the browser, the LDAP server, or anywhere else. The problem is that an attacker can inject commands, resulting in loss of data. Organizations that do not follow secure coding practices and do not perform regular vulnerability tests are open to these attacks. (A mitigation sketch in code follows this list.)
  • Broken Authentication – Authentication is the first line of defense against unrestricted access. However, a poor implementation with no proper security policy in place can lead to broken authentication. You can avoid it by implementing multi-factor authentication, enforcing a good password policy, limiting the number of failed logins, and incorporating session timeouts.
  • Cross-Site Scripting (XSS) – This occurs when an attacker posts data containing malicious code that the application stores. The vulnerability lies on the server side; the browser simply renders the response. You can mitigate it by validating input (check input length, use regex matching, and permit only specific characters) and by validating output (HTML-encode data to sanitize potentially malicious characters).
  • Insecure Direct Object References – An internal object, such as a file or database key, is exposed to the user. The problem is that an attacker can supply this reference and, if authorization is broken, access the data and manipulate or steal it. This can be avoided by keeping such references internal rather than passing them from the client via CGI parameters. Most frameworks have session variables that are well suited for this purpose.
  • Security Misconfiguration – This is the implementation of improper security controls for servers or application configurations. Running the application with debugging enabled in production, leaving directory listing enabled on the server (which leaks valuable information), running outdated software, or leaving unnecessary services running on the machine can all lead to security vulnerabilities. A simple safeguard against security misconfiguration is post-commit hooks, which prevent code from going out with default passwords.
  • Sensitive Data Exposure – This occurs when information is not properly protected by the application. Credentials or sensitive data like credit cards or health records are the usual targets of this vulnerability; more than 4,000 records are breached every minute. You can mitigate it by encrypting data both at rest and in transit. Incorporate key-based encryption and have a secure backup plan.
  • Missing Function Level Access Control – This can happen due to authorization failure at the server. You cannot keep an attacker from discovering hidden functionality and misusing it, so authorization must always be performed on the server side before giving any access; otherwise this vulnerability will result in serious problems.
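
To make two of these mitigations concrete, here is a minimal Python sketch using only the standard library (the table and field names are invented for illustration). It shows parameterized SQL queries against injection and HTML-encoding of output against stored XSS:

    import html
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE comments (author TEXT, body TEXT)")

    def add_comment(author, body):
        # UNSAFE would be: "INSERT ... VALUES ('%s', '%s')" % (author, body)
        # SAFE: placeholders let the driver escape the values for us.
        conn.execute("INSERT INTO comments (author, body) VALUES (?, ?)",
                     (author, body))

    def render_comments():
        # HTML-encode stored data on the way out so an injected <script>
        # tag renders as inert text instead of executing in the browser.
        rows = conn.execute("SELECT author, body FROM comments").fetchall()
        return "\n".join("<p><b>%s</b>: %s</p>" % (html.escape(a), html.escape(b))
                         for a, b in rows)

    add_comment("mallory", "<script>steal(document.cookie)</script>")
    print(render_comments())  # the script tag is escaped, not executed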

Some Basic Security Practices to Cover all the Bases

Even a small error can allow attackers to hijack database systems, at a cost that can run to millions. To prevent such consequences, organizations should adopt an “everything will be broken” threat model to secure databases and prevent valuable information from being compromised. Below are a few of the basic security measures you can take to keep your organization’s database safe.

Separate Web Servers and Databases

Keep the two servers (application and database) on separate machines. A hosting server can be used for the application, but for storing customers’ valuable data, choose a separate database server with security features like multi-factor authentication and proper access permissions. Hosting applications and databases on the same machine makes it easier for attackers to break into the system and hack the administrator account.

Firewalls and Malware Solutions

Once the database is set up, it is important to ensure that it is fully protected by a firewall capable of filtering outbound connections and any requests meant to access information. The database server should also be protected from malicious files by installing anti-malware and anti-ransomware software.

Encryption and Backups

Encryption consists of protecting the data with a private key on the application server or the database server, so even if attackers gain access to the database, they cannot easily decrypt the data. Encryption of data in transit should also be implemented, so that data is encrypted before it is transferred over the network between the application server and the database server.
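
As a minimal illustration (using the Python cryptography package; in practice the key would come from a secrets manager rather than being generated per run), application-side encryption at rest might look like this:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice, load this from a secrets manager
    cipher = Fernet(key)

    card_number = "4111 1111 1111 1111"
    stored_value = cipher.encrypt(card_number.encode())  # what the database sees
    print(stored_value)  # opaque ciphertext

    recovered = cipher.decrypt(stored_value).decode()  # only the app can do this
    assert recovered == card_number

The database server only ever stores the opaque ciphertext, so a stolen database dump is useless without the application’s key.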

Account Management

Organizations should keep the number of users who can access the database to a minimum (usually data scientists or infosec personnel). A proper authentication process (2FA, MFA, etc.) should be implemented for those users. Credentials should be stored in a hashed format so they are unreadable, and activity logs should be updated regularly to monitor all queries and requests.
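
Here is a minimal sketch of the hashed-credentials idea using only the Python standard library (PBKDF2; the iteration count is an illustrative default):

    import hashlib
    import hmac
    import os

    def hash_password(password):
        salt = os.urandom(16)  # unique salt per credential
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600000)
        return salt, digest    # store both; the plaintext is never written

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600000)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("guess", salt, digest)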

Updated Operating Systems and Applications

All third-party software, APIs, and plugins must be updated to their latest versions. These systems should be updated regularly, or whenever new patches are released, to ensure the system is immunized against newly discovered cyber threats.

Conclusion

Backend data protection is very important, and it is critical for your sensitive data, especially with new data protection policies in place all over the world. Using the best security practices, we can stop the most anticipated risks and lay a foundation for genuinely solid security for your product.

Keeping your Cloud Infrastructure Secure Using SIEM

What if instead of building a solution that processes and collects logs and security events, you could push the problem to the cloud through an encrypted channel? As a result, you would easily get detailed reports about threats to your company.

In this article, we will examine the topic of SIEM (Security Information and Event Management) and explain what SIEM is and what we gain from such a system. We will also list a few principles helpful in its implementation in the cloud model.

So, what’s the commotion?

SIEM is a multi-component security system for monitoring and analysis designed to help organizations detect threats and mitigate the effects of attacks. It combines several disciplines and tools under one coherent system:

  • Log Management System (LMS) – Tools used for traditional log collection and storage.
  • Security Information Management (SIM) – Tools or systems that focus on collecting and managing security-related data from multiple sources, such as firewalls, DNS servers, routers, and anti-viruses.
  • Security Event Management (SEM) – Systems based on proactive monitoring and analysis, including data visualization, event correlation and alerting.

SIEM is a term used today for a management system that combines all the above elements into one platform which knows how to automatically collect and process information from distributed sources, store it in one centralized location, compare various events and generate alerts based on this information.

The evolution of SIEM over time

SIEM is not a new technology. The platform’s core capabilities have existed in various forms for almost 15 years. Formerly, SIEM relied on local deployments to get a unified overview. This meant that hardware upgrades, data analysis, and scaling problems required constant tuning to achieve maximum performance. Modern SIEM tools focus on native sourcing support for cloud hosting providers. They also collect endpoint data such as parent/child processes into the flow to offer nuanced detection support – essential for compliance.

Why do we need SIEM?

No one doubts that the number and variety of attacks on information systems are constantly growing. System and network monitoring has always played a key role in protecting against attacks. Many interrelated attack methods and techniques have evolved over the years, and it has become apparent that the changing nature of cybercrime means some threats often go unnoticed.

For security analysts, SIEM systems are the central vantage point of the IT environment. By centralizing all the data that measures the health and security of your systems, you can have real-time visibility of all processes and events. The ability to correlate logs from multiple systems and present them in one view is the main advantage and benefit of SIEM.

Many complex incidents may go unnoticed by the first layer of security because individual events lack context. Rules set in SIEM systems and reporting mechanisms help organizations detect events that contribute to a more sophisticated attack or malicious activity. In addition, it is possible to automatically react to an ongoing attack and mitigate its effects.
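
As a toy example of the kind of correlation rule a SIEM evaluates, the Python sketch below flags an IP address that produces several failed logins followed by a success within a short window. The event fields and thresholds are illustrative, not taken from any particular product:

    from collections import defaultdict

    FAILED_THRESHOLD = 5      # failures that make a later success suspicious
    WINDOW_SECONDS = 300      # correlation window

    def correlate(events):
        # events: dicts like {"ts": 1700000000, "ip": "1.2.3.4", "outcome": "fail"}
        failures = defaultdict(list)
        alerts = []
        for e in sorted(events, key=lambda e: e["ts"]):
            # Keep only failures that are still inside the window.
            recent = [t for t in failures[e["ip"]] if e["ts"] - t <= WINDOW_SECONDS]
            failures[e["ip"]] = recent
            if e["outcome"] == "fail":
                recent.append(e["ts"])
            elif e["outcome"] == "success" and len(recent) >= FAILED_THRESHOLD:
                alerts.append("possible brute force from %s at %s" % (e["ip"], e["ts"]))
        return alerts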

What will you gain by moving SIEM to the cloud?

Cloud-based solutions provide the flexibility to use a wide range of datasets in both on-premises and cloud-based systems. As more and more companies work in models such as infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), the ease of integration with third-party systems makes SIEM in the cloud even more sensible. The most important benefits of moving SIEM to the cloud are the flexibility provided by a hybrid architecture, automatic software updates, simplified configuration, scalable infrastructure, extensive options for adjusting the system to individual needs, and high availability.

5 rules to help implement SIEM in the cloud model

In order to fully use the potential of SIEM, particularly the versions intended for enterprises, you need a good action plan and a large dose of caution and vigilance. With proper implementation, SIEM can transform an IT department from an infrastructure-based model to an information-centered model.

Implementing and managing SIEM in the cloud increases accessibility, efficiency, and ease of management, but like any technology, it has some drawbacks and pitfalls. By following a few simple rules, you can avoid them:

1. Define your goals and adapt implementations to them

Before implementing, answer these 5 questions:

  1. What do you need SIEM for? Compliance issues? BYOD? Vulnerability detection?
  2. How should SIEM be implemented to meet your expectations (what processes, functionalities and properties should be covered by the SIEM)?
  3. What should be recorded, analyzed and reported?
  4. What should be the scale of implementation to properly and cost-effectively meet your business needs?
  5. Where is the data that should be monitored? 

2. Incremental use.

The quickest way to succeed is to start with small steps and broaden your scope over time. In some cases, this may mean starting with log management and adding a SIEM once you understand the requirements, volume, and needs. Now that security as a service enables a flexible and scalable approach, the starting point may be to launch a SIEM within the scope of the regulations and standards you must comply with, or within individual areas, departments, or units.

3. Define an incident response plan.

You should plan and define the actions to be taken when an incident attracts your attention. Do you investigate, suspend the user, deactivate the password, deny service to a particular IP address, or apply other corrective measures based on the severity of the threat, the level of vulnerability, or the identity of the attacker? A well-defined incident response plan allows you to manage vulnerabilities in your network and ensure compliance with requirements.

4. Real-time monitoring, 24/7/365.

This can be a challenge for many organizations, but hackers never sleep. Even though SIEM is a highly automated solution, it requires constant vigilance and monitoring by a human 24 hours a day, and many IT departments do not have sufficient resources for this. Here, security as a service has an advantage over traditional solutions and allows you to sleep more peacefully at night. Knowing that this element of the security process can be handled by professionals, without the need to involve additional staff and budget, makes solutions in the cloud model worthy of interest.

5. Be cold as ice!

Soon after implementing and launching a SIEM, you may observe a completely unexpected number and variety of alarms due to malware, botnets, and a whole host of other security nightmares. It’s like viewing bedding under a microscope: you learn that you are surrounded by a lot of strange creatures that have always been there, but once you take adequate measures against them, they turn out to be not as dangerous as they looked. It is similar with the launch of a SIEM. Once you understand what constitutes a threat and how to react to it, you will be able to make intelligent decisions and automate more and more of the process.

Summary

SIEM is a security platform that processes event records and collects them in one place, offering a single view of your data with additional information.

The most important benefits of moving SIEM to the cloud include:

  • Flexibility provided by hybrid architecture
  • Automatic software updates
  • Simplified configuration, scalable infrastructure
  • Extensive options for adjusting the system to individual needs
  • High availability

To implement SIEM in your company, you need a good plan and a large dose of caution and vigilance. Remember, be cold as ice!

Article written by Piotr Slupski.

Information Security – An Overview of General Concepts

Information Security – The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability.

– Definition of Information Security from the glossary of the U.S. Computer Security Resource Center

Why we need to protect information

Information and information systems help us store and process information and distribute the right information to the right user at the right time. Information security protects that information from unauthorized access, distribution, and modification. Information is clearly an asset, and it needs to be protected from both internal and external threats.

CIA triad

The CIA triad is a commonly used model for the requirements of information security. CIA stands for confidentiality, integrity, and availability. These principles help protect information and thereby safeguard the critical assets of an organization, guarding against disclosure to unauthorized users (confidentiality), improper modification (integrity), and lack of access when access is required (availability).

Here, we’ll look at each of these concepts in more detail.

Confidentiality

Confidentiality ensures that information meant to be secret or private stays that way, employing mechanisms such as encryption that render the data useless if it is accessed in an unauthorized manner. The necessary level of secrecy is enforced, and unauthorized disclosure is prevented.

Integrity

Integrity deals with the accuracy and reliability of information and systems. Information should be protected from unauthorized modification, with safety measures in place for the timely detection of unauthorized changes.

Availability

Availability ensures that information is available when it is needed. Reliable and timely access to data and resources is provided to authorized individuals. This can be accomplished by implementing tools ranging from battery backup at a data center to a content distribution network in the cloud.
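
Redundancy is the common thread in those tools. The toy Python sketch below illustrates the principle: if one replica of a service is down, fall back to the next. The hostnames and the simulated outage are made up.

    # Availability through redundancy: try each replica until one answers.
    REPLICAS = [
        "data-center-a.example.com",  # primary (simulated as down below)
        "data-center-b.example.com",
        "cdn-edge.example.com",
    ]

    def fetch(host: str) -> str:
        """Stand-in for a real network call; the outage is simulated."""
        if host == "data-center-a.example.com":
            raise ConnectionError("primary is down")
        return f"response from {host}"

    def fetch_with_failover(replicas: list[str]) -> str:
        for host in replicas:
            try:
                return fetch(host)
            except ConnectionError:
                continue  # try the next replica
        raise RuntimeError("all replicas unavailable")

    print(fetch_with_failover(REPLICAS))  # served by data-center-b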

Balanced security

It is impossible to achieve perfect information security; information security is a process, not a goal. It is possible to make a system available to anyone, anywhere, anytime, through any means, but such unrestricted access poses a danger to the security of the information.

On the other hand, a completely secure information system would not allow anyone to access information at all. To achieve balance, an information system must satisfy both the user and the security professional: the security level must allow reasonable access, yet protect against threats.

Security Concepts

Security is often discussed in terms of vulnerabilities, threats, and risks.

Vulnerability

A vulnerability is a security weakness, such as an unpatched application or operating system, an unrestricted wireless access point, an open port on a firewall, lax physical security that allows anyone to enter a server room, or unenforced password management on servers and workstations.

Threat

A threat is the potential for someone to identify a specific vulnerability and use it against a company or individual; the entity that takes advantage of the vulnerability is called a threat agent. A threat agent could be an intruder accessing the network through a port on the firewall, a process accessing data in a way that violates the security policy, or an employee circumventing controls in order to copy files to a medium that could expose confidential information.

Risk

A risk is the likelihood of a threat agent exploiting a vulnerability and the corresponding business impact.

If a firewall has several ports open, there is a higher risk that an intruder will use one of them to gain unauthorized access to the network.

If users are not educated on processes and procedures, there is a risk that an employee will make an unintentional mistake that may destroy data.

If an Intrusion Detection System (IDS) is not implemented on a network, there is a higher risk that an attack will go unnoticed until it’s too late.
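
Risk is often quantified, at least roughly, as likelihood multiplied by impact. The Python sketch below scores the three examples above on made-up 1–5 scales; real assessments use whatever scale the organization has standardized on.

    # A rough risk-scoring sketch: risk = likelihood x impact.
    # The 1-5 scales and the example scores are purely illustrative.
    risks = [
        {"name": "open firewall ports", "likelihood": 4, "impact": 4},
        {"name": "untrained users",     "likelihood": 5, "impact": 3},
        {"name": "no IDS deployed",     "likelihood": 3, "impact": 5},
    ]

    for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
        score = r["likelihood"] * r["impact"]
        print(f"{r['name']}: {score}")  # address the highest scores first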

Exposure

Exposure is an instance of being exposed to losses from a threat agent. If users’ passwords are exposed, they may be accessed and used in an unauthorized manner.

Countermeasure

Countermeasures are put into place to mitigate potential risks. A countermeasure may be a software configuration, a hardware device, or a procedure that eliminates a vulnerability or reduces the chances that a threat agent will be able to exploit one. Examples of countermeasures are strong password management, firewalls, security guards, access control mechanisms, encryption, and security awareness training.
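
As a small example of the first countermeasure on that list, here is a sketch of an automated password-policy check. The specific rules are illustrative, not a recommendation.

    # Enforced password management as code: reject passwords that fail a
    # basic policy. The rules below are illustrative only.
    def policy_violations(password: str) -> list[str]:
        problems = []
        if len(password) < 12:
            problems.append("shorter than 12 characters")
        if password.lower() == password or password.upper() == password:
            problems.append("must mix upper- and lower-case letters")
        if not any(c.isdigit() for c in password):
            problems.append("must contain a digit")
        return problems

    print(policy_violations("hunter2"))                # fails two rules
    print(policy_violations("Correct4HorseBattery!"))  # passes: []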

Security Governance

Information security governance is the collection of practices related to supporting, defining, and directing the security efforts of an organization. Security governance is closely related to and often intertwined with enterprise and IT governance.

 

This article was written by Catherin S. 

References

NIST Computer Security Resource Center. “Information Security.” CSRC Glossary. https://csrc.nist.gov/glossary/term/information_security

Devopedia. (2020, July 21). “Information Security Principles.” Version 4. Accessed 2021-03-28. https://devopedia.org/information-security-principles

Maymi, F., & Harris, S. (2018). CISSP All-in-One Exam Guide (8th ed.). McGraw-Hill Education.

Vi Minh Toi. (2016, September 10). “Security Risk Management, Tough Path to Success.” https://www.slideshare.net/sbc-vn/vi-minh-toi-security-risk-management-tough-path-to-success

Finding a Safe Place for Your Data and Software


Your organization runs on data and software. But this whole IT environment needs to live somewhere. Preferably a safe place that no unwanted people can access.

What options do you have? How should you choose where to host your data and your software?

In this article, we’ll explore these topics in-depth, hopefully giving you that bit of additional information that you need to choose a safe place for your IT environment.

 

Where can you host your software/data?

The traditional way is to host it on your own servers, which is called on-premise hosting.

It’s private by nature because the whole infrastructure is dedicated only to your company. The software literally lives on your own machines, along with the data and all of your intellectual property. The servers don’t actually need to be located at your headquarters; they’ll probably be in a dedicated data center.

The “new” (it’s not that new and pretty much standard by now) way to manage your IT resources is cloud hosting.

It’s public by default because it’s provided by a company like Amazon or Microsoft, whose insane server power is shared by all of their customers. But it can be private because cloud providers offer the option to get a share of their servers dedicated only to your company.

Finally, you can also mix the different options, and then you get hybrid hosting. There are a lot of ways to organize a hybrid solution, with different combinations of hardware and software. Choosing one cloud provider doesn’t lock you in, either; you can combine services from multiple providers.

How much control do you need?

When it comes to hosting your software and data, available server options generally fall into these categories:

  • Control the hardware, control the software
  • Control the hardware, outsource the software
  • Outsource the hardware, control the software
  • Outsource the hardware, outsource the software

Control the hardware and software

If you need to control and customize the performance of your physical servers, as well as the software that runs on them, the go-to choice is on-premise hosting.

Control the hardware, outsource the software

What if you need to control the hardware, but you want the same workload management experience that’s offered by big cloud providers? There are ways to run, for example, AWS services on your own on-premise servers (AWS Outposts is one such offering). What’s available in this area varies from provider to provider.

Outsource the hardware, control the software

Your server workloads are pretty typical, you don’t need custom hardware for your IT environment – but you want to use, for example, FileCloud to share and manage your organization’s data. You can easily run FileCloud on AWS, as well as other services that you might need.

Outsource the hardware and software

This is probably the most popular solution at the moment for non-enterprise companies. You just spin up a server instance at your favorite cloud provider and manage it using the software tools they provide. Use it to host your data, your ERP system, or your SaaS, without worrying about the server infrastructure.
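
For a sense of how little friction that involves, here is a sketch of launching a single instance on AWS using the boto3 SDK. The region, AMI ID, and key pair name are placeholders you would replace with your own.

    import boto3  # AWS SDK for Python: pip install boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Launch one small instance. ImageId and KeyName are placeholders.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
        InstanceType="t3.micro",
        KeyName="my-key-pair",            # hypothetical key pair name
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])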

Comparing hosting options – On-Prem vs Cloud vs Hybrid

On-premise

So far we know that on-premise hosting is private (dedicated only to your company), with your IT environment living on your own physical servers.

But when should you use on-premise hosting? Modern tech companies usually start in the cloud and sometimes move to on-prem later.

Take the case of Instagram: after Facebook bought them in 2012, they migrated to Facebook’s own infrastructure.

(But they also branched out to data centers around the world to ensure that all of their users get a good experience, so they’re definitely not on-prem only.)

Companies and enterprises that have been around for decades tend to go from on-prem to adding a bit of cloud, or migrating to the cloud completely.

Like when AdvancedMD moved to the cloud. AdvancedMD is a provider of digital healthcare services that’s been around since 1999, which makes it a great example: the most common argument for on-premise hosting is that it’s the most secure option for highly sensitive data, and AdvancedMD runs on healthcare data, which is extremely sensitive. Yet nothing tragic happened when they migrated to the cloud.

As AdvancedMD shows, security by itself is no longer the dealbreaker it once was. Both on-premise and cloud hosting can safely store sensitive data.

So the choice between on-prem and cloud is more about control and/or customization.

For the highest degree of control, and the ability to customize literally every part of your infrastructure, on-prem is the right option. Long-term cost management is also easier; however, it takes a large upfront investment to build your on-prem hosting from the ground up.

On-prem is also a good option when you have high demands:

  • You’re constantly moving large amounts of data in and out of your servers (cloud providers can charge fees for moving data outside of your cloud),
  • You need the lowest latency possible.

One problem with on-prem is that it’s harder to scale, but you can use a cloud provider to mitigate this issue.

Cloud

You’ve probably heard the saying: there is no cloud, it’s just somebody else’s server. It’s popular for a reason, and it carries a hidden warning about your data living on somebody else’s machines.

How big is the risk that cloud providers will mismanage your data, or give someone else access to it? Unless you’re handing out access credentials to your cloud to everyone you meet, the risk is actually very small.

There is no way the cloud would’ve become the new standard for hosting if it were risky. Providers know this, and they’ve poured enormous amounts of money into making sure that your resources are safe with them.

Another issue that people often bring up when talking about the cloud is compliance with standards. But major cloud providers are audited against cross-industry IT standards such as ISO 27001 and SOC 2, so whether compliance is a real obstacle depends on your unique case.

There is a different, much more real, risk associated with the cloud – cost management.

Sure, at the start you pay much less compared to an on-premise solution. As you keep going, it’s super easy to spin up new services from a cloud provider, especially if you have a huge IT budget.

This is a benefit because you can scale up extremely easily. It’s also a problem because you might end up paying for a lot of unnecessary services.

So if you don’t want to overspend, you need to be very careful about managing your cloud infrastructure.

Choosing the cloud isn’t a problem of compliance or security, but rather a question of your unique workloads. As we learned above, on-premise can be better when you need to move huge amounts of data regularly or you need minimal latency.

For example, if your servers are just supposed to do the standard job of serving a website to people online, the cloud is the logical solution. But if you’re building a complex web application that performs heavy computations on large amounts of data, you’ll probably be better off with an on-prem or hybrid solution.

Hybrid

And so we arrive at the most common option, hybrid hosting.

The complex demands of enterprise IT environments make it almost impossible to just pick one hosting option and roll with it for eternity.

There are too many considerations:

  • Integrating with legacy software,
  • Speed vs reliability,
  • Location of data,
  • Latency…

… and so on, and different parts of a typical IT environment require varying approaches. For example, a cloud provider might work for your in-house data store, but you still need on-prem servers to run particular applications or legacy software.

Hybrid hosting is a way to address all of this complexity because you can combine multiple options to create the infrastructure that meets your requirements to the letter.

Summary

All in all, there is no silver bullet when it comes to hosting your data and software. The safest place for your IT environment might be at a cloud provider, or on your own on-premise servers. Or both.

It depends on what you need, and it turns out that security and compliance are not the biggest issues when you’re thinking about migrating to the cloud. It’s more about the type of data workloads that you have, and the requirements that result from this.

Hope this article was helpful, thank you for reading!