
Migrating VMs Between ESXi Servers Using the SCP Command

FileCloud customers may choose to run FileCloud on a virtual machine (VM) hosted on an ESXi server. At times, an ESXi server may be decommissioned, requiring a migration: when FileCloud is hosted on one ESXi server, it can be moved to another using this method. This is, in effect, a bare-metal migration.

Yet migrating VMs between VMware ESXi servers has always been difficult, at times even requiring a paid third-party application. In this blog, we discuss a simple method to transfer VMs using the basic scp command. We also ensure that the transferred VM disks are configured with thin provisioning.

Follow the steps below to migrate a VM between ESXi servers:

Enable the SSH Service on the Source and Destination ESXi Servers

To enable the SSH service, log in to the web interface of each ESXi server. Then click on Host at the top right. Click Actions -> Services -> Enable Secure Shell (SSH) (if it is not already enabled).

Enable the SSH Client Service on the Source ESXi Server

Log in to the source ESXi server over SSH using PuTTY or a similar tool. Then run the below command:

esxcli network firewall ruleset list --ruleset-id sshClient

This checks whether the SSH client ruleset is enabled. If it is disabled, the command returns ‘False’. In that case, run the next command to enable it; otherwise, proceed to the next step.

esxcli network firewall ruleset set --ruleset-id sshClient --enabled=true
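The check-then-enable logic of the two commands above can be combined into one guarded step. The sketch below is illustrative only: `esxcli` exists solely on the ESXi host itself, so a stub function stands in for it here and mimics (as an assumption) the output for a disabled ruleset.

```shell
#!/bin/sh
# Stub standing in for the real esxcli on an ESXi host. Assumption:
# the real command prints "false" in its Enabled column when disabled.
esxcli() {
  echo "sshClient  false"
}

# Enable the sshClient ruleset only when it is currently disabled.
if esxcli network firewall ruleset list --ruleset-id sshClient | grep -qi false; then
  echo "sshClient is disabled - enabling"
  # On a real host, run instead:
  # esxcli network firewall ruleset set --ruleset-id sshClient --enabled=true
fi
```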

Copy the VM from Source to Destination

Before running the below commands, make sure the VM that will be migrated is powered off on the source ESXi server.

Connect to your source ESXi server using PuTTY or your favorite SSH client (depending on whether you use Windows or macOS).

Navigate to the datastore where your guest VM resides. By default, the path is:

cd /vmfs/volumes/datastore1/

The data will be copied to the corresponding datastore path on the destination server. Execute the below command on the source ESXi server, replacing xx.xx.xx.xx with the destination server's IP address:

scp -rv /vmfs/volumes/datastore1/VM_NAME root@xx.xx.xx.xx:/vmfs/volumes/datastore1/

Press ‘Enter.’ You will be prompted for the destination server's root password, after which the migration process begins. The time to complete the transfer depends on the network speed between the ESXi servers.

Convert Thick Provisioning to Thin Provisioning

Log in to the SSH console of the destination server. Then navigate to the datastore path to which the VM data was migrated from the old server.

cd /vmfs/volumes/datastore1/VM_NAME

Run the below command to clone the VMDK to a thin-provisioned disk using vmkfstools:

vmkfstools -i VM_NAME.vmdk -d thin VM_NAME-thin.vmdk

After the cloning is complete, list the files in the directory and verify that two new files were created:

VM_NAME-thin.vmdk and VM_NAME-thin-flat.vmdk

Rename the old flat file out of the way (e.g., mv VM_NAME-flat.vmdk VM_NAME-flat.vmdk.old)

Then rename the new thin flat file to the original flat file name (e.g., mv VM_NAME-thin-flat.vmdk VM_NAME-flat.vmdk), so that the existing descriptor, VM_NAME.vmdk, now references the thin-provisioned data.
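Taken together, the clone-and-swap sequence looks like the sketch below. It is simulated locally: the files are empty placeholders, `touch` stands in for the `vmkfstools` clone (which exists only on an ESXi host), and VM_NAME is the illustrative name used throughout this post.

```shell
#!/bin/sh
set -e
# Simulated datastore directory with placeholder files.
dir="$(mktemp -d)/VM_NAME"
mkdir -p "$dir" && cd "$dir"
touch VM_NAME.vmdk VM_NAME-flat.vmdk            # descriptor + thick flat file

# Stand-in for: vmkfstools -i VM_NAME.vmdk -d thin VM_NAME-thin.vmdk
touch VM_NAME-thin.vmdk VM_NAME-thin-flat.vmdk

# Keep the thick flat file as a backup, then promote the thin flat file
# to the original name so the existing descriptor references it.
mv VM_NAME-flat.vmdk VM_NAME-flat.vmdk.old
mv VM_NAME-thin-flat.vmdk VM_NAME-flat.vmdk
ls
```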

Register the Migrated VM on the ESXi Host

Log in to the web interface of the destination ESXi server to which the VM was migrated from the source server.

Click on Virtual Machines –> Create/Register VM

Select ‘Register an existing virtual machine.’ Then browse the datastore and select the folder of the VM guest you moved to the new server. Click: Select –> Next –> Finish

Once you power on the migrated VM on the destination ESXi server for the first time, you will be asked whether you moved or copied the guest machine. Leave the default “I Copied It” and click “Answer.”

If the migration completed without any errors, the VM should start on the new host.


Article written by Nandakumar Chitra Suresh and Katie Gerhardt



Installing an SSL Certificate on an ESXi Server

In the latest versions of ESXi, the web UI is the only interface available for managing existing virtual machines (VMs) or creating new ones. By default, ESXi ships with a self-signed SSL certificate, which most browsers do not accept. In this example, we are using ESXi version 6.7 with an expired SSL certificate, which we are going to replace with a new one.

Log in to the ESXi Web UI

To install the new SSL certificate, we will need to log in to the ESXi web UI and enable SSH access. We can use the Mozilla Firefox browser, which lets us log in to the UI by accepting the risk associated with an expired SSL certificate.


Start the SSH Service

To start the SSH service, log in to the ESXi server with root credentials, then click on Manage –> Services –> Start the TSM-SSH service.


Locate Your Certificates

Navigate to the directory /etc/vmware/ssl:

[root@vmxi:/etc/vmware/ssl] pwd

We will need to update the rui.crt and rui.key files: add your new SSL certificate and chain certificate to rui.crt (SSL certificate first, then the chain certificate), and add your SSL private key to rui.key.
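The ordering matters: the server certificate must come first in rui.crt, followed by the chain. A minimal sketch of assembling the files, using dummy PEM contents in a temporary directory in place of your real certificate, chain, and key (on the ESXi host, the targets are /etc/vmware/ssl/rui.crt and rui.key):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
# Dummy PEM blocks standing in for your real certificate, chain, and key.
printf -- '-----BEGIN CERTIFICATE-----\nserver\n-----END CERTIFICATE-----\n' > server.crt
printf -- '-----BEGIN CERTIFICATE-----\nchain\n-----END CERTIFICATE-----\n' > chain.crt
printf -- '-----BEGIN PRIVATE KEY-----\nkey\n-----END PRIVATE KEY-----\n' > server.key

# rui.crt must contain the server certificate first, then the chain.
cat server.crt chain.crt > rui.crt
cp server.key rui.key

grep -c 'BEGIN CERTIFICATE' rui.crt   # prints 2
```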

Safety First

Before making any changes though, make a backup of the existing certificate and key.

cp /etc/vmware/ssl/rui.crt /etc/vmware/ssl/rui.crt_old
cp /etc/vmware/ssl/rui.key /etc/vmware/ssl/rui.key_old

Update Certificates and Restart

Then, using the vi editor, empty each file and paste in the new certificate and key:

cat /dev/null > /etc/vmware/ssl/rui.crt
vi /etc/vmware/ssl/rui.crt
cat /dev/null > /etc/vmware/ssl/rui.key
vi /etc/vmware/ssl/rui.key

After making the changes, you will need to restart the hostd service using the below commands:

[root@vmxi:/etc/vmware/ssl]  /etc/init.d/hostd restart
watchdog-hostd: Terminating watchdog process with PID 5528316
hostd stopped.
hostd started.
[root@vmxi:/etc/vmware/ssl]  /etc/init.d/hostd status
hostd is running.

Now if we look at the browser, we can see the new SSL certificate is in effect.



FileCloud is a powerful content collaboration platform that integrates with your favorite tools and programs. That includes cloud storage services, Microsoft and Google apps, online editing tools like OnlyOffice and Collabora, Zapier, Salesforce, and more. Set up APIs to fine-tune file and user operations and learn more about available features in FileCloud University. You can also reach out to our best-in-class support team through the customer portal for any questions regarding your FileCloud environment.


Article written by Nandakumar Chitra Suresh and edited by Katie Gerhardt


Unstructured Data Storage Solutions


Leveraging business insights to grow your company or improve a product or service is easy when you have access to structured data. This data has already been labeled, categorized, and essentially optimized for analysis.

In comparison, unstructured data remains difficult to analyze. This is partly due to the sheer variety of file types and the unsorted content they contain.

To give you the edge in fast-paced markets and industries, the right data storage solution will not only store your unstructured data but also automatically sift and analyze your data to glean actionable insights.

With the advent of artificial intelligence and machine learning, more data storage solutions and platforms are gaining this ability. Once the treasure trove of unstructured data is unlocked, there is boundless potential for optimization of business processes as well as targeted improvement of products and services.

What is Unstructured Data?

To understand unstructured data, it helps to first define structured data.

Structured Data

Defined as data that incorporates relations between entities and variables, structured data is typically quantitative. This type of information can be easily rendered in spreadsheets since the categories of data are predefined.

Some common examples of structured data include names, dates, and addresses; credit card numbers and other Personally Identifiable Information (PII); stock information and financial trends; and geolocation coordinates, among others. Structured data has formed the backbone of business intelligence because it is easily read, filtered, and analyzed by machines and coding languages.

Structured data is typically stored in Relational Database Management Systems (RDBMS), which enable users to add, search, or manipulate data using a specific programming language called Structured Query Language (SQL). This language was developed in the 1970s by Donald D. Chamberlin and Raymond F. Boyce, programmers at IBM, and officially adopted by the ANSI and ISO standards groups starting in 1986.

Unstructured Data

Now that we’ve explored what structured data is, we can better understand the role unstructured data plays in business operations and growth opportunities.

Unstructured data is, essentially, every remaining type of data: qualitative data. Text files like email messages and word processing documents, audio and media files, social media posts, graphics, and surveillance/satellite imagery all count as unstructured data.

Unstructured data is maintained in its native formats and has not been sorted or labeled; it cannot be translated into a spreadsheet through programming commands, nor can it be stored in a relational database in its raw format.

Without the predefined data model of entities, variables, and relations, running automated analyses of unstructured data is much more difficult.

Growth in Unstructured Data

Opting to work only with structured data is not an option for modern businesses and organizations. According to MongoDB, “80 to 90 percent of data generated and collected by organizations is unstructured,” and the volume of this type of data is growing exponentially, especially compared to the growth of structured data.

The knowledge we can gain from unstructured data is far-reaching and rich in diversity, particularly as more people gain access to the internet and technological devices. We are creating more data than ever before, and this data can be used for nearly any purpose:

  • Advertising companies can use unstructured data to pick up on trending topics among interest groups or target audiences.
  • Governments and global think tanks can chart movements or changes of populations, which then influence policy proposals.
  • Logistics companies and manufacturers can better understand how their processes affect end products.
  • Banking and Financial Institutions can refine existing services and develop new tools to fill gaps in customer support.
  • Hospitality and Entertainment corporations can identify meaningful investment opportunities for expanded properties or amenities.
  • Real Estate firms can more adeptly respond to buyers and sellers according to swiftly changing needs and market shifts.

These are only a few examples of how mining unstructured data can support common industry objectives. If it can be imagined, relevant unstructured data likely already exists – reams of valuable information that can revolutionize how businesses and organizations respond to their clients and constituents.

However, the tools needed to properly analyze unstructured data are relatively new in the field. Thus, tapping into the knowledge contained in unstructured data remains a major challenge for businesses and organizations (up to 95% of those polled by TechJury). Furthermore, “97.2% of organizations are investing in big data and AI” to develop solutions for unstructured data. Emerging tools and technologies will empower companies and organizations to leverage both data and content insights, expanding beyond business intelligence into the promising fields of predictive analytics, machine learning, and data discovery and profiling.

Manage and Store Unstructured Data

To work with unstructured data, we must first be able to retain and store unstructured data. As before, we’ll first review structured data to understand the roots of data storage.

Capture and Store Unstructured Data

Data Warehouses

SQL databases (a type of RDBMS) are built for structured data – quantitative, numerical-based information comprising variables and entities. The tabular nature of RDBMS means that storage solutions take up less space and are also easily scaled within data warehouses. This makes a SQL Database much more cost-effective to maintain and expand as needed.

Data warehouses are familiar components of business intelligence systems. They are, in essence, the central hubs for the entire system, from which all insights are derived. The warehouse serves as a repository for data and also runs queries and analyses.

As a data warehouse collects data and the databases within the enterprise grow, a rich data history develops, which provides an invaluable resource to analysts and scientists within an organization. The information is stable, flexible, and largely accessible, often referred to as “the single source of truth” within an enterprise. The data warehouse itself forms the backbone of widely recognized reporting and dashboard features within UI.

Data Lakes

Unstructured data cannot fit into the relations-based structure of RDBMS. Non-relational or NoSQL databases must be used instead to store and manage unstructured data. However, the qualitative nature of the data makes it harder to store and analyze, even as it consumes more space.

The answer to this challenge is found in a data lake. This type of data repository offers an interesting level of convenience and flexibility, in that data of all kinds, structured or unstructured, raw or clean, can be added. Data lakes are scalable tools that support advanced storage and processing of unstructured data such as big data and real-time or IoT analytics, as well as machine learning.

The downside of a data lake is that all that unstructured data is not organized. The very quality that makes this repository a necessary solution to recapture the value of 90% of our available data is the same quality that makes it a challenge to implement. A variety of analyses can be run using the data stored in a data lake, but that raw data needs to be processed and organized before it can deliver any meaningful insights. Without proper oversight or consistent organization of stored data, it is very easy for a data lake to become a “data swamp.”

The often raw and uncategorized nature of data lakes means that they are not often used directly by business analysts. Instead, data scientists and developers must first curate and translate data before delivering it to business analysts, who interrogate the data and adjust business decision processes. Considering the costs of expertise, a data lake may not be immediately accessible without the support of enterprise-grade developer teams.

Alternatively, smaller organizations and businesses can set up a data lake to capture data they plan to use in the future. Establishing a data lake without an immediate implementation strategy offers two distinct benefits:

  • The cost to maintain a data lake is relatively low, especially compared to maintaining a data warehouse.
  • Early data collection ensures a rich vein from which to work later and the ability to establish data patterns and history.

As a third option, businesses and organizations can take advantage of Software-as-a-Service or Infrastructure-as-a-Service solutions. Services include widely recognized names in the industry, including Microsoft Azure and Amazon Web Services (AWS). Azure offers both data lake storage and analysis in a two-part service: Azure Data Lake Storage and Azure Data Lake Analytics. Selecting AWS as your service provider grants you access to Amazon’s suite of complementary services, including Amazon Redshift (data warehousing), Amazon Athena (SQL-based analytics), and Amazon QuickSight (business analytics), among others.

FileCloud Meets Your Data Storage Requirements


FileCloud is an adept solution that is continuously evolving to meet new challenges across diverse industries. You can opt for FileCloud Server (on-premises) or have FileCloud host your data in our world-class data center with FileCloud Online.

There is also a hybrid solution available that combines the best of both worlds: on-premises hosting for high-touch data, cloud storage for archived files, and FileCloud’s ServerSync to support synchronization and easy access to files and permissions wherever you are and across your devices.

Flexible Integrations

With our on-premises solution, you can choose your storage provider, either by integrating with an already-deployed service or by setting up brand new storage buckets. Integrations include Azure Blob, AWS (and AWS GovCloud), Wasabi, Digital Ocean, Alibaba, and more.

With Microsoft Azure, you can take advantage of pre-built FileCloud VMs on the Azure Marketplace, with deployment in seconds. FileCloud’s integration with Azure also ensures your active directories, permissions, and files are migrated into your FileCloud platform.

Opting for the AWS S3 storage service also provides a wealth of different options within the Amazon Marketplace. The FileCloud integration offers smooth data migration for existing files and permissions, as well as access to robust features like endpoint backup, data analytics, monitoring and governance, security, and granular permissions.

For those concerned about the relative data security of unstructured files, users can employ FileCloud’s integration with Symantec. This powerful security software suite provides anti-malware, intrusion prevention, and firewall features. Even better, Symantec’s Data Insight Solution supports administrative-level review of unstructured data to check usage patterns and access permissions.


Unstructured data is an increasingly vital element of analytics and business intelligence for companies and organizations of all sizes. The challenge lies in how this unstructured data is stored and leveraged to yield actionable insights.

Sophisticated solutions like FileCloud are working to put this unstructured data at your fingertips. By partnering with major leaders in the field, FileCloud aims to provide you with secure, scalable, efficient storage options that support sustainable growth and meaningful relationships with your target audience.

For more information, contact our Sales Team or sign up for a free trial here!

FileCloud Aurora – All About DRM Capabilities


In November 2020, FileCloud released update 20.2 – a complete overhaul of our Sync, mobile, and browser UI and functionality. We at FileCloud have been working on this for a very, very long time, and so we’re incredibly proud to present to you: FileCloud Aurora.

Today, we’re going to be covering one of the most important security functions that Aurora introduces: DRM Capabilities.

For a comprehensive overview of all of FileCloud Aurora’s new features, please visit our previous blog post, Introducing FileCloud Aurora!

Secure Document Viewer

If the new UI was the biggest change in terms of appearance, FileCloud Aurora’s new Digital Rights Management (DRM) capabilities are unquestionably the most significant change in terms of functionality. 

Your data security has always been FileCloud’s number one priority. We’ve got all the files you’re storing with us safe and sound, but what happens when you need to send out or distribute important documents, such as external contracts, reports, or training materials? Our new DRM solution ensures that nothing you send out gets used in a malicious or abusive manner, even after it’s left your system and entered others. 

Our secure document viewer helps you protect confidential files from unsolicited viewing with FileCloud’s restricted viewing mode. Show only selected parts of the document and hide the rest, or choose to reveal sections only as the user scrolls, minimizing the risk of over-the-shoulder compromise.

For more details, read about the FileCloud DRM solution here.

Screenshot Protection

Utilize the Screenshot Protection feature to prevent recipients from taking screenshots of secure information and documents.

This is an option that can be selected when you create your DRM Document or Document Container, and prevents any recipients from taking screenshots of the document. Not only that, the recipient won’t be able to share screens or screen-record to share the documents either, nullifying any chance of your documents being distributed without your permission or consent.

Document Container 

Easily and securely export multiple documents in an encrypted document container (AES-256 encryption), and share it via FileCloud or third-party email.

DRM Protection

Support for Multiple File Formats

Protect your Microsoft Office (Word, PowerPoint, Excel), PDF, and image (JPEG, PNG) files, and include multiple types of files in a single encrypted document container! FileCloud’s DRM solution doesn’t discriminate, ensuring all your most regularly used file, folder, and document formats can be easily handled by our containers and viewer.

Anytime Restriction of Access to Your Files

Remove the risk of accidentally transmitting confidential files and enforce your policy controls even after distribution. You can revoke file access or change view options (screenshot protection, secure view and max account) anytime, via the FileCloud portal.

Thanks for Reading!

We at FileCloud thank you for being a part of our journey to create the most revolutionary user interface and experience on the market. We’d love to know what you think about these changes. For full information about all these changes, release notes can be found on our website here.

We hope that you’re as excited about these new changes as we are. Stay safe, and happy sharing, everyone!

VDI vs VPN vs RDS (Remote Desktop Services) | FileCloud

As the world inevitably shifts toward working from home, most organizations have begun actively exploring remote work options. As such, security has become one of the prime considerations for businesses; after all, ensuring the safety of your organizational data and processes is just as important as ensuring business continuity. Virtual digital workspaces that manage seamless workflows among employees spread across the globe must, of course, also aim to consistently improve their user experiences.

However, hackers also thrive during such crises, as they know that many people may willingly or unknowingly compromise on safety to meet their business needs. Any breach of data can prove to be a costly affair, especially when taking into account the loss of reputation, which takes a long time to overcome, if at all. It is important, then, to understand and evaluate the remote work options and choose wisely. The most popular options are Virtual Private Network (VPN), Virtual Desktop Infrastructure (VDI), and Remote Desktop Services (RDS).

What is a VPN?

In an online world, a VPN is one of the best ways to ensure the security of your data and applications while working remotely. This is not just about logging in and working securely every day; a VPN also protects you from cyber attacks like identity theft when you are browsing the internet through it. It is simply an added layer of security: an application that secures your connection to the Internet in general (if using a personal VPN) or to a designated server (if using your organizational VPN).

When you connect to the Internet through a VPN, your traffic is taken through a virtual, private channel that others do not have access to. This virtual channel (usually a server hosting the application) then accesses the Internet on behalf of your computer, masking your identity and location, especially from hackers who are on the prowl. Many VPN solution providers ensure military-grade encryption and security via this tunnel. Usually, the level of encryption differs based on need, and individuals and organizations choose what works best for them.

VPNs came into being from this very concept: enterprises wanting to protect their data over public as well as private networks. Access to the VPN may be granted through authentication methods like passwords, certificates, etc. Simply put, it is a virtual point-to-point connection that lets the user access all the resources (for which they have the requisite permissions) of the server or network to which they are allowed to connect. One drawback can be the loss of speed due to the encrypted, routed connections.

What is VDI?

VDI provides endpoint connections to users by creating virtual desktops hosted on a central server. Each user connecting to this server has access to all resources hosted on the central server, based on the access permissions set for them. Each VDI is configured for a single user, and it feels as if they are working on a local machine. The endpoint through which the user accesses the VDI can be a desktop, laptop, or even a tablet or smartphone. This means that people can access what they want, even while on the go.

Technically, this is a form of desktop virtualization aimed at providing each user their own Windows-based system. Each user’s virtual desktop exists within a virtual machine (VM) on the central server. Each VM is allocated dedicated resources, which improves both the performance and the security of the connection. The VMs are host-based; hence, multiple instances of the VMs can exist on the same server or on a virtual server, which is a cluster of multiple servers. Since everything is hosted on the server, there is little chance of data or identity being stolen or misused. Also, VDI ensures a consistent user experience across various devices and results in a productivity boost.

What is RDS?

Microsoft’s Windows Terminal Services came to be known as Remote Desktop Services with Windows Server 2008 R2. What this means is that a user is allowed to connect to a server using a client device and can access the resources on that server. The client accessing the server through a network is a thin client, which need not have anything other than the client software installed. Everything resides on the server, and users can use their assigned credentials to access, control, and work on the server as if they were working on a local machine. The user is shown the server’s interface and will have to log off the ‘virtual machine’ once the work is over. All users connected to the same server share all the resources of that server. RDS can usually be accessed through any device, though working through a PC or laptop provides the best experience. The connections are secure, as the users are working on the server, and nothing is local except the client software.

The Pros and Cons of each

When considering these three choices of VPN, VDI, and RDS, many factors come into play. A few of these that need to be taken into account are:

  1. User Experience/Server Interface – In VDI, each user can work on their familiar Windows desktop interface, which increases the comfort factor. Some administrators even allow users to customize their desktop interface to some extent, giving that individual desktop feel which most users are accustomed to. This is not the case in RDS, wherein each user of the server is given the same server interface, and resources are shared among them. There is very limited customization available, and mostly all users have the same experience. Users will have to make do with the server flavor of Windows rather than the desktop flavor they are used to. The VPN differs from either of these in that it only provides an established point-to-point connection through a tunnel, and processing happens on the client system, as opposed to the other two options.
  2. Cost – If cost happens to be the only consideration, then VPN is a good choice. This is because users can continue to use their existing devices with minimal add-ons or installations. An employee would be able to securely connect to their corporate network and work safely, without any eavesdropping on the data being shared back and forth. The next option is RDS, the cost of which will depend on a few other factors. However, RDS does save time and money, with increased mobility, scalability, and ease of access, with no compromise on security. VDI is the costliest of the three solutions, as it needs an additional layer of software for implementation; examples of this software are VMware or Citrix, which help run the hosted virtual machines.
  3. Performance – When it comes to performance, VDI is a better solution, especially for those sectors that rely on speed and processing power like the graphics industry. Since the VDI provides dedicated, compartmentalized resources for each user, it is faster and makes for a better performance and user satisfaction. VPN connections, on the other hand, can slow down considerably, especially depending on the Client hardware, the amount of encryption being done, and the quantum of data transfer done. RDS performance falls in between these two options and can be considered satisfactory.
  4. Security – Since it came into being to ensure the security of corporate data when employees work outside the office, VPN provides the best security of these three remote work options. With VDI and RDS, the onus of ensuring security lies with the administrators of the system and how they configure and implement it. But it is possible to implement stringent measures to ensure reasonably good levels of security.
  5. End-User Hardware – Where VDI and RDS are concerned, end-user hardware is not of much consequence, except for establishing the connection. In these cases, it is the server hardware that matters, as all processing and storage happen on it. But for VPN connections, end-user hardware configurations are important, as all processing happens on the client after the secure connection is established. VDI offers clients for Windows, Mac, and at times even iPhone and Android. RDS offers clients for Windows and Mac; however, a better experience is delivered on Windows.
  6. Maintenance – VPN systems usually require the least maintenance once the initial setup is done. VDI, however, can prove challenging, as it requires all patches and updates to be reflected across all VMs. RDS needs less maintenance than VDI, but more than VPN systems. At most, RDS will have to implement and maintain a few patches.

The Summary

Looking at the above inputs, it is obvious that there is no single best solution for every business. Each enterprise will have to look at its existing setup, the number of employees, the business goals, the need for remote work, and the challenges therein, and then decide which factors need more weight. If the number of employees is small, perhaps VPN or RDS may be the better way to go. But if you need high performance, as for graphics-intensive work, then we highly recommend taking a look at VDI. VDI may also be the way to go if you have a large number of employees.

Are You Committing Any of These Super Common DevOps Mistakes?


A new venture is never easy. When you try something for the first time, you’re bound to make mistakes. DevOps isn’t an exception to the rule. Sure, you might have read up a lot on the subject but nothing can prepare you for the real thing. Does that mean you give up trying to understand DevOps? Not at all! That’s the first mistake you must overcome; if your knowledge of basic DevOps theory is weak, you will speed up the disaster, and before long, your efforts will seem more disappointing than productive. So, keep at it and in the meantime, check out this list of common mistakes that you can easily avoid:

 A Single Team to Handle the Whole DevOps Workload



Most organizations make this mistake – they rely on a single team to support all DevOps functions. Your development and operations crews already have to communicate and coordinate with the rest of the company; adding a dedicated team for this purpose only adds to the confusion.

The thing is, DevOps began with the idea of enhancing collaboration between the teams involved in software development. So, it is about more than just development and operations; teams must also handle security, management, quality control, and so on. Thus, the simpler and more straightforward you keep things within your company, the better.

Instead of adding a dedicated team for all DevOps functions, work on your company culture. Focus more on automation, stability, and quality. For example, start a dialogue with your company regarding architecture or the common issues plaguing production environments. This will inform the teams about how their work affects one another.  Developers must realize what goes on once they push code, and how operations often have a hard time maintaining an optimum environment. The operations team, on the other hand, should try to avoid becoming a blocker through the automation of repeatable tasks.


 Greater Attention to Culture Than Value

Though it's a bit contrary to the last point, DevOps isn't all about organizational culture. Sure, it requires involvement from company leadership as well as buy-in from every employee, but people don't appreciate the benefits until they have an individual "aha" moment and discover the value. And that happens only when they have a point of comparison. Numbers help with this.

Start paying more attention to measurable aspects. When reading a DevOps report, check the four key metrics: lead time for changes, deployment frequency, change failure rate, and mean time to recover. Deploying more frequently, in smaller increments, minimizes the risk of each release. Shorten the time needed to deliver value to customers once code is pushed. When failures happen, reduce both the recovery time and the failure rate. The truth is, culture isn't something that can be measured, and your customers have little interest in the inner workings of your company. They will, however, show an interest in visible and tangible things.
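
As a rough sketch (the log format here is invented for illustration), the four metrics can be computed from a plain deployment log, such as one exported from your CI/CD or release automation tooling:

```python
# Hypothetical deployment log; each record notes the change lead time,
# whether the release caused a failure, and how long recovery took.
deployments = [
    {"lead_time_hours": 20, "failed": False, "restore_hours": 0},
    {"lead_time_hours": 30, "failed": True,  "restore_hours": 4},
    {"lead_time_hours": 16, "failed": False, "restore_hours": 0},
    {"lead_time_hours": 10, "failed": True,  "restore_hours": 2},
]

def dora_metrics(log, period_days):
    """Compute the four headline metrics over a reporting period."""
    failures = [d for d in log if d["failed"]]
    return {
        "deployments_per_week": len(log) / (period_days / 7),
        "avg_lead_time_hours": sum(d["lead_time_hours"] for d in log) / len(log),
        "change_failure_rate": len(failures) / len(log),
        "mean_time_to_recover_hours":
            sum(d["restore_hours"] for d in failures) / len(failures) if failures else 0.0,
    }

metrics = dora_metrics(deployments, period_days=28)
```

Tracking these numbers release over release gives teams the point of comparison the text describes, without trying to quantify culture itself.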


 Selecting an Architecture That Deters Change



Software that cannot evolve or be changed easily presents some interesting challenges. If parts of your system cannot be deployed independently, changing the system becomes difficult. Architecture that isn't loosely coupled is hard to adapt. Teams deploying large systems face this problem: they haven't invested in making parts independently deployable, so they must deploy everything together, and deploying only a single part risks breaking the system.

However, know that DevOps is more than simple automation; it tries to decrease the time you spend deploying applications. Even when automated, a deployment that takes a long time keeps customers from experiencing the value of automation.

This mistake can be avoided by investing a bit of time in the architecture. Understand how the parts can be deployed independently, but do not try to define every little detail, either. Rather, postpone some decisions until a later, more opportune moment, when you know more. Allow the architecture to evolve.


Lack of Experimentation in Production


In software, companies used to try to get everything right before releasing to production. Nowadays, thanks to automation and culture change, it's easier to get things into production. With unprecedented speed and consistency, new changes can be released numerous times a day. But people make the mistake of not harnessing the true power of DevOps tooling for experiments in production.

Reaching the production stage is always laudable, but that doesn't mean the company should stop experimenting and testing in production. Tools such as production monitoring, release automation, and feature flags let you do some genuinely useful things: run split tests to verify which layout works best for a feature, or conduct a gradual rollout to gauge people's reactions to something new.
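
A gradual rollout of this kind is commonly implemented by hashing user IDs into stable buckets; here is a minimal sketch, assuming nothing beyond the standard library:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user in a 0-99 bucket for a feature."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# At 25%, roughly a quarter of users see the new feature; the same user
# always gets the same answer, so the experience is stable across requests.
enabled = [u for u in (f"user-{i}" for i in range(1000))
           if in_rollout(u, "new-layout", 25)]
```

Because the bucket is derived from a hash rather than stored state, raising the percentage from 25 to 50 widens the audience without reshuffling users who are already in the test group.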

The best part is, you can do all of this without obstructing the pipeline for changes that are still on their way. Harnessing the full power of DevOps means letting actual production data inform the development process in a closed feedback loop.


Too Much Focus on Tooling

While some tools help with DevOps practice, using them doesn't mean you're doing DevOps. New tools come to the forefront all the time, which means you now have different tools for deployment, version control, continuous integration, orchestration, and configuration management. A lot of vendors will say they have the perfect tool for DevOps implementation; however, no single tool can possibly cover all your requirements.


So, adopt a tool-agnostic outlook. Know that there will always be a better way of doing things, and fresh tools will be adopted as time passes. Use tools to free up time for the things that deliver value to customers. Develop a mindset of delivering value to end users at every moment, and think of your job as done only when your customers' expectations are met post-delivery.


Even the smallest DevOps issue can affect other functions of your company if you do not make the effort to correct it. Focus on the right aspects of DevOps and keep perfecting your techniques for smoother, faster deployments.

The Most Important Tech Trends to Track Throughout 2018

2017 was a roller coaster of a year; it's breathtaking how the time it takes new technologies to create an observable impact is shrinking year after year. In the year gone by, several new technologies became mainstream, and several concepts emerged from tech labs as features within existing technologies.

In particular, the industrial IoT and AI-based personal assistant spaces expanded manifold in 2017, data as a service continued its rollicking growth, and connected living via smart devices appeared to be a strategic focus for most technology power players. Efforts to curb misinformation on the web also gained prominence.

Artificial intelligence, blockchain, industrial IoT, mixed reality (AR and VR), cybersecurity – there's no dearth of buzzwords, really. The bigger question is: which of these technologies will continue to grow their scope and market base in 2018, and which new entrants will emerge?

Let’s try to find out the answers.


The Changing Dynamics of Tech Companies and Government Regulators

The indications are too prominent to ignore now: there's increasing pushback from governments, along with attempts to influence the scope of technological innovation. The power and control technology exerts over human life is well acknowledged, and naturally, governments feel the need to stay involved as tech giants innovate further. With concerns about smart home devices 'tapping' your conversations all the while, the end-user community has reason enough to be anxious.

GDPR comes into force in May 2018, and the first six months after that will be interesting to watch. The extent and intensity of penalties, the emergence of GDPR compliance services, the distinct possibility of similar regulations emerging in other geographies – all of these will be important to track. The net neutrality debate will also continue, and some of its impact will be visible on the ground. Will it be for the better or the worse of the World Wide Web? By the end of 2018, we might be in a good position to tell.

The ‘People’ Focused Tech Business

The debate around the downsides of technology in altering core human behaviour is getting louder. Call it the aftermath of Netflix's original series Black Mirror, which explores the fabric of a future world where the best of technology and the worst of human behaviour fuse together. Expect the 'people' side of technology businesses to evolve more quickly throughout this year.

Community-based tech businesses, for instance, will get a lot of attention from tech investors. Take businesses such as co-working spaces catering to specific communities: women entrepreneurs, innovators dedicated to research in a particular technology, or people with special requirements and the differently abled.

Also, AI algorithms that make humans more powerful instead of removing them from the equation will come to the fore. Take, for instance, Stitch Fix, an AI-powered personal shopping service that enables stylists to make more customized and suitable suggestions to customers.

Blockchain and IoT Meet

For almost 5 years now, IoT has featured on every list of potentially game-changing technologies, and for good reason. There are, however, two concerns.

How quickly will business organizations be able to translate innovation in IoT into tangible business use cases?

How confident can businesses be about the massive data that will be generated via their connected devices, every day?

Both these concerns can be addressed to a great extent by something that’s being termed BIoT (that’s blockchain Internet of Things).

BIoT is ready to usher in a new era of connected devices. Companies, for instance, will be able to track package shipments and take concrete steps towards building smart cities, where connected traffic lights and energy grids make human lives more organized. When retailers, regulators, transporters, and analysts have access to shared data from millions of sensors, the collective and credible insights will help them do their jobs better. And the blockchain layer makes that shared data far harder to tamper with.



The Rise of Bots

Yes, bots. We've almost become used to bots answering our customer service calls. Why, then, is this technology a potential game-changer for the times to come? Because of the tremendous potential for growth that bots have.

Bots are the outcome of two key technologies coming together: natural language processing (NLP) and machine learning (ML). Individually, both technologies are growing fast, which means bots are growing alongside them.

Given chatbots' noteworthy traction in 2017, businesses are very likely to put their money into chatbots over apps in 2018. From chatbots that give you tailor-made financial advice to those that tell you which wine goes well with your chosen pizza, the months ahead will bring a lot of exciting value from this space.

Quantum Computing: From Sci-Fi to Proof of Concept

Let's face it: quantum computing has always been the stuff of science fiction movies, not anything tangible. Research in this space, however, hasn't slackened a bit. Innovators are, in fact, at a stage where quantum computing is no longer just a concept. The promise of outperforming traditional supercomputers might not be an empty one after all. Tech giants are working hard to increase their qubit computing power while keeping error probability to a minimum. 2018 has every reason to be the year quantum computing emerges as a business-technology buzzword.


Concluding Remarks

The pace of disruption of a tech trend is moderated by government regulation, price wars among competing tech giants, and cybersecurity, among other factors. Eventually, the only thing we can say for certain is that by the time this year draws to an end, we will all be living in ways different from today. It's very likely that at the core of those changes will be one or more of the technology trends discussed in this guide.



Author – Rahul Sharma

Top 5 Use Cases For Machine Learning in The Enterprise


Artificial intelligence can be loosely defined as the science of mimicking human behavior. Machine learning is the specific subset of AI that trains a machine how to learn. The concept emerged from pattern recognition and the theory that computers can learn without being explicitly programmed to complete certain tasks. Cheaper, more powerful computational processing, growing volumes of data, and affordable storage have taken deep learning from research papers and labs to real-life applications. However, all the media hype surrounding AI has made it extremely difficult to separate exciting futuristic predictions from pragmatic real-world enterprise applications. To avoid being caught up in the hype, CIOs and other tech decision makers have to build a conceptual lens and look at the areas of their company that can be improved by applying machine learning. This article explores some practical use cases of machine learning in the enterprise.

1. Process Automation

Intelligent process automation (IPA) combines artificial intelligence and automation, and it involves diverse uses of machine learning, from automating manual data entry to more complex cases like automating insurance risk assessments. ML is suited to any scenario where human judgment is applied within set constraints, boundaries, or patterns. Thanks to cognitive technologies like natural language processing, machine vision, and deep learning, machines can augment traditional rule-based automation and, over time, learn to do it better as they adapt to change. Most IPA solutions already utilize ML-powered capabilities beyond simple rule-based automation. The business benefits extend well beyond cost savings: better use of costly equipment and highly skilled employees, faster decisions and actions, service and product innovation, and overall better outcomes. By taking over rote tasks, machine learning in the enterprise frees human workers to focus on product innovation and service improvement, allowing the company to transcend conventional performance trade-offs and achieve unparalleled levels of quality and efficiency.

2. Sales Optimization

Sales typically generates a lot of unstructured data that is ideal for training machine learning algorithms. This is good news for enterprises that have been saving customer data for years, because sales is also the area with the most potential for immediate financial impact from machine learning. Enterprises eager to gain a competitive edge are applying ML to both marketing and sales challenges to accomplish strategic goals. Popular marketing techniques that rely on machine learning models include intelligent content and ad placement and predictive lead scoring. By adopting machine learning in the enterprise, companies can rapidly evolve and personalize content to meet the ever-changing needs of prospective customers. ML models are also being used for customer sentiment analysis, sales forecasting, and customer churn prediction. With these solutions, sales managers are alerted in advance to specific deals or customers that are at risk.
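
As an illustration of lead scoring, here is a deliberately simplified sketch with hand-picked weights; a real predictive system would learn these weights from historical win/loss data, and the signal names are invented:

```python
# Hand-weighted scoring sketch; in practice an ML model fits these
# weights to past outcomes rather than a human choosing them.
WEIGHTS = {
    "opened_pricing_page": 3.0,
    "requested_demo": 5.0,
    "days_since_last_touch": -0.2,   # stale leads score lower
    "emails_opened": 0.5,
}

def lead_score(lead):
    """Sum the weighted signals present on a lead record."""
    return sum(WEIGHTS[k] * lead.get(k, 0) for k in WEIGHTS)

leads = [
    {"name": "A", "opened_pricing_page": 1, "requested_demo": 1,
     "days_since_last_touch": 2, "emails_opened": 6},
    {"name": "B", "opened_pricing_page": 0, "requested_demo": 0,
     "days_since_last_touch": 30, "emails_opened": 1},
]
ranked = sorted(leads, key=lead_score, reverse=True)  # hottest first
```

The same scoring-and-ranking shape underlies churn prediction: score each customer's risk, then surface the highest-risk accounts to sales managers first.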

3. Customer Service

Chatbots and virtual digital assistants are taking over the world of customer service. Due to the high volume of customer interactions, the massive amounts of data captured and analyzed are the ideal teaching material for fine-tuning ML algorithms. Artificial intelligence agents are now capable of recognizing a customer query and suggesting the appropriate article for a swift resolution, freeing human agents to focus on more complex issues and improving the efficiency and speed of decisions. Adopting machine learning in the enterprise can have a marked impact on customer service-related routine tasks. Juniper Research maintains that chatbots will create $8 billion in annual cost savings by 2022. According to a 2017 PwC report, 31 percent of enterprise decision makers believe that virtual personal assistants will impact their business more than any other AI-powered solution. The same report found that 34 percent of executives say the time saved by virtual assistants allowed them to focus on deep thinking and creativity.
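
The query-to-article matching described above can be sketched with a simple bag-of-words cosine similarity; production agents use trained language models, and the article texts here are invented:

```python
import math
import re
from collections import Counter

# Hypothetical knowledge-base articles (slug -> text).
ARTICLES = {
    "reset-password": "how to reset your account password and recover access",
    "billing": "update billing details invoice and payment methods",
    "api-limits": "api rate limits quotas and throttling errors",
}

def vectorize(text):
    """Turn text into a word-count vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def suggest_article(query):
    """Return the slug of the most similar article."""
    q = vectorize(query)
    return max(ARTICLES, key=lambda slug: cosine(q, vectorize(ARTICLES[slug])))

best = suggest_article("I forgot my password, how do I recover my account?")
```

An agent built this way deflects the routine "how do I…" tickets, leaving the genuinely complex ones for human agents.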

4. Security

Machine learning can help enterprises improve their threat analysis and how they respond to attacks and security incidents. ABI Research analysts estimate that machine learning in data security will drive spending on analytics, big data, and artificial intelligence to $96 billion by 2021. Predictive analytics enables the early detection of infections and threats, while behavioral analytics ensures that anomalies within the system do not go unnoticed. ML also makes it easy to monitor millions of data logs from mobile and other IoT-capable devices and to generate profiles for the varying behavioral patterns within your IoT ecosystem. This way, previously overstretched security teams can easily detect even the slightest irregularities. Organizations that embrace a risk-aware mindset are better positioned to capture a leading position in their industry, navigate regulatory requirements, and disrupt their industries through innovation.
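
A behavioral baseline of the kind described can be sketched with a simple z-score test on event counts; real security analytics use far richer features and models, and the login numbers here are invented:

```python
import statistics

def flag_anomalies(daily_counts, threshold=3.0):
    """Flag days whose event count deviates more than `threshold`
    standard deviations from the mean (a crude behavioral baseline)."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    return [i for i, n in enumerate(daily_counts)
            if stdev and abs(n - mean) / stdev > threshold]

# Thirteen ordinary days of logins, then a suspicious spike on day 14.
logins = [52, 48, 50, 51, 49, 53, 47, 50, 52, 48, 51, 49, 50, 240]
suspicious_days = flag_anomalies(logins)
```

Run per user or per device, even this crude baseline surfaces the irregularities a stretched security team would otherwise miss in millions of log lines.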

5. Collaboration

The key to getting the most out of machine learning in the enterprise lies in tapping the capabilities of both machine learning and human intelligence. ML-enhanced collaboration tools have the potential to boost efficiency, quicken the discovery of new ideas, and improve outcomes for teams collaborating from disparate locations. Nemertes' 2018 UC and collaboration study concluded that about 41 percent of enterprises plan to use AI in their unified communications and collaboration applications. Some use cases in the collaboration space include:
• Video, audio, and image intelligence can add context to shared content, making it simpler for customers to find the files they require. Image intelligence coupled with object detection, text and handwriting recognition helps improve metadata indexing for enhanced search.
• Real-time language translation facilitates communication and collaboration between global workgroups in their native languages.
• Integrating chatbots into team applications enables native language capabilities, like alerting team members or polling them for status updates.
That is just the tip of the iceberg; machine learning offers significant potential benefits for companies adopting it as part of their communications strategy to enhance data access, collaboration, and control of communication endpoints.


How to Deploy A Software Defined Network

Software Defined Networking (SDN) was a bit of a buzzword through the early and middle of this decade. The optimal network utilization promised by software-defined networking quickly captured the interest and imagination of information technology companies. However, progress was slow, because the general understanding of software-defined networking wasn't up to the mark, which led enterprises to make wrong choices and unsustainable strategic decisions upfront.


Where Does SDN Come Into the Picture?

SDN is still a nascent concept for several companies. The virtualization potential SDN offers for networks calls on IT leaders to improve their understanding of this software-heavy approach to network resource management. We hope this guide helps.

What is Software Defined Networking, After All?

You know and appreciate how software-managed virtual servers and storage make computing resource management more agile and dynamic for enterprises. Imagine the benefits enterprises could enjoy if the same capabilities were extended to the company's network hardware. That's what software-defined networking offers.

SDN vs Traditional Networking

SDN adds a software layer on top of the hardware layer in your company's network infrastructure. This allows network administrators to route network traffic according to sophisticated business rules. These rules can then be pushed out to network routers, so administrators no longer depend solely on hardware configuration to manage network traffic.

This sounds easy in principle. Ask any network administrator, though, and they will tell you that it's really difficult to implement, particularly in companies with mature, stabilized networking infrastructure and processes.




SDN Implementations Demand Upgrades in Network Management Practices

An almost immediate outcome of SDN implementation is your enterprise's ability to serve network resource demands quickly through software. To maintain transparency, the networking team needs to evaluate the corresponding changes this requires in, say, the day-end network allocation and utilization reports. This is just one of many situations where every SDN-linked process improvement must be matched by equivalent adjustments in related processes.



Managing De-provisioning Along the Way

At the core of SDN implementations is the enterprise focus on optimizing network usage and handling on-demand network resource requests with agility. While SDN implementations help companies achieve these goals fairly quickly, they often also cause unintended network capacity issues. Among the most common reasons: SDN engineers forget to implement rules for de-provisioning networks once the sudden surge in demand has been met. By making de-provisioning the last logical step in every on-demand resource allocation request, networking teams can ensure that SDN doesn't become an unintentional cause of network congestion.
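
The "de-provisioning as the last logical step" idea maps naturally onto a context manager; here is a sketch in which the provision/de-provision functions are hypothetical stand-ins for a real SDN controller API:

```python
from contextlib import contextmanager

ALLOCATED = set()  # stands in for the controller's view of live allocations

def provision(request_id, bandwidth_mbps):
    """Hypothetical controller call that allocates capacity."""
    ALLOCATED.add(request_id)

def deprovision(request_id):
    """Hypothetical controller call that releases capacity."""
    ALLOCATED.discard(request_id)

@contextmanager
def on_demand_capacity(request_id, bandwidth_mbps):
    """Allocate capacity and guarantee de-provisioning, even on failure."""
    provision(request_id, bandwidth_mbps)
    try:
        yield request_id
    finally:
        deprovision(request_id)  # always the last step of the request

with on_demand_capacity("burst-42", bandwidth_mbps=500):
    pass  # serve the surge in demand here

leftover = len(ALLOCATED)  # nothing lingers to congest the network
```

Structuring every allocation this way makes it impossible to forget the release step, even when the surge-handling code raises an error.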


Pursue 360-Degree Network Performance Visibility

It’s unlikely that your company will go for a complete overhaul of its network management systems and processes. So, it’s very likely that the SDN implementation will be carried out in a phased manner. Some of the key aspects of managing this well are:

  • Always evaluate how easily your existing network performance monitoring tools will let SDN plug into them.
  • Look for tools whose APIs allow convenient integration with SDN platforms.
  • Evaluate how your current network performance management tools will manage and integrate data from SDN and non-SDN sources.

Note – because hybrid SDN (a balance of traditional and software-defined networking) is the practical approach for most enterprises, implementations must accommodate the baseline performance monitoring goals of the enterprise. In fact, the introduction of SDN often requires networking teams to improve performance monitoring and reporting practices so that concrete, business-process-specific improvements can be measured and reported.
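
One way to integrate SDN and non-SDN monitoring data is to normalize both feeds into a single schema before baselining; a sketch, with field names invented for illustration:

```python
# Hypothetical samples as they might arrive from two monitoring sources:
# an SDN controller's API and a legacy SNMP poller.
sdn_samples = [
    {"device": "leaf-1", "utilisation_pct": 41.5},
    {"device": "leaf-2", "utilisation_pct": 73.0},
]
snmp_samples = [
    {"sysName": "core-1", "ifUtilization": 0.38},  # fraction, not percent
    {"sysName": "core-2", "ifUtilization": 0.91},
]

def normalize(sdn, snmp):
    """Fold both feeds into one schema so a single dashboard and a single
    baseline can cover SDN and non-SDN gear alike."""
    merged = [{"device": s["device"], "util_pct": s["utilisation_pct"]}
              for s in sdn]
    merged += [{"device": s["sysName"], "util_pct": s["ifUtilization"] * 100}
               for s in snmp]
    return sorted(merged, key=lambda r: r["util_pct"], reverse=True)

hotspots = [r["device"] for r in normalize(sdn_samples, snmp_samples)
            if r["util_pct"] > 70]
```

With one normalized view, utilization hotspots on hybrid infrastructure show up in a single report regardless of which side of the SDN boundary they sit on.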



Is SDN an Enterprise Priority Already?

The basic reason SDN is making its way into IT strategy discussions even for SMBs is that the nature of business traffic has changed tremendously. Systems have moved to the cloud-computing model, and there's a lot of focus on mobile accessibility of these systems.

In the days when systems operated mostly in a client-server configuration, the basic tree structure of Ethernet switches worked well. Enterprise network requirements today, however, demand more. SDN is particularly beneficial in enabling access to public and private cloud-based services.

SDN also augurs well for another strong enterprise movement: mobility. With SDN, network administrators can easily provision resources for new mobile endpoints while taking care of security considerations. Enterprise data volumes and information needs will only grow, and managing network optimization the traditional way, with many virtual machines and servers in play, would require tremendous investment. SDN makes it more manageable, even from a financial perspective.


Understand and Acknowledge Security Aspects of SDN

Make no assumptions. SDN is a major change in the way your company's network works. There are specific known risks of SDN implementations that consultants and vendors in this sphere can help you prepare for.

Protocol weaknesses are right at the top. A crucial question for the application security and network security teams to work on together is: do our application security routines accommodate the needs of the protocols used in the SDN platform? Another key security aspect is devising measures to prevent SDN switch impersonation.


Choosing External Vendors

The success of an SDN implementation is measured by the positive impact it has on business use cases. If and when you initiate discussions with external consultancies and vendors for your enterprise SDN implementation, evaluate them not only on their SDN knowledge but also on their ability to understand your business application ecosystem. This helps them implement SDN platforms that accommodate complex, highly sophisticated business rules for network resource allocation, which in turn significantly improves the project's chances of meeting all its goals.


Concluding Remarks

If SDN is on the strategic roadmap being followed by your enterprise, there’s a lot you can help with. Start with the tips and suggestions shared in this guide.



Author: Rahul Sharma

Personal Data Breach Response Under GDPR


Data security is at the heart of the upcoming General Data Protection Regulation (GDPR). It sets strict obligations on data controllers and processors in matters pertaining to data security while providing guidance on best data security practices. And for the first time, the GDPR introduces specific breach notification guidelines. With only a few months until the new regulation comes into effect, businesses should begin focusing on data security – not just because of the costs and reputational damage a personal data breach can cause, but also because the GDPR's new breach notification regime mandates the reporting of certain data breaches to affected individuals and data protection authorities.

What Constitutes a Personal Data Breach Under GDPR?

The GDPR describes a personal data breach as a security breach that leads to the unlawful or accidental loss, destruction, alteration, or unauthorized disclosure of personal data stored, processed, or transmitted. A personal data breach is by all means a security incident; however, not all security incidents fall under the same strict reporting requirements as a personal data breach. Such a distinction is not unusual in data security laws that require breach reporting; HIPAA, for example, makes the same distinction at the federal level for medical data. It aims to prevent data protection regulators from being overwhelmed with breach reports.

By limiting breach notifications to personal data (EU-speak for personally identifiable information, or PII), incidents that solely involve the loss of company data or intellectual property will not have to be reported. The threshold for whether an incident must be reported to a data protection authority depends on the risk it poses to the individuals involved. High-risk situations are those that can potentially lead to significant detriment – for example, financial loss, discrimination, damage to reputation, or any other significant social or economic disadvantage.

…it should be quickly established whether a personal data breach has occurred and to promptly notify the supervisory authority and the data subject.

– Recital 87, GDPR

If an organization is uncertain about who has been affected, the data protection authority can advise and, in certain situations, instruct it to immediately contact the individuals affected if the security breach is deemed high risk.

What Does The GDPR Require You to Do?

Under the GDPR, the roles and responsibilities of processors and data controllers are separated. Controllers are obliged to engage only processors capable of providing sufficient assurances that they will implement appropriate organizational and technical measures to protect the rights of data subjects. In the event of a data breach that poses a risk to the rights and freedoms of data subjects, the organization must report it without undue delay and, where feasible, within 72 hours of becoming aware of it.

The data processor is mandated to notify the controller the moment a breach is discovered, but has no other reporting or notification obligation under the GDPR. However, the 72-hour deadline begins the moment the processor becomes aware of the data breach, not when the controller is notified of it. A breach notification to a data protection authority has to at least:

  1. Have a description of the nature of the breach, which includes the categories and number of data subjects affected.
  2. Contain the data protection officer’s (DPO) contact information.
  3. Have a description of the possible ramifications of the breach.
  4. Have a description of steps the controller will take to mitigate the effect of the breach.

The information can be provided in phases if it is not available all at once.
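
The required fields above can be captured in a simple record that also checks the 72-hour deadline; this is an illustrative sketch, not legal advice, and the field names are our own:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class BreachNotification:
    """Minimal record of the notification fields listed above."""
    nature: str                 # categories / approx. number of subjects
    dpo_contact: str
    likely_consequences: str
    mitigation_measures: str
    aware_at: datetime          # when the organization became aware
    supplements: List[str] = field(default_factory=list)  # phased updates

    def within_deadline(self, notified_at: datetime) -> bool:
        """True if notification falls within 72 hours of awareness."""
        return notified_at - self.aware_at <= timedelta(hours=72)

n = BreachNotification(
    nature="Unauthorized disclosure; ~1,200 customer records",
    dpo_contact="dpo@example.com",
    likely_consequences="Possible phishing against affected customers",
    mitigation_measures="Credentials rotated; affected endpoint isolated",
    aware_at=datetime(2018, 5, 25, 9, 0),
)
on_time = n.within_deadline(datetime(2018, 5, 27, 9, 0))  # 48 hours later
```

The `supplements` list reflects the phased-reporting allowance: information gathered later can be appended without resetting the original record.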
If the controller determines that the personal data breach can potentially put the rights and freedoms of individuals at risk, it has to communicate information about the breach to the data subjects without undue delay. The communication should plainly and clearly describe the nature of the personal data breach and at least:

  1. Contain the DPO’s contact details or a relevant contact point.
  2. Have a description of the possible ramifications of the breach.
  3. Have a description of measures proposed or taken to mitigate or address the effects of the breach.

The only exception is if the personal data was encrypted and the decryption key was not compromised; in that case, there is no need for the controller to notify the data subjects.

The ideal way for companies to handle this GDPR obligation is not only to minimize breaches, but also to establish policies that facilitate risk assessment and demonstrate compliance.

The GDPR stipulates that records be kept of all personal data breaches, regardless of whether a breach needs to be reported. These records have to contain the details of the breach, its consequences and effects, and the follow-up actions taken to remedy the situation.

Should Ransomware Attacks Be Reported?

Ransomware typically involves the 'hijacking' of corporate data via encryption, with payment demanded to decrypt the ransomed data. Under the GDPR, a ransomware attack may be categorized as a security incident, but it does not necessarily cross the threshold of a personal data breach. A ransomware attack would only be considered a personal data breach if there is a backup but the outage directly impacts users' rights and freedoms, or if there is no backup at all. Generally, a ransomware attack where the ransomed data can be quickly recovered does not have to be reported.

What Are the Consequences of Non-Compliance?

Failure to comply with the GDPR's breach reporting requirements will not only result in negative PR, constant scrutiny, and possibly loss of business; it will also attract an administrative fine of up to €10 million or up to two percent of total global annual turnover for the preceding financial year. Additionally, failure to notify the supervising authority may indicate systematic security failures, which would constitute an additional breach of the GDPR and attract further fines. The GDPR lists factors the supervising authority should consider when imposing fines, chief among them the degree of cooperation between the data controller and the protection authority.

In Closing

Data breach notification laws are already firmly established in the U.S. These laws are designed to push organizations to improve their detection and deterrence of data breaches. The regulators' intention is not to punish but to establish a trustworthy business environment by equipping organizations to deal with security issues.

Author: Gabriel Lando

image courtesy of freepik