This blog post explains how to upgrade a FileCloud High Availability cluster using the FileCloud Offline Upgrade tool for Linux. At the moment, the FileCloud Offline Upgrade tool only supports CentOS 7 and RHEL 7 machines.
For this scenario, consider the following FileCloud architecture, which consists of:
2 x web servers
3 x MongoDB servers
1 x Solr server
The example used throughout this how-to blog post is based on FileCloud 20.1, which runs MongoDB 3.6. Starting with FileCloud 21.1, the MongoDB clusters must be upgraded manually, prior to the web node upgrades.
Upgrading FileCloud’s MongoDB Servers
We described how to upgrade MongoDB servers for Windows and Linux in a previous blog post. Here, we describe the steps to upgrade MongoDB with the FileCloud offline upgrade tool.
Step 1: Download the Upgrade Tools
First, download mongodb_upgrader_40_rpm.tgz and mongodb_upgrader_42_rpm.tgz onto the MongoDB servers. You will need to apply these upgrades in sequence (3.6 to 4.0, then 4.0 to 4.2).
mongodb_upgrader_40_rpm.tgz upgrades MongoDB to 4.0
mongodb_upgrader_42_rpm.tgz upgrades MongoDB to 4.2
Step 2: Create a Directory and Path
Create the directories below ($path can be any path location) and extract the tools into them:
mkdir -p $path/mongo40
mkdir -p $path/mongo42
tar -xzvf mongodb_upgrader_40_rpm.tgz -C $path/mongo40
tar -xzvf mongodb_upgrader_42_rpm.tgz -C $path/mongo42
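One MongoDB-specific step to keep in mind between the two stages: after all nodes are running the 4.0 binaries (and again before moving on to 4.2), the replica set's featureCompatibilityVersion must be raised on the primary. Here is a minimal sketch using pymongo; pymongo is assumed to be installed, and the connection string is an example that should point at your replica set primary:

# sketch: verify and raise featureCompatibilityVersion between upgrade stages
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # point this at the replica set primary

# check the current featureCompatibilityVersion
fcv = client.admin.command({"getParameter": 1, "featureCompatibilityVersion": 1})
print("current FCV:", fcv["featureCompatibilityVersion"])

# once all members run 4.0 binaries, raise the FCV so the 4.2 upgrade can proceed
client.admin.command({"setFeatureCompatibilityVersion": "4.0"})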
Run upgrader_offline_rpm.sh on the web nodes (you can skip the MongoDB upgrade option in upgrader_offline_rpm.sh, since the MongoDB servers are upgraded manually before the web nodes).
For the Solr nodes, select the Solr server option and skip the web server and MongoDB options.
Conclusion
Please note that this blog post is based on the sample architecture described at the start of the post. If you have a different architecture, please feel free to reach out for clarification at support@filecloud.com.
If you are a developer, chances are you've heard about or even used build automation software (also known as build tools). The purpose of this software is to automate tasks associated with building software. For example, a build automation tool can help you compile an app's source code, run its tests, create an installer, and even install the app on a remote server.
Build tools are an essential part of the DevOps process. They help save time, highlight potential issues, and ease the work of developers. There are scores of build automation tools on the market, so how do you know which ones to choose? Well, that's the purpose of this article. We've compiled a list of the top 10 build automation tools in 2019.
1. Jenkins
This is a Java-based, open-source build automation server. Jenkins has been around for over a decade and is used by many developers. It offers around 1,400 plugins that broaden what it can do. Jenkins can be used to compile source code, test, and deploy an app, among other things. You can run Jenkins as a servlet in a Java app server like Apache Tomcat, or launch it as a standalone app. Jenkins can even be used to distribute your app to different devices.
2. Apache Ant
This is another Java-based, open-source build tool. Ant has been around for nearly two decades now. Although it is considered "old," it is still very useful. Apache Ant is very flexible: you can customize it based on the tasks you need it to perform. Like other build automation software, Apache Ant can be used to compile the source code of an app and run it. Ant build files are written in XML, which is one of the reasons it is preferred by some developers.
3. Gradle
If you want a modern version of Apache Ant, I recommend Gradle. One of the main differences between the two is that instead of XML, Gradle uses a domain-specific language (DSL) based on Apache Groovy. Gradle is useful at every step of the app development process. It can do everything that Ant can do and much more, and it also supports incremental builds.
4. TeamCity
This Java-based build automation software was released by JetBrains in 2006. It is commercial software; however, you can request a free license if you are working on an open-source project. TeamCity has the same core features as other build tools. In addition, it offers up to 100 build configurations, and you can run up to three builds simultaneously. This is a powerful tool with a sleek, modern interface.
5. Maven
This app from the Apache Software Foundation has been around since 2004. Maven has been described as a modern version of Apache Ant. Although it is a Java-based build tool, it supports projects written in other programming languages. It relies on conventions for the build process, so you only need to specify the exceptions. With Maven, you can easily write plugins for a specific task, and you can use it for multiple projects concurrently. Maven project files (POMs) are written in XML.
6. Travis CI
This is an open-source continuous integration service used to build and test projects hosted on GitHub. The service comes with a vast library of pre-installed databases and services. It also tests pull requests before merging to avoid potential issues. Travis CI is written in Ruby but supports many different programming languages.
7. CMake
This open-source build automation software was released in 2000. You can use CMake to compile, test, and package cross-platform code. It is a versatile tool: it can be linked with third-party libraries and works with your native build environment. It is well suited to large C++ projects and to apps that pull in code from different libraries, and it generates a build directory tree for your app.
8. sbt
This is an interactive build tool released in 2008. sbt stands for Scala Build Tool. Although it is mainly used for Scala projects, it also supports Java. sbt provides all the standard capabilities you will find in other build automation software and more, but it is tailored to Scala projects. It also manages dependencies. sbt comes with several plugins, and you can add other features to the software.
9. MSBuild
This is a build automation tool from Microsoft that works with XML project files. It was released in 2003 as a free build tool and is now open source. MSBuild is part of the .NET Framework. You can configure the build process to perform specific tasks. MSBuild is similar to Ant in many ways, and many believe it is better. Although you can generate files for MSBuild from Visual Studio, doing so is not compulsory.
10. Bamboo
This build tool and continuous deployment server is written in Java and was released in 2007. Although it may not be as popular as some of the top build automation software, it is equally good. It can run multiple builds concurrently, and it provides an in-depth analysis of the problems with your software if bugs are found. It can import data from Jenkins and can be integrated with other software from Atlassian. This is premium software, and it is not open source.
These are some of the top build automation tools on the market. Although they are among the best build tools out there, there are many others that may be equally good. When choosing which build automation software to use, you must analyze the requirements of your project and the features that each tool provides. Some of the build tools highlighted above are better suited for teamwork than others, and some can be integrated with other apps. These are the things to consider when choosing a build automation tool. Ultimately, there is no "best" build tool; it all depends on which particular tool suits the project you're working on.
One of the most important and often misunderstood pieces of functionality in Microsoft Windows is the file and folder security permissions framework. These permissions not only control access to all files and folders in the NTFS file system, they also ensure the integrity of the operating system and prevent inadvertent and unauthorized changes by non-admin users as well as by malicious programs and applications.
Let's begin at the very beginning. The NTFS file system can be considered a hierarchical tree structure, with the disk volume at the top level and each folder being a branch off the tree. Each folder can contain any number of files, and these files can be considered leaf nodes, i.e., there can be no further branches off a leaf node. Folders are therefore referred to as containers, i.e., objects that can contain other objects.
So, how exactly is access to this hierarchy of objects controlled? That is what we will talk about next. When the NTFS file system was originally introduced in Windows NT, the security permissions framework had major shortcomings. It was revamped from Windows 2000 onwards, and that design is the basis of almost all the file permission security functionality present in modern Windows.
To begin, each object in the file hierarchy has a Security Descriptor associated with it. You can consider a Security Descriptor an extended attribute of the file or folder. Note that Security Descriptors are not limited to files; they also apply to other OS-level objects like processes, threads, registry keys, and so on.
At a basic level, a security descriptor contains a set of flags in its header, along with the Owner information and the Primary Group information, followed by two variable-length lists: a Discretionary Access Control List (DACL) and a System Access Control List (SACL).
Any file or folder always has an associated Owner, and no matter what, that Owner can always perform operations on it. The Primary Group exists only for compatibility with POSIX standards and can be ignored. The SACL specifies which users and groups get audited for which actions performed on the object. For the purposes of this discussion, let's ignore that list as well.
The DACL is the most interesting section of any Security Descriptor. You can consider a DACL to define the list of users and groups that are allowed or denied access to that file or folder. To represent each user or group along with the specific allowed or denied action, each DACL consists of one or more Access Control Entries (ACEs). An Access Control Entry specifies a user or group, which permissions are being allowed or denied, and some additional attributes. Here's an example of a simple DACL.
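As an illustration (the account names and rights below are hypothetical, since the original screenshot is not reproduced here), a simple DACL might contain entries like:
ACE 1: Allow  BUILTIN\Administrators   Full Control
ACE 2: Allow  CORP\Engineering         Modify
ACE 3: Deny   CORP\Contractors         Write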
So far, if you have been following along, this seems pretty straightforward, and it mostly is. But the way these permissions are applied to folders in practice introduces complexity, especially if you are unclear about how the permissions interact with each other.
Inheritance of Security Descriptors
If every object had its own unique copy of the Security Descriptor associated with it, things would be pretty simple but impossible to manage in practice. Imagine a file system with thousands of folders used by hundreds of users. Trying to set the permissions on each and every folder individually would quickly break down. If you needed to add or modify the permissions on a set of folders, you would have to apply the change individually to each and every file and folder in that set.
Thus was born the notion of inheritance. Now, not only is it possible to apply a permission (ACE) to a folder, it is also possible to indicate whether the permissions should "flow" to all child objects. So if the object is a folder, all subfolders and files inside that folder inherit the same permissions. See below for an example:
Here, when Folder 3 has some permissions set up, these permissions are by default inherited by its child objects, which include SubFolder 1, SubFolder 2, and so on. Note that this inheritance is automatic, i.e., if new objects are added to this section of the tree, those objects automatically include the inherited permissions.
Assuming those permissions were set on Folder 3, the DACL of any subfolder item now contains the same entries, marked as inherited.
You can recognize inherited permissions in any security dialog by their grayed-out entries. To edit them, you have to traverse up the tree until you reach the object where they are actually defined (in this case Folder 3, where you can actually edit the permissions). Note that if you ever edit the permissions on Folder 3, the new permissions automatically re-flow to the child objects without you having to set them one by one explicitly.
So if inheritance is such a cool thing, why would you ever want to disable it? That's a good question, and it brings us to setting up folder permissions for a large organization. In many organizations with groups and departments, it is pretty common to organize folders by group and then grant access to the folders based on the groups the users belong to.
In most cases, this kind of simple organization works fine. However, there will sometimes be folders belonging to a group or department that absolutely need complete security and should only be accessed by a select handful of people. For example, consider SubFolder 1 as a highly sensitive folder that should be fully locked down.
In this case, this subset of folders should be set up without inheritance.
Disabling inheritance at SubFolder 1 changes a few things. Permission changes made at parent folders like Folder 3 will never affect SubFolder 1 under any conditions. It is impossible to grant access to SubFolder 1 by adding a user or group at Folder 3. This effectively isolates SubFolder 1 into its own permission hierarchy, disconnected from the rest of the system. IT admins can then set up a small handful of specific permissions for SubFolder 1 that apply to all the contents inside it.
Order of Permission Evaluation
Having understood Security Descriptors and inheritance (as well as when inheritance should be disabled), it is now time to look at how all of this comes together. What happens when conflicting permissions apply to a file or folder? How do the effective permissions remain consistent in that case?
For example, consider an object (File 1) whose parent folder (Folder 3) allows JOHN to READ and WRITE, and these permissions are inherited by the child object further down the hierarchy.
Now, if JOHN is not supposed to WRITE to this child item and you, as an IT admin, add a DENY WRITE entry for JOHN directly on File 1, how do these conflicting permissions make sense and get applied?
The rules are pretty simple in this case; the order of ACE evaluation is:
• Explicit Deny permission entries applied directly on the object
• Explicit Allow permission entries applied directly on the object
• Inherited Deny permission entries from parent objects
• Inherited Allow permission entries from parent objects
The Windows OS always evaluates permissions in this order, so any entries placed on the object directly (explicitly) are considered before any inherited entries. The first matching entry that denies a requested permission stops the evaluation; otherwise, evaluation continues until all requested permissions have been allowed, and then it stops. Note that even among inherited permission entries, the entries from the nearest parent are evaluated before those from more distant parents, i.e., the distance from the child to the parent matters in the evaluation of the permissions.
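To make the ordering concrete, here is a simplified access-check sketch in Python that walks a DACL in canonical order (explicit entries before inherited ones, Deny before Allow at each level). This is an illustrative model only, not the actual Windows algorithm, and the example ACEs are hypothetical:

# simplified model of ACE evaluation order (illustration only, not the Windows implementation)
def check_access(aces, user_groups, requested):
    # aces must already be sorted in canonical order:
    # explicit Deny, explicit Allow, inherited Deny, inherited Allow
    remaining = set(requested)
    for ace in aces:
        if ace["trustee"] not in user_groups:
            continue
        if ace["type"] == "deny" and ace["rights"] & remaining:
            return False                   # the first matching Deny stops evaluation
        if ace["type"] == "allow":
            remaining -= ace["rights"]     # rights granted so far
            if not remaining:
                return True                # everything requested has been allowed
    return False                           # anything not explicitly allowed is denied

# File 1's DACL: an explicit DENY WRITE for JOHN, then entries inherited from Folder 3
file1_dacl = [
    {"trustee": "JOHN", "type": "deny",  "rights": {"WRITE"}, "inherited": False},
    {"trustee": "JOHN", "type": "allow", "rights": {"READ", "WRITE"}, "inherited": True},
]

print(check_access(file1_dacl, {"JOHN"}, {"READ"}))           # True
print(check_access(file1_dacl, {"JOHN"}, {"READ", "WRITE"}))  # False: the explicit Deny wins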
Applying Folder Security inside the network
If you thought that setting up permissions on folders is all you need for a network share, you are mistaken: you also need to create a folder share and specify permissions for the share. The final permissions for a user are a combination of the permissions applied on the share and the security permissions applied to the folders; the more restrictive of the two always applies (for example, if the share grants Read but NTFS grants Modify, the user effectively gets Read only).
So it is generally best practice to create the share, grant Everyone Full Control at the share level, and let all access be managed by the NTFS security permissions.
Applying NTFS folder security outside the network
It is simple to provide access to network folders over the LAN and apply these folder permissions efficiently. However, if you want to allow users to access these files from outside the LAN via a web browser, mobile apps, etc. while still enforcing NTFS file and folder permissions, then consider using FileCloud (our enterprise file sharing and sync product), which can effortlessly enforce these permissions while still providing seamless access.
Delta (differential) sync is a type of synchronization technology that only synchronizes the parts of a file that have been updated or changed. The general claim is that delta sync saves time and bandwidth by synchronizing just the changed parts instead of the whole file. For instance, say you have a 10 MB file and you change just 1 bit; instead of synchronizing the whole 10 MB file, it will only sync the changed portion. At first glance, this seems to provide huge benefits for large files. But it is largely a myth and a hyperbole promoted by vendors who espouse this technology for marketing purposes.
The Facts
The fact is that for most modern file types (especially the large ones), delta sync won't help at all. The simple reason is that the majority of widely used modern file types (PDF, JPEG, PNG, DOCX, PPTX, MP3, MP4, and many others) are compressed. Unfortunately, compression negates any benefit from delta sync. When a file is stored compressed, the save process runs the file through an algorithm that finds duplicate data and removes it. As a result, changing even a bit in a compressed file format typically changes a large portion of the file on disk. Also, you will be hard pressed to find a modern file format that is not compressed, especially among the large ones. For instance, if you are editing an image file (.jpeg), you will not see any difference between a regular sync and delta sync, because JPEG is a compressed format like many other image formats. The same applies to your video files and office file formats.
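You can see this effect with a quick experiment. The sketch below (illustrative only) compresses two buffers that differ by a single bit and counts how much of the compressed output changes; with DEFLATE-style compression, the streams typically diverge near the edit point and stay different, so a delta-sync client would re-transfer most of the file anyway:

# illustration: one changed bit in the source reshuffles most of the compressed stream
import zlib

data = b"The quick brown fox jumps over the lazy dog. " * 20000   # ~900 KB, very compressible
changed = bytearray(data)
changed[100] ^= 0x01                                              # flip a single bit near the start

c1 = zlib.compress(bytes(data), 6)
c2 = zlib.compress(bytes(changed), 6)

differing = sum(a != b for a, b in zip(c1, c2)) + abs(len(c1) - len(c2))
print(f"compressed sizes: {len(c1)} and {len(c2)} bytes")
print(f"differing compressed bytes: {differing} of {max(len(c1), len(c2))}")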
Where will Delta Sync Help?
Delta sync will help in scenarios where large files are stored uncompressed. The most typical case is log files for system administrators. For instance, web server access logs can often get quite big (MBs to GBs). This is an ideal case for delta sync. But then again, how often do you sync log files using your enterprise file sharing and sync solution?
The next time you see delta sync marketed as a way to save bandwidth and time, take those claims with a grain of salt. The real criteria to consider when evaluating an Enterprise File Sharing and Sync (EFSS) system are its security, ROI, custom branding, user experience, and fine-grained sharing controls. Delta sync should not be a criterion for evaluating an EFSS solution. It simply doesn't matter.
Despite being the latest buzzword in IT, the term DevOps still raises a lot of questions any time it's brought up. Simply put, DevOps is the combination of tasks performed by an organization's systems operations, development, and QA engineering teams across the entire service lifecycle, from design through the development process to production support. However, DevOps is considered by many as more of a belief or cultural approach that aims to foster improved communication between development and operations teams as more elements of operations become programmable. DevOps has strong similarities with Lean and agile approaches. The need to break down the barrier between operations and development has been accelerated by cloud computing. DevOps and cloud computing are mutually reinforcing strategies for delivering business value through technology.
At the turn of the century, enterprises began shifting their focus from efficiency and stability towards innovation and agility. In order to adapt to the changing face of the business market and increase delivery frequency, application delivery teams have to adopt concepts like experimentation, rapid iteration, collaboration, and Minimum Viable Product (MVP) deployment. DevOps successfully bridges this gap. A DevOps approach applies lean and agile thinking principles to all the stakeholders who develop, operate, or simply benefit from a company's software systems; this includes partners, suppliers, and customers. Cloud computing, whether on-premises or purchased as a service, combines infrastructure, services, and software to help organizations develop and deliver quality software at a much faster rate. The elastic properties of the cloud expedite scalability while DevOps streamlines and accelerates application releases; this is why the marriage of the cloud and DevOps is the perfect partnership.
Best Practices for DevOps in the Cloud
DevOps practices and principles form the foundation that enables enterprises to fully utilize cloud-based computing and to address and mitigate the inherent risks associated with the cloud. Companies that are capable of reliably building their infrastructure, provisioning servers, and deploying apps are in a better position to handle any challenge the cloud throws at them. However, IT professionals who practice DevOps in the cloud often make mistakes due to a rudimentary understanding of the best practices and the various deployment technologies.
A Forrester survey of 600 IT professionals on DevOps practices, and on how mature enterprises' adoption of those practices is, revealed that roughly 33 percent of teams consistently deliver in cycles of one to three weeks, and that the fastest teams generated higher business satisfaction than slower teams; a clear indication that quality is not sacrificed for fast delivery if the proper practices are implemented.
DevOps Team Assemble!
DevOps places a strong emphasis on collaboration between development and operations. Assembling a team of developers who have more interpersonal, operational, and communication skills than a regular head-down developer is the best way to break down organizational silos and build a more agile approach to application development and deployment. Developers are responsible for selecting and implementing new technologies and features, and they should be able to quickly respond to and address any issues that arise within existing systems. The operations team contributes the important expertise of how the technology behaves under live production conditions. If development and operations functions are separated, active collaboration is limited, leading to application problems that subsequently delay deployment.
The DevOps scope is much larger than the operations and development teams. It also has to include other stakeholders from the organization and the service provider. In order to realize the success of the enterprise through DevOps, the key stakeholders within the organization need to participate in cloud and DevOps training.
Automated Performance Testing
In cloud deployments, application performance issues are typically a result of flawed application design. Most of these performance issues are missed and end up going into production, where users eventually find them, which isn't good. Performance testing is a crucial aspect that should never be overlooked by the DevOps stream. The development team should adopt automated regression testing as a common practice, and ideally extend it to test-first approaches like behavior-driven development (BDD) and test-driven development (TDD). This ensures that the operations team receives a solution of sufficient quality before it's approved for release into production. Shifting away from manual testing improves quality, delivery speed, and testing accuracy, thus dramatically reducing cost. Automated testing should combine existing accuracy and stability testing with existing testing for user interfaces and APIs.
Incorporate Containers into the Cloud Strategy
The easily manageable and portable nature of containers makes their integration one of the best practices for DevOps in the cloud. Containers provide a way to ‘componentize’ applications, simplifying every step from development to deployment. However, it is prudent to consider cluster management, governance, security and orchestration tools for applications that leverage containers.
Continuous Integration and Deployment
Continuous deployment and continuous integration are effective techniques used in DevOps to eliminate unnecessary steps, delays, and friction between steps and to increase workflow. Cloud-based development can greatly benefit from automating deployments and frequently integrating changes. Continuous integration allows developers to safely create high-quality solutions in small, regular steps by providing immediate feedback on code defects, while continuous deployment allows them to minimize the time between a new feature being identified and being deployed into production. Continuous deployment and integration may increase operational risk if the development teams are not properly disciplined. For a continuous app delivery model to succeed, a strong management system must be put in place.
For the Cloud, by the Cloud
In order to take full advantage of the cloud, including platform as a service (PaaS) and infrastructure as a service (IaaS), applications have to be designed so that they are decoupled from physical resources. This is where the term 'infrastructure as code' or 'programmable infrastructure' comes into play. DevOps places a strong emphasis on the ability to build and maintain essential infrastructure components with automated, programmatic features. So from a DevOps perspective, infrastructure as code (IaC) includes the ability to build middleware, provision servers, and install the application code that makes up the core components of the system architecture. The use of loose architectural coupling within and between applications greatly reduces complexity and enables delivery in small increments. Designing around a decoupled architecture can improve the overall utilization and efficiency of cloud resources by up to 70 percent. Cloud computing subsequently helps save money, since you only pay for the resources you use.
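As a small illustration of infrastructure as code in its simplest form, the sketch below provisions a server programmatically rather than through a console. It assumes boto3 is installed and AWS credentials are configured; the region, AMI ID, and instance type are placeholder examples, not recommendations:

# sketch: provisioning a server as code (values are placeholders)
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # example AMI ID; replace with a real one
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "devops-iac-demo"}],
    }],
)
print("launched instance:", response["Instances"][0]["InstanceId"])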
Understanding customer problems is the most difficult aspect of creating new products and services. Traditional ways of gaining this understanding include talking to a focus group of customers or doing market research. But the downside of these one-time research methods is that they fail to account for evolving customer needs within changing business contexts. Being aware of these changing contexts is one of the most crucial factors in product design. This principle is even more relevant in software development than in the development of physical products.
At Codelathe, we follow a unique software development methodology that exposes the developers to customer problems throughout the software development process. This helps the developers empathize with the customers and create the right solutions. It also helps us to select the right set of features and keeps the product relevant in an evolving market.
We religiously follow this rule, and we don’t hire anybody who doesn’t believe in this process. We have this printed and posted in heavily trafficked places in our office. This method has worked very well for us, so we thought these simple rules would benefit other software companies as well.
Bridging the gap between development and operations can only be successfully accomplished using the right DevOps toolkit. With the help of the cloud, IT professionals are now using cloud DevOps tools to build efficient applications in the cloud on demand. These tools make it easier to share tasks and information and to automate multiple processes, significantly reducing deployment time and eventually helping enterprises get closer to the continuous deployment and integration ideals that DevOps has evolved around.
The most predominant tools in the cloud DevOps toolkit are configuration management platforms, which abstract infrastructure components into code in order to orchestrate and automate continuous delivery of both old and new environments. A DevOps team relies on configuration management to maintain a single source of consistent, documented system configuration. Enterprise infrastructure is becoming code; this means that instances can easily be spun up or down with a few clicks, and the control this provides is crucial for complex deployments. Aside from CM tools, the DevOps approach requires many more tools, typically open source, to combine application development and deployment into a more streamlined practice. These tools are used to standardize builds, improve collaboration between developers and infrastructure pros, and monitor systems.
According to software architects working with DevOps for the cloud, the use of DevOps makes them more aware of the impact of their software structure, or how applications are broken down into individual components on deployment, resulting in more efficient application design. Cloud DevOps tools not only help reduce the complexity of near-term deployments, but they can also help developers understand how to build flexible, agile applications. The tools below are not listed in any particular order.
Top 8 DevOps Tools
Git (GitHub)
Git is a revision-control (source code management) system for storing versions of code. It was originally designed and built in 2005 by Linux kernel developers for Linux kernel development. GitHub is a public hosting service for Git repositories, where code can be freely shared and downloaded. Git is probably the most popular source code management tool, and both are crucial for running DevOps environments.
Puppet
Puppet Enterprise, from Puppet Labs, is a configuration management system that allows cloud engineers to orchestrate data centers by automating time-consuming manual tasks such as the configuration and management of software and machines. Using Puppet, developers can ensure stability, reliability, and consistency at every step of the software delivery process. Puppet supports key DevOps practices, including continuous delivery and fostering communication between systems administrators and developers.
Docker
Docker appeals to DevOps practitioners who wish to build, ship, and run their applications anywhere. Docker's containerization technology makes applications portable while easing configuration management and control issues; applications run in self-contained units that can be moved across platforms. It comprises Docker Hub, a cloud service for workflow automation and application sharing, and Docker Engine, a lightweight packaging and runtime tool.
Juju
Juju is a cloud infrastructure automation tool that enables developers to build cloud environments with a few commands. Its best-practice Charms can be used by DevOps practitioners to handle configuration, maintenance, scalability, deployment, and management. Juju orchestrates services to assist in the deployment of hundreds of pre-configured services, OpenStack, or a workload on any private or public cloud. It works well with configuration management tools like Chef and Puppet.
Jenkins
Jenkins is an extensible continuous integration engine that allows DevOps engineers to monitor executions of repeated jobs. Using Jenkins, engineers can easily integrate changes to projects. Its top selling point is its ease of use, and it has a vast ecosystem of add-ons and plugins. It has also been optimized for easy customization. This open-source continuous integration engine plays a crucial role in testing new code before it's deployed.
Ansible
Ansible is a configuration management tool that is similar to Chef and Puppet. However, Ansible mainly appeals to DevOps engineers who are looking for the simplest way to automate infrastructure and applications. Ansible Tower adds a visual dashboard, graphical inventory management, job scheduling, and role-based access control, and it can easily be inserted into existing processes and tools. Additionally, Ansible has an excellent ecosystem, and automation jobs can be delegated to non-Ansible users via portal mode.
Nagios
Nagios is a seasoned monitoring solution that is highly effective due to its large open source community of contributors who are constantly building plugins for the tool. The importance of monitoring how changes to code affect the environment during application deployment cannot be overlooked. DevOps practitioners can use Nagios to identify and fix problems before they affect critical business processes.
Chef
Chef is one of the most popular infrastructure automation tools; it facilitates continuous delivery and configuration management. By converting infrastructure to code, Chef enables DevOps engineers to automate Infrastructure management, deployment and creation through short, repeatable scripts referred to as ‘recipes’. These recipes can manage unique configurations and automatically check and update nodes.
Monit
Monit is a system monitoring and recovery tool. It basically ensures every process on a machine is running as it should be. Monit handles automatic repair and maintenance and executes meaningful actions in error situations. For example, if there is a failure in Apache, Monit restarts the Apache process. Apart from monitoring general system resources on localhost, Monit also keeps an eye on daemon processes, directories, file systems and network connections to various servers.
Software Test Metrics
Software metrics are used to measure the quality of a project. Simply put, a metric is a unit used for describing an attribute; it is a scale for measurement.
Test metrics example:
How many defects exist within the module?
How many test cases are executed per person?
What is the Test coverage %?
Why Test Metrics?
Generation of Software Test Metrics is the most important responsibility of the Software Test Lead/Manager.
Test metrics are used to:
Decide on the next phase of activities, such as estimating the cost and schedule of future projects.
Understand the kind of improvement required for the project to succeed.
Decide on the process or technology to be modified, etc.
Types of Metrics
Base Metrics (Direct Measure)
Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics.
Ex: # of Test Cases, # of Test Cases Executed
Calculated Metrics (Indirect Measure)
Calculated Metrics convert the Base Metrics data into more useful information. These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project).
Ex: % Complete, % Test Coverage
Definitions and Formulas for Calculating Metrics
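The worked examples below refer to a sample data table in the original post; the underlying numbers, reconstructed from the calculations that follow, are:
Total test cases written: 100
Test cases executed: 65 (passed: 30, failed: 26, blocked: 9); not executed: 35
Defects identified: 30 (critical: 6, high: 10, medium: 6, low: 8)
Number of requirements (size): 5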
#1) %ge Test cases Executed: This metric is used to obtain the execution status of the test cases in terms of %ge.
%ge Test cases Executed =(No. of Test cases executed / Total no. of Test cases written) * 100.
So, from the above data,
%ge Test cases Executed = (65 / 100) * 100 = 65%
#2) %ge Test cases not executed: This metric is used to obtain the pending execution status of the test cases in terms of %ge.
%ge Test cases not executed =(No. of Test cases not executed / Total no. of Test cases written) * 100.
So, from the above data,
%ge Test cases not executed = (35 / 100) * 100 = 35%
#3) %ge Test cases Passed: This metric is used to obtain the Pass %ge of the executed test cases.
%ge Test cases Passed =(No. of Test cases Passed / Total no. of Test cases Executed) * 100.
So, from the above data,
%ge Test cases Passed = (30 / 65) * 100 = 46%
#4) %ge Test cases Failed: This metric is used to obtain the Fail %ge of the executed test cases.
%ge Test cases Failed =(No. of Test cases Failed / Total no. of Test cases Executed) * 100.
So, from the above data,
%ge Test cases Failed = (26 / 65) * 100 = 40%
#5) %ge Test cases Blocked: This metric is used to obtain the blocked %ge of the executed test cases. A detailed report can be submitted by specifying the actual reason of blocking the test cases.
%ge Test cases Blocked =(No. of Test cases Blocked / Total no. of Test cases Executed) * 100.
So, from the above data,
%ge Test cases Blocked = (9 / 65) * 100 = 14%
#6) Defect Density = No. of Defects identified / Size
(Here, "size" is measured in requirements, so defect density is calculated as the number of defects identified per requirement. Similarly, defect density can be calculated as the number of defects identified per 100 lines of code, or the number of defects identified per module, etc.)
So, from the above data,
Defect Density = (30 / 5) = 6
#7) Defect Removal Efficiency (DRE) = (No. of Defects found during QA testing / (No. of Defects found during QA testing + No. of Defects found by End user)) * 100
DRE is used to identify the test effectiveness of the system.
Suppose that during development and QA testing, we identified 100 defects.
After QA testing, during alpha and beta testing, the end user/client identified 40 defects that could have been identified during the QA testing phase.
Now, The DRE will be calculated as,
DRE = [100 / (100 + 40)] * 100 = [100 /140] * 100 = 71%
#8) Defect Leakage: Defect leakage is the metric used to identify the efficiency of QA testing, i.e., how many defects were missed/slipped during QA testing.
Defect Leakage = (No. of Defects found in UAT / No. of Defects found in QA testing) * 100
Suppose that during development and QA testing, we identified 100 defects.
After QA testing, during alpha and beta testing, the end user/client identified 40 defects that could have been identified during the QA testing phase.
Defect Leakage = (40 /100) * 100 = 40%
#9) Defects by Priority: This metric breaks down the number of defects identified by the severity/priority of the defect, which helps in judging the quality of the software.
%ge Critical Defects = No. of Critical Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge Critical Defects = 6/ 30 * 100 = 20%
%ge High Defects = No. of High Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge High Defects = 10/ 30 * 100 = 33.33%
%ge Medium Defects = No. of Medium Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge Medium Defects = 6/ 30 * 100 = 20%
%ge Low Defects = No. of Low Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge Low Defects = 8/ 30 * 100 = 27%
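For convenience, the formulas above can be wrapped in a short script. The sketch below is illustrative only and uses the sample numbers from this post:

# sketch: the metric formulas above expressed as simple calculations
def percent(part, whole):
    return round(part / whole * 100, 2)

# sample data used in this post
written, executed, passed, failed, blocked = 100, 65, 30, 26, 9
defects_found, requirements = 30, 5          # from the defect density example
defects_qa, defects_uat = 100, 40            # from the DRE / defect leakage example

print("% executed:       ", percent(executed, written))                     # 65.0
print("% not executed:   ", percent(written - executed, written))           # 35.0
print("% passed:         ", percent(passed, executed))                      # 46.15
print("% failed:         ", percent(failed, executed))                      # 40.0
print("% blocked:        ", percent(blocked, executed))                     # 13.85
print("defect density:   ", defects_found / requirements)                   # 6.0
print("DRE %:            ", percent(defects_qa, defects_qa + defects_uat))  # 71.43
print("defect leakage %: ", percent(defects_uat, defects_qa))               # 40.0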
Improvements that can be made to increase the quality of testing:
Test data should be readily available in the test case.
New features should always be tested first, to allow enough time for the developers to fix any bugs.
Cloud computing has proven to be transformative not only for large businesses, but also for small businesses and startups. Programs that once had to be installed on personal computers can now be accessed over the web, translating to millions of dollars in savings for small, medium, and large corporations. You can now easily leverage online software solutions in the form of SaaS without investing in an extensive infrastructure framework to support the underlying applications.
The resulting excitement sparked a widespread migration over the years, pushing the SaaS market value to $10 billion by 2010. The next couple of years saw tremendous growth, which doubled this value by 2015. Currently, about 54% of IT professionals have adopted SaaS applications in their organizations, and 14% are planning to join the bandwagon in the next 6 months or so.
Top Cloud Apps
Enterprises have adopted cloud applications due to increased efficiency, better efficacy, and consequently improved overall productivity. Such applications have, in fact, fueled migration particularly within small and mid-sized companies, which typically have limited IT budgets. Here are the top ones that are currently growing in popularity and creating ripples in the SaaS industry:
Office 365
This is an application that is gradually eliminating the need for traditional on-premises Windows server administration. At a monthly fee of $6 to $24 per user, you're granted full Office features including Word, PowerPoint, Access, and Excel. For an improved user experience, it also integrates with existing Active Directory environments, a feature that is particularly critical for enterprises that are Windows-centric.
Sage One
Built for micro and small businesses, Sage One offers a wide range of business administration features including expense management, project tracking and invoicing. Although it’s not entirely customizable, its intuitive workflow and solid interface have proven to be significantly helpful to enterprises, going by its growing popularity. All these features come at a cost of $24.99 MSRP.
MailChimp
Although it’s been around for a couple of years, MailChimp is still making news in 2015. It’s an application that has greatly revolutionized email marketing by granting enterprises a wide range of mass emailing tools. It allows business not only to design and create effective email campaigns, but to also send and track them. The measuring and tracking is facilitated by Google Analytics integration.
QuickBooks
Accounting being one of the principal business functions, enterprises cannot resist an application built to optimize it, and that's exactly what QuickBooks offers. It's an accounting service that comes with integrated features including creating business reports, setting budgets, creating VAT returns, and monitoring cash flow, all of which can be done remotely. It's particularly helpful for micro and small businesses that cannot afford to hire a dedicated team of accountants to create and maintain proper financial records.
Trello
Trello is changing how enterprises manage their projects through optimized tools. The free software allows its users to arrange tasks in cards according to their respective organizational workflows. The cards and their contents, including attached documents, are visible to all parties in an enterprise and can be edited on accounts with administrative privileges.
Salesforce Professional Edition
CRM experienced a major breakthrough with the introduction of Salesforce, an app built to effectively optimize and manage the entire process. It comes with a robust set of features including real-time data sharing, granular permissions, sales forecasts, email marketing, custom dashboards, reporting, analytics, and other effective customer service tools. It can therefore be leveraged by all types of businesses: small, medium, and large corporations.
Toggl
As a time management application, Toggl allows you to arrange your tasks accordingly and schedule your time depending on the projected task completion periods. You can also track the time spent on respective tasks by logging on as you execute each task. Overall, this app makes it easy for executive teams to track the time spent on individual tasks in their enterprises.
Since 2015 is regarded as a prime year for cloud providers and vendor startups, there are many other notable applications that have not been mentioned here. The best way to determine their suitability is to comprehensively assess your enterprise's needs and then use the findings to discern the most relevant applications from the rest.
Author: Davis Porter
Here’s something that you probably know – Microsoft has now discontinued support for 32 bit processors on their Windows servers. For Windows Server 2012 to function smoothly, the hardware requirements need to be a 64 bit processor. Having fast processors will provide increased speed and more memory space. After picking the right hardware and installing the 2012 edition of the Windows Server, do you know the most important steps after that? Here are the top 10 steps.
1. Change the Computer Name: Post deployment, you'll be logged in as the administrator by default. In the Server Manager window that is already open, click 'Local Server' in the left pane. Then, in the Properties column on the right, click the name displayed next to the Computer Name option. In the System Properties box that appears, make sure the Computer Name tab is selected, click the Change button, and enter the new name in the Computer Name field. Click OK once done, and click OK again on the information box that pops up.
2. Enable Remote Connections: In the System Properties section, select the Remote tab, then choose the 'Allow remote connections to this computer' radio button. A warning box is displayed; click OK. Additionally, you can control which Remote Desktop clients may connect by selecting or clearing the checkbox that restricts connections to clients using Network Level Authentication. Once this is completed, click Close to save. A confirmation box pops up; click Restart Later.
3. Integrate with the Network: Next, in the Server Manager Properties window, click the designated IPv4 address. In the Network Connections section that opens, right-click the NIC that will be connected to the network and choose Properties from the list that appears. In the Properties box, double-click the Internet Protocol Version 4 (TCP/IPv4) option. In the box that comes up, select the 'Use the following IP address' radio button, then fill in the static IP address.
4. Enter the DNS Server Address: If you are running a DNS server, populate the DNS field with its IP address. Generally, if this is the initial domain installation in the network, no DNS servers will be available yet. However, if you intend to introduce a new DNS server on the Active Directory domain controller itself, populate the field with the same IP address used before. Once this stage is complete, click OK.
5. Disable the TCP/IPv6 Option: Back in the NIC Properties section, save the modifications by clicking OK. Additionally, you can clear the TCP/IPv6 checkbox to avoid extra processing and memory usage if IPv6 is not in use. Now you can exit the Network Connections section.
6. Set the Time and Date: In the Server Manager section, check whether the time zone is accurate and click on it to make the necessary changes. Open the Date and Time section and, under the Time Zone section, click the Change Time Zone button. From the dropdown list that appears, select the correct time zone for your geographical region. Click OK on all the open boxes to return to the Server Manager section.
7. Windows Update Configuration: Configuring the Windows Update settings is critical to protect your server. To begin, click 'Not Configured' next to Windows Update. On the screen that appears, click 'Turn on automatic updates'. This ensures that Windows will look for updates that have not yet been applied to the system and install them automatically. You can also customize the times at which such updates are applied, as some updates require the entire system to be restarted. To change the settings, go to the left side of the window pane and click Change Settings.
8. Update Servers: Admins often need supporting tools so that the servers match the rest of the environment they run in. Windows Server 2012 has the necessary tools already installed, but for other server variants they need to be downloaded.
9. Firewall: If you prefer to disable the host-based firewall, open the Windows Firewall configuration page. On the left-hand side of the window, click 'Turn Windows Firewall on or off', then select the 'Turn off Windows Firewall' radio button for each network profile to disable it for all networks.
10. Anti-Virus Installation: This is an important step in ensuring the security of your servers. If you don't have a preferred anti-virus solution, you can download a free trial version to begin with.
Once the entire process is completed, close the Server Manager window and restart the computer so the changes take effect. Wait for the system to restart; once it does, Windows Server 2012 is up and ready to use. This may not be the full list of steps, but it gives you the basics to get started.