CloudSigma endeavors to deliver a high degree of security and privacy for customers across all aspects of their computing. This commitment is reflected in CloudSigma’s ISO-27001 certification, and we regard it as a top priority. We are committed to openness and transparency with respect to our security procedures and policies. This post therefore presents a detailed overview of CloudSigma’s security and business continuity features.
Network Security & Traffic Separation (Data in Transit)
CloudSigma’s cloud leverages the open source KVM hypervisor, which provides full separation of all traffic between client accounts below the virtual machine level. No end user can view traffic from any other end user. This is achieved through full packet inspection of all incoming and outgoing VM packets by Linux KVM, which implements a virtual switch for every networking interface of each VM. Acceptable traffic routes (i.e. to other VMs in the user’s account) are instantiated on boot and updated as VMs are added to and removed from networks (i.e. end user private networks in the cloud).
In addition, end users can apply virtual firewalls at the hypervisor level to enforce additional rules.
Storage Separation (Data at Rest)
Users can easily keep data private and secure using two different approaches. The first is to fully encrypt the operating system/file structure using technologies such as LUKS (dm-crypt) for Linux distributions or TrueCrypt for Windows environments. While this approach doesn’t eliminate the potential for data leakage, it does render any leaked data completely unusable to others. However, it can be somewhat disruptive: if an encrypted server crashes, manual intervention is required to unlock the encrypted data on reboot.
The second approach is to apply encryption to the drive on creation. This eliminates the possibility of data leakage and ensures that any new data is automatically encrypted as it is written. Encryption can be enabled via the API or WebApp when creating a new drive. It should be noted that this approach may have a small impact on performance; customers can always configure their servers with an unencrypted system drive and a fully encrypted data drive.
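As a sketch of how encryption-on-creation might be requested programmatically, the snippet below builds a drive-creation request body with an encryption flag. The `encrypt` field name and payload layout are assumptions for illustration only; the exact schema is defined in the CloudSigma API documentation.

```python
import json

def encrypted_drive_request(name, size_gb, encrypt=True):
    """Build an illustrative drive-creation request body.

    The field names ("media", "encrypt") are assumptions, not the
    verified CloudSigma API schema.
    """
    return json.dumps({
        "objects": [{
            "name": name,
            "size": size_gb * 1024 ** 3,  # drive size expressed in bytes
            "media": "disk",
            "encrypt": encrypt,           # request at-rest encryption on creation
        }]
    })

body = encrypted_drive_request("data-drive", 100)
```

A separate unencrypted system drive would simply omit the flag, matching the mixed-performance setup described above.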
CloudSigma customers are able to use Google’s two-step authentication to log into their accounts. Two-step verification increases the security of access to the cloud platform account by requiring a six- to eight-digit one-time code, which users must provide in addition to their username and password in order to log into the cloud platform UI. The feature is currently available via an API call and will soon be exposed in the WebApp. It is disabled by default and can be activated by individual customers as desired.
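The six- to eight-digit code described above follows the standard time-based one-time password scheme (TOTP, RFC 6238), which Google Authenticator also implements. A minimal stdlib-only sketch of how such a code is derived:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, now=None):
    """Derive an RFC 6238 time-based one-time password (sketch).

    secret_b32 is the shared secret in base32, as typically shown
    when enrolling a device.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 reference secret (`12345678901234567890` in base32) and time 59 s, this yields the published test value `94287082` for eight digits.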
Root Access & Operating System Security
Customers retain sole access to their data at the file system level; the CloudSigma system does not have access inside VMs or drives. All customer data is handled automatically by our system, including activities such as drive deletion and scheduled deletion (for deprecated accounts). CloudSigma makes no copies of client drive data; the sole copy therefore resides in our cloud unless the customer chooses to clone the drive to another storage system or location.
Via the drives marketplace, preinstalled images of a large selection of operating systems are provided. These images are patched regularly to address security vulnerabilities, enabling end users to deploy secure, virus- and vulnerability-free operating systems for their VMs on first boot.
CloudSigma is ISO-27001 certified, including all areas of sales, operations and support, as well as being PCI-DSS compliant. A copy of the latest ISO-27001 certificate can be obtained upon request. In addition, the CloudSigma cloud is certified by Canonical as a certified Ubuntu Public Cloud.
CloudSigma is currently in the process of obtaining ISO-9001 certification. At present, CloudSigma applies internal quality management procedures to processes relating to the creation and quality control of the products and services offered by the company. We use a combination of methodologies and management tools to ensure customer requirements and expectations are continuously monitored and met. The heads of each department are responsible for the implementation of all quality management procedures. They also need to ensure the management system is compatible with the ISO-9001:2008 standard and with other certifications we already hold, such as ISO-27001.
An integrated management interface is the centralized system we use to manage and monitor the cloud, from both an operations and account management perspective. There are different access levels defined by the separate user roles and rights. Team members are trained and kept up to date on the different components and metrics used. Then, they are granted an access level based on their roles.
An agile framework provides us with a group of software development methods in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. Retaining short-term flexibility through an agile approach reduces the risk of failure and surfaces issues earlier, before they threaten the success of a proposal. The iterative sprint process makes it possible, over time, to forecast the work effort required for each deliverable, allowing the product owner to fine-tune the product roadmap. Being agile also improves the trade-off between completeness of product and release timing: it is possible to release more frequently and to iterate faster.
The second facet of our engineering approach is the set of systems in place to manage software deployment in a secure and reliable manner, complementing the agile methodologies discussed. Deployment is managed across three separate environments: development, acceptance testing, and production. The main source code repository is managed through the Mercurial source code management tool. The updated codebase is verified through the Jenkins continuous integration tool, which tests each check-in via an automated build and runs a sequence of integration and unit tests on the code.
On the integration servers we run a suite of user-level acceptance tests that primarily monitor performance. If these tests pass, the code is added to the Mercurial production repository. At this point the code becomes subject to an internal code review by a developer who has not been involved with the code base. Once this is signed off, the code is sent to a third and final Mercurial repository, ready for deployment into the production environment.
Risk management is applied in tandem with our agile approach and captured with the following four elements: risk description, probability, size of loss (measured in days or story points) and exposure. Risks are reevaluated at each sprint, with a single consolidated risk value created.
All customers of the CloudSigma platform are entitled to perform security, operations and processes auditing in relation to the services that we provide. The audit can be performed by the customer or a third party authorized by the customer. Please note the following:
- any audits shall be executed at the cost of the customer, including but not limited to charges that we have incurred during this process;
- the data center can be visited and access can be granted only after an advance notice of two weeks prior to the day of visit;
- in order to conduct the audit, the customer or their third-party auditor shall be accompanied by a CloudSigma staff member.
The CloudSigma cloud supports partial or full (i.e. boot-level) encryption of virtual drives. With this in mind, we recommend as a best practice that end users perform boot-level encryption of sensitive data and retain the keys outside our cloud. The cloud platform currently supports a number of customers running fully encrypted data storage in conjunction with their services in the cloud. End users can also connect to their VMs using encrypted protocols to ensure the integrity of login and other data they transmit to and from their servers.
A typical end user use case for encryption is a hosted processing provider storing sensitive end user information. Another is a service provider that wishes to store proprietary data with additional protection. In these cases, an encrypted partition can be created for that specific data, or a separate virtual drive with full file system encryption can be used. In this way the end user providing the service can combine best performance for data not needing encryption with high security for the data that does.
CloudSigma has extensive experience of encrypting drive data using numerous encryption approaches, such as Cryptsetup, dm-crypt, FDE, TrueCrypt (VeraCrypt), as well as lower-level block storage encryption via ZFS and is happy to work with end users to ensure the right encryption is implemented to reflect their requirements.
Secure access to end-user VMs is facilitated using SSH key pairs. These allow users to run commands on a machine’s command prompt without being physically present at the machine, establishing a secure channel over an insecure network.
The SSH key creation covers the following three scenarios:
- CloudSigma support team can generate a public and a private SSH key for the customers.
- Customers can generate the SSH keys themselves and upload only the public key in their CloudSigma account. In this scenario customers take the responsibility for the protection and access of the private key. This option is provided for customers that are especially concerned about security in the cloud.
- Customers can generate the SSH keys themselves and upload both SSH keys in the CloudSigma account. Currently, this scenario doesn’t provide additional benefits. However, in the near future, an SSH console (similar to the VNC console today) will be opened automatically in the WebApp. This option will be only available for customers that have uploaded both their public and private SSH keys to their CloudSigma accounts.
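The second scenario above, where only the public key is uploaded, can be sketched as follows. The endpoint payload and field names are assumptions for illustration, not the verified CloudSigma API schema; the sample key material is obviously truncated and hypothetical.

```python
import json

def keypair_upload_request(name, pubkey_line):
    """Build an illustrative request body registering only the PUBLIC half
    of an SSH keypair; the private key never leaves the user's machine.

    pubkey_line is a standard OpenSSH public-key line,
    e.g. "ssh-ed25519 AAAA... user@host".
    """
    algo, key_b64 = pubkey_line.split()[:2]  # drop the trailing comment
    return json.dumps({
        "objects": [{
            "name": name,
            "public_key": f"{algo} {key_b64}",
        }]
    })

# hypothetical, truncated key for illustration only
body = keypair_upload_request("laptop", "ssh-ed25519 AAAAC3Nza... user@laptop")
```

Keeping the private key local is what makes this the preferred option for security-sensitive customers.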
Access Control Lists/Policies
Access control lists (ACLs) are meant to segment account control rights and access to the different operational aspects. With this feature the account administrators can allow access to different resources or a group of resources across the account. The account administrator delegates permissions to each account and lets each user log in to the web console with their own user credentials. Examples of delegated abilities:
- Provide accounting with access to billing, but not to edit any server/networking resources.
- Give junior sysadmins access to start/stop servers, but not to create or delete anything.
- Provide senior sysadmins with access to fully manage the architecture, but not access to billing.
- Give the operations team access to firewall policies and networking, but not to servers.
- Provide a team with full access to their servers (using server tagging), but not any of the other resources.
ACLs enable a very granular control over the account’s permissions and budget, resulting in higher levels of transparency and security. For each module, it is possible to delegate either read-only or read-write permission. It is also possible to delegate permission on individual resources, for example, a server or set of drives.
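The delegation examples above can be modeled as a simple permission lookup. The `read`/`write` levels and module names below are illustrative only, not the exact CloudSigma ACL schema:

```python
def can(grants, module, write=False):
    """Check whether a user's grants permit an action on a module.

    grants maps module name -> "read" or "write"; a missing module
    means no access at all. Illustrative model of per-module ACLs.
    """
    level = grants.get(module)
    return level == "write" or (level == "read" and not write)

# Mirrors two of the delegation examples in the list above:
accountant = {"billing": "write"}            # billing only, no server access
junior_sysadmin = {"servers": "read"}        # can view/start/stop, not create
```

Per-resource delegation (e.g. a single server or set of drives) would extend the same lookup with a resource identifier alongside the module.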
CloudSigma implements comprehensive logging across all its infrastructure deployments. All infrastructure components log all critical system functions (for example, access and data-impacting actions), indexed by user. Logs are retained locally on the infrastructure component and replicated to a central repository using the Kibana logging tool. Logs include networking activity, as well as key application and operating system events. Logs are retained for a minimum of one year onsite, with logs retained for up to two years upon request.
Software upgrades and system patches at both the operating system and application layer are achieved without service disruption, thanks to the redundant and clustered architecture of the solution. System patching, including security updates, is subject to our security and change management procedures, covered by CloudSigma’s ISO-27001:2013 certified processes.
DDoS Protection Measures
- Implement additional rules for fraud payment prevention (e.g. a limit on the number of payment attempts per new account, such as 5, applied only while the account is less than a week old)
- Apply an ISP approach for safety: traffic shaping (a policy limiting the number of packets and throughput). Upon request, that policy can be adjusted for a particular client or set of clients
- Blacklisting of IP addresses in the event of an attack
- Maintenance of significant spare external IP connectivity to absorb malicious traffic
- Additional firewall measures both at our edge and internally
- Obfuscation of and removal (in some cases) of public IP connectivity from core cloud infrastructure where possible to avoid targeting of key cloud infrastructure assets
- Externally hosted cloud status page allowing status updates even during a potential total outage (see http://status.cloudsigma.com/)
- Using IP proxies on core services and other measures that can’t be shared publicly
- Automatic blocking of DDoS attacks against our clouds.
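The traffic-shaping policy mentioned above (limiting packets and throughput per client) is commonly implemented as a token bucket. A minimal sketch, not CloudSigma's actual shaper:

```python
import time

class TokenBucket:
    """Allow at most `rate` packets per second with bursts up to
    `capacity`; excess traffic is dropped. Illustrative only."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity      # start with a full burst allowance
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Making the `rate` and `capacity` per client is what allows the policy to be relaxed for a particular customer on request, as described above.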
Backup and Recovery Tools
The CloudSigma platform offers the ability to create live snapshots, as well as automated backups of virtual drives to a second site, and other such data management and recovery tools. The backup system allows a policy to be created that specifies the backup frequency as well as the retention policy. In this way, an end user can create a policy that, for example, backs up every hour and retains the last 48 recovery points (i.e. 2 days of incremental backups). Such policies can then be applied to one or more drives. The user can create multiple policies, allowing different system requirements and data management policies to be maintained within their cloud infrastructure. The cloud billing system allows the purchase of backup/archive capacity alongside mainline compute storage, so archive storage purchasing requirements can be managed separately.
The system includes the ability to restore any recovery point from a drive back to the primary storage medium.
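The hourly-backup policy with 48 retained recovery points can be expressed as a simple retention calculation. This is an illustrative model of how such a policy behaves, not CloudSigma's implementation:

```python
from datetime import datetime, timedelta

def apply_retention(recovery_points, retain):
    """Keep the newest `retain` recovery points; return (kept, expired),
    each sorted newest-first."""
    ordered = sorted(recovery_points, reverse=True)
    return ordered[:retain], ordered[retain:]

# Hourly recovery points accumulated over 3 days, policy retains 48 (2 days)
points = [datetime(2022, 1, 1) + timedelta(hours=h) for h in range(72)]
kept, expired = apply_retention(points, 48)
```

Attaching one such policy object to several drives, or defining multiple policies with different frequencies, gives the per-drive flexibility described above.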
Live Drive Snapshots
This feature enables users to create point-in-time snapshots of their drives, which can later be cloned and upgraded to create stand-alone drives. Customers can again create custom snapshot policies, which include automated snapshot frequency and retention parameters. Snapshots are priced simply by the underlying storage size occupied by each snapshot, meaning end users only pay for the delta (i.e. the difference between the snapshot and the source drive) over time. Unlike cloning of drives, snapshots can be created while the server is running. By using snapshots, customers can protect themselves from data corruption or use them for auditing purposes.
An advanced snapshot management feature allows customers to create snapshot management policies and apply them to one or more drives. In this manner, customers are able to automate the snapshot process.
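Delta-based snapshot pricing can be illustrated with a simplified block-level model (real snapshot accounting is more involved): a snapshot only occupies storage for blocks the drive has since overwritten.

```python
BLOCK_SIZE = 4096  # illustrative block size in bytes

def snapshot_delta(snapshot, drive, block_size=BLOCK_SIZE):
    """`snapshot` and `drive` map block index -> content hash.

    The billable delta is every snapshot block whose content no longer
    matches the live drive (i.e. data preserved only by the snapshot).
    """
    changed = [i for i, h in snapshot.items() if drive.get(i) != h]
    return len(changed) * block_size

snap = {0: "a", 1: "b", 2: "c"}     # drive contents at snapshot time
drive = {0: "a", 1: "x", 2: "c"}    # block 1 rewritten after the snapshot
```

Immediately after creation the delta is zero, which is why a fresh snapshot of a quiet drive costs almost nothing.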
Live Continuous Backup/Migration/Recovery
CloudSigma offers an installable agent that provides a continuous live backup and migration solution. This solution allows for the live migration of servers both into and out of the cloud, as well as for Disaster Recovery as a Service and data backup solutions. The end user can keep a secondary environment ready to switch over instantly, as required by their failover scenario. The solution provides a comprehensive set of configuration and monitoring tools to fully automate its deployment, along with a workflow automation tool that allows automated recovery of even complex infrastructure deployments upon disaster.
For example, a user can mirror a virtual or physical environment outside of the cloud to the cloud using it as a secondary disaster recovery location. Alternatively an end user can mirror cloud to cloud to enable a highly resilient dual site set-up.
Often when customers move from private to public infrastructure, they suffer a loss of visibility over the physical infrastructure on which their computing resides. This in turn can allow single points of failure to creep into an end customer’s deployment without them realising it. For this reason, CloudSigma incorporates an advanced ‘avoid’ functionality as an integral part of its cloud functionality. End users can create a high availability architecture that avoids single points of failure at the infrastructure level. When utilised, this feature allows any drive or server being provisioned to be separated from one or more other drives or servers, ensuring that the infrastructure does not reside on the same physical system. This allows true high availability cluster setups to be provisioned, for example a clustered database server setup.
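The ‘avoid’ constraint amounts to anti-affinity placement. The greedy sketch below shows how a scheduler might honour such rules; it is illustrative only, not CloudSigma’s placement algorithm.

```python
def place_with_avoid(resources, avoid_pairs, hosts):
    """Assign each resource to a host while honouring 'avoid' rules
    (pairs that must not share a physical host).

    Returns a dict resource -> host, or raises if no valid host exists.
    """
    avoid = {}
    for a, b in avoid_pairs:
        avoid.setdefault(a, set()).add(b)
        avoid.setdefault(b, set()).add(a)

    placement = {}
    for r in resources:
        # hosts already holding a resource this one must avoid
        banned = {placement[o] for o in avoid.get(r, ()) if o in placement}
        for h in hosts:
            if h not in banned:
                placement[r] = h
                break
        else:
            raise RuntimeError(f"no host satisfies avoid rules for {r}")
    return placement
```

For the clustered-database example, an avoid rule between the two database servers guarantees they land on different physical machines.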
CloudSigma supports all types of network traffic on customer private networks. This includes multicast and broadcast, which are critical in many cases to run high availability heartbeat protocols over the network.
Sometimes users need to create many cloud servers at once. In this case, they can make use of the bulk server clone and start feature, which combines cloning and starting of (e.g. template) servers without a limit on the number of clones. Once the cloning is complete, the system can be set to automatically start the cloud servers one by one.
In addition, all new cloned servers are tagged with a tag, which can be either automatically generated or customer-specific. Last but not least, this whole process requires just a single API call for customers integrating directly against the API. As a result, the customer experience even for large scale-up operations is improved, and the time required for bulk actions is decreased tremendously while avoiding human error. Finally, by accepting bulk action requests, the cloud management system can handle the workload better than when bombarded with numerous repetitive action requests, which are harder to manage holistically.
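A single bulk clone-and-start request of the kind described might look as follows. The field names and auto-generated tag format are assumptions for illustration, not the exact CloudSigma API schema.

```python
import uuid

def bulk_clone_request(template_uuid, count, tag=None, autostart=True):
    """Build one request body cloning `count` servers from a template,
    tagging every clone, and optionally starting them sequentially.

    Illustrative only: field names are not the verified API schema.
    """
    if tag is None:
        # auto-generated tag so the batch can be managed as a unit
        tag = f"bulk-{uuid.uuid4().hex[:8]}"
    return {
        "source": template_uuid,
        "count": count,
        "tag": tag,              # applied to every resulting clone
        "autostart": autostart,  # start clones one by one once created
    }

req = bulk_clone_request("11111111-2222-3333-4444-555555555555", 50, tag="web-tier")
```

Because everything needed is in one request, the platform can schedule the whole batch holistically instead of processing fifty independent clone calls.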