THINWORX's Server-based Computing (SBC) Solution centralizes the delivery and management of enterprise applications while ensuring secure, on-demand access for local and remote users.
THINWORX's SBC architecture removes applications from the desktop and consolidates them on a central server, where administrators can deploy, manage and support them from a single, secure point. In a server-based configuration, PCs become terminals, or 'thin clients.'
Virtualization has begun to transform the way enterprises deploy and manage their infrastructure, providing the foundation for a truly agile enterprise: by using resources efficiently, IT can deliver an infrastructure that is flexible, scalable and, most importantly, economical.
The x86 architecture has proven to be the dominant platform in enterprise computing, moving from its humble beginnings in desktop systems to now powering the large enterprise applications that run businesses across the globe. The current generation of x86 CPUs includes features such as large-scale multi-threading with 8 or more processing cores, support for large memory systems with NUMA and integrated memory controllers, high-speed CPU interconnects, and chipset support for advanced reliability, availability and serviceability (RAS) features.
In order to provide a secure operating environment, the x86 architecture provides a mechanism for isolating user applications from the operating system using the notion of privilege levels.
In this model the processor provides four privilege levels, also known as rings, arranged in a hierarchical fashion from ring 0 to ring 3. Ring 0 is the most privileged, with full access to the hardware and the ability to execute privileged instructions. The operating system runs in ring 0, with the operating system kernel controlling access to the underlying hardware. Rings 1, 2 and 3 operate at lower privilege levels and are prevented from executing instructions reserved for the operating system. In commonly deployed operating systems such as Linux and Microsoft Windows, the operating system runs in ring 0 and user applications run in ring 3; rings 1 and 2 have historically not been used by modern commercial operating systems. This architecture ensures that a compromised application running in ring 3 cannot make privileged system calls; however, a compromise of the operating system running in ring 0 exposes the applications running in the less privileged rings.
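To make the ring separation concrete, the minimal sketch below (an illustration, not part of the THINWORX product) executes a ring 0-only instruction, HLT, from an ordinary ring 3 Linux process on an x86 machine built with GCC or Clang; the CPU raises a general-protection fault, which the kernel delivers to the process as SIGSEGV instead of letting the instruction run.

```c
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_fault(int sig) {
    (void)sig;
    static const char msg[] = "privileged instruction trapped in ring 3\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(0);
}

int main(void) {
    signal(SIGSEGV, on_fault);   /* the general-protection fault arrives as SIGSEGV */
    __asm__ volatile("hlt");     /* HLT is ring 0-only; in ring 3 it faults here    */
    puts("not reached");
    return 0;
}
```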
While this model provides benefit for traditional "bare metal" deployments, it presents challenges in a virtualized environment.
In a virtualized environment the hypervisor must run at the most privileged level, controlling all hardware and system functions. In this model the virtual machines run in a lower privileged ring, typically in ring 3.
Desktop virtualization, commonly referred to as Virtual Desktop Infrastructure (VDI), aims to reduce the total cost of ownership (TCO) of desktop management while providing an end-user experience equivalent to or better than that of a physical PC. This includes managing virtual machines in the datacenter and, eventually, on local devices through a client virtualization platform.
For desktops hosted in the datacenter, the screen, keyboard and mouse must be 'displayed' to a remote endpoint. The display protocol performs this function and is one of the primary factors defining the quality of the end-user experience when moving application windows, scrolling through documents or accessing rich media content. IT organizations have historically struggled to deliver a full-fidelity experience to end users with traditional display protocols. These challenges have reduced the reach and limited the possible use cases for desktop virtualization in most organizations.
Currently, the following popular solutions are available in the VDI industry.
VMware View with the PCoIP protocol was designed to deliver an uncompromised desktop experience to a broad set of users with a single protocol over the LAN and WAN. To meet this objective, the protocol approaches the task of delivering the virtual desktop differently than other display protocols. The vision from the beginning was to deliver a rich desktop experience made up of content such as application windows, web pages, graphics, text, streaming video and audio. To deliver on this vision, PCoIP was architected to recognize different types of content and then apply different compression algorithms based on the content type.
Recognizing that the desktop is a composite of different content types resulted in a display protocol ideally suited to deliver on the promise of a rich user experience. PCoIP delivers a much improved experience to end users accessing virtual desktops across the WAN when compared to legacy display protocols such as RDP. Comparisons of PCoIP to RDP show a more than 50% reduction in display latency for the common operations of manipulating presentations and scrolling through lengthy PDF documents.
Another unique rendering approach, Progressive Build, works to provide the best overall user experience even under constrained network conditions: it delivers a highly compressed initial lossy image, which is progressively built up to a fully lossless state, while text is always displayed using lossless compression. PCoIP uses highly efficient encoding based on content type and adaptive network management to build up graphics according to the bandwidth available in real time. This allows the desktop to remain responsive and display the best possible image under varying network conditions.
For a cloud computing deployment (i.e. desktop virtualization) to be successful, the end-user experience should not be poorer than before, and that means ensuring that storage needs are matched to desktop virtualization in a way suited to the performance required. Just as server virtualization abstracts the functions of the server from its physical box, desktop virtualization cuts the ties between a user's desktop and their local hard drive and processor. Storage is no longer local to the desktop and has to be optimized to the I/O requirements of the OS, user profiles and applications.
The main challenge that data storage has faced is applications that demand highly random, write-intensive transactional workloads. The only way to serve these workhorses in the past was to string together a bunch of 15,000 RPM spindles in a RAID 10. The result was very fast I/O, but at a very high cost and with extremely poor utilization, leaving much of that expensive capacity unused. As these workloads enter the average medium-to-large data center as a result of increasing server and desktop virtualization, the need for a better approach is becoming clear.
According to GlassHouse Technologies' Pinder, the key is "to understand the I/O characteristics of not only the OS but of individual applications. Once you have a thorough understanding of the storage requirements for your existing environment, you can then size desktop virtualization storage appropriately and decide whether SSD, SAN or NAS is appropriate."
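As a hypothetical illustration of that sizing exercise, the sketch below turns assumed per-desktop I/O characteristics (steady-state IOPS, read/write mix, RAID 10 write penalty) into an aggregate back-end IOPS target and a rough 15,000 RPM spindle count. All of the input numbers are placeholder assumptions chosen for the example, not figures from the text.

```c
#include <stdio.h>

int main(void) {
    /* All of these inputs are illustrative assumptions. */
    int    desktops     = 500;   /* virtual desktops to host                          */
    double iops_each    = 25.0;  /* steady-state IOPS per desktop                     */
    double write_ratio  = 0.8;   /* VDI workloads are typically write-heavy           */
    double raid_penalty = 2.0;   /* RAID 10: two back-end writes per front-end write  */
    double spindle_iops = 180.0; /* rough figure for one 15,000 RPM disk              */

    double front_end = desktops * iops_each;
    double back_end  = front_end * (1.0 - write_ratio)            /* reads  */
                     + front_end * write_ratio * raid_penalty;    /* writes */

    printf("front-end IOPS:            %.0f\n", front_end);
    printf("back-end IOPS (RAID 10):   %.0f\n", back_end);
    printf("15k RPM spindles required: %d\n",
           (int)((back_end + spindle_iops - 1.0) / spindle_iops));
    return 0;
}
```

With these assumptions, 500 desktops generate 12,500 front-end IOPS but 22,500 back-end IOPS once the write penalty is applied, which is the kind of gap that drives the SSD/SAN/NAS decision described above.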
Server-based computing resolves the management, access and security issues of a traditional distributed computing platform.
Single-point application deployment, management and support simplifies administration and demands fewer IT resources than the traditional PC computing platform.
Applications are configured once, on a central server, and instantly propagate to all users; applications and files installed on a few servers can be made accessible to hundreds of workstations.
Patch management for clients is unnecessary.
Troubleshooting is simplified: issues are resolved centrally and simultaneously redressed on user devices.
Centralized support and help-desk functions combined with a thin-client architecture dramatically reduce the number and duration of desk-side calls and save IT administrators time and travel costs.
A server-based computing platform reduces overhead costs, increases ROI and boosts productivity.
An SBC-enabled thin-client computing platform can dramatically reduce the energy consumption of end-user devices.
By shifting the computing burden from client to server, SBC prolongs the life of client devices and raises hardware ROI.
SBC effectively extends Windows or DOS-based applications beyond the performance capabilities of the client workstation, allowing legacy hardware to run newer applications despite limitations of local memory and CPU.
SBC is recognized as the most secure architecture for application delivery.
Centrally controlled, role-based access further ensures data security.
Central control over data printing and storage rights.
SBC accelerates application deployment and extends it to virtually any client device, enhancing an organization's responsiveness and agility.
Ensures continuity during server interruptions.
Reduces system downtime through improved redundancy and disaster management.
Reduces the risk of data loss through centralized storage, management, and backup.
SBC promotes flexible work arrangements by enabling remote access to enterprise applications and data.
Within ring 3 we find the virtual machine, with an operating system running on virtual (emulated) hardware. Since the operating system was originally designed to run directly on hardware, it expects to be running in ring 0 and will make privileged calls that are not permitted in ring 3. When the operating system makes these privileged calls, the hardware traps the instructions and issues a fault, which typically destroys the virtual machine.
Early x86 solutions such as Bochs created a fully emulated system, with the x86 CPU completely emulated in software. This technique resulted in very poor performance, so a more advanced technique was developed for use in the first generation of commercial x86 hypervisors.
In this model, pioneered by VMware, instead of emulating the processor, the virtual machine runs directly on the CPU. When privileged instructions are encountered, the CPU issues a trap that can be handled by the hypervisor and emulated. However, a number of x86 instructions do not trap, for example pushf/popf, and there are cases where the virtual machine could detect that it is running in ring 3. To handle these cases, a technique called binary translation was developed: the hypervisor scans the virtual machine's memory, intercepts these instructions before they are executed and dynamically rewrites the code in memory. The operating system kernel is unaware of the change and operates normally. This combination of trap-and-emulate and binary translation allows any x86 operating system to run unmodified on the hypervisor. While this approach is complex to implement, it yielded significant performance gains compared to fully emulating the CPU.
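The toy sketch below illustrates only the rewrite step of that idea: it scans a buffer of guest code for a sensitive instruction that does not trap in ring 3 (POPF, opcode 0x9D) and replaces it with a trapping instruction (INT3) so a monitor would regain control. A real translator such as VMware's decodes complete instructions, works on basic blocks and caches the translated code; none of that is shown here, and the byte-by-byte scan is a deliberate simplification.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define OP_POPF 0x9D   /* sensitive instruction that silently misbehaves in ring 3 */
#define OP_INT3 0xCC   /* trapping instruction: control returns to the monitor     */

static size_t translate_block(uint8_t *code, size_t len) {
    size_t rewritten = 0;
    for (size_t i = 0; i < len; i++) {
        if (code[i] == OP_POPF) {
            code[i] = OP_INT3;   /* rewrite in place before the guest executes it */
            rewritten++;
        }
    }
    return rewritten;
}

int main(void) {
    uint8_t guest[] = { 0x90, 0x9D, 0x90, 0x9D };   /* nop, popf, nop, popf */
    printf("rewrote %zu instruction(s)\n", translate_block(guest, sizeof guest));
    return 0;
}
```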
While the emulation and binary-translation approaches focus on how to handle a privileged instruction executed in a virtual machine, a different approach was taken by the open source Xen project. Instead of handling privileged instructions at run time, paravirtualization modifies the guest operating system running in the virtual machine and replaces all privileged instructions with direct calls into the hypervisor. In this model, the modified guest operating system is aware that it is running on a hypervisor and can cooperate with it for improved scheduling and I/O, removing the need to emulate hardware devices such as network cards and disk controllers.
Since paravirtualization requires changes to the operating system, it must be implemented by the operating system vendor. These changes were made to Linux initially in the form of custom patches to the kernel and were later incorporated into the mainline kernel, starting with kernel 2.6.23. Linux distributions that use earlier kernels, for example Red Hat Enterprise Linux 5, use kernels with a customized set of patches.
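The sketch below shows the shape of that change, loosely modeled on the Linux kernel's pv_ops idea: privileged operations are invoked through a table of function pointers, so a paravirtualized guest can route them to hypercalls instead of executing the privileged instructions directly. The structure, function names and "hypercalls" here are purely illustrative, not the actual Xen or Linux interfaces.

```c
#include <stdio.h>

/* Table of privileged operations the kernel calls through. */
struct pv_cpu_ops {
    void (*write_cr3)(unsigned long pfn);   /* load a new page-table base */
    void (*halt)(void);
};

/* "Native" back end: would execute the privileged instructions directly. */
static void native_write_cr3(unsigned long pfn) { printf("mov %%cr3 <- %#lx\n", pfn); }
static void native_halt(void)                   { printf("hlt\n"); }

/* Paravirtual back end: asks the hypervisor instead of touching hardware. */
static void pv_write_cr3(unsigned long pfn) { printf("hypercall: set_cr3(%#lx)\n", pfn); }
static void pv_halt(void)                   { printf("hypercall: yield_cpu()\n"); }

static struct pv_cpu_ops ops;

int main(void) {
    int on_hypervisor = 1;   /* assumed: detected at boot */
    if (on_hypervisor) {
        ops.write_cr3 = pv_write_cr3;
        ops.halt      = pv_halt;
    } else {
        ops.write_cr3 = native_write_cr3;
        ops.halt      = native_halt;
    }

    ops.write_cr3(0x1000);   /* the caller never knows which back end runs */
    ops.halt();
    return 0;
}
```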
The Xen Hypervisor Platform comprises two components. The first is the Xen hypervisor itself, which is responsible for core hypervisor activities such as CPU and memory virtualization, power management and the scheduling of virtual machines.
The second is a special, privileged virtual machine called Domain0, or dom0, which the Xen hypervisor loads at boot. This virtual machine has direct access to hardware and provides device drivers and I/O management for the virtual machines.
Each virtual machine, known as an unprivileged domain or domU, contains a modified Linux kernel that, instead of communicating directly with hardware, interfaces with the Xen hypervisor.
CPU and memory access are handled directly by the Xen hypervisor, but I/O is directed to domain 0. The guest Linux kernel includes "front end" devices for network and block I/O. Requests for I/O are passed to the "back end" process in domain 0, which manages the I/O.
In this model the guest kernel in domU runs in ring 1 while user space runs in ring 3.
Both Intel and AMD developed extensions to the x86 architecture that provide features hypervisor vendors can use to simplify CPU virtualization. The first CPUs including these features were released late in 2005. Today most Intel and AMD CPUs include hardware virtualization support, across desktop, laptop and server product lines.
The implementations of these features by Intel (VT-x) and AMD (AMD-V) differ but use a similar approach. A new operating mode is added to the CPU, which can now operate in host mode or guest mode. A hypervisor can request that a process run in guest mode, in which it still sees the four traditional rings/privilege levels, but the CPU is instructed to trap privileged instructions and then return control to the hypervisor.
Using these new hardware features, a hypervisor does not need to implement the binary translation that was previously required to virtualize privileged instructions.
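On Linux these extensions are also visible as the vmx and svm flags in /proc/cpuinfo; the short sketch below queries the same information directly via CPUID (VMX: leaf 1, ECX bit 5; SVM: leaf 0x80000001, ECX bit 2), assuming an x86 build with GCC or Clang's <cpuid.h>.

```c
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* Intel VT-x support is reported as the VMX feature bit. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("Intel VT-x (VMX): %s\n", (ecx & (1u << 5)) ? "yes" : "no");

    /* AMD-V support is reported as the SVM feature bit in the extended leaf. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        printf("AMD-V (SVM):      %s\n", (ecx & (1u << 2)) ? "yes" : "no");

    return 0;
}
```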
While VT-x and AMD-V reduced the overhead of virtualizing the CPU, a significant amount of resources is still expended by the hypervisor in handling memory virtualization.
Because the guest operating system cannot directly access memory, the hypervisor must provide a virtualized memory implementation in which it maintains the mapping between the physical host memory and the virtual memory used by the virtual machine. This is often implemented using shadow page tables within the hypervisor.
AMD developed the Rapid Virtualization Indexing (RVI) feature, previously known as nested page tables, and Intel developed the Extended Page Tables (EPT) feature. These are incorporated into recent generations of Intel and AMD CPUs and provide a virtualized memory management unit (MMU) in hardware that delivers significant performance improvements compared to the software-only implementation.
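The sketch below illustrates, purely conceptually, the two mappings involved: the guest translates guest-virtual to guest-physical pages, and the hypervisor (in software with shadow page tables, or in hardware with EPT/RVI) translates guest-physical to host-physical pages. Real page tables are multi-level radix structures; flat arrays are used here only to show how the two stages compose.

```c
#include <stdio.h>

#define PAGES      4
#define NO_MAPPING (-1)

/* Guest page table: guest-virtual page -> guest-physical page. */
static int guest_pt[PAGES] = { 2, 0, 3, NO_MAPPING };
/* Hypervisor / nested mapping: guest-physical page -> host-physical page. */
static int host_map[PAGES] = { 7, 5, 9, 6 };

/* Compose the two stages: guest-virtual page -> host-physical page. */
static int translate(int gv_page) {
    if (gv_page < 0 || gv_page >= PAGES) return NO_MAPPING;
    int gp_page = guest_pt[gv_page];
    if (gp_page == NO_MAPPING) return NO_MAPPING;   /* guest page fault  */
    return host_map[gp_page];                       /* second-stage walk */
}

int main(void) {
    for (int page = 0; page < PAGES; page++)
        printf("guest-virtual page %d -> host-physical page %d\n",
               page, translate(page));
    return 0;
}
```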
Kernel-based Virtual Machine (KVM) is implemented as a loadable kernel module that converts the Linux kernel into a bare metal hypervisor. There are two key design principles that KVM adopted that have helped it mature rapidly into a stable and high performance hypervisor.
Firstly, because KVM was designed after the advent of hardware-assisted virtualization, it did not have to implement features that were already provided by hardware. The KVM hypervisor requires Intel VT-x or AMD-V capable CPUs and leverages those features to virtualize the CPU.
Secondly, there are many components that a hypervisor requires in addition to the ability to virtualize the CPU and memory, for example: a memory manager, a process scheduler, an I/O stack, device drivers, a security manager, a network stack, etc. In fact a hypervisor is really a specialized operating system, differing from its general-purpose peers only in that it runs virtual machines rather than applications. Since the Linux kernel already includes the core features required by a hypervisor and has been hardened into a mature and stable enterprise platform, KVM builds on Linux rather than reimplementing these components.
In the KVM architecture the virtual machine is implemented as a regular Linux process, scheduled by the standard Linux scheduler; in fact each virtual CPU appears as a regular Linux process. This allows KVM to benefit from all the features of the Linux kernel. Device emulation is handled by a modified version of QEMU that provides an emulated BIOS, PCI bus, USB bus and a standard set of devices such as IDE and SCSI disk controllers, network cards, etc.
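A minimal sketch of that user-space view is shown below: the KVM hypervisor is driven from an ordinary Linux process through ioctl() calls on /dev/kvm, which is why a virtual machine and its vCPUs appear to the Linux scheduler as regular processes and threads. It assumes a Linux host with KVM loaded, and error handling is kept minimal.

```c
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) { perror("open /dev/kvm"); return 1; }

    printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

    int vm   = ioctl(kvm, KVM_CREATE_VM, 0);    /* a new, empty virtual machine */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);   /* one virtual CPU, id 0        */
    printf("vm fd = %d, vcpu fd = %d\n", vm, vcpu);

    /* A real user-space monitor such as QEMU would now register guest memory
     * with KVM_SET_USER_MEMORY_REGION and enter the guest with KVM_RUN. */
    close(vcpu);
    close(vm);
    close(kvm);
    return 0;
}
```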
Developments in disk and array technology have resulted in a tiered storage model, which typically comprises four or five classes:
Tier 0: Fast data storage used to ensure that data can be accessed very quickly. For example, Solid State Drives (SSDs) are typically used for cache storage in database engines or for the gold image in desktop virtualization.
Tier 1: Mission-critical data, making up about 15% of all data, very fast response times, FC or SAS disk, FC-SAN, data mirroring, local and remote replication, automatic failover, 99.999% availability, recovery time objective: immediate, retention period: hours.
Tier 2: Vital data, approx. 20% of data, less critical but requiring fast response times, FC or SAS disk, FC-SAN or IP-SAN (iSCSI), point-in-time copies, 99.99% availability, recovery time objective: seconds, retention period: days.
Tier 3: Sensitive data, about 25% of data, moderate response times, SATA disk, IP-SAN (iSCSI), virtual tape libraries, MAID, periodic disk-to-disk-to-tape backup, 99.9% availability, recovery time objective: minutes, retention period: years.
Tier 4: Non-critical data, ca. 40% of the data, tape, FC-SAN or IP-SAN (iSCSI), 99.0% availability, recovery time objective: hours/days, retention period: unlimited.
Fibre Channel technology as a whole has been regarded as particularly powerful for enterprises since the introduction of storage area networks (SANs); the disks read reliably and quickly. SAS (Serial Attached SCSI) disks today play a significant part in this professional sector as they gradually replace SCSI disks. Because they are compatible with SATA, the two can be installed together in a joint array, which can result in tier 1 and tier 2 being served by one single device.
Solid State Drives (SSDs), already installed by some manufacturers in their storage systems, are significant as a kind of second cache (RAM) due to their high access rates. As they have no mechanical parts, their lifespan is longer than that of classic hard disks. But they too wear out: the SSD lifecycle comes to an end after 10,000 to 1,000,000 write accesses, according to manufacturer specifications.
Today storage networks are regarded as state of the art; virtually all large companies use this technology. With cloud computing and virtualized applications growing, it is increasingly relevant for small to medium-sized companies as well, which are now also able to set up their own storage networks. However, the technology exists in various forms.
Hard disks that are installed in servers and PCs, or are directly connected to the servers in storage arrays, are still the most widespread structure in small to medium-sized companies - known in this case as Direct Attached Storage (DAS). Small to medium-sized businesses have discovered the productive power of IT and use it for their business processes; at the same time, however, their financial resources limit their investments in their own IT infrastructure.
The need for a network dedicated to storage was reflected toward the end of the nineties in a separate technology for Storage Area Networks (SANs). The new infrastructure consisted of its own cabling and a further development of the SCSI protocol - already used to connect devices such as storage arrays or printers to a server - and bears the name Fibre Channel (FC). The Fibre Channel protocol was developed specifically for the transport of storage data. It is regarded as reliable and, most recently at 8 Gbit/sec, achieved a transport speed that even outperformed Ethernet.
At approximately the same time as FC-SANs, an alternative network structure came into being for the storage of data within the company network, one particularly associated with the name Network Appliance (today NetApp). Network Attached Storage (NAS) denotes an integrated overall solution combining servers, operating system, storage units, file system and network services. For this purpose NetApp offers so-called filers, which support the file services NFS (Network File System, originally developed by Sun) and CIFS (Common Internet File System, under Windows) and are especially suited to unstructured data.
In a NAS the focus is placed on the network functions and less on the performance of the hard disks used. Many users consider it a lower-cost alternative to a SAN. Which version a company decides on depends on a great many factors, some of them very individual. David Hitz, one of the founders and now Executive Vice President of Engineering at NetApp, expressed a frank opinion in an interview: "NAS and SAN are like two flavors of the same ice cream. NAS is chocolate-flavored and SAN is strawberry-flavored. And everything the customer needs to know about the two technologies is only that both systems can be used at any time for data storage. What intelligent person would be disturbed by the fact that someone does not like chocolate-flavored ice cream, but prefers strawberry-flavored ice cream?" This somewhat flippant statement can also be interpreted to mean that companies have two storage architectures, SAN and NAS, to choose from and can adapt them individually to their requirements. No one needs to have any reservations.
Another version has been under discussion for some years now: iSCSI networks for storage (also known as IP-SANs) evidently overcame their lengthy introductory phase a year ago and have achieved significant sales figures. The attraction of this architecture is its ability to use the existing TCP/IP infrastructure for data storage. This makes the installation and maintenance of a second infrastructure set up only for storage superfluous, and administrators can fall back on their existing IP know-how. In practice, however, there have been greater obstacles in integrating the various tasks of the LAN and the iSCSI storage network. Nevertheless, new prospects arise from the new 10 Gbit/sec transfer speed for Ethernet, because this technology is currently faster than Fibre Channel, which at present offers only 8 Gbit/sec. However, customers incur additional costs for the new cabling that becomes necessary. In the meantime, it is generally assumed that an iSCSI infrastructure is mainly suited to small and medium-sized companies and has found its true position there.
Wysnan would be pleased to send you a quote for THINWORX; please fill in the boxes below:
Thank you for investing in THINWORX.
After the successful completion of the installation and prior to the expiration of the trial period, THINWORX must be registered online, where the product key is validated and the product is essentially "activated".
To purchase a THINWORX product key, please contact Wysnan Corporation via email at email@example.com; you will receive a THINWORX product key after your purchase order has been processed.
To register a THINWORX product key, go to the THINWORX Manager and click "Registration"; the window shows a link called "Register THINWORX", which will ask you to enter your THINWORX product key and direct your THINWORX Controller to connect securely online (via the Internet) to the THINWORX Licensing Center to activate the key.
You can also enter your THINWORX product key during installation; the key is validated and the system activated once you click the "Register THINWORX" link in the Registration interface of THINWORX Manager.