ZAMBiDM Virtualisation
Surrounding the ZAMBiDM are multiple resources, classified in this work as server, network, operating-system-level and application virtualisation mechanisms. As discussed above, the ZAMBiDM would draw huge volumes of heterogeneous data, in the terabits, from all walks of life, and managing such voluminous data is an enormous and complicated task. To operate such a system, multiple applications and resources, such as operating systems, need to run preferably on a single hardware installation through a virtualisation mechanism. Such a mechanism can configure multiple resources to function on a single piece of hardware; that is, virtualisation would scale the huge data down to a size each resource can manage operationally. In addition, the resources themselves would be virtualised so that multiple applications can reuse the same resource footprint. The ZAMBiDM would virtualise its Big Data to operate in a cloud computing environment, where it would not require large servers and other related equipment of its own. Virtualising the ZAMBiDM would therefore yield many benefits to the system. The Red Hat Summit (2014) described these benefits as: easy access to Big Data; seamless integration of Big Data with existing data sets; sharing of integration specifications; collaborative development on Big Data; the ability to use any reporting or analytical tool; enterprise democratisation of Big Data; fine-grained security of Big Data; and reduced time-to-market for reports on Big Data. These benefits would make the ZAMBiDM very agile in accessing the huge data for analytics processing.
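To make the idea of running several applications on a single hardware installation concrete, the sketch below uses operating-system-level virtualisation via the Docker SDK for Python to start two containerised services on one physical host, each confined to a capped share of memory and CPU. The service names, container images and resource limits are hypothetical placeholders for illustration only, not part of the ZAMBiDM design.

```python
# Minimal sketch, assuming a Docker daemon on the single physical host and the
# docker SDK for Python (pip install docker). Names, images and limits are
# hypothetical placeholders, not the ZAMBiDM implementation.
import docker

client = docker.from_env()

# Two analytics-style services sharing one hardware installation, each confined
# to a slice of the host's memory and CPU by the container runtime.
services = {
    "zambidm-ingest":    "python:3.11-slim",
    "zambidm-analytics": "python:3.11-slim",
}

for name, image in services.items():
    container = client.containers.run(
        image,
        name=name,
        command=["python", "-c", "print('service up')"],
        detach=True,
        mem_limit="512m",       # cap this service's share of host memory
        nano_cpus=500_000_000,  # roughly half of one CPU core
    )
    print(f"{name}: {container.status}")
```

In this arrangement the single host's resources are partitioned among the services by the container runtime, which is one way the same resource footprint can be reused by multiple applications as described above.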