Home

Orabuntu-LXC creates an infrastructure of physical hosts, VM hosts, LXC containers, and Docker containers, all fully networked to each other over OpenvSwitch networks, with full name resolution provided by a containerized, replicated DNS-DHCP service.

For example, Orabuntu-LXC can be deployed initially on a physical host, and VMs installed on that physical host will then obtain their IP addresses from the containerized DNS-DHCP running on that host (see the Network Configuration section for how to do this).

Orabuntu-LXC can then be installed in those VMs, and all of the LXC and Docker containers in the VMs and on the physical host will by default be networked to each other, to the VMs, and to the physical host.

Orabuntu-LXC can add additional physical hosts over GRE tunnels. Those GRE-connected hosts can run VMs of their own, and those VMs can in turn run Orabuntu-LXC with more LXC and Docker containers inside them. The entire resulting infrastructure of physical hosts, VMs, and containers shares a replicated, dynamic DNS-DHCP and has full Ethernet connectivity throughout.
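As a rough illustration of the underlying mechanism, a GRE link between two hosts can be built with standard OpenvSwitch commands. The bridge name, port names, and IP addresses below are placeholders for illustration, not the names Orabuntu-LXC itself generates:

```shell
# On host A (platform IP 10.0.0.1): create an OVS bridge and a GRE
# tunnel port pointing at host B.
ovs-vsctl add-br sw1
ovs-vsctl add-port sw1 gre1 -- set interface gre1 type=gre options:remote_ip=10.0.0.2

# On host B (platform IP 10.0.0.2): mirror the configuration back
# toward host A.
ovs-vsctl add-br sw1
ovs-vsctl add-port sw1 gre1 -- set interface gre1 type=gre options:remote_ip=10.0.0.1
```

Containers and VMs attached to `sw1` on either host then share one layer-2 segment over the tunnel, which is what lets the DNS-DHCP serve all hosts.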

Orabuntu-LXC provides a "products" facility, which is continually being expanded to provide automated configuration for a wide variety of products, such as Oracle Grid Infrastructure, Oracle RAC Database, Oracle Standalone Database, and BlackBerry Workspaces.  The products facility creates a gold copy of a container with all product prerequisites completed, and during install Orabuntu-LXC makes a user-specified number of exact copies of that gold-copy container.  Then just download the product installation software from your vendor and install, knowing that all prerequisites are already completed.
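The gold-copy cloning step can be sketched with the stock `lxc-copy` tool. The gold container name, the generated clone names, and the count are illustrative placeholders; Orabuntu-LXC uses its own naming scheme internally. With `DRY_RUN=1` (the default here) the script only prints the commands it would run:

```shell
#!/bin/sh
# Sketch: clone a gold-copy container COUNT times with lxc-copy.
GOLD=${GOLD:-oel7-gold}
COUNT=${COUNT:-3}
DRY_RUN=${DRY_RUN:-1}

i=1
while [ "$i" -le "$COUNT" ]; do
    NEW="${GOLD%-gold}c${i}"            # oel7-gold -> oel7c1, oel7c2, ...
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "lxc-copy -n $GOLD -N $NEW"
    else
        lxc-copy -n "$GOLD" -N "$NEW"   # requires root and LXC installed
    fi
    i=$((i + 1))
done
```

Because every clone starts from the same fully-prepared gold copy, each one is immediately ready for the vendor software install.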

Orabuntu-LXC provides a dynamic, replicated, LXC-containerized DNS-DHCP.  This service provides DHCP and DNS to ALL physical hosts, VMs, and containers, and it is continuously updated as new containers, physical hosts, and VMs are added to the infrastructure.  Every VM and physical host keeps a continuously updated copy of the DNS-DHCP zone files, so that if the primary DNS-DHCP goes down, a backup DNS-DHCP can quickly be started on one of the VMs or physical hosts.  Third-party products, such as HP Serviceguard or similar, can be used to detect and manage the DNS-DHCP container failover.
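A quick way to confirm that the containerized DNS-DHCP is answering for a newly added container is a forward and reverse lookup directed at it. The server IP, container hostname, and domain below are illustrative placeholders, not the defaults Orabuntu-LXC configures:

```shell
# Forward lookup of a container name against the DNS-DHCP container
# (replace the server IP and names with your deployment's values).
dig @10.207.39.2 oel7c1.example.com +short

# Reverse lookup of the address the container was leased.
dig @10.207.39.2 -x 10.207.39.10 +short
```

Running the same queries against a standby host's copy of the zone data is a simple way to verify that replication is current before relying on it for failover.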

Orabuntu-LXC runs in the AWS cloud on Ubuntu and RedHat EC2 Linux instances, as well as on-premises on Oracle Linux (6, 7, and 8), RedHat Linux (6, 7, and 8), CentOS Linux (6, 7, and 8), Ubuntu Linux (14-20+), Fedora Linux (22-27), and Pop!_OS Linux 17+.  In both the AWS cloud and on-premises, Orabuntu-LXC runs OpenvSwitch networks on top of the platform network (including on AWS), allowing a very high degree of control over networking using the rich production feature set of OpenvSwitch.

Orabuntu-LXC is launched with a single command and builds all of the above-described infrastructure automatically, so it can be used in situations that require full automation.

For users just starting with containers, simply launch Orabuntu-LXC and it will take care of ALL steps, building a next-generation Docker, LXC, DNS-DHCP, OpenvSwitch, and SCST Linux SAN infrastructure that can be used to study and learn all aspects of container technology, including creating, configuring, replicating, networking, and using containers.

For corporate production users, Orabuntu-LXC is a highly flexible, highly configurable deployment tool for quickly building out vast container networks. It has great enterprise value for supporting daily IT training activities, where it can be a drop-in, fast-turnaround, high-performance replacement for resource-hungry VM training stations.

Orabuntu-LXC also includes the Google #1 ranked SCST Linux SAN fully-automated deployer for creating block storage devices for containers, VMs, and physical hosts.  The award-winning SCST Linux SAN deployer installs SCST using DKMS, so the SCST kernel modules are rebuilt transparently across OS kernel upgrades.  The deployer also automatically creates the target, LUNs, and device groups, generates the /etc/multipath.conf file, and creates container-friendly UDEV rules so that LUNs can be presented directly into the LXC containers.
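As a sketch of what presenting a LUN "directly into" an LXC container looks like, a multipath device node can be exposed to a container with entries like the following in the container's config file. The container name, device path, and major:minor numbers are illustrative, not the exact entries the deployer writes:

```shell
# Excerpt from /var/lib/lxc/oel7c1/config (illustrative sketch).
# Allow the container to access the block device (major:minor 252:3 here).
lxc.cgroup.devices.allow = b 252:3 rwm
# Bind-mount the multipath device node into the container's /dev.
lxc.mount.entry = /dev/mapper/asm_disk1 dev/asm_disk1 none bind,optional,create=file
```

Because the UDEV rules give each LUN a stable, predictable device node, these container config entries keep working across reboots and path failovers.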