To overcome the hardware limitations of external services we run our own computing cloud based on the OpenNebula platform and 12 Dell PowerEdge FC630 nodes. Each node has an Intel Xeon E5-2630 CPU with 20 cores (40 with hyperthreading) and 768 GB of memory. The cloud resources are connected to a Dell S4048-ON Open Networking Switch, which is managed by an OpenDaylight controller. This cluster facilitates SDN and cloud experiments and provides compute resources for demanding simulations and emulations.
The network interfaces, datastores, primary images, and VM templates are already created during the installation of the OpenNebula core with the Ansible playbook. These are the steps in case it needs to be done manually:
Network → Virtual Networks → Plus → Create
ip a
# start the network interface
dhclient -v -i ens3
%adm ALL=NOPASSWD: ALL
%il11admin ALL=NOPASSWD: ALL
dpkg -i one-context_*deb || apt-get install -fy
sudo apt-get install --install-recommends linux-generic-hwe-18.04
scp ../ansible-scripts/roles/onevm/files/loc-10-network vm:/etc/one-context.d/
sudo apt install
#!/bin/bash
curl -H "remote-host: $NAME" -H "remote-ip: $(hostname -I)" --noproxy "*" -k -XPOST --data "host_config_key=<AWX-PLAYBOOK-CONFIG-KEY>" <AWX-PLAYBOOK-URL>
After the one_core playbook has run, all the initial templates and images already exist. The following steps are necessary to build a final OS image from which VMs can actually be deployed. Follow them exactly:
ip a
sudo dhclient -v -i ens5
sudo vim /etc/sudoers
---
%adm ALL=NOPASSWD: ALL
%il11admin ALL=NOPASSWD: ALL
---
# download the version matching the OpenNebula version
wget https://github.com/OpenNebula/addon-context-linux/releases/download/v5.10.0/one-context_5.10.0-1.deb
sudo su
dpkg -i one-context_*deb || apt-get install -fy
# adjust the ONE contextualization for automatic DHCP
# the file is in ansible-scripts:roles/onevm/files/loc-10-network
# on the local machine:
scp roles/onevm/files/loc-10-network vm:~
mv loc-10-network /etc/one-context.d/
exit
mkdir .ssh
vim .ssh/authorized_keys
# paste the public key of admin_i11 into the authorized_keys file
chmod -R og-rwx .ssh
# set the i11 admin password -> keepassx
sudo passwd i11
sudo vim /etc/apt/sources.list
# replace the whole file with the following content
---
deb http://ubuntu.mirror.lrz.de/ubuntu/ bionic main restricted universe multiverse
deb http://ubuntu.mirror.lrz.de/ubuntu/ bionic-updates main restricted universe multiverse
deb http://ubuntu.mirror.lrz.de/ubuntu/ bionic-security main restricted universe multiverse
---
sudo apt update
sudo apt upgrade
sudo apt install --install-recommends linux-generic-hwe-18.04
sudo reboot
# log in after the reboot - DHCP is not working yet, log in via VNC and run dhclient manually
sudo apt update
sudo apt upgrade
sudo apt install python python3 python-apt python3-apt
rm .bash_history
sudo shutdown -h now
sudo apt install python python-apt htop
sudo vim /etc/network/interfaces
---------------------------------
# Add under the last auto entry
auto chair
iface chair inet dhcp
    bridge_ports intern.83
    bridge_fd 15
---------------------------------
sudo ifup chair
ceph-deploy install emu04 emu05
ceph-deploy config push emu04 emu05
vim ansible/playbooks/opennebula/hosts
ansible-playbook -i hosts one_hosts.yml
Fix Routing on Node Servers
echo 1 intern >> /etc/iproute2/rt_tables && echo 2 mwn >> /etc/iproute2/rt_tables
ip route add 10.200.64.0/18 dev intern table intern
ip route add 172.24.24.0/23 dev intern table mwn
ip route add default via 10.200.127.254 dev intern table intern
ip route add default via 172.24.25.254 dev intern table mwn
ip rule add to 10.200.64.0/18 table intern
ip rule add from 10.200.64.0/18 table intern
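To verify that the policy routing took effect, the rules and tables can be inspected (a quick optional check, not part of the original steps):

ip rule show
ip route show table intern
ip route show table mwn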
Fix Routing on VMs
echo 1 isp2 >> /etc/iproute2/rt_tables
ip route add 131.159.24.0/23 dev ens6 table isp2
ip route add default via 131.159.25.254 dev ens6 table isp2
ip rule add from 131.159.24.0/23 table isp2
ip rule add to 131.159.24.0/23 table isp2
iface ens6 inet dhcp
    post-up ip route add default via 131.159.25.254 dev ens6 table isp2
    post-up ip route add 131.159.24.0/23 dev ens6 table isp2
    post-up ip rule add from 131.159.24.0/23 table isp2
    post-up ip rule add to 131.159.24.0/23 table isp2
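After bringing the interface up, the isp2 table can be checked the same way (optional check):

ip rule show
ip route show table isp2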
Fix Routing on ONE Hosts to Access the MWN
echo 5 mwn >> /etc/iproute2/rt_tables
auto intern.240
iface intern.240 inet manual
    vlan_id 240
    vlan-raw-device intern

auto mwn
iface mwn inet manual
    bridge_ports intern.240
    bridge_fd 15
    post-up ip route add 172.24.24.0/23 dev mwn table mwn
    post-up ip route add default via 172.24.25.254 dev mwn table mwn
    post-up ip rule add to 172.24.24.0/23 table mwn
    post-up ip rule add from 172.24.24.0/23 table mwn
sudo ifup mwn #waiting for mwn to get ready
Add new ONE Node
sudo apt install python python-apt
cd ~/Documents/ansible/playbooks/opennebula
ansible-playbook -i hosts one_hosts.yml
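Once the playbook has finished, the new node should be listed on the OpenNebula frontend (quick check, run on emu10):

onehost list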
ssh maas
cd sto_cluster
ceph-deploy install <server>
ceph-deploy config push <server>
cd ../ceph_user_libvirt/
scp * <server>:~
# from maas log in to the server
ssh <server>
sudo mv client.libvirt.key secret.xml /var/lib/one/
sudo chown oneadmin:oneadmin /var/lib/one/client.libvirt.key /var/lib/one/secret.xml
sudo su oneadmin && cd
virsh -c qemu:///system secret-define secret.xml
cat secret.xml
UUID=   # UUID from secret.xml
virsh -c qemu:///system secret-set-value --secret $UUID --base64 $(cat client.libvirt.key)
rm client.libvirt.key
# test access - the output should show no error
rbd -p one ls --id libvirt
sudo vim /etc/network/interfaces
---------------------------------
# Add under the last auto entry
auto chair
iface chair inet dhcp
    bridge_ports intern.83
    bridge_fd 15
---------------------------------
sudo ifup chair
sudo su
echo 1 mwn >> /etc/iproute2/rt_tables && echo 2 intern >> /etc/iproute2/rt_tables
# temporarily activate the networks
# intern
ip route add 10.200.64.0/18 dev intern table intern
ip route add default via 10.200.127.254 dev intern table intern
ip rule add to 10.200.64.0/18 table intern
ip rule add from 10.200.64.0/18 table intern
# mwn
ip route add 172.24.24.0/23 dev intern table mwn
ip route add default via 172.24.25.254 dev intern table mwn
ip rule add to 172.24.24.0/23 table mwn
ip rule add from 172.24.24.0/23 table mwn

sudo vim /etc/network/interfaces
---------------------------------
# add under "iface intern inet dhcp"
post-up ip route add 10.200.64.0/18 dev intern table intern
post-up ip route add default via 10.200.127.254 dev intern table intern
post-up ip rule add to 10.200.64.0/18 table intern
post-up ip rule add from 10.200.64.0/18 table intern
post-up ip route add 172.24.24.0/23 dev intern table mwn
post-up ip route add default via 172.24.25.254 dev intern table mwn
post-up ip rule add to 172.24.24.0/23 table mwn
post-up ip rule add from 172.24.24.0/23 table mwn
---------------------------------
sudo shutdown -r now
sudo apt install htop python python-apt
opennebula
to install OpenNebula Sunstone / WebUI
Ceph Datastores Backend
Ceph Cluster Setup
ceph osd pool create one 256 256 replicated
ceph osd pool set one size 2
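To confirm the pool exists with the intended replication size, the following can be run on a Ceph admin node (optional check):

ceph osd pool ls detail
ceph osd pool get one size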
rbd_default_format = 2
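This option belongs in /etc/ceph/ceph.conf before the config is pushed to the nodes; placing it in the [global] section is an assumption, a minimal sketch:

# /etc/ceph/ceph.conf (excerpt)
[global]
rbd_default_format = 2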
ceph-deploy install emu09 emu08 emu10
ceph-deploy config push mon01-cm sto01 sto02
New Ceph User
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=one'
ceph auth get-key client.libvirt | tee client.libvirt.key
ceph auth get client.libvirt -o ceph.client.libvirt.keyring
scp ceph.client.libvirt.keyring emu09:~
ssh emu09 sudo mv ceph.client.libvirt.keyring /etc/ceph
scp client.libvirt.key emu09:~
# same with nodes emu10 and emu08
UUID=`uuidgen`; echo $UUID
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF
scp secret.xml emu09:~
sudo mv client.libvirt.key secret.xml /var/lib/one/
sudo chown oneadmin:oneadmin /var/lib/one/client.libvirt.key /var/lib/one/secret.xml
sudo su oneadmin
cd
virsh -c qemu:///system secret-define secret.xml
UUID=   # UUID from secret.xml
virsh -c qemu:///system secret-set-value --secret $UUID --base64 $(cat client.libvirt.key)
rm client.libvirt.key
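To confirm that libvirt knows the secret, it can be listed (optional check):

virsh -c qemu:///system secret-list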
rbd -p one ls --id libvirt #output should be nothing - no error
Create Datastores
ssh emu10
sudo su oneadmin
cd
vim ceph_image.txt
# content
-------------------------------
NAME = "ceph-ds"
DS_MAD = ceph
TM_MAD = ceph
DISK_TYPE = RBD
POOL_NAME = one
BRIDGE_LIST = emu10
CEPH_HOST = "mon01-cm:6789"
CEPH_USER = "libvirt"
CEPH_SECRET = "XXXXX"
RBD_FORMAT = 2
-------------------------------
vim ceph_system.txt
-------------------------------
NAME = "ceph_system"
TM_MAD = ceph
TYPE = SYSTEM_DS
DISK_TYPE = RBD
POOL_NAME = one
BRIDGE_LIST = emu10
CEPH_HOST = "mon01-cm:6789"
CEPH_USER = "libvirt"
CEPH_SECRET = "XXXX"
RBD_FORMAT = 2
-------------------------------
onedatastore create ceph_image.txt
onedatastore create ceph_system.txt
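Afterwards the two datastores should show up; a quick sanity check (names as defined above):

onedatastore list
onedatastore show ceph-ds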
New Template
We will create two templates: a default template with the system files on the local hard disk, and an HA template that keeps both system and image files in the Ceph cluster and supports live migration.
HA Template
Storage -> Images -> Add

# Server ISO
Name: ubuntu_server_16.04.02
Type: Readonly-CDROM
Datastore: 102:ceph_img
# Image does not need to be made persistent, no changes are made on the disk
Image location: Upload -> Server.iso

# OS Datablock
Name: default_vm_disk
Type: Generic datastore block
Datastore: 102:ceph_img
This image persistent: yes
Image location: Empty disk image -> 5000 MB
Templates -> VMs -> Add
Name: default_vm
Hypervisor: KVM
Memory: 512
CPU: 1
Logo: Ubuntu
Disk0: default_vm_disk
Disk1: ubuntu_server_16.04.02
Nic0: dell1
Nic1: dell2
Nic2: Chair
CPU Architecture: x86_64
Boot order: check disk and ubuntu_server
VNC-Keymap: de
Inputs: Type - Tablet, Bus - USB
#!/bin/bash
usermod -aG adm $UNAME && chsh -s /bin/bash $UNAME
SET_HOSTNAME = $NAME
USERNAME = $UNAME
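For reference, these settings end up in the template's CONTEXT section. The following is only a sketch of how such a section can look; the SSH key line is an assumption and not taken from these notes:

CONTEXT = [
  NETWORK = "YES",
  SET_HOSTNAME = "$NAME",
  USERNAME = "$UNAME",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]" ]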
Select clusters -> Choose emu cluster -> Create
# download the package to the VM
sudo apt install -y cloud-utils
wget https://github.com/OpenNebula/addon-context-linux/releases/download/v5.0.3/one-context_5.0.3.deb
sudo dpkg -i one-context*deb
Make the images persistent unless you only intend to instantiate the template once.
User Management
Name: cm
Advanced Layout - User View: check
Default Users View: User
Permission: VMs and check "allow users to view group resources"
--> Create
Name: emu
Groups: cm
Resources:
    Hosts (emu03-emu10)
    Vnets (intern, chair)
    Datastores (ceph_system, ceph_img)
--> Create
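The same group and VDC can in principle also be created from the ONE CLI (a sketch, not the workflow used here; assigning the hosts, networks, and datastores is then done in Sunstone or with the corresponding onevdc subcommands):

onegroup create cm
onevdc create emu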
sudo vim /etc/one/sunstone-views/user.yaml
small_logo: images/opennebula-5.0.png provision_logo: images/opennebula-5.0.png enabled_tabs: - dashboard-tab - instances-top-tab - vms-tab #- oneflow-services-tab #- vrouters-tab - templates-top-tab - templates-tab #- oneflow-templates-tab #- vrouter-templates-tab - storage-top-tab - datastores-tab - images-tab - files-tab #- marketplaces-tab #- marketplaceapps-tab - network-top-tab - vnets-tab - vnets-topology-tab #- secgroups-tab #- infrastructure-top-tab #- clusters-tab #- hosts-tab #- zones-tab #- system-top-tab #- users-tab #- groups-tab #- vdcs-tab #- acls-tab - settings-tab #- support-tab #- upgrade-top-tab autorefresh: true features: # True to show showback monthly reports, and VM cost showback: true # Allows to change the security groups for each network interface # on the VM creation dialog secgroups: false # True to hide the CPU setting in the VM creation dialog. The CPU setting # will be set to the same value as VCPU, that will still be visible for the # end users instantiate_hide_cpu: false tabs: dashboard-tab: # The following widgets can be used inside any of the '_per_row' settings # bellow. As the name suggest, the widgets will be scaled to fit one, # two, or three per row. The footer uses the widgets at full size, i.e. # one per row. # # - storage # - users # - network # - hosts # - vms # - groupquotas # - quotas panel_tabs: actions: Dashboard.refresh: false Sunstone.toggle_top: false widgets_one_per_row: - vms widgets_three_per_row: widgets_two_per_row: - network - storage widgets_one_footer: system-top-tab: panel_tabs: actions: users-tab: panel_tabs: user_info_tab: true user_quotas_tab: true user_groups_tab: true user_accounting_tab: true user_showback_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Group #- 4 # Auth driver #- 5 # Password - 6 # VMs - 7 # Memory - 8 # CPU #- 9 # Group ID #- 10 # Hidden User Data #- 11 # Labels #- 12 # Search data actions: User.refresh: true User.create_dialog: false User.update_password: true User.login_token: true User.quotas_dialog: false User.groups_dialog: false User.chgrp: false User.change_authentication: false User.delete: false User.edit_labels: true User.menu_labels: true groups-tab: panel_tabs: group_info_tab: true group_users_tab: true group_quotas_tab: true group_accounting_tab: true group_showback_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Users - 4 # VMs - 5 # Memory - 6 # CPU #- 7 # Labels actions: Group.refresh: true Group.create_dialog: false Group.update_dialog: false Group.quotas_dialog: false Group.delete: false Group.edit_admins: false Group.edit_labels: true vdcs-tab: panel_tabs: vdc_info_tab: true vdc_groups_tab: true vdc_resources_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Groups - 4 # Clusters - 5 # Hosts - 6 # VNets - 7 # Datastores #- 8 # Labels actions: Vdc.refresh: true Vdc.create_dialog: true Vdc.update_dialog: true Vdc.rename: true Vdc.delete: true Vdc.edit_labels: true Vdc.menu_labels: true acls-tab: panel_tabs: table_columns: - 0 # Checkbox - 1 # ID - 2 # Applies to - 3 # Affected resources - 4 # Resource ID / Owned by - 5 # Allowed operations - 6 # Zone #- 7 # ACL String actions: Acl.refresh: true Acl.create_dialog: true Acl.delete: true templates-top-tab: panel_tabs: actions: templates-tab: panel_tabs: template_info_tab: true template_template_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Registration time #- 6 # Labels #- 7 # Search data actions: Template.refresh: true Template.create_dialog: 
false Template.import_dialog: false Template.update_dialog: true Template.instantiate_vms: true Template.rename: false Template.chown: false Template.chgrp: false Template.chmod: true Template.clone_dialog: true Template.delete_dialog: false Template.share: false Template.unshare: false Template.edit_labels: true Template.menu_labels: true template_creation_tabs: general: true storage: true network: true os_booting: true features: true input_output: true context: true scheduling: false hybrid: true other: true oneflow-templates-tab: panel_tabs: service_template_info_tab: true service_template_roles_tab: true service_template_template_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name #- 5 # Labels #- 6 # Search data actions: ServiceTemplate.refresh: true ServiceTemplate.create_dialog: true ServiceTemplate.update_dialog: true ServiceTemplate.instantiate: true ServiceTemplate.chown: false ServiceTemplate.chgrp: false ServiceTemplate.chmod: true ServiceTemplate.rename: true ServiceTemplate.clone_dialog: true ServiceTemplate.delete: true ServiceTemplate.edit_labels: true ServiceTemplate.menu_labels: true vrouter-templates-tab: panel_tabs: vrouter_template_info_tab: true vrouter_template_template_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Registration time #- 6 # Labels #- 7 # Search data actions: VirtualRouterTemplate.refresh: true VirtualRouterTemplate.create_dialog: true VirtualRouterTemplate.update_dialog: true VirtualRouterTemplate.instantiate_dialog: true VirtualRouterTemplate.rename: true VirtualRouterTemplate.chown: false VirtualRouterTemplate.chgrp: false VirtualRouterTemplate.chmod: true VirtualRouterTemplate.clone_dialog: true VirtualRouterTemplate.delete_dialog: true VirtualRouterTemplate.share: true VirtualRouterTemplate.unshare: true VirtualRouterTemplate.edit_labels: true VirtualRouterTemplate.menu_labels: true template_creation_tabs: general: true storage: true network: true os_booting: true features: true input_output: true context: true scheduling: true hybrid: true other: true instances-top-tab: panel_tabs: actions: vms-tab: panel_tabs: vm_info_tab: true vm_capacity_tab: true vm_storage_tab: true vm_network_tab: true vm_snapshot_tab: true vm_placement_tab: false vm_actions_tab: true vm_conf_tab: true vm_template_tab: false vm_log_tab: false table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Status #- 6 # Used CPU #- 7 # Used Memory - 8 # Host - 9 # IPs #- 10 # Start Time - 11 # VNC #- 12 # Hidden Template #- 13 # Labels #- 14 # Search data actions: VM.refresh: true VM.create_dialog: true VM.rename: true VM.chown: false VM.chgrp: false VM.chmod: true VM.deploy: false VM.migrate: false VM.migrate_live: false VM.hold: true VM.release: true VM.suspend: true VM.resume: true VM.stop: true VM.recover: false VM.reboot: true VM.reboot_hard: true VM.poweroff: true VM.poweroff_hard: true VM.undeploy: true VM.undeploy_hard: true VM.terminate: true VM.terminate_hard: true VM.resize: true VM.attachdisk: true VM.detachdisk: true VM.disk_saveas: true VM.attachnic: true VM.detachnic: true VM.snapshot_create: true VM.snapshot_revert: true VM.snapshot_delete: true VM.disk_snapshot_create: true VM.disk_snapshot_revert: true VM.disk_snapshot_delete: true VM.resched: false VM.unresched: false VM.save_as_template: true VM.updateconf: true VM.edit_labels: true VM.menu_labels: true oneflow-services-tab: panel_tabs: service_info_tab: true service_roles_tab: true service_log_tab: true 
panel_tabs_actions: service_roles_tab: Role.scale: true Role.hold: true Role.release: true Role.suspend: true Role.resume: true Role.stop: true Role.reboot: true Role.reboot_hard: true Role.poweroff: true Role.poweroff_hard: true Role.undeploy: true Role.undeploy_hard: true Role.terminate: true Role.terminate_hard: true RoleVM.hold: true RoleVM.release: true RoleVM.suspend: true RoleVM.resume: true RoleVM.stop: true RoleVM.reboot: true RoleVM.reboot_hard: true RoleVM.poweroff: true RoleVM.poweroff_hard: true RoleVM.undeploy: true RoleVM.undeploy_hard: true RoleVM.terminate: true RoleVM.terminate_hard: true RoleVM.resched: false RoleVM.unresched: false RoleVM.recover: false table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # State #- 6 # Labels #- 7 # Search data actions: Service.refresh: true Service.create_dialog: true Service.chown: false Service.chgrp: false Service.chmod: true Service.rename: true Service.shutdown: true Service.recover: true Service.delete: true Service.edit_labels: true Service.menu_labels: true vrouters-tab: panel_tabs: virtual_router_info_tab: true virtual_router_vms_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name #- 5 # Labels #- 6 # Search data actions: VirtualRouter.refresh: true VirtualRouter.create_dialog: true VirtualRouter.rename: true VirtualRouter.chown: true VirtualRouter.chgrp: true VirtualRouter.chmod: true VirtualRouter.delete: true VirtualRouter.attachnic: true VirtualRouter.detachnic: true VirtualRouter.edit_labels: true VirtualRouter.menu_labels: true infrastructure-top-tab: panel_tabs: actions: clusters-tab: panel_tabs: cluster_info_tab: true cluster_host_tab: true cluster_vnet_tab: true cluster_datastore_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Hosts - 4 # VNets - 5 # Datastores #- 6 # Labels actions: Cluster.refresh: true Cluster.create_dialog: true Cluster.update_dialog: true Cluster.delete: true Cluster.rename: true Cluster.edit_labels: true Cluster.menu_labels: true hosts-tab: panel_tabs: host_info_tab: true host_monitoring_tab: true host_vms_tab: true host_wilds_tab: true host_zombies_tab: true host_esx_tab: true host_pci_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Cluster - 4 # RVMs #- 5 # Real CPU - 6 # Allocated CPU #- 7 # Real MEM - 8 # Allocated MEM - 9 # Status #- 10 # IM MAD #- 11 # VM MAD #- 12 # Last monitored on #- 13 # Labels #- 14 # Search data actions: Host.refresh: true Host.create_dialog: true Host.addtocluster: true Host.rename: true Host.enable: true Host.disable: true Host.offline: true Host.delete: true Host.edit_labels: true Host.menu_labels: true zones-tab: panel_tabs: zone_info_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Name - 3 # Endpoint #- 4 # Labels actions: Zone.refresh: true Zone.create_dialog: true Zone.rename: true Zone.delete: true Zone.edit_labels: true Zone.menu_labels: true storage-top-tab: panel_tabs: actions: datastores-tab: panel_tabs: datastore_info_tab: false datastore_image_tab: true datastore_clusters_tab: false table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Capacity - 6 # Cluster #- 7 # Basepath #- 8 # TM #- 9 # DS #- 10 # Type #- 11 # Status #- 12 # Labels #- 13 # Search data actions: Datastore.refresh: true Datastore.create_dialog: false Datastore.import_dialog: false Datastore.addtocluster: false Datastore.rename: true Datastore.chown: false Datastore.chgrp: false Datastore.chmod: true Datastore.delete: false Datastore.enable: false 
Datastore.disable: false Datastore.edit_labels: true Datastore.menu_labels: true images-tab: panel_tabs: image_info_tab: true image_vms_tab: true image_snapshots_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Datastore #- 6 # Size - 7 # Type #- 8 # Registration time #- 9 # Persistent - 10 # Status - 11 # #VMs #- 12 # Target #- 13 # Labels #- 14 # Search data actions: Image.refresh: true Image.create_dialog: true Image.import_dialog: false Image.upload_marketplace_dialog: true Image.rename: true Image.chown: false Image.chgrp: false Image.chmod: true Image.enable: true Image.disable: true Image.persistent: true Image.nonpersistent: true Image.clone_dialog: true Image.delete: true Image.snapshot_flatten: true Image.snapshot_revert: true Image.snapshot_delete: true Image.edit_labels: true Image.menu_labels: true files-tab: panel_tabs: file_info_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Datastore #- 6 # Size - 7 # Type #- 8 # Registration time #- 9 # Persistent - 10 # Status #- 11 # #VMs #- 12 # Target #- 13 # Labels #- 14 # Search data actions: File.refresh: true File.create_dialog: true File.rename: true File.chown: false File.chgrp: false File.chmod: true File.enable: true File.disable: true File.delete: true File.edit_labels: true File.menu_labels: true marketplaces-tab: panel_tabs: marketplace_info_tab: true marketplace_apps_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Capacity - 6 # Apps - 7 # Driver - 8 # Zone #- 9 # Labels #- 10 # Search data actions: MarketPlace.refresh: true MarketPlace.create_dialog: true MarketPlace.update_dialog: true MarketPlace.rename: true MarketPlace.chown: true MarketPlace.chgrp: true MarketPlace.chmod: true MarketPlace.delete: true MarketPlace.edit_labels: true MarketPlace.menu_labels: true marketplaceapps-tab: panel_tabs: marketplaceapp_info_tab: true marketplaceapp_templates_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name #- 5 # Version - 6 # Size - 7 # State #- 8 # Type - 9 # Registration - 10 # Marketplace - 11 # Zone #- 12 # Labels #- 13 # Search data actions: MarketPlaceApp.refresh: true MarketPlaceApp.create_dialog: true MarketPlaceApp.download_opennebula_dialog: true MarketPlaceApp.download_local: true MarketPlaceApp.rename: true MarketPlaceApp.chown: true MarketPlaceApp.chgrp: true MarketPlaceApp.chmod: true MarketPlaceApp.enable: true MarketPlaceApp.disable: true MarketPlaceApp.delete: true MarketPlaceApp.edit_labels: true MarketPlaceApp.menu_labels: true network-top-tab: panel_tabs: actions: vnets-tab: panel_tabs: vnet_info_tab: true vnet_ar_list_tab: true vnet_leases_tab: true vnet_sg_list_tab: true vnet_vr_list_tab: true vnet_clusters_tab: false table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name - 5 # Reservation - 6 # Cluster #- 7 # Bridge - 8 # Leases #- 9 # VLAN ID #- 10 # Labels #- 11 # Search data actions: Network.refresh: true Network.create_dialog: false Network.import_dialog: false Network.update_dialog: true Network.reserve_dialog: true Network.addtocluster: false Network.rename: true Network.chown: false Network.chgrp: false Network.chmod: true Network.delete: true Network.hold_lease: true Network.release_lease: true Network.add_ar: false Network.remove_ar: true Network.update_ar: true Network.edit_labels: true Network.menu_labels: true vnets-topology-tab: panel_tabs: actions: NetworkTopology.refresh: true NetworkTopology.fit: true 
NetworkTopology.collapseVMs: true NetworkTopology.openVMs: true secgroups-tab: panel_tabs: security_group_info_tab: true security_group_vms_tab: true table_columns: - 0 # Checkbox - 1 # ID - 2 # Owner - 3 # Group - 4 # Name #- 5 # Labels #- 6 # Search data actions: SecurityGroup.refresh: true SecurityGroup.create_dialog: true SecurityGroup.update_dialog: true SecurityGroup.rename: true SecurityGroup.chown: true SecurityGroup.chgrp: true SecurityGroup.chmod: true SecurityGroup.clone_dialog: true SecurityGroup.commit_dialog: true SecurityGroup.delete: true SecurityGroup.edit_labels: true SecurityGroup.menu_labels: true support-tab: panel_tabs: support_info_tab: true table_columns: #- 0 # Checkbox - 1 # ID - 2 # Subject - 3 # Created at - 4 # Status actions: Support.refresh: true Support.create_dialog: true settings-tab: panel_tabs: settings_info_tab: true settings_config_tab: false settings_quotas_tab: true settings_group_quotas_tab: true settings_accounting_tab: true settings_showback_tab: true actions: # Buttons for settings_info_tab User.update_password: true User.login_token: true # Buttons for settings_config_tab Settings.change_language: true Settings.change_password: true Settings.change_view: true Settings.ssh_key: true Settings.login_token: true # Edit button in settings_quotas_tab User.quotas_dialog: false upgrade-top-tab: panel_tabs: actions:
sudo service opennebula-sunstone restart
LDAP
/etc/one/auth/ldap_auth.conf
server 1:
    # Ldap authentication method
    :auth_method: :simple

    # Ldap server
    :host: ldap.informatik.tu-muenchen.de
    :port: 389

    # base hierarchy where to search for users and groups
    :base: 'ou=Personen,ou=IN,o=TUM,c=DE'

    # group the users need to belong to. If not set any user will do
    #:group: 'cn=il11,ou=Gruppen,ou=IN,o=TUM,c=DE'

    # field that holds the user name, if not set 'cn' will be used
    :user_field: 'uid'

    # field name for group membership, by default it is 'member'
    :group_field: 'memberUid'

    # user field that is in the group group_field, if not set 'dn' will be used
    :user_group_field: 'cn'

    # Generate mapping file from group template info
    :mapping_generate: true

    # Seconds a mapping file remains untouched until the next regeneration
    :mapping_timeout: 300

    # Name of the mapping file in the OpenNebula var directory
    :mapping_filename: server1.yaml

    # Key from the OpenNebula template to map to an AD group
    :mapping_key: GROUP_DN

    # Default group ID used for users in an AD group not mapped
    :mapping_default: 1
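After changing the LDAP configuration, the OpenNebula services have to be restarted for the change to take effect (assuming the standard service names used elsewhere in this document):

sudo service opennebula restart
sudo service opennebula-sunstone restart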
SSL Certificates
#### OpenNebula Sunstone upstream
upstream sunstone {
    server 127.0.0.1:9869;
}

upstream appserver {
    server 127.0.0.1:29877;   # appserver_ip:ws_port
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

#### cloudserver.org HTTP virtual host
server {
    listen 80;
    server_name one.cm.in.tum.de;

    ### Permanent redirect to HTTPS (optional)
    return 301 https://one.cm.in.tum.de:443;
}

#### cloudserver.org HTTPS virtual host
server {
    listen 443;
    server_name one.cm.in.tum.de;

    ### SSL Parameters
    ssl on;
    ssl_certificate /etc/ssl/certs/emu10.fullchain.cert.pem;
    ssl_certificate_key /etc/ssl/private/emu10.private.key;

    ### Proxy requests to upstream
    location / {
        proxy_pass http://sunstone;
    }
}

server {
    listen 29876;
    server_name one.cm.in.tum.de;

    ### SSL Parameters
    ssl on;
    ssl_certificate /etc/ssl/certs/emu10.fullchain.cert.pem;
    ssl_certificate_key /etc/ssl/private/emu10.private.key;

    ### Proxy requests to upstream
    location / {
        proxy_pass http://appserver;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
..... only the VNC part .....
:vnc_proxy_port: 29876
:vnc_proxy_support_wss: yes
:vnc_proxy_cert: /etc/ssl/certs/emu10.fullchain.cert.pem
:vnc_proxy_key: /etc/ssl/private/emu10.private.key
:vnc_proxy_ipv6: false
:vnc_request_password: false
.....
In sunstone-server.conf, change the port to 29877 and restart the Sunstone noVNC service:

sudo service opennebula-novnc restart
sudo service nginx restart
sudo service opennebula-sunstone restart
ONE CLI
Log in to emu10 and use the OpenNebula CLI commands to perform certain tasks. The documentation of the available commands can be found here:
https://docs.opennebula.org/5.6/operation/references/cli.html To use the commands you need to perform the following steps:
mkdir ~/.one
touch ~/.one/one_auth
oneuser token-create oneadmin --time 3600 > ~/.one/one_auth
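With the token in place, the regular CLI commands should work, for example:

onevm list
onehost list
oneimage list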
Import other images (KVM/VirtualBox)
You can also import other images and boot them directly. OpenNebula uses KVM as a hypervisor, so all KVM-compatible images can be used. If you have a VirtualBox image, you can convert it to a raw image with this command:
VBoxManage clonehd --format RAW debian.vdi debian.img
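For other disk formats (e.g. qcow2 or VMDK), qemu-img can perform the same conversion; this is an alternative not mentioned in the original notes:

qemu-img convert -O raw debian.qcow2 debian.img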
To import it into OpenNebula, copy the image to the Sunstone GUI host (emu10) into the directory /var/tmp/. The directory is important because images can only be imported from trusted/safe directories. Now use the ONE CLI to import the image. First authenticate as described above in "ONE CLI", then use:
oneimage create -d ceph_img --name gbs_image --path /var/tmp/gbs.img --prefix hd --type OS --driver raw --description "Virtualbox GBS Image"
to import it.
Make sure that the access rights are correct (go+r) when copying it to /var/tmp/, otherwise the import will fail.
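A simple way to set the permissions before the import (sketch, using the image file from the example above):

sudo chmod go+r /var/tmp/gbs.img
ls -l /var/tmp/gbs.img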