
sumaform's People

Contributors

aaannz, bischoff, brejoc, cbbayburt, cbosdo, elariekerboull, hustodemon, jgleissner, jordimassaguerpla, juliogonzalez, kkaempf, ktsamis, m-czernek, mallozup, maximenoel8, mbologna, mbussolotto, mcalmer, meaksh, moio, namelessone91, ncounter, nodeg, renner, ricardoasmarques, rjmateus, srbarrios, vandabarata, witekest, ycedres


sumaform's Issues

libvirt_domain.domain: diffs didn't match during apply.

Description / how to reproduce:

provider "libvirt" {
  uri = "qemu+tcp://$MYSERVER/system"
}

module "base" {
  source = "./modules/libvirt/base"

  cc_username = "UC7"
  cc_password = "mysecret"

  // optional parameters with defaults below
  // pool = "default"
  network_name = "FOO"
  // bridge = "br1"
  name_prefix = "GINO"
}

and using the terraform.tf-testsuite.tf file

Expected behaviour:

sumaform should spawn machines remotely.

Actual behaviour:

We got one machine, but sumaform blocked with a terraform exception.

Workaround

terraform apply 

Unfortunately, the workaround doesn't fix the issue; I then got:

1 error(s) occurred:

* dial tcp 192.168.121.253:22: i/o timeout

Additional Info

 Terraform Version: 0.7.8
    Resource ID: libvirt_domain.domain
    Mismatch reason: attribute mismatch: network_interface.0.bridge
    Diff One (usually from plan): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"disk.#":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "network_interface.0.network_name":*terraform.ResourceAttrDiff{Old:"", New:"${var.base_configuration[\"network_name\"]}", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "memory":*terraform.ResourceAttrDiff{Old:"", New:"4096", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "network_interface.0.wait_for_lease":*terraform.ResourceAttrDiff{Old:"", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "network_interface.0.mac":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "vcpu":*terraform.ResourceAttrDiff{Old:"", New:"2", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "name":*terraform.ResourceAttrDiff{Old:"", New:"${var.base_configuration[\"name_prefix\"]}${var.name}${element(list(\"\", \"-${count.index  + 1}\"), signum(var.count - 1))}", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "network_interface.0.addresses.#":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "network_interface.0.bridge":*terraform.ResourceAttrDiff{Old:"", New:"${var.base_configuration[\"bridge\"]}", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, 
"network_interface.0.hostname":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "running":*terraform.ResourceAttrDiff{Old:"", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "network_interface.#":*terraform.ResourceAttrDiff{Old:"", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "network_interface.0.network_id":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}}, Destroy:false, DestroyTainted:false}
    Diff Two (usually from apply): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"network_interface.0.addresses.#":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "network_interface.0.network_name":*terraform.ResourceAttrDiff{Old:"", New:"vagrant-libvirt", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "network_interface.0.network_id":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "network_interface.0.wait_for_lease":*terraform.ResourceAttrDiff{Old:"", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "disk.0.volume_id":*terraform.ResourceAttrDiff{Old:"", New:"/var/lib/libvirt/images/2pac-suma3pg-main-disk", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "network_interface.0.hostname":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "disk.0.%":*terraform.ResourceAttrDiff{Old:"", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "disk.#":*terraform.ResourceAttrDiff{Old:"", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "network_interface.0.mac":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "network_interface.#":*terraform.ResourceAttrDiff{Old:"", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, 
Sensitive:false, Type:0x0}, "vcpu":*terraform.ResourceAttrDiff{Old:"", New:"2", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "running":*terraform.ResourceAttrDiff{Old:"", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "name":*terraform.ResourceAttrDiff{Old:"", New:"2pac-suma3pg", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "memory":*terraform.ResourceAttrDiff{Old:"", New:"4096", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}}, Destroy:false, DestroyTainted:false}

support released update

At the moment we have functionality called "unreleased update", but we don't support released updates.

Missing aaa package

A nice enhancement would be the addition of:

aaa_base-extras | SUSE Linux Base Package (recommended part) | package

(Shell aliases, bash completions and convenience hacks)
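As a sketch, the addition could be a one-line Salt state along these lines (the state ID and placement are assumptions; only the package name comes from the issue):

```yaml
# Hypothetical state ID; aaa_base-extras is the package requested above
aaa_base_extras:
  pkg.installed:
    - name: aaa_base-extras
```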

sumaform should use terraform 0.8.x newer features

REQUIRES: https://github.com/moio/sumaform/issues/71

In many places throughout the codebase we use 0.7-era language hacks to simulate conditionals, a feature that was added natively in 0.8:

https://github.com/moio/sumaform/blob/master/modules/libvirt/client/main.tf#L17-L18
https://github.com/moio/sumaform/blob/master/modules/libvirt/host/main.tf#L8
https://github.com/moio/sumaform/blob/master/modules/libvirt/host/main.tf#L15
https://github.com/moio/sumaform/blob/master/modules/libvirt/host/main.tf#L49
https://github.com/moio/sumaform/blob/master/modules/libvirt/host/main.tf#L72
https://github.com/moio/sumaform/blob/master/modules/libvirt/minion/main.tf#L17-L18
https://github.com/moio/sumaform/blob/master/modules/libvirt/suse_manager/main.tf#L16
https://github.com/moio/sumaform/blob/master/modules/libvirt/suse_manager/main.tf#L32-L33
https://github.com/moio/sumaform/blob/master/modules/libvirt/suse_manager_proxy/main.tf#L27
https://github.com/moio/sumaform/blob/master/modules/openstack/host/main.tf#L35

All of these should be re-written for clarity.
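As an illustration of the kind of rewrite meant here (a sketch with made-up attribute and variable names, not actual sumaform code), the 0.7 element()/signum() hack versus the native conditional introduced in Terraform 0.8:

```hcl
# Terraform 0.7: simulate "append a suffix only when count > 1"
name = "${var.name}${element(list("", "-${count.index + 1}"), signum(var.count - 1))}"

# Terraform 0.8: the same logic as a native conditional expression
name = "${var.name}${var.count > 1 ? "-${count.index + 1}" : ""}"
```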

We also make use of a different language hack to work around an issue with interpolated dictionaries; this should also be removed:

https://github.com/moio/sumaform/blob/master/modules/libvirt/base/main.tf#L44
https://github.com/moio/sumaform/blob/master/modules/libvirt/host/main.tf#L68

Note that the above should be tested thoroughly as the new behaviour was not really reliable in 0.8 betas.

Yet other hacks are used to work around the fact that it was not possible to declare dependencies on whole modules (now it is). Those should be removed:

https://github.com/moio/sumaform/blob/master/modules/openstack/host/main.tf#L64

Sumaform fails when refreshing repos

How to reproduce:

1. Take the testsuite.main.tf as an example.
2. In the client module, add a build repo.
3. Take the refresh-client-repo state:
   https://github.com/moio/sumaform/blob/master/salt/client/repos.sls#L20

This state requires the recently added default state, which contains a pkg.uptodate command.

The issue is the following: the state adds new repos, and we first have to accept the GPG keys and metadata of those repos before trying to update packages; otherwise zypper fails and breaks the whole state run.

Remarks:

I cited the client module, but this issue concerns the repos.sls state of every module that adds repos.

How I fixed it:

refresh-accept-client-repos:
  cmd.run:
    - name: zypper --non-interactive --gpg-auto-import-keys refresh
    - require:
      - file: testsuite-build-repo
      - file: testsuite-suse-manager-repo

update-client-pkgs:
  cmd.run:
    - name: zypper --non-interactive update
    - require:
      - sls: default

@moio, I didn't look too much into the details of your new machinery for adding/refreshing repos.

This is how I fixed it, and it works reliably for each module. So we need to separate the package update from the installation of new repos, and add these two states to each repos.sls (minion, server, client).
If you have a more elegant way, fine, but IMHO this is already quite elegant.

If you like the approach I will do a PR next week.

Create disks/images on demand

At the moment we create all the images/disks (sle12sp1, opensuse42.1, centos, ...).

We should be able to specify that we only want to create, for example, a leap42.1 image.
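One way this could look (a sketch only; neither the variable name nor its wiring exists in sumaform) is a list variable selecting which base images to create:

```hcl
# Hypothetical: create only the images listed here instead of all of them
variable "images" {
  default = ["leap42.1"]
}
```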

Specified hostname should not be prefixed

I would like to set a hostname for my machine via the name option in main.tf. Additionally I want to make use of prefixing in order to have my images not clashing with someone else's so I use a name_prefix as well. The issue is that I want my libvirt image and domain prefixed, but not the hostname. So maybe could we have a separate hostname setting and use this for only the hostname in case it is set, otherwise fall back to the prefixed name?

My configuration looks more or less like this and I want the actual hostname to be just hoag instead of renner-hoag:

provider "libvirt" {
  ...
}

module "base" {
  source = "./modules/libvirt/base"

  // optional parameters with defaults below
  ...
  domain = "my.domain.com"
  name_prefix = "renner-"
}

module "suma31pg" {
  source = "./modules/libvirt/suse_manager"
  base_configuration = "${module.base.configuration}"

  name = "hoag"
  version = "head"
  image = "sles12sp2"
  // see modules/libvirt/suse_manager/variables.tf for possible values
  mac = "de:ad:be:ef:01:11"
}

I currently work around the problem by patching modules/libvirt/host/main.tf like this but I guess this won't work for everyone:

diff --git a/modules/libvirt/host/main.tf b/modules/libvirt/host/main.tf
index 180556b..ec38c29 100644
--- a/modules/libvirt/host/main.tf
+++ b/modules/libvirt/host/main.tf
@@ -46,7 +46,7 @@ resource "libvirt_domain" "domain" {
   provisioner "file" {
     content = <<EOF
 
-hostname: ${var.base_configuration["name_prefix"]}${var.name}${element(list("", "-${count.index  + 1}"), signum(var.count - 1))}
+hostname: ${var.name}${element(list("", "-${count.index + 1}"), signum(var.count - 1))}
 domain: ${var.base_configuration["domain"]}
 use-avahi: ${var.base_configuration["use_avahi"]}
 ${var.grains}
@@ -69,7 +69,7 @@ output "configuration" {
   value = "${
     map(
       "id", "${libvirt_domain.domain.0.id}",
-      "hostname", "${var.base_configuration["name_prefix"]}${var.name}${element(list("", "-1"), signum(var.count - 1))}.${var.base_configuration["domain"]}"
+      "hostname", "${var.name}${element(list("", "-1"), signum(var.count - 1))}.${var.base_configuration["domain"]}"
     )
   }"
 }
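A possible general solution (sketch only; the hostname variable and the use of coalesce() are assumptions, not existing sumaform code) would be an optional per-host variable that falls back to the prefixed name when empty:

```hcl
# Hypothetical optional variable: empty string means "use the prefixed name"
variable "hostname" {
  default = ""
}

# In the host module, coalesce() picks the first non-empty string:
# hostname: ${coalesce(var.hostname, "${var.base_configuration["name_prefix"]}${var.name}")}
```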

Allow bridged networks

Currently, the documented way to do a bridged setup is to do

network_name=""
bridge = "br0"

For one, it's not a very natural syntax (the magic empty string must be there), but that's not the issue I want to raise. The point is, it is quite natural to declare a bridged network like this:

network_name="mynet"

where mynet is a libvirt network on top of existing br0, like explained here:
https://libvirt.org/formatnetwork.html#examplesBridge
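For reference, such a network is declared in libvirt roughly like this (XML adapted from the bridged example on the page linked above; mynet and br0 match the names used here):

```xml
<network>
  <name>mynet</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```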

I tried to do it but got cryptic errors like:

  libvirt_domain.domain: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.

Contrary to what the documentation states, running terraform apply was not enough to clear them.

I suggest making this possible, as using a libvirt network is cleaner than hooking directly into the bridge.
It would also be more consistent with non-bridged setups.

sle12 sp1 minions are unreachable after reboot

This issue is really about a problem in sle12 sp1 itself, but it can easily be worked around by sumaform.

I noticed that my sle 12 sp1 minions could not be pinged after a reboot. After investigation, here is what happens:

  • /etc/fstab does not contain a declaration for root partition /
  • on sle 12 sp1, this leads to / being mounted read-only
  • when / is mounted read-only, the network does not start

The following solves the problem:

  • mount -o remount,rw /dev/vda1
  • add a line like /dev/vda1 / ext4 defaults 1 2 to /etc/fstab
  • reboot

Adding this line from sumaform would work around the problem.
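A sketch of how sumaform could do this via Salt (the state ID is made up; the device and fstype come from the steps above; mount.mounted with persist: True writes the /etc/fstab entry):

```yaml
# Hypothetical state: ensure / is mounted read-write and persisted in fstab
persist-root-mount:
  mount.mounted:
    - name: /
    - device: /dev/vda1
    - fstype: ext4
    - opts: defaults
    - dump: 1
    - pass_num: 2
    - persist: True
```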

Problem does not happen with sle12 sp2 (/etc/fstab still does not contain a declaration for /, but / gets mounted read-write).

New centos7 image prerequisite

The location of the centos7 image causes terraform to fail with x509: certificate signed by unknown authority.

This needs a fix, or at least a workaround for the moment.

Timezone settings

My machines, and also the testing ones, have the timezone set to UTC.
Customers, on the other hand, have correctly configured timezones (and not only in the base system: other TZ-related settings, like those of postgres, also differ from sumaform machines). These differences should be examined, the corresponding settings should be made configurable in sumaform, and our test infra should be adjusted accordingly.

Deviations

(There might be more of them.)

timedatectl

  • sumaform:
      Local time: Do 2017-01-19 10:07:37 UTC
  Universal time: Do 2017-01-19 10:07:37 UTC
        RTC time: Do 2017-01-19 10:07:44
        Timezone: Etc/UTC (UTC, +0000)
     NTP enabled: n/a
NTP synchronized: no
 RTC in local TZ: no
  • manager.suse.de
      Local time: Do 2017-01-19 11:08:21 CET
  Universal time: Do 2017-01-19 10:08:21 UTC
        RTC time: Do 2017-01-19 10:08:21
        Timezone: Europe/Berlin (CET, +0100)
     NTP enabled: n/a
NTP synchronized: no
 RTC in local TZ: no

postgresql.conf

  • sumaform machine: timezone = 'UTC'
  • manager.suse.de: timezone = 'posixrules'

Uniform naming scheme

  • (upstream) terraform modules and variables are separated_by_underscore
  • (upstream) Salt grains are separated_by_underscore (but some of those that are set internally by sumaform are separated-by-hyphen)
  • Salt states are typically separated_by_underscore, but we have them separated-by-hyphen
  • DNS host names are separated-by-hyphen. It's good to keep DNS names equal to their respective resources, but DNS host names can't have underscores
  • we try to keep module and resource names separated_by_underscore, but sometimes that's not true (notably: control-node)

This currently creates confusion, especially when using terraform state list or terraform taint. We should:

  • change all of the internal grains to be separated_by_underscore, to be aligned to Salt Best Practices
  • change all of the internal Salt states to be separated_by_underscore to be aligned to Salt Best Practices
  • rename package_mirror/package-mirror to just mirror, control-node/control_node to just controller

Race condition when creating disk images

How to reproduce:

Take for example the main.tf.testing from sumaform.

The machines aren't successfully created; usually the problem is with the control-node, but it can also happen with other disks:

  • libvirt_volume.main_disk: Can't retrieve volume dma-sles12sp1.

@moio
I don't know if this issue is due to sumaform; I think it is due to the libvirt plugin. Even so, it concerns sumaform because it affects its basic usage. As a user, we currently hit this race roughly 50% of the time, both on a server and locally.

terraform taint can be a workaround, but not a valid one for daily usage.

package-mirror can fail in mirroring repos

This can happen when an RPM that has already been mirrored changes its content without changing its modification date and size.

In that case, lftp will skip downloading it while the repo metadata contains a newer checksum, so that RPM will fail at install time.

We would need a custom program for downloading repos that:

  • downloads metadata first
  • checks for RPMs in the metadata vs. RPMs already in the mirrored directory
    • if an RPM is missing locally, download it
    • if an RPM exists locally, calculate its checksum
      • if the local checksum and the metadata checksum match, continue
      • else, download the new file and swap it with the existing one
  • deletes any local RPM not in the metadata

This program should take one repo URL as mandatory argument and an optional switch to specify the local directory, and it should run noninteractively.

The current idea is that this program could either be written in Python and hosted directly in the sumaform project, or written in Go as a separate packaged project.
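The checksum-driven decision logic above can be sketched in Python (function names and the choice of sha256 are assumptions; repo metadata formats vary):

```python
import hashlib
from pathlib import Path

def plan_sync(metadata, local_checksums):
    """Decide which RPMs to (re)download and which to delete.

    metadata: {rpm_name: checksum} parsed from the repo metadata.
    local_checksums: {rpm_name: checksum} computed from the mirror directory.
    """
    # download anything missing locally or whose checksum differs
    to_download = sorted(
        name for name, remote_sum in metadata.items()
        if local_checksums.get(name) != remote_sum
    )
    # delete local RPMs no longer present in the metadata
    to_delete = sorted(name for name in local_checksums if name not in metadata)
    return to_download, to_delete

def file_checksum(path):
    """Checksum of a mirrored file (sha256 is a common choice in repodata)."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()
```

Downloading and swapping files would then act on the two returned lists, so a changed RPM is re-fetched even when its size and date are unchanged.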

minions registered with a bootstrap script do not appear on "onboarding"

I know the title sounds like a SUSE Manager bug ;-) , but the issue really seems to be in sumaform.

How to reproduce:

Deploy a server and a sle12sp1 minion with sumaform. In SUSE manager, create a bootstrap script with "bootstrap with salt" option enabled, and an activation key. Run the script on the minion.

The minion will never submit its key to the master, i.e. appear on "onboarding".

Diagnostics:

A command like "salt-call test.ping" on the minion is enough to trigger the key submission.

Removing files in /srv/salt/* and restarting salt services before the bootstrap does not help.

The problem is 100% reproducible on a minion created with sumaform, but does not happen with a "vanilla" host (also created by sumaform). This is the reason why I say the issue is linked to sumaform and not a SUSE manager bug.

main.tf file:

The minion is created with something like:

module "minsles12sp1" {
  source = "sumaform/modules/libvirt/minion"
  base_configuration = "${module.base.configuration}"
  name = "minsles12sp1"
  image = "sles12sp1"
  version = "3.1-stable"
  server_configuration = { hostname = "ebi-suma3pg.tf.local" }
  for_development_only = false
  for_testsuite_only = true
}

Accessing someone else's VM with sumaform

Description:

Usually I put suma3pg.tf.local in the browser to access my server.
I noticed that I was accessing somebody else's server, with 5 clients.

This also happened with ssh [email protected]

Discussing with @mbologna, he said: "you can access it because avahi publishes it to the internal network"

sumaform restarts the database server during state application (bug)

After setting up a 3.0 server ("nightly" version), the server is fine, but we got a stacktrace.
Looking at taskomatic:

FATAL  | jvm 1    | 2017/04/04 15:30:28 | null
java.lang.reflect.UndeclaredThrowableException
	at com.sun.proxy.$Proxy0.rollback(Unknown Source)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport.rollbackConnection(JobStoreSupport.java:3604)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3773)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3725)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport.recoverJobs(JobStoreSupport.java:802)
	at org.quartz.impl.jdbcjobstore.JobStoreSupport.schedulerStarted(JobStoreSupport.java:625)
	at org.quartz.core.QuartzScheduler.start(QuartzScheduler.java:494)
	at org.quartz.impl.StdScheduler.start(StdScheduler.java:143)
	at com.redhat.rhn.taskomatic.core.SchedulerKernel.startup(SchedulerKernel.java:146)
	at com.redhat.rhn.taskomatic.core.TaskomaticDaemon$1.run(TaskomaticDaemon.java:86)
	at java.lang.Thread.run(Thread.java:785)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
	at java.lang.reflect.Method.invoke(Method.java:508)
	at org.quartz.impl.jdbcjobstore.AttributeRestoringConnectionInvocationHandler.invoke(AttributeRestoringConnectionInvocationHandler.java:71)
	... 11 more
Caused by: org.postgresql.util.PSQLException: This connection has been closed.
	at org.postgresql.jdbc2.AbstractJdbc2Connection.checkClosed(AbstractJdbc2Connection.java:820)
	at org.postgresql.jdbc2.AbstractJdbc2Connection.rollback(AbstractJdbc2Connection.java:837)
	at org.apache.commons.dbcp.DelegatingConnection.rollback(DelegatingConnection.java:347)
	at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.rollback(PoolingDataSource.java:322)
	... 16 more

[CLOUD] Our images need too big a flavor

Recently I did a test with a friend from the cloud team, and we tried out sumaform on an OpenStack cloud.

The problem was that our image wants an xl flavor, which is too big for a small cloud.

Apart from this, the test went well with another, smaller image.

Prevent "resource already exists" or "resource not found" errors in libvirt

It is not clear whether those errors from terraform-provider-libvirt are always actually correctly reported.

It boils down to the following question: if a resource had to be created but was found already, should the provider simply proceed doing nothing and reporting no error? Conversely: if a resource had to be destroyed but was found not to be there in the first place, is an error appropriate?

I have the impression the AWS provider does not report errors in those cases - but that has to be double checked.

The result of this investigation could be a PR to terraform-provider-libvirt.

Configuring an external Database

Would it make sense to have the sumaform test DB external to the terraform setup? When we use terraform destroy we lose all our synced channel data. Using a list of external DBs could save developers (myself included) time when rebuilding a test env.

1. We could point to presynced databases containing all SUSE repositories (head, nightly, stable, etc.).
2. When syncing channels for the first time, the external DB is checked and only new packages are downloaded and added to the DB tables.
3. I believe this means we would only need to wait for the tables to be written, which could cut the waiting time roughly in half.

Or is this idea completely mad?

don't copy SSL keys if for_development_only = false

Sorry, no time to do a PR, doing an issue instead ;-).

Sumaform tries to copy SSL key files from the master to the proxy, even when for_development_only = false:

  module.proxy.suse_manager_proxy.libvirt_domain.domain (remote-exec):      Comment: Unable to manage file: Error: HTTP 404: Not Found reading http://mix-suma3pg.tf.local/pub/bootstrap/bootstrap.sh.sha512
  module.proxy.suse_manager_proxy.libvirt_domain.domain (remote-exec):      Comment: Unable to manage file: Error: HTTP 404: Not Found reading http://mix-suma3pg.tf.local/pub/RHN-ORG-PRIVATE-SSL-KEY.sha512

The problem is that in salt/suse_manager_proxy/init.sls some file.managed states should be declared only conditionally.

Get rid of var server.configuration for client and minion

At the moment, we have this variable that expects the server hostname:

https://github.com/moio/sumaform/blob/master/main.tf.libvirt-testsuite.example#L34

Because of this dependency, the server, client and minion are not created in parallel, which costs us about 20 minutes in automation; and in testsuite mode the dependency isn't even used.

We should get rid of this variable in order to spawn the server, client and minion (and other newly added machines) in parallel. The control-node needing the other machines is fine.

Re-evaluate decision of using nested modules

We use nested modules in the libvirt backend to share code in the host module, but that comes with an additional burden on users because tainting becomes more difficult.

In fact to taint a host we currently have to do the following:

$ terraform state list
...
module.suma3pg.module.suse_manager.libvirt_domain.domain
module.suma3pg.module.suse_manager.libvirt_volume.main_disk
$ 
$ terraform taint -module=suma3pg.suse_manager libvirt_domain.domain
$ terraform taint -module=suma3pg.suse_manager libvirt_volume.main_disk

Note that:

  1. one has to run terraform state list to remember the names of modules and resources
  2. terraform taint has a "module path" syntax that is different from the "state path", so it needs some counter-intuitive copypasting

We could flatten modules for easier tainting, at the expense of some extra code duplication.

Re-evaluate decision of using maps to specify domain-volume associations in libvirt

REQUIRES: https://github.com/moio/sumaform/issues/75

Using maps is handy for module code reuse, and was implemented in terraform-provider-libvirt for this reason.

On the other hand, it has the downside of always needing different resources for volumes and domains, even in case one doesn't make sense without the other (eg. root disks). terraform taint must always be executed twice, terraform state list is more complicated than it should be, etc.

If we decide not to use nested modules anymore we might contribute code to terraform-provider-libvirt for resources that represent "essential" domain-volume associations as it happens eg. in the AWS, Azure and Google Cloud providers.

Expected behaviour is to have only one resource that accounts for the VM and its root volume(s), that is created, destroyed, tainted as a whole.

[testsuite-branch] disk.0 object error.

Something is wrong with this var.

terraform get
terraform plan
Errors:

  * libvirt_domain.domain: disk.0: expected object, got string
  * libvirt_domain.domain: disk.0: expected object, got string
  * libvirt_domain.domain: disk.0: expected object, got string
  * libvirt_domain.domain: disk.0: expected object, got string

This is a workaround for the moment, but it suppresses multi-disk support.

There is a small bug in terraform's map handling:

--- a/modules/libvirt/host/main.tf
+++ b/modules/libvirt/host/main.tf
@@ -18,13 +18,9 @@ resource "libvirt_domain" "domain" {
   running = "${var.running}"
   count = "${var.count}"
 
-  // base disk + additional disks if any
-  disk = ["${concat(
-    list(
-      map("volume_id", "${element(libvirt_volume.main_disk.*.id, count.index)}")
-    ),
-    var.additional_disk
-  )}"]
+  disk {
+    volume_id = "${element(libvirt_volume.main_disk.*.id, count.index)}"
+  }

General performance improvements via Postgres unsafety

There are options in PostgreSQL that allow trading speed for safety. Since most sumaform machines are for testing purposes, we could enable them without worrying too much about data loss in case of power failures.

See: http://stackoverflow.com/questions/9407442/optimise-postgresql-for-fast-testing/9407940#9407940

Specifically:

  • synchronous_commit=off
  • commit_delay=2000

Or even:

  • fsync = off
  • full_page_writes=off
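Collected into a postgresql.conf fragment (these setting names and values come from the list above; testing machines only):

```conf
# speed over safety: acceptable for throwaway test machines
synchronous_commit = off
commit_delay = 2000
# more aggressive still (risk of data loss on power failure):
# fsync = off
# full_page_writes = off
```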

@MalloZup I think those should be enabled on testsuites too, and I expect gains there to be significant.

Cannot access the spice display of VMs running on a remote host

When accessing VMs spawned by sumaform on a remote KVM host from virt-manager, I can see the list of VMs, but not their display. The following message appears:

Error connecting to the graphical console: the guest is on a distant host, but is
configured only for connections to local file descriptors

A workaround is to go to the settings tab of the VM, choose "Spice display" in the left-side menu, and change "Listen to:" from "None" to "Address". Then, after a reboot of the VM (needed to take the new setting into account), it works and I can see the display.

Translated into XML settings, it means that:

<graphics type='spice'>
    <listen type='none'/>
</graphics>

should be changed into:

<graphics type='spice' autoport='yes' listen='0.0.0.0'>
    <listen type='address' address='0.0.0.0'/>
</graphics>

The error is most probably in the libvirt plugin for terraform; it seems to be already fixed there, but the new version is not used in sumaform yet.

Hardcode an SSL certificate using the "bring-your-own-certificate" mechanism in SUSE Manager

This would be nice to have in order not to have to accept the SSL certificates all the time.

The following environment variables:

https://www.suse.com/documentation/suse-manager-3/singlehtml/book_suma_best_practices/book_suma_best_practices.html#bp.chap.bring.your.own.cert

should be set here:

https://github.com/moio/sumaform/blob/master/salt/suse-manager/init.sls#L73

See the following link to set environment variables to cmd.run:

https://docs.saltstack.com/en/latest/ref/states/all/salt.states.cmd.html#salt.states.cmd.run

The certificate files could be copied from an existing server.
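A sketch of what that could look like in Salt (the state ID, script path, and variable names below are placeholders, not the documented SUSE Manager variable names; only the env mechanism comes from the salt.states.cmd.run docs linked above):

```yaml
# Hypothetical: pass bring-your-own-certificate variables to the setup command
suse-manager-setup:
  cmd.run:
    - name: /root/setup.sh  # placeholder for the actual setup command
    - env:
      - CA_CERT: /root/ca.crt          # placeholder variable name
      - SERVER_KEY: /root/server.key   # placeholder variable name
```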

@mseidl you might be interested 😄
