konveyor / move2kube

Move2Kube is a command-line tool for automating the creation of Infrastructure as Code (IaC) artifacts. It has built-in support for creating IaC artifacts for replatforming to Kubernetes/OpenShift.

Home Page: https://move2kube.konveyor.io/

License: Apache License 2.0

Dockerfile 0.86% Makefile 0.62% Go 95.89% Shell 1.16% JavaScript 0.18% Batchfile 0.45% HTML 0.07% CSS 0.02% Python 0.76%
hacktoberfest kubernetes modernization move2kube replatform

move2kube's People

Contributors

akagami-harsh, akash-nayak, akhil-ghatiki, ashokponkumar, bart-vandongen, chillorb, deewhyweb, dependabot[bot], gabriel-farache, harikrishnanbalagopal, hrittikhere, jan200101, jmontleon, jwmatthews, kmehant, luukvdm, morrislaw, pabloloyola, parthiba-hazra, prakhar-agarwal-byte, rmarting, sanket-0510, satyazzz123, seshapad, shashank381, soumil-07, svkrep, tarun8718, venkatbandarupalli, vovapi


move2kube's Issues

Create install script

Create an install script that downloads the latest release and places it in the right folder on the PATH.

  1. Write a script that downloads the latest release (not from source) from https://github.com/konveyor/move2kube/releases depending on the operating system and installs it in the right path. A simpler version of https://github.com/helm/helm/blob/master/scripts/get-helm-3 would suffice.
  2. Place the script in the scripts folder and update USAGE.md and README.md.

The install process should be as simple as: curl https://raw.githubusercontent.com/konveyor/move2kube/master/scripts/install.sh | bash

Pull traversal of source directory up to a higher level and consolidate m2kignore logic. Also reconsider service naming logic.

Right now each source translator does its own traversal and its own m2kignore logic:

err := filepath.Walk(inputPath, func(fullpath string, info os.FileInfo, err error) error {
    if err != nil {
        log.Warnf("Skipping path %s due to error: %s", fullpath, err)
        return nil
    }
    if info.IsDir() {
        path, _ := plan.GetRelativePath(fullpath)
        if common.IsStringPresent(preContainerizedSourcePaths, path) {
            return filepath.SkipDir //TODO: Should we go inside the directory in this case?
        }
        fullcleanpath, err := filepath.Abs(fullpath)
        if err != nil {
            log.Errorf("Unable to resolve full path of directory %s : %s", fullcleanpath, err)
        }
        if common.IsStringPresent(ignoreDirectories, fullcleanpath) {
            if common.IsStringPresent(ignoreContents, fullcleanpath) {
                return filepath.SkipDir
            }
            return nil
        }
        containerizationoptions := containerizers.GetContainerizationOptions(plan, fullpath)
        if len(containerizationoptions) == 0 {
            log.Debugf("No known containerization approach is supported for %s", fullpath)
            if common.IsStringPresent(ignoreContents, fullcleanpath) {
                return filepath.SkipDir
            }
            return nil
        }
        for _, co := range containerizationoptions {
            expandedPath, err := filepath.Abs(fullpath) // If fullpath is "." it will expand to the absolute path.
            if err != nil {
                log.Warnf("Failed to get the absolute path for %s", fullpath)
                continue
            }
            service := c.newService(filepath.Base(expandedPath))
            service.ContainerBuildType = co.ContainerizationType
            service.ContainerizationTargetOptions = co.TargetOptions
            if !common.IsStringPresent(service.BuildArtifacts[plantypes.SourceDirectoryBuildArtifactType], path) {
                service.SourceArtifacts[plantypes.SourceDirectoryArtifactType] = append(service.SourceArtifacts[plantypes.SourceDirectoryArtifactType], path)
                service.BuildArtifacts[plantypes.SourceDirectoryBuildArtifactType] = append(service.BuildArtifacts[plantypes.SourceDirectoryBuildArtifactType], path)
            }
            if foundRepo, err := service.GatherGitInfo(fullpath, plan); foundRepo && err != nil {
                log.Warnf("Error while parsing the git repo at path %q Error: %q", fullpath, err)
            }
            services = append(services, service)
        }
        //return nil
        return filepath.SkipDir // Skipping all subdirectories when base directory is a valid package
    }
    return nil
})

Pull the traversal code to a higher level, maybe here:

for _, l := range translationPlanners {
    log.Infof("[%T] Planning translation", l)
    services, err := l.GetServiceOptions(inputPath, p)
    if err != nil {
        log.Warnf("[%T] Failed : %s", l, err)
    } else {
        p.AddServicesToPlan(services)
        log.Infof("[%T] Done", l)
    }
}

This avoids duplicating the m2kignore logic and should simplify the logic inside each source translator's GetServiceOptions.
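A rough sketch of what the pulled-up traversal could look like (purely illustrative; the per-directory planner method, the isIgnored helper, and the m2kignore consolidation are assumptions, not the current code):

// Walk the source directory once, apply the m2kignore rules in one place,
// and hand each surviving directory to every translation planner.
err := filepath.Walk(inputPath, func(fullpath string, info os.FileInfo, err error) error {
    if err != nil || !info.IsDir() {
        return nil
    }
    if isIgnored(fullpath) { // hypothetical helper wrapping the consolidated m2kignore logic
        return filepath.SkipDir
    }
    for _, l := range translationPlanners {
        // Hypothetical per-directory variant of GetServiceOptions.
        services, err := l.GetServiceOptionsForDir(fullpath, p)
        if err != nil {
            log.Warnf("[%T] Failed for directory %s : %s", l, fullpath, err)
            continue
        }
        p.AddServicesToPlan(services)
    }
    return nil
})
if err != nil {
    log.Warnf("Error during the source directory traversal : %s", err)
}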

Refactor path manipulations to use only absolute paths internally.

Description

Currently we are doing a bunch of path manipulations: converting absolute paths to paths relative to the source directory (plan.Spec.Inputs.RootDir), converting absolute paths to paths relative to the present working directory (pwd), and vice versa.

In addition to being a source of bugs, this also makes it harder to reason about the correctness of the code.

Examples:
https://github.com/konveyor/move2kube/blob/master/internal/source/any2kube.go#L64-L92
https://github.com/konveyor/move2kube/blob/master/internal/containerizer/dockerfilecontainerizer.go#L62-L75
https://github.com/konveyor/move2kube/blob/master/internal/containerizer/dockerfilecontainerizer.go#L101-L109

Proposal

  1. Use absolute paths everywhere internally.
  2. Convert the absolute paths to relative paths when marshalling.
  3. Clean and convert relative paths to absolute paths when unmarshalling.
  4. Remove the AbsRootDir and RelRootDir fields from plan.Spec.Inputs.
  5. Remove plan.GetRelativePath.

Consider enhancing the qacache file to become a config file

Config file format

More advanced users might want to configure move2kube.
Need to consider fixing the format of the qacache file so it can become a config file.
The format also needs to be clearly documented so people can use it.

Config filename convention

Also consider enforcing a filename convention for the config files, something like m2kconfig-*.yaml.
Currently we look at any and all yaml files and try to parse them into the qacache struct:

files, err := common.GetFilesByExt(inputPath, []string{".yml", ".yaml"})
if err != nil {
    log.Warnf("Unable to fetch yaml files and recognize qacache metadata yamls : %s", err)
    return err
}
for _, path := range files {
    cm := new(qatypes.Cache)
    if common.ReadYaml(path, &cm) == nil && cm.Kind == string(qatypes.QACacheKind) {

This is very hard to debug. If a problem occurs (a service is not getting exposed, etc.) a user would have to go through every single yaml file to find out which ones are valid config files.

Most software already has a fixed filename or filename format for its config files, so users will expect the same here.

As a bonus it also allows you to easily enable/disable a config by simply renaming it. No need to delete it or move it to a different location.
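A minimal sketch of how the proposed filename convention could be enforced (the m2kconfig-*.yaml pattern is the one suggested above; the helper name is illustrative):

// isConfigFile reports whether a yaml file follows the proposed m2kconfig-*.yaml convention.
func isConfigFile(path string) bool {
    matched, err := filepath.Match("m2kconfig-*.yaml", filepath.Base(path))
    return err == nil && matched
}

Only files passing such a check would then be parsed into the qatypes.Cache struct, instead of every yaml file in the input path.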

Cloud Foundry sample in move2kube-demos results in a wrong plan with duplicates and wrong containerization options

apiVersion: move2kube.konveyor.io/v1alpha1
kind: Plan
metadata:
  name: myproject
spec:
  inputs:
    rootDir: samples/cloud-foundry
    services:
      move2kube-demo-cf:
        - serviceName: move2kube-demo-cf
          serviceRelPath: /move2kube-demo-cf
          image: move2kube-demo-cf:latest
          translationType: Cfmanifest2Kube
          containerBuildType: NewDockerfile
          sourceType:
            - Directory
            - CfManifest
          targetOptions:
            - m2kassets/dockerfiles/golang
            - m2kassets/dockerfiles/php
          sourceArtifacts:
            CfRunningManifest:
              - cfapps.yaml
            SourceCode:
              - ../../.
          buildArtifacts:
            SourceCode:
              - ../../.
          updateContainerBuildPipeline: true
          updateDeployPipeline: true
        - serviceName: move2kube-demo-cf
          serviceRelPath: /move2kube-demo-cf
          image: move2kube-demo-cf:latest
          translationType: Cfmanifest2Kube
          containerBuildType: S2I
          sourceType:
            - Directory
            - CfManifest
          targetOptions:
            - m2kassets/s2i/golang
            - m2kassets/s2i/java
            - m2kassets/s2i/php
          sourceArtifacts:
            CfRunningManifest:
              - cfapps.yaml
            SourceCode:
              - ../../.
          buildArtifacts:
            SourceCode:
              - ../../.
          updateContainerBuildPipeline: true
          updateDeployPipeline: true
        - serviceName: move2kube-demo-cf
          serviceRelPath: /move2kube-demo-cf
          image: move2kube-demo-cf:latest
          translationType: Cfmanifest2Kube
          containerBuildType: CNB
          sourceType:
            - Directory
            - CfManifest
          targetOptions:
            - cloudfoundry/cnb:cflinuxfs3
            - gcr.io/buildpacks/builder
          sourceArtifacts:
            CfRunningManifest:
              - cfapps.yaml
            SourceCode:
              - ../../.
          buildArtifacts:
            SourceCode:
              - ../../.
          updateContainerBuildPipeline: true
          updateDeployPipeline: true
        - serviceName: move2kube-demo-cf
          serviceRelPath: /move2kube-demo-cf
          image: move2kube-demo-cf:latest
          translationType: Cfmanifest2Kube
          containerBuildType: NewDockerfile
          sourceType:
            - Directory
            - CfManifest
          targetOptions:
            - m2kassets/dockerfiles/nodejs
          sourceArtifacts:
            CfManifest:
              - manifest.yml
            CfRunningManifest:
              - cfapps.yaml
            SourceCode:
              - .
          buildArtifacts:
            SourceCode:
              - .
          updateContainerBuildPipeline: true
          updateDeployPipeline: true
        - serviceName: move2kube-demo-cf
          serviceRelPath: /move2kube-demo-cf
          image: move2kube-demo-cf:latest
          translationType: Cfmanifest2Kube
          containerBuildType: S2I
          sourceType:
            - Directory
            - CfManifest
          targetOptions:
            - m2kassets/s2i/nodejs
          sourceArtifacts:
            CfManifest:
              - manifest.yml
            CfRunningManifest:
              - cfapps.yaml
            SourceCode:
              - .
          buildArtifacts:
            SourceCode:
              - .
          updateContainerBuildPipeline: true
          updateDeployPipeline: true
        - serviceName: move2kube-demo-cf
          serviceRelPath: /move2kube-demo-cf
          image: move2kube-demo-cf:latest
          translationType: Cfmanifest2Kube
          containerBuildType: CNB
          sourceType:
            - Directory
            - CfManifest
          targetOptions:
            - cloudfoundry/cnb:cflinuxfs3
            - gcr.io/buildpacks/builder
          sourceArtifacts:
            CfManifest:
              - manifest.yml
            CfRunningManifest:
              - cfapps.yaml
            SourceCode:
              - .
          buildArtifacts:
            SourceCode:
              - .
          updateContainerBuildPipeline: true
          updateDeployPipeline: true
    targetInfoArtifacts:
      KubernetesCluster:
        - cluster.yaml
  outputs:
    kubernetes:
      artifactType: Yamls
      clusterType: |
        default/c114-e-us-south-containers-cloud-ibm-com:32230/IAM#[email protected]
      ignoreUnsupportedKinds: true

Test to check if all cluster names are properly parsed in generated constants

Description

Write unit tests to check that the generated files are valid.

Add the below test just before:

t.Run("update plan with some empty files", func(t *testing.T) {

	t.Run("check if all clusters in constant were loaded", func(t *testing.T) {
		p := plantypes.NewPlan()
		loader := metadata.ClusterMDLoader{}
		cmMap := loader.GetClusters(p)
                //TODO: Read all .yaml files in internal/metadata/clusters, and find the value in metadata.name using say a regex
		for clustername := range <ALL NAMES IDENTIFIED IN ABOVE STEP> {
			if _, ok := cmMap[clustername]; !ok {
				t.Fatal("Missing builtin "+clustername+" cluster metadata. The returned cluster info:", cmMap)
			}
		}
	})

Add support for Google's cloud native buildpack builders

Google recently announced support for CNB (https://cloud.google.com/blog/products/containers-kubernetes/google-cloud-now-supports-buildpacks/).

Move2Kube can easily support it out of the box as part of its CNB containerization support.

The change required is small. The builders array is currently:

d.builders = []string{"cloudfoundry/cnb:cflinuxfs3"}

Add gcr.io/buildpacks/builder to the above array, and then fix any tests that might need updates.
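The resulting line would look something like this (assuming no other builders are added at the same time):

d.builders = []string{"cloudfoundry/cnb:cflinuxfs3", "gcr.io/buildpacks/builder"}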

Refactor string normalization function and test it.

Description

This package normalizes certain strings in the intermediate representation.
It strips any whitespace and quotations/commas from each container's environment variables.

Two things need to be done:

  1. The package is not written in the most idiomatic way.
  2. The package needs unit tests.

Some examples of code that needs to be refactored:

How to get started

For this issue you should start with some simple tests that create simple IR objects and call the function on them.
The next step would be to make slight changes to the IR objects and create a separate subtest for each scenario.
Some scenarios that are good to test:

  • Function called on an IR with no services.
  • An IR containing services that have no containers.
  • An IR containing services and containers but the containers have no environment variables.
  • An IR containing services and containers and all the environment variables are valid.
  • An IR containing services and containers and some of the environment variables are invalid (containing spaces and quotes).
  • An IR containing services and containers and some of the environment variables are invalid but their names contain the string affinity.

How to add unit tests

Some guidelines:

  • Look at the other tests in the package/project and write similar tests.
  • Use subtests to test different paths through the function.
  • Don't worry about testing every single path through the function. Focus on common use cases and failure modes.

Some helpful resources on how to write unit tests in Go:

Code to be tested

func (po normalizeCharacterOptimizer) optimize(ir irtypes.IR) (irtypes.IR, error) {
    //TODO: Make this generic to ensure all fields have valid names
    for k := range ir.Services {
        scObj := ir.Services[k]
        for _, serviceContainer := range scObj.Containers {
            var tmpEnvArray []corev1.EnvVar
            for _, env := range serviceContainer.Env {
                if !strings.Contains(env.Name, "affinity") {
                    env.Name = strings.Trim(env.Name, "\t \n")
                    env.Value = strings.Trim(env.Value, "\t \n")
                    tmpString, err := stripQuotation(env.Name)
                    if err == nil {
                        env.Name = tmpString
                    }
                    tmpString, err = stripQuotation(env.Value)
                    if err == nil {
                        env.Value = tmpString
                    }
                    tmpEnvArray = append(tmpEnvArray, env)
                }
            }
            serviceContainer.Env = tmpEnvArray
        }
        ir.Services[k] = scObj
    }
    return ir, nil
}

func stripQuotation(inputString string) (string, error) {
    regex := regexp.MustCompile(`^[',"](.*)[',"]$`)
    return regex.ReplaceAllString(inputString, `$1`), nil
}

Add collector for Amazon ECS and Azure Container Service

Move2Kube collectors, which can be invoked using move2kube collect, help extract runtime information.

To allow translation from Amazon ECS and Azure Container Service, we need collectors similar to the one we have for Cloud Foundry (https://github.com/konveyor/move2kube/blob/master/internal/collector/cfappscollector.go).

We can write a collector which accesses the running container instances, extracts metadata from them, and serializes it in a format similar to https://github.com/konveyor/move2kube/blob/master/types/collection/cfinstanceapps.go (see the sketch after the task list below).

Requirements:

  1. Access to Amazon ECS or Azure Container Service

Tasks involved:

  1. Create a serialization format similar to https://github.com/konveyor/move2kube/blob/master/types/collection/cfinstanceapps.go
  2. Write a collector to access the runtime using api and serialize the data similar to https://github.com/konveyor/move2kube/blob/master/internal/collector/cfappscollector.go
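A hypothetical sketch of what such a serialization type could look like for ECS. All type and field names here are illustrative assumptions, not existing Move2Kube types; the real type should mirror the structure of cfinstanceapps.go:

// EcsTask captures the runtime metadata an ECS collector could extract for one running task (hypothetical shape).
type EcsTask struct {
    ClusterName string            `yaml:"clusterName"`
    ServiceName string            `yaml:"serviceName"`
    Image       string            `yaml:"image"`
    CPU         string            `yaml:"cpu"`
    Memory      string            `yaml:"memory"`
    Ports       []int32           `yaml:"ports"`
    Environment map[string]string `yaml:"environment"`
}

// EcsTasks is the top-level document the collector would serialize to yaml (also hypothetical).
type EcsTasks struct {
    Tasks []EcsTask `yaml:"tasks"`
}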

Test the code that implements the question answer logic.

Description

This file implements our CLI using this library https://github.com/AlecAivazis/survey

To ask the user a question we use one of these NewXProblem functions to create a Problem object:
https://github.com/konveyor/move2kube/blob/master/types/qaengine/problem.go#L260-L288

We pass the Problem object to FetchAnswer to get the answer from the user. Example:
https://github.com/konveyor/move2kube/blob/master/internal/customizer/registrycustomizer.go#L120-L131

We need unit tests for the functions in this file.

How to get started

โ— Please read the contribution guidelines first https://github.com/konveyor/move2kube/blob/master/contributing.md

Since these functions print to stdout and take input from stdin, we need to use the goexpect library:
https://github.com/google/goexpect

For this issue you should start by running move2kube on some of the samples in the samples folder.
This will give you a good idea of what the output looks like and what input it expects.

Next write some simple tests that call the functions in this file.
Look at the examples https://github.com/google/goexpect#basic-examples and check if the functions are producing the correct output.
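Before wiring up goexpect, a trivial subtest can at least exercise the constructor and StartEngine, which need no terminal interaction. A minimal sketch, assuming the Engine interface exposes StartEngine and the test lives in the same package:

func TestCliEngineStart(t *testing.T) {
    t.Run("start the cli engine", func(t *testing.T) {
        e := NewCliEngine()
        if err := e.StartEngine(); err != nil {
            t.Fatal("Failed to start the cli engine. Error:", err)
        }
    })
}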

How to add unit tests

Some guidelines:

  • Look at the other tests in the package/project and write similar tests.
  • Use subtests to test different paths through the function.
  • Don't worry about testing every single path through the function. Focus on common use cases and failure modes.

Some helpful resources on how to write unit tests in Go:

Code to be tested

func NewCliEngine() Engine {
    ce := new(CliEngine)
    return ce
}

// StartEngine starts the cli engine
func (c *CliEngine) StartEngine() error {
    return nil
}

// FetchAnswer fetches the answer using cli
func (c *CliEngine) FetchAnswer(prob qatypes.Problem) (answer qatypes.Problem, err error) {
    switch prob.Solution.Type {
    case qatypes.SelectSolutionFormType:
        return c.fetchSelectAnswer(prob)
    case qatypes.MultiSelectSolutionFormType:
        return c.fetchMultiSelectAnswer(prob)
    case qatypes.ConfirmSolutionFormType:
        return c.fetchConfirmAnswer(prob)
    case qatypes.InputSolutionFormType:
        return c.fetchInputAnswer(prob)
    case qatypes.MultilineSolutionFormType:
        return c.fetchMultilineAnswer(prob)
    case qatypes.PasswordSolutionFormType:
        return c.fetchPasswordAnswer(prob)
    default:
        log.Fatalf("Unknown type found: %s", prob.Solution.Type)
    }
    return prob, fmt.Errorf("Unknown type found : %s", prob.Solution.Type)
}

func (c *CliEngine) fetchSelectAnswer(prob qatypes.Problem) (answer qatypes.Problem, err error) {
    var ans, d string
    if len(prob.Solution.Default) > 0 {
        d = prob.Solution.Default[0]
    } else {
        d = prob.Solution.Options[0]
    }
    prompt := &survey.Select{
        Message: fmt.Sprintf("%d. %s \nHints: \n %s\n", prob.ID, prob.Desc, prob.Context),
        Options: prob.Solution.Options,
        Default: d,
    }
    err = survey.AskOne(prompt, &ans)
    if err != nil {
        log.Fatalf("Error while asking a question : %s", err)
        return prob, err
    }
    err = prob.SetAnswer([]string{ans})
    return prob, err
}

func (c *CliEngine) fetchMultiSelectAnswer(prob qatypes.Problem) (answer qatypes.Problem, err error) {
    ans := []string{}
    prompt := &survey.MultiSelect{
        Message: fmt.Sprintf("%d. %s \nHints: \n %s\n", prob.ID, prob.Desc, prob.Context),
        Options: prob.Solution.Options,
        Default: prob.Solution.Default,
    }
    err = survey.AskOne(prompt, &ans, survey.WithIcons(func(icons *survey.IconSet) {
        icons.MarkedOption.Text = "[\u2713]"
    }))
    if err != nil {
        log.Fatalf("Error while asking a question : %s", err)
        return prob, err
    }
    err = prob.SetAnswer(ans)
    return prob, err
}

func (c *CliEngine) fetchConfirmAnswer(prob qatypes.Problem) (answer qatypes.Problem, err error) {
    var ans, d bool
    if len(prob.Solution.Default) > 0 {
        d, err = strconv.ParseBool(prob.Solution.Default[0])
        if err != nil {
            log.Warnf("Unable to parse default value : %s", err)
        }
    }
    prompt := &survey.Confirm{
        Message: fmt.Sprintf("%d. %s \nHints: \n %s\n", prob.ID, prob.Desc, prob.Context),
        Default: d,
    }
    err = survey.AskOne(prompt, &ans)
    if err != nil {
        log.Fatalf("Error while asking a question : %s", err)
        return prob, err
    }
    err = prob.SetAnswer([]string{fmt.Sprintf("%v", ans)})
    return prob, err
}

func (c *CliEngine) fetchInputAnswer(prob qatypes.Problem) (answer qatypes.Problem, err error) {
    var ans string
    d := ""
    if len(prob.Solution.Default) > 0 {
        d = prob.Solution.Default[0]
    }
    prompt := &survey.Input{
        Message: fmt.Sprintf("%d. %s \nHints: \n %s\n", prob.ID, prob.Desc, prob.Context),
        Default: d,
    }
    err = survey.AskOne(prompt, &ans)
    if err != nil {
        log.Fatalf("Error while asking a question : %s", err)
        return prob, err
    }
    err = prob.SetAnswer([]string{ans})
    return prob, err
}

func (c *CliEngine) fetchMultilineAnswer(prob qatypes.Problem) (answer qatypes.Problem, err error) {
    var ans string
    d := ""
    if len(prob.Solution.Default) > 0 {
        d = prob.Solution.Default[0]
    }
    prompt := &survey.Multiline{
        Message: fmt.Sprintf("%d. %s \nHints: \n %s\n", prob.ID, prob.Desc, prob.Context),
        Default: d,
    }
    err = survey.AskOne(prompt, &ans)
    if err != nil {
        log.Fatalf("Error while asking a question : %s", err)
        return prob, err
    }
    err = prob.SetAnswer([]string{ans})
    return prob, err
}

func (c *CliEngine) fetchPasswordAnswer(prob qatypes.Problem) (answer qatypes.Problem, err error) {
    var ans string
    prompt := &survey.Password{
        Message: fmt.Sprintf("%d. %s \nHints: \n %s\n", prob.ID, prob.Desc, prob.Context),
    }
    err = survey.AskOne(prompt, &ans)
    if err != nil {
        log.Fatalf("Error while asking a question : %s", err)
        return prob, err
    }
    err = prob.SetAnswer([]string{ans})
    return prob, err
}

Test the QA cache loader.

Description

The move2kube tool when run asks the user some questions about how to translate the project.
The answers gathered from this Question Answer (QA) session are stored in a yaml file that we call the QA cache.
The global variable qaengine is used to prompt the user and fetch answers during the QA session.

This package looks in the source folder for the QA cache yaml files and loads the filepaths into the plan.
It parses the yaml files it finds to check if they are qa cache files or not before adding them.
During the translate phase it also loads those caches into the QA engine.

How to get started

โ— Please read the contribution guidelines first https://github.com/konveyor/move2kube/blob/master/contributing.md

To gather some QA cache files to use as testdata, you can run move2kube on any of the projects in the samples folder.
You will find a file called m2kqacache.yaml in the output directory.

For this issue you should start with some simple tests that create temporary directories and call the functions on them.
The next step would be to make slight changes to the files in the directory and create a separate subtest for each scenario.
Some scenarios that are good to test:

  • Function called on non existent directory.
  • Function called on a file.
  • Empty directory.
  • Directory we don't have permission to read.
  • Directory with only non yaml files.
  • Directory with some yaml files that aren't QA cache files.
  • Directory with some QA cache files and other random files.

How to add unit tests

Some guidelines:

  • Look at the other tests in the package/project and write similar tests.
  • Use subtests to test different paths through the function.
  • Don't worry about testing every single path through the function. Focus on common use cases and failure modes.

Some helpful resources on how to write unit tests in Go:

Code to be tested

func (i QACacheLoader) UpdatePlan(inputPath string, plan *plantypes.Plan) error {
    files, err := common.GetFilesByExt(inputPath, []string{".yml", ".yaml"})
    if err != nil {
        log.Warnf("Unable to fetch yaml files and recognize qacache metadata yamls : %s", err)
    }
    for _, path := range files {
        cm := new(qatypes.Cache)
        if common.ReadYaml(path, &cm) == nil && cm.Kind == string(qatypes.QACacheKind) {
            relpath, _ := plan.GetRelativePath(path)
            plan.Spec.Inputs.QACaches = append(plan.Spec.Inputs.QACaches, relpath)
        }
    }
    return nil
}

// LoadToIR starts the cache responders
func (i QACacheLoader) LoadToIR(p plantypes.Plan, ir *irtypes.IR) error {
    cachepaths := []string{}
    for i := len(p.Spec.Inputs.QACaches) - 1; i >= 0; i-- {
        cachepaths = append(cachepaths, p.GetFullPath(p.Spec.Inputs.QACaches[i]))
    }
    qaengine.AddCaches(cachepaths)
    return nil
}
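A minimal sketch of one such subtest for the empty-directory scenario (assuming the test lives in the same package, and that plantypes.NewPlan returns a plan value as it is used elsewhere in these issues):

func TestQACacheLoaderUpdatePlan(t *testing.T) {
    t.Run("empty directory adds no qa caches", func(t *testing.T) {
        inputPath := t.TempDir() // empty temporary directory
        plan := plantypes.NewPlan()
        loader := QACacheLoader{}
        if err := loader.UpdatePlan(inputPath, &plan); err != nil {
            t.Fatal("UpdatePlan failed on an empty directory. Error:", err)
        }
        if len(plan.Spec.Inputs.QACaches) != 0 {
            t.Fatal("Expected no QA caches for an empty directory. Actual:", plan.Spec.Inputs.QACaches)
        }
    })
}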

Add support for more default dockerfile containerizers

Adding a new Dockerfile-based containerization technique to move2kube involves the following:

  1. Check out the other existing dockerfile containerizers in https://github.com/konveyor/move2kube/tree/master/internal/assets/dockerfiles
  2. Write a detect script called m2kdfdetect.sh in a new folder, similar to https://github.com/konveyor/move2kube/blob/master/internal/assets/dockerfiles/python/m2kdfdetect.sh
  3. The detect script should return, as JSON, any parameters that will be required by the dockerfile.
  4. Write a dockerfile (similar to https://github.com/konveyor/move2kube/blob/master/internal/assets/dockerfiles/python/Dockerfile) which uses the parameters passed above.
  5. Run make generate
  6. Run make build
  7. Run move2kube translate -s srcfolder, where srcfolder contains a folder with an application that can be containerized using the above dockerfile, and check whether move2kube detects it.
  8. Check the new folder into https://github.com/konveyor/move2kube/blob/master/internal/assets/dockerfiles/
  9. Create a pull request.

Better error messages in unit tests.

Description

There are several unit tests that use reflect.DeepEqual for comparing the expected output with the actual output of the function. Currently the error messages when the equality check fails are not very helpful.

Example:

if !reflect.DeepEqual(services, want) {
    t.Fatal("Failed to get the services properly. Expected:", want, "actual:", services)
}

The error message simply prints the expected and actual outputs to the console with no indication of what is different between the two.

This is especially painful for large structs. The difference could be in a single field but finding that field can take ages.

Fix

Use the cmp package and https://pkg.go.dev/github.com/google/go-cmp/cmp#Diff to print more "human friendly" error messages.

Example:
https://github.com/HarikrishnanBalagopal/move2kube/blob/tekton2/internal/source/any2kube_test.go#L415-L417
Ignore the cmpopts.EquateEmpty() part; that is specific to that unit test.
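With cmp.Diff, the example check above could look something like this (a sketch; the exact message wording is up to the test author):

if diff := cmp.Diff(want, services); diff != "" {
    t.Fatalf("Failed to get the services properly. Difference between expected and actual (-want +got):\n%s", diff)
}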

How to get started

For this issue you should look into as many of the *_test.go files as possible and change the error messages.
Good places to start:

Test the k8s metadata loaders.

Description

This package looks in the source folder for k8s yaml files and loads the filepaths into the plan.
It parses the yaml files it finds to check if they are k8s resources.
It also transforms the files into runtime objects during translate phase.

How to get started

For this issue you should start with some simple tests that create temporary directories and call the functions on them.
The next step would be to make slight changes to the files in the directory and create a separate subtest for each scenario.
Some scenarios that are good to test:

  • Function called on an empty directory.
  • Directory we don't have permission to read.
  • Directory with only non yaml files.
  • Directory with some non k8s yaml files.
  • Directory with some k8s yamls and other non k8s files.

How to add unit tests

Some guidelines:

  • Look at the other tests in the package/project and write similar tests.
  • Use subtests to test different paths through the function.
  • Don't worry about testing every single path through the function. Focus on common use cases and failure modes.

Some helpful resources on how to write unit tests in Go:

Code to be tested

func (i K8sFilesLoader) UpdatePlan(inputPath string, plan *plantypes.Plan) error {
    codecs := serializer.NewCodecFactory((&apiresourceset.K8sAPIResourceSet{}).GetScheme())
    files, err := common.GetFilesByExt(inputPath, []string{".yml", ".yaml"})
    if err != nil {
        log.Warnf("Unable to fetch yaml files and recognize k8 yamls : %s", err)
    }
    for _, path := range files {
        relpath, _ := plan.GetRelativePath(path)
        data, err := ioutil.ReadFile(path)
        if err != nil {
            log.Debugf("ignoring file %s", path)
            continue
        }
        _, _, err = codecs.UniversalDeserializer().Decode(data, nil, nil)
        if err != nil {
            log.Debugf("ignoring file %s since serialization failed", path)
            continue
        } else {
            plan.Spec.Inputs.K8sFiles = append(plan.Spec.Inputs.K8sFiles, relpath)
        }
    }
    return nil
}

// LoadToIR loads k8s files as cached objects
func (i K8sFilesLoader) LoadToIR(p plantypes.Plan, ir *irtypes.IR) error {
    codecs := serializer.NewCodecFactory((&apiresourceset.K8sAPIResourceSet{}).GetScheme())
    for _, path := range p.Spec.Inputs.K8sFiles {
        fullpath := p.GetFullPath(path)
        data, err := ioutil.ReadFile(fullpath)
        if err != nil {
            log.Debugf("ignoring file %s", path)
            continue
        }
        obj, _, err := codecs.UniversalDeserializer().Decode(data, nil, nil)
        if err != nil {
            log.Debugf("ignoring file %s since serialization failed", path)
            continue
        } else {
            ir.CachedObjects = append(ir.CachedObjects, obj)
        }
    }
    return nil
}
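A minimal sketch of a subtest for the directory-with-only-non-yaml-files scenario (assuming the test lives in the same package and that plantypes.NewPlan returns a plan value as used elsewhere in these issues):

func TestK8sFilesLoaderUpdatePlan(t *testing.T) {
    t.Run("directory with only non yaml files", func(t *testing.T) {
        inputPath := t.TempDir()
        if err := ioutil.WriteFile(filepath.Join(inputPath, "notes.txt"), []byte("not yaml"), 0644); err != nil {
            t.Fatal("Failed to create the test file. Error:", err)
        }
        plan := plantypes.NewPlan()
        loader := K8sFilesLoader{}
        if err := loader.UpdatePlan(inputPath, &plan); err != nil {
            t.Fatal("UpdatePlan failed. Error:", err)
        }
        if len(plan.Spec.Inputs.K8sFiles) != 0 {
            t.Fatal("Expected no k8s files to be detected. Actual:", plan.Spec.Inputs.K8sFiles)
        }
    })
}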

Split the CI/CD pipeline resource set into multiple apiresources

Refactor the CI/CD pipeline creation into multiple apiresource files instead of having it all in one single file.

func (*TektonAPIResourceSet) CreateResources(ir irtypes.IR) []runtime.Object {
    projectName := ir.Name
    // Prefix the project name and make the name a valid k8s name.
    p := func(name string) string {
        name = fmt.Sprintf("%s-%s", projectName, name)
        return common.MakeStringDNSSubdomainNameCompliant(name)
    }
    pipelineName := p(basePipelineName)
    gitSecretNamePrefix := p(baseGitSecretName)
    clonePushServiceAccountName := p(baseClonePushServiceAccountName)
    registrySecretName := p(baseRegistrySecretName)
    gitEventIngressName := p(baseGitEventIngressName)
    gitEventListenerName := p(baseGitEventListenerName)
    triggerBindingName := p(baseTriggerBindingName)
    tektonTriggersServiceAccountName := p(baseTektonTriggersServiceAccountName)
    triggerTemplateName := p(baseTriggerTemplateName)
    workspaceName := p(baseWorkspaceName)
    gitSecrets := createGitSecrets(gitSecretNamePrefix, ir)
    role, serviceAccount, roleBinding := createTektonTriggersRBAC("tekton-triggers-admin-role", tektonTriggersServiceAccountName, "tekton-triggers-admin-role-binding")
    objs := []runtime.Object{
        role, serviceAccount, roleBinding,
        createCloneBuildPushPipeline(pipelineName, workspaceName, ir),
        createClonePushServiceAccount(clonePushServiceAccountName, gitSecrets, registrySecretName),
        createGitEventIngress(gitEventIngressName, gitEventListenerName),
        createGitEventTriggerBinding(triggerBindingName),
        createRegistrySecret(registrySecretName),
        createGitEventListener(gitEventListenerName, tektonTriggersServiceAccountName, triggerBindingName, triggerTemplateName),
        createTriggerTemplate(triggerTemplateName, pipelineName, clonePushServiceAccountName, workspaceName, ir),
    }
    for _, gitSecret := range gitSecrets {
        objs = append(objs, gitSecret)
    }
    return objs
}

func createGitSecrets(gitSecretNamePrefix string, ir irtypes.IR) [](*corev1.Secret) {
    secrets := [](*corev1.Secret){}
    gitDomains := []string{}
    for _, container := range ir.Containers {
        gitRepoURL, err := giturls.Parse(container.RepoInfo.GitRepoURL)
        if err != nil {
            log.Warnf("Failed to parse git repo url %q Error: %q", container.RepoInfo.GitRepoURL, err)
            continue
        }
        if gitRepoURL.Hostname() == "" {
            continue
        }
        gitDomains = append(gitDomains, gitRepoURL.Hostname())
    }
    gitDomains = common.UniqueStrings(gitDomains)
    if len(gitDomains) == 0 {
        log.Warn("No remote git repos found. CI/CD pipeline requires a remote git repo to pull the source code from.")
        gitSecretName := common.MakeStringDNSSubdomainNameCompliant(gitSecretNamePrefix)
        secrets = append(secrets, createGitSecret(gitSecretName, ""))
        return secrets
    }
    for _, gitDomain := range gitDomains {
        gitSecretName := fmt.Sprintf("%s-%s", gitSecretNamePrefix, gitDomain)
        gitSecretName = common.MakeStringDNSSubdomainNameCompliant(gitSecretName)
        secrets = append(secrets, createGitSecret(gitSecretName, gitDomain))
    }
    return secrets
}

func createGitSecret(name, gitRepoDomain string) *corev1.Secret {
    gitPrivateKey := gitPrivateKeyPlaceholder
    knownHosts := knownHostsPlaceholder
    if gitRepoDomain == "" {
        gitRepoDomain = gitDomainPlaceholder
    } else {
        if pubKeys, ok := sshkeys.DomainToPublicKeys[gitRepoDomain]; ok { // Check in our hardcoded set of keys and their ~/.ssh/known_hosts file.
            knownHosts = strings.Join(pubKeys, "\n")
        } else if pubKeyLine, err := knownhosts.GetKnownHostsLine(gitRepoDomain); err == nil { // Check online by connecting to the host.
            knownHosts = pubKeyLine
        } else {
            problemDesc := fmt.Sprintf("Unable to find the public key for the domain %s from known_hosts, please enter it. If you are not sure what this is press Enter and you will be able to edit it later: ", gitRepoDomain)
            example := sshkeys.DomainToPublicKeys["github.com"][0]
            problem, err := qatypes.NewInputProblem(problemDesc, []string{"Ex : " + example}, knownHostsPlaceholder)
            if err != nil {
                log.Fatalf("Unable to create problem : %s", err)
            }
            problem, err = qaengine.FetchAnswer(problem)
            if err != nil {
                log.Fatalf("Unable to fetch answer : %s", err)
            }
            newline, err := problem.GetStringAnswer()
            if err != nil {
                log.Fatalf("Unable to get answer : %s", err)
            }
            knownHosts = newline
        }
        if key, ok := sshkeys.GetSSHKey(gitRepoDomain); ok {
            gitPrivateKey = key
        }
    }
    secret := new(corev1.Secret)
    secret.TypeMeta = metav1.TypeMeta{
        Kind: string(internaltypes.SecretKind),
        APIVersion: corev1.SchemeGroupVersion.String(),
    }
    secret.ObjectMeta = metav1.ObjectMeta{
        Name: name,
        Annotations: map[string]string{
            "tekton.dev/git-0": gitRepoDomain,
        },
    }
    secret.Type = corev1.SecretTypeSSHAuth
    secret.StringData = map[string]string{
        corev1.SSHAuthPrivateKey: gitPrivateKey,
        "known_hosts": knownHosts,
    }
    return secret
}

func createTektonTriggersRBAC(roleName string, serviceAccountName string, roleBindingName string) (runtime.Object, runtime.Object, runtime.Object) {
    role := new(rbacv1.Role)
    role.TypeMeta = metav1.TypeMeta{
        Kind: roleKind,
        APIVersion: rbacv1.SchemeGroupVersion.String(),
    }
    role.ObjectMeta = metav1.ObjectMeta{Name: roleName}
    role.Rules = []rbacv1.PolicyRule{
        rbacv1.PolicyRule{APIGroups: []string{triggersv1alpha1.SchemeGroupVersion.Group}, Resources: []string{"eventlisteners", "triggerbindings", "triggertemplates"}, Verbs: []string{"get"}},
        rbacv1.PolicyRule{APIGroups: []string{v1beta1.SchemeGroupVersion.Group}, Resources: []string{"pipelineruns"}, Verbs: []string{"create"}},
        rbacv1.PolicyRule{APIGroups: []string{""}, Resources: []string{"configmaps"}, Verbs: []string{"get", "list", "watch"}},
    }
    serviceAccount := new(corev1.ServiceAccount)
    serviceAccount.TypeMeta = metav1.TypeMeta{
        Kind: rbacv1.ServiceAccountKind,
        APIVersion: corev1.SchemeGroupVersion.String(),
    }
    serviceAccount.ObjectMeta = metav1.ObjectMeta{Name: serviceAccountName}
    roleBinding := new(rbacv1.RoleBinding)
    roleBinding.TypeMeta = metav1.TypeMeta{
        Kind: roleBindingKind,
        APIVersion: rbacv1.SchemeGroupVersion.String(),
    }
    roleBinding.ObjectMeta = metav1.ObjectMeta{Name: roleBindingName}
    roleBinding.Subjects = []rbacv1.Subject{
        rbacv1.Subject{Kind: rbacv1.ServiceAccountKind, Name: serviceAccountName},
    }
    roleBinding.RoleRef = rbacv1.RoleRef{APIGroup: rbacv1.SchemeGroupVersion.Group, Kind: roleKind, Name: roleName}
    return role, serviceAccount, roleBinding
}

func createCloneBuildPushPipeline(name, workspaceName string, ir irtypes.IR) runtime.Object {
    pipeline := new(v1beta1.Pipeline)
    pipeline.TypeMeta = metav1.TypeMeta{
        Kind: pipelineKind,
        APIVersion: v1beta1.SchemeGroupVersion.String(),
    }
    pipeline.ObjectMeta = metav1.ObjectMeta{Name: name}
    pipeline.Spec.Params = []v1beta1.ParamSpec{
        v1beta1.ParamSpec{Name: "image-registry-url", Description: "registry-domain/namespace where the output image should be pushed.", Type: v1beta1.ParamTypeString},
    }
    pipeline.Spec.Workspaces = []v1beta1.PipelineWorkspaceDeclaration{
        v1beta1.PipelineWorkspaceDeclaration{Name: workspaceName, Description: "This workspace will receive the cloned git repo and be passed to the kaniko task for building the image."},
    }
    tasks := []v1beta1.PipelineTask{}
    firstTask := true
    prevTaskName := ""
    for i, container := range ir.Containers {
        if container.ContainerBuildType == plantypes.ManualContainerBuildTypeValue || container.ContainerBuildType == plantypes.ReuseContainerBuildTypeValue {
            log.Debugf("Manual or reuse containerization. We will skip this for CICD.")
            continue
        }
        if container.ContainerBuildType == plantypes.DockerFileContainerBuildTypeValue || container.ContainerBuildType == plantypes.ReuseDockerFileContainerBuildTypeValue {
            cloneTaskName := "clone-" + fmt.Sprint(i)
            gitRepoURL := container.RepoInfo.GitRepoURL
            if gitRepoURL == "" {
                gitRepoURL = gitRepoURLPlaceholder
            }
            branchName := container.RepoInfo.GitRepoBranch
            if branchName == "" {
                branchName = defaultGitRepoBranch
            }
            cloneTask := v1beta1.PipelineTask{
                Name: cloneTaskName,
                TaskRef: &v1beta1.TaskRef{Name: "git-clone"},
                Workspaces: []v1beta1.WorkspacePipelineTaskBinding{
                    v1beta1.WorkspacePipelineTaskBinding{Name: "output", Workspace: workspaceName},
                },
                Params: []v1beta1.Param{
                    v1beta1.Param{Name: "url", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: gitRepoURL}},
                    v1beta1.Param{Name: "revision", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: branchName}},
                    v1beta1.Param{Name: "deleteExisting", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: "true"}},
                },
            }
            if !firstTask {
                cloneTask.RunAfter = []string{prevTaskName}
            }
            imageName := container.ImageNames[0]
            // Assume there is no git repo. If there is no git repo we can't do CI/CD.
            dockerfilePath := dockerfilePathPlaceholder
            contextPath := contextPathPlaceholder
            // If there is a git repo, set the correct context and dockerfile paths.
            if container.RepoInfo.GitRepoDir != "" {
                relDockerfilePath, err := filepath.Rel(container.RepoInfo.GitRepoDir, container.RepoInfo.TargetPath)
                if err != nil {
                    log.Errorf("Failed to make the path %q relative to the path %q Error %q", container.RepoInfo.GitRepoDir, container.RepoInfo.TargetPath, err)
                } else {
                    dockerfilePath = relDockerfilePath
                    // We can't figure out the context from the source. So assume the context is the directory containing the dockerfile.
                    contextPath = filepath.Dir(relDockerfilePath)
                }
            }
            buildPushTaskName := "build-push-" + fmt.Sprint(i)
            buildPushTask := v1beta1.PipelineTask{
                RunAfter: []string{cloneTaskName},
                Name: buildPushTaskName,
                TaskRef: &v1beta1.TaskRef{Name: "kaniko"},
                Workspaces: []v1beta1.WorkspacePipelineTaskBinding{
                    v1beta1.WorkspacePipelineTaskBinding{Name: "source", Workspace: workspaceName},
                },
                Params: []v1beta1.Param{
                    v1beta1.Param{Name: "IMAGE", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: "$(params.image-registry-url)/" + imageName}},
                    v1beta1.Param{Name: "DOCKERFILE", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: dockerfilePath}},
                    v1beta1.Param{Name: "CONTEXT", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: contextPath}},
                },
            }
            tasks = append(tasks, cloneTask, buildPushTask)
            firstTask = false
            prevTaskName = buildPushTaskName
        } else if container.ContainerBuildType == plantypes.S2IContainerBuildTypeValue {
            log.Errorf("TODO: Implement support for S2I")
        } else if container.ContainerBuildType == plantypes.CNBContainerBuildTypeValue {
            log.Errorf("TODO: Implement support for CNB")
        } else {
            log.Errorf("Unknown containerization method: %v", container.ContainerBuildType)
        }
    }
    pipeline.Spec.Tasks = tasks
    return pipeline
}

func createClonePushServiceAccount(name string, gitSecrets [](*corev1.Secret), registrySecretName string) runtime.Object {
    serviceAccount := new(corev1.ServiceAccount)
    serviceAccount.TypeMeta = metav1.TypeMeta{
        Kind: rbacv1.ServiceAccountKind,
        APIVersion: corev1.SchemeGroupVersion.String(),
    }
    serviceAccount.ObjectMeta = metav1.ObjectMeta{Name: name}
    serviceAccount.Secrets = []corev1.ObjectReference{
        corev1.ObjectReference{Name: registrySecretName},
    }
    for _, gitSecret := range gitSecrets {
        serviceAccount.Secrets = append(serviceAccount.Secrets, corev1.ObjectReference{Name: gitSecret.ObjectMeta.Name})
    }
    return serviceAccount
}

func createGitEventIngress(name, gitEventListenerName string) runtime.Object {
    secretName := "<TODO: insert name of TLS secret>"
    hostName := "<TODO: insert subdomain where you want to receive git events>"
    serviceName := "el-" + gitEventListenerName // https://github.com/tektoncd/triggers/blob/master/docs/eventlisteners.md#how-does-the-eventlistener-work
    servicePort := int32(8080)
    ingress := new(networkingv1beta1.Ingress)
    ingress.TypeMeta = metav1.TypeMeta{
        Kind: ingressKind,
        APIVersion: networkingv1beta1.SchemeGroupVersion.String(),
    }
    ingress.ObjectMeta = metav1.ObjectMeta{Name: name}
    ingress.Spec = networkingv1beta1.IngressSpec{
        TLS: []networkingv1beta1.IngressTLS{
            networkingv1beta1.IngressTLS{Hosts: []string{hostName}, SecretName: secretName},
        },
        Rules: []networkingv1beta1.IngressRule{
            networkingv1beta1.IngressRule{Host: hostName, IngressRuleValue: networkingv1beta1.IngressRuleValue{HTTP: &networkingv1beta1.HTTPIngressRuleValue{
                Paths: []networkingv1beta1.HTTPIngressPath{
                    networkingv1beta1.HTTPIngressPath{Backend: networkingv1beta1.IngressBackend{
                        ServiceName: serviceName,
                        ServicePort: intstr.IntOrString{Type: intstr.Int, IntVal: servicePort},
                    }},
                },
            }}},
        },
    }
    return ingress
}

func createGitEventTriggerBinding(name string) runtime.Object {
    triggerBinding := new(triggersv1alpha1.TriggerBinding)
    triggerBinding.TypeMeta = metav1.TypeMeta{
        Kind: string(triggersv1alpha1.NamespacedTriggerBindingKind),
        APIVersion: triggersv1alpha1.SchemeGroupVersion.String(),
    }
    triggerBinding.ObjectMeta = metav1.ObjectMeta{Name: name}
    return triggerBinding
}

func createRegistrySecret(name string) runtime.Object {
    registryURL := "index.docker.io"
    dockerConfigJSON := "<TODO: insert your docker config json>"
    secret := new(corev1.Secret)
    secret.TypeMeta = metav1.TypeMeta{
        Kind: string(internaltypes.SecretKind),
        APIVersion: corev1.SchemeGroupVersion.String(),
    }
    secret.ObjectMeta = metav1.ObjectMeta{
        Name: name,
        Annotations: map[string]string{
            "tekton.dev/docker-0": registryURL,
        },
    }
    secret.Type = corev1.SecretTypeDockerConfigJson
    secret.StringData = map[string]string{
        corev1.DockerConfigJsonKey: dockerConfigJSON,
    }
    return secret
}

func createGitEventListener(name, serviceAccountName, triggerBindingName, triggerTemplateName string) runtime.Object {
    eventListener := new(triggersv1alpha1.EventListener)
    eventListener.TypeMeta = metav1.TypeMeta{
        Kind: eventListenerKind,
        APIVersion: triggersv1alpha1.SchemeGroupVersion.String(),
    }
    eventListener.ObjectMeta = metav1.ObjectMeta{Name: name}
    eventListener.Spec = triggersv1alpha1.EventListenerSpec{
        ServiceAccountName: serviceAccountName,
        Triggers: []triggersv1alpha1.EventListenerTrigger{
            triggersv1alpha1.EventListenerTrigger{
                Bindings: []*triggersv1alpha1.EventListenerBinding{
                    &triggersv1alpha1.EventListenerBinding{Ref: triggerBindingName},
                },
                Template: &triggersv1alpha1.EventListenerTemplate{Name: triggerTemplateName},
            },
        },
    }
    return eventListener
}

func createTriggerTemplate(name, pipelineName, serviceAccountName, workspaceName string, ir irtypes.IR) runtime.Object {
    storageClassName := "<TODO: insert the storage class you want to use>"
    registryURL := ir.Kubernetes.RegistryURL
    registryNamespace := ir.Kubernetes.RegistryNamespace
    if registryURL == "" {
        registryURL = common.DefaultRegistryURL
    }
    if registryNamespace == "" {
        registryNamespace = "<TODO: insert your registry namespace>"
    }
    // pipelinerun
    pipelineRun := new(v1beta1.PipelineRun)
    pipelineRun.TypeMeta = metav1.TypeMeta{
        Kind: pipelineRunKind,
        APIVersion: v1beta1.SchemeGroupVersion.String(),
    }
    pipelineRun.ObjectMeta = metav1.ObjectMeta{Name: name}
    pipelineRun.Spec = v1beta1.PipelineRunSpec{
        PipelineRef: &v1beta1.PipelineRef{Name: pipelineName},
        ServiceAccountName: serviceAccountName,
        Workspaces: []v1beta1.WorkspaceBinding{
            v1beta1.WorkspaceBinding{
                Name: workspaceName,
                VolumeClaimTemplate: &corev1.PersistentVolumeClaim{
                    Spec: corev1.PersistentVolumeClaimSpec{
                        StorageClassName: &storageClassName,
                        AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                        Resources: corev1.ResourceRequirements{Requests: corev1.ResourceList{"storage": resource.MustParse("1Gi")}},
                    },
                },
            },
        },
        Params: []v1beta1.Param{
            v1beta1.Param{Name: "image-registry-url", Value: v1beta1.ArrayOrString{Type: "string", StringVal: registryURL + "/" + registryNamespace}},
        },
    }
    // trigger template
    triggerTemplate := new(triggersv1alpha1.TriggerTemplate)
    triggerTemplate.TypeMeta = metav1.TypeMeta{
        Kind: triggerTemplateKind,
        APIVersion: triggersv1alpha1.SchemeGroupVersion.String(),
    }
    triggerTemplate.ObjectMeta = metav1.ObjectMeta{Name: name}
    triggerTemplate.Spec = triggersv1alpha1.TriggerTemplateSpec{
        ResourceTemplates: []triggersv1alpha1.TriggerResourceTemplate{
            triggersv1alpha1.TriggerResourceTemplate{
                RawExtension: runtime.RawExtension{Object: pipelineRun},
            },
        },
    }
    return triggerTemplate
}

Note: Also look at removing RepoInfo from the plan. Try to move the git repo detection to translate phase.

Support the ".env" file in move2kube

move2kube translate does not support the .env file while parsing the docker-compose.yaml file. It returns an error:

Error while loading docker compose config : 1 error(s) decoding:

* error decoding 'Ports': No port specified: :<empty> 
ERRO[0009] Unable to parse docker compose file dock-com/docker-compose.yaml using *compose.V3Loader : 1 error(s) decoding:

* error decoding 'Ports': No port specified: :<empty> 

The below files can be used to reproduce the error.

# this is the docker-compose.yaml file
version: '3.4'
services:
    goodbot:
        image: python
        env_file:
            - .env
        ports:
            - ${PORT}:${PORT}
# this is .env file
PORT=123

(kompose convert also has similar issue- kubernetes/kompose#1289)

Bug where object is appended even when error occurs.

Description

Bug

https://github.com/konveyor/move2kube/blob/master/internal/apiresource/networkpolicy.go#L56-L60
The object gets appended even when there is an error.

Fix: Add a continue inside the if block.

Too many checks and error messages

https://github.com/konveyor/move2kube/blob/master/internal/apiresource/networkpolicy.go#L53-L64
The variables supportedKinds and networkPolicyKind do not change inside the loop. However, it keeps checking
whether the string networkPolicyKind is present in the array supportedKinds over and over.
Every time the check fails it prints an error message, resulting in lots of error messages about the same thing.

Fix: Move the check and error outside the loop.
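A hedged sketch of both fixes together (the variable names below only loosely follow the linked networkpolicy.go, and the helper call is illustrative, so treat them as assumptions):

// Check the unchanging condition once, outside the loop, instead of logging it on every iteration.
if !common.IsStringPresent(supportedKinds, networkPolicyKind) {
    log.Errorf("The kind %s is not among the supported kinds %v", networkPolicyKind, supportedKinds)
    return objs
}
for _, service := range services {
    obj, err := createNetworkPolicy(service) // illustrative call, not the real function signature
    if err != nil {
        log.Warnf("Failed to create the network policy. Error: %q", err)
        continue // the fix: don't append the object when an error occurred
    }
    objs = append(objs, obj)
}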

Add unit tests.

Add unit tests for all these functions:
https://github.com/konveyor/move2kube/blob/master/internal/apiresource/networkpolicy.go#L43-L113

How to get started

โ— Please read the contribution guidelines first https://github.com/konveyor/move2kube/blob/master/contributing.md

First fix the bugs mentioned above, then write the unit tests for the fixed functions.
Since the functions really only have 1 or 2 paths through them the unit tests can also be very simple.

How to add unit tests

Some guidelines:

  • Look at the other tests in the package/project and write similar tests.
  • Use subtests to test different paths through the function.
  • Don't worry about testing every single path through the function. Focus on common use cases and failure modes.

Some helpful resources on how to write unit tests in Go:

Code to be tested

func (i K8sFilesLoader) UpdatePlan(inputPath string, plan *plantypes.Plan) error {
    codecs := serializer.NewCodecFactory((&apiresourceset.K8sAPIResourceSet{}).GetScheme())
    files, err := common.GetFilesByExt(inputPath, []string{".yml", ".yaml"})
    if err != nil {
        log.Warnf("Unable to fetch yaml files and recognize k8 yamls : %s", err)
    }
    for _, path := range files {
        relpath, _ := plan.GetRelativePath(path)
        data, err := ioutil.ReadFile(path)
        if err != nil {
            log.Debugf("ignoring file %s", path)
            continue
        }
        _, _, err = codecs.UniversalDeserializer().Decode(data, nil, nil)
        if err != nil {
            log.Debugf("ignoring file %s since serialization failed", path)
            continue
        } else {
            plan.Spec.Inputs.K8sFiles = append(plan.Spec.Inputs.K8sFiles, relpath)
        }
    }
    return nil
}

// LoadToIR loads k8s files as cached objects
func (i K8sFilesLoader) LoadToIR(p plantypes.Plan, ir *irtypes.IR) error {
    codecs := serializer.NewCodecFactory((&apiresourceset.K8sAPIResourceSet{}).GetScheme())
    for _, path := range p.Spec.Inputs.K8sFiles {
        fullpath := p.GetFullPath(path)
        data, err := ioutil.ReadFile(fullpath)
        if err != nil {
            log.Debugf("ignoring file %s", path)
            continue
        }
        obj, _, err := codecs.UniversalDeserializer().Decode(data, nil, nil)
        if err != nil {
            log.Debugf("ignoring file %s since serialization failed", path)
            continue
        } else {
            ir.CachedObjects = append(ir.CachedObjects, obj)
        }
    }
    return nil
}

Set default value for copysources

Update copysources.sh with the default value of the source path, so that it works even if no parameters are provided.

This will require changing the copysources.sh template and feeding it the value of the source folder relative to the output folder.

Don't ignore error and more idiomatic code.

Description

  1. As mentioned in the TODO https://github.com/konveyor/move2kube/blob/master/internal/metadata/k8sfiles_test.go#L77
    we should not continue when the directory has incorrect permissions and we can't read it.

  2. The functions we are testing https://github.com/konveyor/move2kube/blob/master/internal/metadata/k8sfiles.go#L36-L80 may not be written in the most idiomatic way. For example, we have a continue followed by an else block.

Changes to be made

Return with the error in the if block.

files, err := common.GetFilesByExt(inputPath, []string{".yml", ".yaml"})
if err != nil {
    log.Warnf("Unable to fetch yaml files and recognize k8 yamls : %s", err)
}
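With the change applied, the block would look like this (a minimal sketch; the log message is kept as-is):

files, err := common.GetFilesByExt(inputPath, []string{".yml", ".yaml"})
if err != nil {
    log.Warnf("Unable to fetch yaml files and recognize k8 yamls : %s", err)
    return err // return instead of silently continuing with an empty file list
}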

Change the test to expect the error.

Change the functions to be more idiomatic Go.

Make tekton pipeline resource more generic to support use cases outside CI/CD.

Currently the pipeline resource is very much tailored for CI/CD pipeline generation:

func (*Pipeline) createNewResource(irpipeline tekton.Pipeline, ir irtypes.IR) *v1beta1.Pipeline {
    pipeline := new(v1beta1.Pipeline)
    pipeline.TypeMeta = metav1.TypeMeta{
        Kind: pipelineKind,
        APIVersion: v1beta1.SchemeGroupVersion.String(),
    }
    pipeline.ObjectMeta = metav1.ObjectMeta{Name: irpipeline.Name}
    pipeline.Spec.Params = []v1beta1.ParamSpec{
        v1beta1.ParamSpec{Name: "image-registry-url", Description: "registry-domain/namespace where the output image should be pushed.", Type: v1beta1.ParamTypeString},
    }
    pipeline.Spec.Workspaces = []v1beta1.PipelineWorkspaceDeclaration{
        v1beta1.PipelineWorkspaceDeclaration{Name: irpipeline.WorkspaceName, Description: "This workspace will receive the cloned git repo and be passed to the kaniko task for building the image."},
    }
    tasks := []v1beta1.PipelineTask{}
    firstTask := true
    prevTaskName := ""
    for i, container := range ir.Containers {
        if container.ContainerBuildType == plantypes.ManualContainerBuildTypeValue || container.ContainerBuildType == plantypes.ReuseContainerBuildTypeValue {
            log.Debugf("Manual or reuse containerization. We will skip this for CICD.")
            continue
        }
        if container.ContainerBuildType == plantypes.DockerFileContainerBuildTypeValue || container.ContainerBuildType == plantypes.ReuseDockerFileContainerBuildTypeValue {
            cloneTaskName := "clone-" + fmt.Sprint(i)
            gitRepoURL := container.RepoInfo.GitRepoURL
            branchName := container.RepoInfo.GitRepoBranch
            if gitRepoURL == "" {
                gitRepoURL = gitRepoURLPlaceholder
            }
            if branchName == "" {
                branchName = defaultGitRepoBranch
            }
            cloneTask := v1beta1.PipelineTask{
                Name: cloneTaskName,
                TaskRef: &v1beta1.TaskRef{Name: "git-clone"},
                Workspaces: []v1beta1.WorkspacePipelineTaskBinding{
                    v1beta1.WorkspacePipelineTaskBinding{Name: "output", Workspace: irpipeline.WorkspaceName},
                },
                Params: []v1beta1.Param{
                    v1beta1.Param{Name: "url", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: gitRepoURL}},
                    v1beta1.Param{Name: "revision", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: branchName}},
                    v1beta1.Param{Name: "deleteExisting", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: "true"}},
                },
            }
            if !firstTask {
                cloneTask.RunAfter = []string{prevTaskName}
            }
            imageName := container.ImageNames[0]
            // Assume there is no git repo. If there is no git repo we can't do CI/CD.
            dockerfilePath := dockerfilePathPlaceholder
            contextPath := contextPathPlaceholder
            // If there is a git repo, set the correct context and dockerfile paths.
            if container.RepoInfo.GitRepoDir != "" {
                relDockerfilePath, err := filepath.Rel(container.RepoInfo.GitRepoDir, container.RepoInfo.TargetPath)
                if err != nil {
                    // TODO: Bump up the error after fixing abs path, rel path issues
                    log.Debugf("ERROR: Failed to make the path %q relative to the path %q Error %q", container.RepoInfo.GitRepoDir, container.RepoInfo.TargetPath, err)
                } else {
                    dockerfilePath = relDockerfilePath
                    // We can't figure out the context from the source. So assume the context is the directory containing the dockerfile.
                    contextPath = filepath.Dir(relDockerfilePath)
                }
            }
            buildPushTaskName := "build-push-" + fmt.Sprint(i)
            buildPushTask := v1beta1.PipelineTask{
                RunAfter: []string{cloneTaskName},
                Name: buildPushTaskName,
                TaskRef: &v1beta1.TaskRef{Name: "kaniko"},
                Workspaces: []v1beta1.WorkspacePipelineTaskBinding{
                    v1beta1.WorkspacePipelineTaskBinding{Name: "source", Workspace: irpipeline.WorkspaceName},
                },
                Params: []v1beta1.Param{
                    v1beta1.Param{Name: "IMAGE", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: "$(params.image-registry-url)/" + imageName}},
                    v1beta1.Param{Name: "DOCKERFILE", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: dockerfilePath}},
                    v1beta1.Param{Name: "CONTEXT", Value: v1beta1.ArrayOrString{Type: v1beta1.ParamTypeString, StringVal: contextPath}},
                },
            }
            tasks = append(tasks, cloneTask, buildPushTask)
            firstTask = false
            prevTaskName = buildPushTaskName
        } else if container.ContainerBuildType == plantypes.S2IContainerBuildTypeValue {
            log.Errorf("TODO: Implement support for S2I")
        } else if container.ContainerBuildType == plantypes.CNBContainerBuildTypeValue {
            log.Errorf("TODO: Implement support for CNB")
        } else {
            log.Errorf("Unknown containerization method: %v", container.ContainerBuildType)
        }
    }
    pipeline.Spec.Tasks = tasks
    return pipeline
}

Make the creation of the pipeline more generic so it can be used for more things.

Make shell script pass linter shellcheck.

Description

This shell script is used to install the dependencies of the project.
Currently it does not pass the checks of the shellcheck linter: https://www.shellcheck.net/

elif [ "$(expr substr $(uname -s) 1 5)" == "Linux" ]; then

Line 22:
elif [ "$(expr substr $(uname -s) 1 5)" == "Linux" ]; then
          ^-- SC2003: expr is antiquated. Consider rewriting this using $((..)), ${} or [[ ]].
                      ^-- SC2046: Quote this to prevent word splitting.

How to get started

โ— Please read the contribution guidelines first https://github.com/konveyor/move2kube/blob/master/contributing.md

As the error says, rewrite the expression using one of $((..)), ${}, or [[ ]].
Also make sure there are no other errors.
You can run shellcheck locally: https://github.com/koalaman/shellcheck
or as a vscode extension: https://marketplace.visualstudio.com/items?itemName=timonwong.shellcheck

Guidelines

CF/Dockerfile: Funky character in PORT env var and duplicate containerPort elements in deployment yaml

I see two weird things in the deployment yaml generated when running the local UI version on the cloud-foundry sample app in move2kube-demos:

  1. Funky character for the value of the PORT env var
  2. Duplicate containerPort elements for the same 8080 port in the ports section

I tar-gzipped the cloud-foundry sample folder and uploaded it to the UI. I did not run collect. I chose the new Dockerfile plan option.

Attaching screen shot.
Q&A cache in the follow up comment below.

