tern-tools / tern
Stars: 936 | Watchers: 31 | Forks: 185 | Size: 6.77 MB

Tern is a software composition analysis tool and Python library that generates a Software Bill of Materials for container images and Dockerfiles. The SBOM that Tern generates will give you a layer-by-layer view of what's inside your container in a variety of formats including human-readable, JSON, HTML, SPDX and more.

License: BSD 2-Clause "Simplified" License

Languages: Python 98.97%, Shell 0.60%, Dockerfile 0.43%
Topics: python, containers, oss-compliance, sbom, docker, compliance, spdx, tool, dependencies, software-composition-analysis

tern's People

Contributors

aalexanderr, abhaykatheria, abhigupta4981, coderpatros, debbie-leung, ericcheatham, forgetme17, ivanayov, jaindhairyahere, jamiemagee, jayeritz, jeroenknoops, johnmm, joshuagl, kdestasio, kerinpithawala, laurieroy, m1-key, manaswinidas, mkrohan, mukultaneja, nishakm, prajwalm2212, radmirnovii, rnjudge, rparikh, santiagotorres, sayantani11, vargenau, yannjor

tern's Issues

Create utility to calculate a checksum for the contents of each layer

See #30 for context on why this is needed in addition to the shasum of the layer tarball, which is currently used to identify the filesystem layer.
The utility has to piece together a number of moving parts:
find -type f -exec sha256sum {} \; checksums the contents of each file in the filesystem

There should be some ability to print file attributes along with this as well.
find -type f -printf "%p|%u|%g|%M - " -exec sha256sum {} \;

There still needs to be some ability to document and reproduce the final checksum bit by bit in order to verify that what the utility is doing is accurate.

Some of the files belong to root, so error handling is needed: if reading a file as a regular user fails, the utility should retry as root and record a notice that it did so.

File attributes to be included:

  • file size
  • file permissions
  • file uid and gid (not names)
  • file classification
  • security context
  • inode
$ ls -inhFZ usr/bin/attr
1705891 -rwxr-xr-x. 1 1000 1000 unconfined_u:object_r:user_home_t:s0 9.9K Jun  7  2016 usr/bin/attr*

No timestamp is needed.

The sha256sum of the result of this across all the files should be a good identifier for the contents, barring timestamps.
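
A minimal sketch of such a utility in Python, assuming the layer contents are unpacked under a directory (function names and the exact attribute set are illustrative, not the final design):

import hashlib
import os
import stat

def file_checksum_line(path):
    # mirrors: find -type f -printf "%p|%u|%g|%M" -exec sha256sum {} \;
    # note: a PermissionError here is where the run-as-root fallback
    # described above would kick in
    st = os.lstat(path)
    sha = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            sha.update(chunk)
    # path, size, permissions, uid/gid (not names), inode - no timestamp
    # (the security context is omitted in this sketch)
    attrs = '|'.join([path, str(st.st_size), stat.filemode(st.st_mode),
                      str(st.st_uid), str(st.st_gid), str(st.st_ino)])
    return attrs + '|' + sha.hexdigest()

def layer_contents_checksum(rootdir):
    # sorting the per-file lines makes the final checksum reproducible
    lines = []
    for dirpath, _, filenames in os.walk(rootdir):
        for name in filenames:
            fpath = os.path.join(dirpath, name)
            if os.path.isfile(fpath) and not os.path.islink(fpath):
                lines.append(file_checksum_line(fpath))
    return hashlib.sha256('\n'.join(sorted(lines)).encode()).hexdigest()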

Proposal: Create a database backend with an associated API

It could be useful to have a database backend so that data can be more easily organized and queried. I think SQLite would be a good fit (at least at first) due to its ease of setup and management via the sqlite3 module in the standard library. Eventually we can add support for other databases.
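
A sketch of what the sqlite3-backed storage could look like (the schema is hypothetical):

import sqlite3

conn = sqlite3.connect('tern.db')
conn.execute('''CREATE TABLE IF NOT EXISTS packages
                (layer_sha TEXT, name TEXT, version TEXT, license TEXT)''')
conn.execute('INSERT INTO packages VALUES (?, ?, ?, ?)',
             ('52ef9064d2', 'expat', '2.2.4-1.ph1', 'MIT'))
conn.commit()

# queries then come for free, e.g. all packages found in a given layer
for row in conn.execute('SELECT name, version FROM packages WHERE layer_sha = ?',
                        ('52ef9064d2',)):
    print(row)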

Don't depend on the package manager's dependency tree

There may be circular dependencies, or dependencies shared by commands installed with the same package manager. There was also a situation where a package manager could not list dependencies at all (tdnf). To avoid these situations, it would be better to use the package manager to list the installed packages again after each layer is applied.
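
In other words, instead of trusting the dependency tree, snapshot the installed-package list before and after each layer and take the set difference. A tiny illustration (package data made up):

# packages reported by the package manager before and after a layer is applied
before = {('bash', '4.4'), ('openssl', '1.0.2')}
after = {('bash', '4.4'), ('openssl', '1.0.2'), ('expat', '2.2.4'), ('git', '2.14')}

# the layer's true contribution, regardless of what the dependency tree claims
added_by_layer = after - before
print(sorted(added_by_layer))  # [('expat', '2.2.4'), ('git', '2.14')]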

Add functionality to unmount layer filesystem

In utils/rootfs.py, unmount_rootfs unmounts the mergedir only once. Have it unmount once for each layer that was overlaid. It's ok to pass the number of layers to the function.
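
A sketch of the change, assuming the temp/mergedir layout from the shell proof of concept in #37:

import os
import subprocess

def unmount_rootfs(num_layers):
    # each overlay mount stacks on mergedir, so each one needs its own umount
    merge_dir = os.path.join('temp', 'mergedir')
    for _ in range(num_layers):
        subprocess.check_call(['sudo', 'umount', '-rl', merge_dir])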

Refer to #37 for more information

Convert TODOs into Issues

The code has a lot of TODOs as it was developed without an issue tracker.

  1. Find TODO comment in code
  2. For each TODO create an issue
  3. Remove TODO comment from code

Proposal: Enable web service API

There are situations where Tern may need to run as a service somewhere in the cloud and be accessed via an external API. For that to happen, it needs to respond to HTTP GET and POST requests. Some investigation of tools that can create a front-facing API for the back-end CLI may be required.
https://github.com/proycon/clam might be a good place to start.
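
While tools are being investigated, the standard library alone can stand up a toy endpoint to experiment with (the route and payload here are made up):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TernHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # a made-up endpoint that would kick off a report for an image name
        length = int(self.headers.get('Content-Length', 0))
        request = json.loads(self.rfile.read(length) or b'{}')
        body = json.dumps({'status': 'queued',
                           'image': request.get('image')}).encode()
        self.send_response(202)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('', 8000), TernHandler).serve_forever()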

Subprocess memory error when trying to save large Docker images

Reproducible on: golang:1.9.4
golang's image size is 735MB.

docker_command uses subprocess.Popen rather than check_output or check_call so that stderr can be captured and recorded by the logger. However, when trying to save a large image this does not work, because the output fills the pipe's memory buffer.
Possible solutions:

  1. subprocess.Popen can flush the buffer after it gets filled
  2. subprocess.check_output can be used without shell=True and redirect stderr to a variable instead of the tty's stderr
  3. Ignore the pipe to tty's std error and return some generic message for a failure to 'docker save'
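
A sketch along the lines of options 2 and 3, streaming stdout straight to the tarball file so a large image never sits in a pipe buffer (the function name is illustrative):

import subprocess

def docker_save(image_tag, tar_path):
    with open(tar_path, 'wb') as tarball:
        # stdout goes to the file; only stderr is captured for the logger
        proc = subprocess.run(['sudo', 'docker', 'save', image_tag],
                              stdout=tarball, stderr=subprocess.PIPE)
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.decode('utf-8', errors='replace'))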

PyCon 2018 Sprint: Implement rudimentary isolated layered filesystem

Currently, Tern is heavily dependent on Docker to run shell scripts against a given root filesystem. It:

  1. Pulls the base image from Dockerhub
  2. Spins up a container using Docker
  3. Runs docker exec against the container

This isn't very effective, as we ultimately want to reference the packages that came with each diff filesystem. It's clunky to use Docker or any other existing tool to disentangle a container image, so let's try to use some rudimentary Linux kernel system calls to do it instead.

Here is a shell proof of concept that we can use to accomplish this using overlay, unshare and chroot:

# make all the working directories in temp first
$ cd temp
$ mkdir mergedir workdir
# untar all of the layer.tar files - each layer.tar file can be found in the manifest.json
$ cat manifest.json
[{"Config":"dfed51d8bf5d12569254cec5109fd0e5a3ccc3ced1c70647306b93fc2835a37f.json","RepoTags":["photon:3layers"],"Layers":["800692525c0779e33fda674f7a61a4925b453138cba7e0d7748dff691e38e491/layer.tar","092b3b6b9538cfeec7252aa2a95c8ffb9a6677023cac1751bc8948ee0ccedd3a/layer.tar","bf622c56bf154c72c90ecd4742f154abb8cd65757cab88488a173413b3ca63e5/layer.tar"]}]
# note - the above result may be different for you
# layers are applied in order from first to last in the manifest.json
# untar the first layer into the mergedir
$ mkdir 800692525c0779e33fda674f7a61a4925b453138cba7e0d7748dff691e38e491/contents
$ tar xvf 800692525c0779e33fda674f7a61a4925b453138cba7e0d7748dff691e38e491/layer.tar -C 800692525c0779e33fda674f7a61a4925b453138cba7e0d7748dff691e38e491/contents
# bind mount to mergedir 
$ sudo mount -o bind 800692525c0779e33fda674f7a61a4925b453138cba7e0d7748dff691e38e491/contents mergedir 
# mount proc, sys and dev because processes may be looking at these
$ sudo mount -t proc /proc mergedir/proc
$ sudo mount -o bind /sys mergedir/sys
$ sudo mount -o bind /dev mergedir/dev
# execute required shell command
$ sudo unshare -pf --mount-proc=$PWD/mergedir/proc chroot mergedir <shell> -c <command in single quotes''>
# undo proc sys and dev mounts
$ sudo umount mergedir/proc
$ sudo umount mergedir/sys
$ sudo umount mergedir/dev
# overlay next layer
$ mkdir 092b3b6b9538cfeec7252aa2a95c8ffb9a6677023cac1751bc8948ee0ccedd3a/contents
$ tar xvf 092b3b6b9538cfeec7252aa2a95c8ffb9a6677023cac1751bc8948ee0ccedd3a/layer.tar -C 092b3b6b9538cfeec7252aa2a95c8ffb9a6677023cac1751bc8948ee0ccedd3a/contents
$ sudo mount -t overlay overlay -o lowerdir=mergedir,upperdir=092b3b6b9538cfeec7252aa2a95c8ffb9a6677023cac1751bc8948ee0ccedd3a/contents,workdir=workdir mergedir
# remount proc, sys and dev
# run unshare and chroot command again
# unmount again
# repeat for successive layers
# clean up
$ sudo umount -rl mergedir

To resolve this issue:

  1. Create some utilities to execute the above commands (in the utils directory)
  2. Come up with a reliable subroutine to set up an overlay filesystem to work on (common.py)
  3. Implement the invoke_in_container module (in command_lib/command_lib.py) using the above utilities
  4. Show layer-by-layer debugging using Python in interactive mode

This issue is reserved for PyCon 2018 sprint participants. Some issues may be spun off from this main one during the sprint. After the sprint ends, anyone can work either with me on this issue or independently on a sub-issue.

The current development branch for this issue is: https://github.com/vmware/tern/tree/layer-debug

Deep comparison of Package objects

Currently, comparisons of Package objects are based only on the name:

p1 = Package(pkg_name1)
p2 = Package(pkg_name2)
if p1.name == p2.name:
    # we assume this is enough to say that the two packages are equal, but they are not!

Add a method (def is_equal(self, another_pkg_obj)) that will do a deep comparison of the package object:

def is_equal(self, another_pkg_obj):
    if self.name == another_pkg_obj.name and self.license == another_pkg_obj.license ...:
        return True
    else:
        return False

Create module to look up a container OS's package manager

Refer to #51 for more information.

Implementation looks something like this:

  1. We need a list of package managers and their names in the form of a python list of dictionaries:
    [{path: /usr/bin/tdnf, name: tdnf},...]
    This can live in a .yml file called 'pkg_mgr.yml' in the command_lib folder
  2. A module in common.py for base OS discovery. It should return the name of the package manager.
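
A sketch of the discovery module, assuming PyYAML (which the project's existing .yml command library files suggest is already a dependency) and the listing format above:

import os

import yaml

def find_package_manager(rootfs_path, listing_file='command_lib/pkg_mgr.yml'):
    # return the name of the first known package manager whose binary
    # exists in the unpacked rootfs
    with open(listing_file) as f:
        listing = yaml.safe_load(f)
    for entry in listing:
        # strip the leading '/' so the path joins under the rootfs
        if os.path.exists(os.path.join(rootfs_path, entry['path'].lstrip('/'))):
            return entry['name']
    return None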

Workaround for Docker images not identifying the true base image

When using FROM in a Dockerfile, the image that gets pulled from the Docker registry does not identify the images it may have been based on. The only indication that this happens at all is the 'history' in the JSON config file of the Docker container. There is, however, no identifier for each of the layers in this base image other than the layer sha.

Currently the implementation uses a direct lookup in base.yml. Perhaps we can do something like this:

  1. Rather than look up base.yml, look up cache
  2. In cache have an optional repo and tag listing
  3. Use the base.yml listing for that repo and tag for the right method of getting packages
  4. Have a mapping of 'history' and layers to get clues on what commands were used to get the packages

No detection of multiple installations of the same package

For example, if a base OS has a copy of OpenSSL and the Dockerfile has a command that installs OpenSSL again, the filesystem will have two copies of the same package. Since we are only comparing package names from previous layers, this goes undetected.

The solution is to collect a master list with package objects and not package names. Comparison should be a deep comparison of the object i.e. all of the properties should be the same in order for two package objects to be the same.

utils/dockerfile.py: accept build arguments if any

A Dockerfile's ARG directive can store build arguments given to docker build as --build-arg. Tern cannot get these from Docker itself, so they need to be accepted through some command line option when providing a Dockerfile, e.g. --docker-build-args, whose values should then be passed on to get_base_image_tag in dockerfile.py.
It seems that not many Dockerfiles use ARG, so we will see how badly this is needed.
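
A sketch of the proposed option using argparse (the option name is as suggested above; everything else is illustrative):

import argparse

parser = argparse.ArgumentParser(prog='tern')
parser.add_argument('-d', '--dockerfile', help='path to the Dockerfile')
# each value mirrors docker build --build-arg KEY=VALUE
parser.add_argument('--docker-build-args', action='append', default=[],
                    metavar='KEY=VALUE',
                    help='build arguments to substitute for ARG directives')
args = parser.parse_args()

# turn ['HTTP_PROXY=http://proxy:3128'] into {'HTTP_PROXY': 'http://proxy:3128'}
build_args = dict(arg.split('=', 1) for arg in args.docker_build_args)
# build_args would then be passed on to get_base_image_tag in utils/dockerfile.py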

json formatted structured data output

In order to convert from one form of data representation to another, it makes sense to first convert all of the data gathered by Tern into structured output, i.e. a python dictionary, and then convert that into a JSON formatted output file.

  1. First figure out what data to output
  2. Create a function in content.py to create a python dictionary with that data
  3. Create a function in content.py to convert the python dictionary to a json object which can be written to a json file
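
A sketch of steps 2 and 3 (the dictionary's shape and the object attributes are illustrative only):

import json

def create_report_dict(image):
    return {
        'image': image.name,
        'layers': [{'sha': layer.sha,
                    'packages': [{'name': pkg.name,
                                  'version': pkg.version,
                                  'license': pkg.license}
                                 for pkg in layer.packages]}
                   for layer in image.layers]}

def write_json_report(image, path):
    with open(path, 'w') as f:
        json.dump(create_report_dict(image), f, indent=2)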

Proposal: Use licensee to parse LICENSE file

Tern mostly relies on a package manager's license field, but this may not be filled in. For example, dpkg does not have a 'license' field; instead, the snippet reads the contents of the LICENSE file and dumps the result.

Look at https://github.com/benbalter/licensee to do all of the work

Options:

  1. Just use it: This would mean adding ruby and gem as dependencies for Tern, which goes against my general motivation to minimize the number of dependencies Tern uses.
  2. Find a python3 equivalent: I have not been able to find one
  3. Create a simple python3 utility that does what licensee does: This would involve duplicating development work on something far more sophisticated

Convert short names of modules to the full name

For example:
from utils import dockerfile as df

Convert that to:
from utils import dockerfile

It would require substituting df in the module file with dockerfile

Note: multiple people can work on this issue - just note which files you are working on in the comments. If anyone would like to take this on, please look at the comments and see which files have been picked up.

Add coding style documentation

For the most part, simply following PEP8 style leads to readable code.
Other patterns that seem to come up but are not strictly followed:

  • keep files short for better readability. For example, each class resides in its own file
  • import entire module whenever possible. This is not strictly followed though (for example with classes)

Accommodate invoke_in_rootfs in get_pkg_attr_list function

Refers to #37 and #52

Currently, get_pkg_attr_list defaults to running-container operations. It should default to 'rootfs' operations. Rename this function to something that reflects the running-container prerequisite and make a similar one for rootfs operations.

Python str object has no attribute 'format_map' when running through a Docker ENTRYPOINT

Dockerfile:

FROM docker:18.02.0-ce

WORKDIR /root/

RUN apk add --no-cache git python3 sudo bash py-pip

RUN python3 -m venv ternenv
WORKDIR /root/ternenv
RUN git clone https://github.com/vmware/tern.git
RUN /bin/bash -c "source bin/activate"
WORKDIR /root/ternenv/tern
RUN pip install -r requirements.txt
ENTRYPOINT ["./tern", "report", "-d"]
CMD ["samples/photon_git/Dockerfile"]

docker commands:
docker build -t dtern .
docker run -v /var/run/docker.sock:/var/run/docker.sock dtern

Traceback:
2018-03-22 12:12:47,614 - ternlog - DEBUG - Running command: sudo docker run -td --name tern-container vmware/photon:1.0
Traceback (most recent call last):
  File "./tern", line 87, in <module>
    main(args, logger)
  File "./tern", line 34, in main
    report.execute(args)
  File "/root/ternenv/tern/report.py", line 271, in execute
    logger)
  File "/root/ternenv/tern/report.py", line 127, in print_image_base
    package_list = common.get_packages_from_base(base_image_msg[0])
  File "/root/ternenv/tern/common.py", line 161, in get_packages_from_base
    names = cmds.get_pkg_attr_list(info['shell'], info['names'])
  File "/root/ternenv/tern/utils/commands.py", line 382, in get_pkg_attr_list
    package=package_name, override=override)
  File "/root/ternenv/tern/utils/commands.py", line 347, in invoke_in_container
    FormatAwk(package=package)) + ' && '
AttributeError: 'str' object has no attribute 'format_map'

This originates from lines 346 and 347 of
https://github.com/vmware/tern/blob/master/utils/commands.py:
full_cmd = full_cmd + snippet_list[index].format_map(FormatAwk(package=package)) + ' && '

The Docker container is running python 3.6.3 so format_map should be supported.

Strangely, this works fine for a docker container without ENTRYPOINT and running with the -it option.

Some checks on the passed argument

Add checks on the command line arguments. Currently, the only command line argument is a Dockerfile, so check whether the file actually exists.
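
A minimal sketch of such a check:

import os
import sys

def check_dockerfile_arg(path):
    # fail early with a clear message instead of a traceback later on
    if not os.path.isfile(path):
        sys.exit("error: Dockerfile '{}' does not exist".format(path))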

`tern report` fails on OS X

./tern report -i <docker image> fails on OS X with

2018-08-08 16:31:40,419 - DEBUG - rootfs - Running command: sudo mount -o bind temp/b9eddca1757ae022ff3a77cf03ae6cbb221853f83efaef570a76414cd534e7ff/contents temp/mergedir
Traceback (most recent call last):
  File "./tern", line 94, in <module>
    main(args)
  File "./tern", line 53, in main
    report.execute_docker_image(args)
  File "/Users/iyovcheva/go/src/github.com/openfaas/certifier/ternenv/tern/report/report.py", line 321, in execute_docker_image
    analyze_docker_image(full_image)
  File "/Users/iyovcheva/go/src/github.com/openfaas/certifier/ternenv/tern/report/report.py", line 142, in analyze_docker_image
    target = rootfs.mount_base_layer(image_obj.layers[0].tar_file)
  File "/Users/iyovcheva/go/src/github.com/openfaas/certifier/ternenv/tern/utils/rootfs.py", line 108, in mount_base_layer
    root_command(mount, base_rootfs_path, target_dir_path)
  File "/Users/iyovcheva/go/src/github.com/openfaas/certifier/ternenv/tern/utils/rootfs.py", line 56, in root_command
    raise subprocess.CalledProcessError(1, cmd=full_cmd, output=error)
subprocess.CalledProcessError: Command '['sudo', 'mount', '-o', 'bind', 'temp/b9eddca1757ae022ff3a77cf03ae6cbb221853f83efaef570a76414cd534e7ff/contents', 'temp/mergedir']' returned non-zero exit status 1.

Steps to reproduce:

$ python3 -m venv ternenv
$ cd ternenv
$ git clone https://github.com/vmware/tern.git
$ source bin/activate
$ cd tern
$ git checkout -b release v0.1.0
$ pip install -r requirements.txt

$ ./tern report -i my-image:latest

Create an invoke_in_rootfs function in command_lib

Based on the work in #37, we should now be able to do the same operations as in 'invoke_in_container'.
No 'override' should be necessary, as there is only one location where the rootfs should exist and no running process to refer to.
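
A sketch, assuming the rootfs has already been assembled at temp/mergedir as in #37 and that the caller handles the mounts:

import subprocess

def invoke_in_rootfs(snippet, shell='/bin/sh'):
    # run a command library snippet against the mounted rootfs
    full_cmd = ['sudo', 'chroot', 'temp/mergedir', shell, '-c', snippet]
    result = subprocess.run(full_cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    if result.returncode != 0:
        raise RuntimeError(result.stderr.decode('utf-8', errors='replace'))
    return result.stdout.decode('utf-8')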

Refer to #37 for more info.

Some packages are repeated for each layer

When using the package manager's dependency list, some of those dependencies may have come in with a previous layer. As a result, packages keep getting listed for each layer even though they actually came from an earlier one.

E.g.:

FROM vmware/photon:1.0:
	vmware/photon:1.0: 52ef9064d2:
		info: Loading packages from cache for layer 52ef9064d2:
		Package: expat
		Version: 2.2.4-1.ph1
		Project URL: http://expat.sourceforge.net/
		License: MIT
....

    RUN tyum install -y git && tyum clean all -> 832ed18cc3:
    Using invoke listing in command_lib/snippets.yml:
        warning: Unrecognized Commands: tyum clean all

        version:
            in container:
                list=`tdnf list installed {package}`
                c=0; for l in $list; do if [ $c == 1 ]; then echo $l; fi; c=$(((c+1)%3)); done;

        license:
            in container:
                tdnf info {package} | head -10 | tail -1 | cut -f2 -d":" | xargs

        src_url:
            in container:
                tdnf info {package} | head -9 | tail -1 | cut -f2-3 -d":" | xargs

        deps:
            in container:
                list=`rpm -qR {package} | cut -f1 -d" "`
                for l in $list; do rpm -qa --queryformat "%{NAME}\n" $l; done;

    ------------------------------------------------

		Package: expat
		Version: 2.2.4-1.ph1
		Project URL: http://expat.sourceforge.net/
		License: MIT

expat is a dependency for git but it has already been satisfied in the previous layer so it wasn't installed with this command.

To get around this, keep a master list of all the known package names while looking through the layers.
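
A sketch of the bookkeeping (the layer iterable and retrieval helper are hypothetical):

master_list = set()

for layer in image_layers:                    # layers in manifest order
    reported = get_packages_for_layer(layer)  # hypothetical retrieval step
    # only packages not already satisfied by an earlier layer belong here
    layer.packages = [pkg for pkg in reported if pkg.name not in master_list]
    master_list.update(pkg.name for pkg in layer.packages)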

Tern does not account for modifications to existing methods

Say Tern processes package information and cannot get one set of metadata; it will store the package information in the cache with empty data. If, the next time around, there is an addition to the command library that allows Tern to get said metadata, it will not run the retrieval, because it will read the incomplete information from the cache.

  1. When loading from the cache, check for missing metadata. If any is missing, run through the retrieval steps again
  2. When saving to the cache, do a deep comparison of all the package metadata rather than just the layer. In python one can override the class's inherited __eq__(self, other) to do the check. If the layers are not equal, save the layer with its packages again
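
A sketch of the deep comparison in point 2 (the attribute set is illustrative):

class Package:
    def __init__(self, name, version='', license_='', src_url=''):
        self.name = name
        self.version = version
        self.license = license_
        self.src_url = src_url

    def __eq__(self, other):
        # every piece of metadata must match, so a package whose metadata
        # has since become retrievable compares unequal to its cached copy
        return (self.name == other.name and
                self.version == other.version and
                self.license == other.license and
                self.src_url == other.src_url)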

KeyError: openjdk:8

I am getting this error. Where should I look to get more info on what's happening?

Thanks, Steve

2018-03-13 15:47:37,642 - ternlog - DEBUG - Starting...
2018-03-13 15:47:37,642 - ternlog - DEBUG - Creating Report...
2018-03-13 15:47:37,642 - ternlog - DEBUG - Creating a detailed report on components in Docker image...
2018-03-13 15:47:37,644 - ternlog - DEBUG - Running command: sudo docker images openjdk:8
2018-03-13 15:47:37,920 - ternlog - DEBUG - Running command: sudo docker save openjdk:8
2018-03-13 15:48:36,594 - ternlog - DEBUG - Nothing in cache for layer a64142d97d. Invoking from command library
2018-03-13 15:48:36,594 - ternlog - DEBUG - Running command: sudo docker ps -a --filter name=tern-container
2018-03-13 15:48:36,740 - ternlog - DEBUG - Running command: sudo docker run -td --name tern-container openjdk:8
2018-03-13 15:48:39,968 - ternlog - WARNING - No listing of openjdk:8 in the command library. To add one, make an entry in command_lib/base.yml
2018-03-13 15:48:39,969 - ternlog - DEBUG - Running command: sudo docker ps -a --filter name=tern-container
2018-03-13 15:48:40,172 - ternlog - DEBUG - Running command: sudo docker stop tern-container
2018-03-13 15:48:41,428 - ternlog - DEBUG - Running command: sudo docker rm tern-container
2018-03-13 15:48:44,839 - ternlog - DEBUG - Running command: sudo docker images openjdk:8
2018-03-13 15:48:45,232 - ternlog - DEBUG - Running command: sudo docker rmi -f openjdk:8
2018-03-13 15:48:45,570 - ternlog - DEBUG - Running command: sudo docker images tern-image:1520977657
2018-03-13 15:48:45,951 - ternlog - DEBUG - Running command: sudo docker build -t tern-image:1520977657 -f Dockerfile .
Traceback (most recent call last):
  File "./tern", line 87, in <module>
    main(args, logger)
  File "./tern", line 34, in main
    report.execute(args)
  File "/root/ternenv/tern/report.py", line 280, in execute
    shell = common.get_image_shell(base_image_msg[0])
  File "/root/ternenv/tern/common.py", line 92, in get_image_shell
    'base'][base_image_tag[0]]['tags'][base_image_tag[1]]['shell']
KeyError: 'openjdk'

Here is the Dockerfile

FROM openjdk:8
EXPOSE 8080
ENV DBHome /opt/deployhub
ENV DBUserName  postgres
ENV DBPassword  postgres
ENV DBConnectionString jdbc:postgresql://dh-db.gotdns.com:5432/postgres
ENV DBDriverName org.postgresql.Driver

RUN mkdir /opt/deployhub
RUN mkdir /opt/deployhub/engine
RUN apt-get update
RUN apt-get -y install net-tools
ADD webapp-runner.jar /opt/deployhub
ADD dh-ms-general.war /opt/deployhub
CMD java -jar /opt/deployhub/webapp-runner.jar --path /dmadminweb /opt/deployhub/dh-ms-general.war 2>&1

Create Dockerfile to 'try it out'

The Dockerfile allows for another type of isolation and can be run in two steps if one is using Docker.

Requirements:

  • Import from a known baseOS against which Tern has been run and verified
  • Install packages with specific versions
  • Include a Tern report with a disclaimer to not distribute without going through the licenses of the included packages

See #23 for a Dockerfile that will work with Tern. Try using PhotonOS as their license metadata is concise.

Build containers to inspect them layer by layer

Building a container from a Dockerfile means expressing the union of all of the diff filesystems, so inspecting from within a container cannot pry apart which packages were installed in which layer. There needs to be some way of 'layering up' an 'empty' container so the filesystems can be inspected one at a time.

For now we can invoke Docker's API in python and implement this (I predict this is going to be messier than it needs to be, but it gets the job done for now).

In the future, it may make sense to do this using a container runtime or something else that spins up an isolated environment to mount the filesystems on (a container runtime seems the most straightforward choice to balance ease of use, performance and portability). The choice of which container runtime, however, is yet to be made, but must be made soon-ish.

Ubuntu (Debian too?) limits the number of times you can use mergedir

Trying to mount a third filesystem layer to mergedir does not work on Ubuntu server.

Output below is from following the manual python commands in issue #37 in IPython.

In [110]: rootfs.mount_diff_layer(d.layers[2].tar_file)
---------------------------------------------------------------------------
CalledProcessError                        Traceback (most recent call last)
in ()
----> 1 rootfs.mount_diff_layer(d.layers[2].tar_file)

~/PycharmProjects/tern/utils/rootfs.py in mount_diff_layer(diff_layer_tar)
    116     args = 'lowerdir=' + merge_dir_path + ',upperdir=' + upper_dir_path +
    117            ',workdir=' + workdir_path
--> 118     root_command(union_mount, args, merge_dir_path)
    119     prep_rootfs(merge_dir_path)
    120

Refer to #37 for more information

Report should point to the commit that generated it

The report is an artifact produced by the tool, so it makes sense, for the sake of reproducibility, to place something in the report that says what version of the tool created it.

We need something that will find the commit sha of git HEAD and place it in the report during report generation. If the scripts were copied from somewhere (i.e. there is no git repository), print a warning that the commit cannot be found and perhaps default to the version recorded in a release file.
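
A sketch of the lookup with the fallback described above (the RELEASE file name is hypothetical):

import subprocess

def get_commit_or_version():
    # return the sha of git HEAD, or fall back to a release file when the
    # scripts were copied from somewhere and there is no git repository
    try:
        sha = subprocess.check_output(['git', 'rev-parse', 'HEAD'],
                                      stderr=subprocess.DEVNULL)
        return sha.decode('ascii').strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        print('warning: cannot find the commit that generated this report')
        with open('RELEASE') as f:  # hypothetical release file
            return f.read().strip()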

'FROM scratch' check

Tern checks for any imports in a Dockerfile by checking for the FROM directive and then attempting to pull the specified image from Dockerhub. But 'scratch' is just a placeholder meaning 'there is no import'. At a minimum, do not attempt to pull from Dockerhub if the word 'scratch' is in the FROM string.
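
A minimal sketch of the guard:

def is_scratch_import(from_line):
    # 'FROM scratch' is a placeholder meaning 'there is no import'
    return 'scratch' in from_line.split()

# only attempt a Dockerhub pull for real base images:
# if not is_scratch_import(from_line): pull the image as usual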

Randomly generate image and container names.

Currently, image and container names are not random. This should be updated to prevent possible conflicts.

Some ideas:

  • Using tern's PID
  • Using a random number generator

We should be randomly generating names for:

  • Docker image
  • Docker image tag
  • Docker container name

Currently https://github.com/vmware/tern/blob/master/utils/constants.py contains all the default names.

  • Change the names into a prefix: ternimage and terncontainer
  • Add a random number at the end of these
  • Create a random number for the ternimage tag

Create a function in utils/general.py to randomly generate these names

Things to consider:
The names of the image, tag and container are used in various parts of the project, so there needs to be some central place where the other functions can access them. Because they must stay persistent, they cannot be regenerated during the course of execution.
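
A minimal sketch for utils/general.py, generating the suffix once at module import so the names stay stable for the whole run:

import random

# generated once when the module is first imported; every other module
# that imports these sees the same names for the rest of the run
_suffix = str(random.randint(100000, 999999))

image = 'ternimage' + _suffix
tag = _suffix
container = 'terncontainer' + _suffix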

Create install script

Move all the 'try it out' commands into an install script.
Bonus points:

  • The install script can check if the docker daemon is running and, if not, ask the user to turn it on.
  • It can check whether python 3 is installed and, if not, ask the user to install it.
