
Emma


Emma Memory and Mapfile Analyser (Emma)

Conduct static (i.e. worst-case) memory consumption analyses based on arbitrary linker map files. Emma produces extensive .csv files which are easy to filter and post-process. Optionally, .html and markdown reports as well as neat figures help you visualise your results.

Given a map file input (Green Hills map files are the default but others - like GCC - are supported via configuration options; examples are enclosed) Emma maps the addresses of sections (aka images) and/or objects (aka modules) to memory regions (all addresses given via map files must be known at compile time). Those memory regions are classified into two levels of granularity. The first level defines arbitrary groups based on your personal taste (however, using names similar to those defined by your microcontroller vendor makes most sense). Each of those regions is then (second level) assigned to one of four generalised, predefined memory regions (INT_RAM, INT_FLASH, EXT_RAM, EXT_FLASH). In the case of virtual memory, objects and sections lying within virtual address spaces (VASes) are translated back into physical memory. This is depicted in the figure above (lower part).

Categorisation can be used to assign consumers (a consumer would usually represent a software component) to each object or section. This is useful for subsequent steps in order to display memory consumption per consumer type. See the upper part of the figure shown above. Mechanisms are provided to batch-categorise object and section names. "Objects in sections" provides a finer granularity of the categorisation result: categorised sections containing (smaller) objects of a different category are split up, resulting in a more accurate categorisation.
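
To illustrate the batch categorisation idea, here is a minimal sketch of a keyword-based lookup; the category names, keywords and object names are made up, and the real mapping lives in the categories*.json configuration files:

```python
import re

# Hypothetical keyword configuration; in Emma this would come from the categories*Keywords.json files
CATEGORIES_KEYWORDS = {
    "Bluetooth": ["bt_", "a2dp"],
    "Navigation": ["nav", "route"],
}

def categoriseByKeyword(elementName, categoriesKeywords, defaultCategory="<unspecified>"):
    """Return the first category whose keyword occurs in the given object/section name."""
    for category, keywords in categoriesKeywords.items():
        if any(re.search(re.escape(keyword), elementName) for keyword in keywords):
            return category
    return defaultCategory

print(categoriseByKeyword("bt_stack_buffer", CATEGORIES_KEYWORDS))          # -> Bluetooth
print(categoriseByKeyword(".text.someUnknownObject", CATEGORIES_KEYWORDS))  # -> <unspecified>
```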

The result is a .csv output file that makes later processing of this data easy. Additional information is added to this file, such as:

  • Overlaps (of sections/objects)
  • Containments (e.g. sections containing objects)
  • Duplicates
  • All meta data about the origin of each section/object (mapfile, address space, ...)
  • ...

Holding the aforementioned augmented data makes it easy to detect issues in linker scripts and to get an in-depth understanding of your program's memory consumption. Since a lot of additional and "corrected" data can cause confusion, all original (unmodified) data is preserved in the output files as well.

The Emma visualiser helps you to create nice plots and reports in .png, .html and markdown format.

The whole Emma tool suite provides command line options that make it convenient to run on a build server, like --Werror (treat all warnings as errors) or --no-prompt (exit and fail on user prompts; user prompts can happen when ambiguous configurations appear, such as multiple matches for one configured map file).


Installation

pip3 install pypiemma

Dependencies: Python 3.6 or higher; pip3 install Pygments Markdown matplotlib pandas pypiscout

Optional: Cython. Especially for bigger projects the number of objects will grow. We provide an optional Cython implementation which can speed up your analysis (you will typically gain about **30 % speed-up**).

For now we do not provide the binaries with Emma, hence you have to compile it yourself (make sure a suitable compiler is installed; don't worry, it is quick and easy):

Install the Cython package (pip install Cython) and (in the Emma top level folder) execute (MSVC is recommended on Windows):

python setup.py build_ext --inplace --compiler=msvc

General Workflow

The following figure shows a possible workflow using Emma:

Emma - as the core component - produces an intermediate .csv file. Inputs are mapfiles and JSON files (for configuration: memory layout, sizes, ...). From this point on you are free to choose your own pipeline. You could:

  • use the Emma tools (Emma Visualiser, Emma Deltas, ...) for further processing (data aggregation and analysis),
  • simply use your favourite spreadsheet software (like Microsoft Excel or LibreOffice Calc), or
  • use your own tool for the data analysis (see the sketch below).
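
As an illustration of the last option, a minimal sketch using pandas; the file name, separator and column names are assumptions here, so check the header of your generated .csv for the exact names:

```python
import pandas as pd

# Load an intermediate Emma report (path, separator and column names are placeholders for this sketch)
report = pd.read_csv("MyProjectFolder/analysis/Analysis_1/Section_Summary.csv", sep=";")

# Example: total size per category, largest consumers first
summary = report.groupby("category")["sizeDec"].sum().sort_values(ascending=False)
print(summary.head(10))
```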

Quick Start Guide

At this point we want to give you a brief overview of what to do in the two scenarios below. If you want to play around, go to "Project files are already present" and use our example projects in ./doc/test_project*.

Example projects (including Emma* outputs/results) can be found in ./doc/test_project*.

Since version 3.1 Emma can be called in two ways (if you want to run it from the installation folder), of which the following variant is recommended:

python Emma.py a --project doc/test_project --mapfiles doc/test_project/mapfiles

The following table provides an overview of how to call Emma:

| Emma module | Entry point + `<options>` (if installed via pip) | Top level sub-command (tlsc) (`python Emma.py <tlsc>`) | Module (`python -m` + `<module> <options>`) |
|---|---|---|---|
| Analyser | `emma` | `a` | `Emma.emma` |
| Visualiser | `emma_vis` | `v` | `Emma.emma_vis` |
| Deltas | `emma_deltas` | `d` | `Emma.emma_deltas` |

Project files are already present

Try python Emma.py a --help to see all possible options or refer to the documentation (./doc/*).

  1. Create intermediate .csv from mapfiles with Emma:
python Emma.py a -p .\MyProjectFolder --map .\MyProjectFolder\mapfiles --dir .\MyProjectFolder\analysis --subdir Analysis_1
  2. Generate reports and graphs with Emma Visualiser:
python Emma.py v -p .\MyProjectFolder --dir .\MyProjectFolder\analysis --subdir Analysis_1 -q 

Project files that have to be created

To create a new project, the following files must be created:

  • globalConfig.json
  • budgets.json
  • categories.json
  • categoriesKeywords.json
  • categoriesSections.json
  • categoriesSectionsKeywords.json

You will find example projects in ./doc/test_project*. In-depth documentation can be found in the full documentation (see ./doc/).

A basic configuration can be short per file. For complex systems you can choose from many optional keywords/options that provide the means to adjust your analysis in as fine-grained a way as you wish.

One main concept is the globalConfig.json. You can see it as a meta-config. Each configuration ID (configID) is a separately conducted analysis. Per configID you individually state the configuration files you want to use for this exact analysis. Herewith you can mix and match any combination of sub-configs you prefer.

A globalConfig.json could look like this:

{
    "configID1": {
        "addressSpacesPath": "addressSpaces.json",
        "sectionsPath": "sections.json",
        "patternsPath": "patterns.json"
    }
}

Full documentation

For the full documentation please refer to the ./doc/ directory.

Contribute

We are glad if you want to participate. In ./doc/dev-guide.md you will find a guide telling you everything you need to know including coding conventions and more.

emma-dev (. at) googlegroups.com

Dependencies & Licences

| Library (version) | pip package name | Licence | URL |
|---|---|---|---|
| Markdown (v3.0.1+) | Markdown | BSD-3-Clause | https://github.com/Python-Markdown/markdown; https://python-markdown.github.io/ |
| Pandas (v0.23.4+) | pandas | BSD-3-Clause | https://github.com/pandas-dev/pandas/; http://pandas.pydata.org/getpandas.html |
| Pygments (v2.3.1+) | Pygments | BSD-2-Clause | https://bitbucket.org/birkenfeld/pygments-main/src/default/; http://pygments.org/download/ |
| Matplotlib (v3.0.0+) | matplotlib | Matplotlib License (BSD compatible) | https://matplotlib.org/users/installing.html; https://github.com/matplotlib/matplotlib |
| SCout (v2.0+) | pypiscout | MIT | https://github.com/holzkohlengrill/SCout |
| svgwrite (v1.4+) | svgwrite | MIT | https://github.com/mozman/svgwrite; https://svgwrite.readthedocs.io/en/latest/ |

Optional dependencies:

Utility scripts in ./doc/ need additional dependencies. As a normal user you can ignore this.

| Library (version) | pip package name | Licence | URL |
|---|---|---|---|
| gprof2dot (v2017.9.19+) | gprof2dot | LGPL-3.0 | https://github.com/jrfonseca/gprof2dot |
| pylint (v2.3.1+) | pylint | GPL-2.0 | https://github.com/PyCQA/pylint |
| Cython (v0.29.13+) | Cython | Apache-2.0 | https://cython.org/ |

Please refer to the gprof2dot project site and install its dependencies (this has to be done even if you install Emma via pip).

Note that those modules are invoked via subprocess calls within the ./genDoc/ scripts.

Dependencies used to generate documentation for GitHub pages (separate, independent branch gh-pages):

Utility scripts used to build GitHub pages documentation. As a normal user you can ignore this.

| Library (version) | pip package name | Licence | URL |
|---|---|---|---|
| MkDocs (v1.0.4+) | mkdocs | BSD-3-Clause | https://github.com/mkdocs/mkdocs |
| Material for MkDocs (v4.4.1+) | mkdocs-material | MIT | https://github.com/squidfunk/mkdocs-material |

Code snippets etc.:

| Name (version) | Kind | Modified? | Licence | URL |
|---|---|---|---|---|
| pygmentize (v2.2.0+) | Auto-generated .css file | Yes | BSD-2-Clause | http://pygments.org/download/; https://bitbucket.org/birkenfeld/pygments-main/issues/1496/question-licence-of-auto-generated-css |
| toHumanReadable (--) | Code snippet | No | MIT | https://github.com/TeamFlowerPower/kb/wiki/humanReadable |

For the full documentation please refer to the ./doc/ directory.


emma's Issues

Trap JSON decoding errors

Description

Python throws an exception when a JSON input file has invalid syntax. This is not pretty.
=> emma + emma_vis

Expected behaviour

A SCout error message should appear combined with a proper exit of Emma.

Observed behaviour

Python JSONDecodeError appears.

Steps to reproduce

Use invalid JSON config file.

===

=> Same issue with FileNotFoundError
=> emma + emma_vis + emma_deltas
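
A minimal sketch of the intended behaviour; printing via SCout and the exact exit code are assumptions, the point is to trap both exceptions and exit cleanly instead of dumping a traceback:

```python
import json
import sys

def loadJsonConfig(path):
    """Read a JSON config file and fail gracefully on missing or malformed files."""
    try:
        with open(path, "r") as configFile:
            return json.load(configFile)
    except FileNotFoundError:
        print(f"Error: config file not found: {path}")          # would be a SCout error message in Emma
        sys.exit(1)                                              # exit code is a placeholder
    except json.JSONDecodeError as decodeError:
        print(f"Error: invalid JSON in {path}: {decodeError}")   # the exception carries line/column info
        sys.exit(1)
```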

Objects in Sections Corner Cases

Description

Following scenario:

# Case 0: The object is completely overlapping the section
# S          |--------------|            or:       |-----|
# O          |--------------|                    |--------|
#       <= --^              ^-- >=
# FIXME: Is the right case valid? (MSc) --------^^^^^^^^^^^^

The FIXME already indicates the first issue:

Is it a valid case that an object could start before and/or end after the section start/end?

Second issue:

S    ------||---------------|
O     |---------|

Can an object span over two different sections?
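
For reference, a small sketch that enumerates the possible relations between a section [sStart, sEnd] and an object [oStart, oEnd] (inclusive addresses); whether the two questionable cases above are legal is exactly what this issue needs to clarify:

```python
def classifyRelation(sStart, sEnd, oStart, oEnd):
    """Classify how an object relates to a section (inclusive address ranges)."""
    if oEnd < sStart or oStart > sEnd:
        return "separate"                           # no shared addresses
    if sStart <= oStart and oEnd <= sEnd:
        return "contained"                          # object fully inside the section
    if oStart <= sStart and sEnd <= oEnd:
        return "object covers section (case 0)"     # the FIXME case above
    return "partial overlap"                        # object starts before or ends after the section

print(classifyRelation(0x100, 0x1FF, 0x080, 0x2FF))   # object covers section (case 0)
print(classifyRelation(0x100, 0x1FF, 0x180, 0x2FF))   # partial overlap (spans beyond the section end)
```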

Test and fix the no-prompt option

The --noprompt command line argument and its documentation need to be checked.

In the readme it was documented the following way: "exit and fail on user prompts".
The code might always be executing the first one.

Test this in the code and make sure it was documented accordingly.

Reset config behaviour

Use -r to reset the config:

  • The first time you start the script with or without a path (so either you get prompted or provide the path via command line argument) the config file will be created; what we still need is an info message saying: this path will from now on be used for further runs unless stated otherwise
  • Afterwards it does not prompt until you specify a new path; the config will be updated accordingly; it is important that you cannot just delete and recreate the file but have to update it, in case we add more options in the future
  • -r will now reset the config, meaning the config file will be deleted
  • Alternatively we could remove the -r option and let the user delete the file by hand if necessary

Verbose command line argument has no effect

Description

The verbose option does not show any effect.

Reason: The command line argument is not passed to the SCout constructor.

Expected behaviour

Control inverse verbosity via -v

Observed behaviour

No change.

Generate .svg visualisations for sec/obj's

Feature Description

As a user I want to output image(s) which visualise objects and sections so that I can verify and better understand the results (especially regarding overlap resolution).

Requirements

  • Prompt for the address to be plotted (if it starts with 0x treat it as hex, otherwise as dec)
  • Consider colour palette for obj/sec's
  • Print for each obj/sec FQN + start/end address in hex
  • Include obj/sec's which are partially included in the given address range -> use a different shape which visualises this fact (see below for an example)
  • opt: set markers where small overlaps are highlighted
  • ...

Example: an obj/sec continues to the right but is cut off (region of interest)
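
A minimal sketch with svgwrite (already listed as a dependency above) of how such a plot could be drawn; the entries, address range, colours and layout are made up for illustration:

```python
import svgwrite

# Hypothetical entries: (fully qualified name, start address, end address)
entries = [
    ("configID::mapfile::.text", 0x80000000, 0x80004000),
    ("configID::mapfile::someObject.o", 0x80001000, 0x80001800),
]

regionStart, regionEnd = 0x80000000, 0x80004000
scale = 800 / (regionEnd - regionStart)                          # map addresses to pixels

dwg = svgwrite.Drawing("memoryRegion.svg", size=(820, 40 * len(entries) + 20))
for row, (fqn, start, end) in enumerate(entries):
    x = 10 + (start - regionStart) * scale
    width = max((end - start) * scale, 1)                        # keep tiny entries visible
    y = 10 + row * 40
    dwg.add(dwg.rect(insert=(x, y), size=(width, 25), fill="lightsteelblue", stroke="black"))
    dwg.add(dwg.text(f"{fqn} [{start:#x}..{end:#x}]", insert=(x + 2, y + 17), font_size="10px"))
dwg.save()
```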

Deltas: Check and handle input behaviour

Description

Incorrect input data causes deltas to crash. For example:

        (info) : Started processing at 14:52:04
Enter project root path >C:\0-repos\Emma\MAdZ-configs\Project
    0: Section_Summary
    1: Object_Summary
    2: Objects_in_Sections
Choose File type >
0
Select two indices seperated by one space >0 0 
Traceback (most recent call last):
  File "C:/0-repos/Emma/Emma/emma_deltas.py", line 153, in <module>
    runEmmaDeltas()
  File "C:/0-repos/Emma/Emma/emma_deltas.py", line 149, in runEmmaDeltas
    main(parsedArguments)
  File "C:/0-repos/Emma/Emma/emma_deltas.py", line 127, in main
    candidates = filePresenter.chooseCandidates()
  File "C:\0-repos\Emma\Emma\emma_delta_libs\FilePresenter.py", line 55, in chooseCandidates
    indices: typing.List[int] = [int(i) for i in indices]
  File "C:\0-repos\Emma\Emma\emma_delta_libs\FilePresenter.py", line 55, in <listcomp>
    indices: typing.List[int] = [int(i) for i in indices]
ValueError: invalid literal for int() with base 10: ''

Process finished with exit code 1

If an invalid root path is entered, deltas raises a FileNotFoundError.

ToDo's

  • Write unit tests in order to test all possible behaviours
  • Fix errors

Consider also the expected behaviour in the attachment from #14 .
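
A minimal sketch of how the index prompt could validate its input instead of crashing; the prompt text is taken from the log above, the retry loop is an assumption:

```python
def chooseTwoIndices(numberOfCandidates):
    """Prompt until the user enters two distinct, valid indices separated by one space."""
    while True:
        rawInput = input("Select two indices separated by one space >")
        parts = rawInput.split()
        if len(parts) != 2 or not all(part.isdigit() for part in parts):
            print("Please enter exactly two numbers, e.g. `0 1`.")
            continue
        first, second = (int(part) for part in parts)
        if first == second or not all(0 <= index < numberOfCandidates for index in (first, second)):
            print("Indices must be different and within the listed range.")
            continue
        return first, second
```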

Regex for GHS map files broken

Description

The regexes for the GHS parser match Global Symbols entries, which they should not.

Observed behaviour

It should not match for example:

                  00000000+000000 D GetWriteBlock..libs.some.global..symbol.blah.blub.

but currently it does.

I figured this out since the address translation failed warning appeared. From the FQN it seems that only the section regex might be affected.

Possibly we can stop once a line starts with Global Symbols (minus leading spaces). However, we should check whether we can always rely on this order: Sections Summary -> Object Summary -> Global Symbols
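
A minimal sketch of the proposed stop condition; the "Global Symbols" marker text is taken from the example above, while the overall map file layout (and whether its order can be relied on) remains to be checked:

```python
def iterSectionAndObjectLines(mapfileLines):
    """Yield map file lines until the Global Symbols part starts."""
    for line in mapfileLines:
        if line.strip().startswith("Global Symbols"):
            break                                   # everything after this belongs to the symbol listing
        yield line

with open("application.map", "r") as mapfile:       # file name is a placeholder
    for line in iterSectionAndObjectLines(mapfile):
        pass                                        # feed `line` into the existing section/object regexes here
```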

Add top level exec script

Feature Description

After the recent changes Emma can only be invoked via module calls (->like python -m Emma.emma --project doc/test_project --mapfiles doc/test_project/mapfiles --dir doc/test_project/results from the top level folder).

This is lengthy. Plus, an overall script for all modules would be convenient and will provide a better overview about the modules (a la git add or svn checkout, ...).

TODOs

  • Add top level script named Emma.py
  • Update documentation

Remove non pypiscout Emma command line prints

The new pypiscout has the feature to disable log printing based on the log levels.
With this functionality it is possible to completely turn off logging.

It was detected that, even when logging is turned off, there are still messages printed. These are messages made with the print() Python function.

TODO

  • Review all print() calls and check whether they can be replaced with pypiscout calls.

Testing

The following topics need to be done:

Tool evaluations

Project structure

  • The issue needs to be developed on the ISSUE_#3_Testing branch
  • Define the project structure that has a folder for the tests and below that subfolders for unit tests, functional tests and other test relevant tools/data/mapfiles/configurations/scripts
  • Create a script to generate code coverage reports for the unit tests

Unit testing

  • Test first the most important functions and classes:
    • memoryMap.py::emma_libs.memoryMap.caluclateObjectsInSections
    • memoryManager.py::resolveDuplicateContainmentOverlap()
    • memoryManager.py::importData
    • memoryEntry.py::MemEntry
  • Develop full tests for the following files:
    • Files TBD...
  • Further points TBD...

Functional testing

  • Call Emma with various wrong command line arguments
  • Call Emma-vis with various wrong command line arguments
  • Call Emma-Delta with various wrong command line arguments
  • Create functional tests for the Emma with test_project (this is a precondition for #5)
  • Test the dependencies of arguments
  • Make tests with mapfiles that have a wrong structure/syntax
  • Make tests with configurations that are ill formed:
    • globalConfig.json without any configID defined in it
    • A config with an address spaces file that has ignored memory entries that do not exist in it
  • Categorisation
    • If a category does not match, we shall get a warning
  • Call Emma with no map files at all and check for the error message (exit code -10)
  • TBD...

Corrections needed based on the first review of unit tests

Invalid configIDs were not removed

Description

  1. If a configID is invalid a warning is shown but the analysis runs nevertheless
  2. This warning also appears for valid configIDs

Expected behaviour

  1. Skip invalid configIDs.
  2. Show warnings where there are errors in the configuration

PyCharm inspection detects wrong Markdown picture links


Some picture links have an ending width="XY%" /></div> where the / after the % symbol seems to be extra. This leads to generated html where the <p> and </p> tags are not matched, according to PyCharm.

PyCharm detects the error with the following pictures:

  • readme.html
    • classes_memoryManager.png
    • emma_filtered_profile.png
  • readme-vis.html
    • emma_vis_filtered.profile.png

TODO

  • Examine the issue
  • Try to remove the extra / symbols and rerun PyCharm inspection
  • Check the generated .html files

Check for correct handling of negative offsets

Description

When entering a negative offset in the configuration (addressSpaces.json) check if this is handled correctly.

{
    "offset": "<NEGATIVE_ADDRESS>",
    "memory": {
...

Expected behaviour

  • Negative resulting addresses should be skipped and a warning should be shown
  • The offset should be handled as signed (-> a negative offset is effectively added; a positive offset subtracted)
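
A minimal sketch following the sign convention described above (the offset is applied as a subtraction, and negative results are skipped with a warning); how the offset string reaches this point is simplified:

```python
def translateWithOffset(address, offsetString):
    """Apply a signed offset (hex or decimal string) and skip negative results."""
    offset = int(offsetString, 0)          # base 0 accepts "0x..." and decimal, including a leading "-"
    translated = address - offset          # a positive offset is subtracted, a negative one effectively added
    if translated < 0:
        print(f"Warning: address {address:#x} with offset {offsetString} becomes negative -> skipped")
        return None
    return translated

print(translateWithOffset(0x1000, "0x800"))    # 0x800
print(translateWithOffset(0x100, "0x800"))     # warning, None
print(translateWithOffset(0x100, "-0x800"))    # 0x900
```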

GHS VASes talk about integrate/intex files

GHS Integrate files (.int), used by GHS Intex, state VASes and additional information. This is a better alternative to using .ael files.

Add this to the documentation.

Change categorisation (keywords) strategy

Description

This is relatively dangerous (file Emma/emma_libs/categorisation.py):

def __evalCategoryOfAnElement(nameString, categories, categoriesKeywords, keywordCategorisedElements):
    """
    Function to find the category of an element. First the categorisation will be tried with the categories file,
    and if that fails with the categoriesKeywords file. If this still fails a default value will be set for the category.
    If the element was categorised by a keyword then the element will be added to the keywordCategorisedElements list.
...
    """

Also, the *Keywords.json files should only be loaded/used when the command line flag is set.

Expected behaviour

Don't use the keywords fallback for a normal analysis and instead print an info or weak warning to stdout that a category is not in the configuration. If the categorisation file is empty, skip the print to stdout and display a general weak warning that categorisation is not configured.

Memory Manager Redesign

Preconditions

  • We need to have functional tests for the test_project with Emma (see #3) before we change anything in order to be sure that the same values will come out from Emma after the changes

Postconditions

  • Unused imports need to be removed
  • Integrate individual mapfile paths feature (issue #16 )
  • Document added with description how to add support for new compilers
    • Evaluate where this would belong: separate document or in the contributions document? It was described in the doc/contribution.md
    • Add description to the doc/contribution.md about the compiler specific configuration design hints
    • Add instructions to the contribution.md on which documents and tests need to be added for new compiler support
    • Make it clear in the doc/readme.md which parts of the configuration are general and which are compiler specific
    • Move the current specific parts into a GHS configuration chapter

Branch

  • The issue needs to be developed on the ISSUE_#5_MemoryManagerRedesign branch

Necessary configuration changes:

  • The globalConfig.json needs to contain a new key for every configId that specifies the used compiler. This is needed to be able to instantiate the compiler dependent Compiler and MapfileProcessor objects. This can not be a command line argument because it is possible to have configurations where the configIds have different compilers.

Motivation

  • The MemoryManager class is:
    • simply too big to develop unit tests for
    • way too complex to integrate future changes, like the GCC compiler, into it
    • the compiler specific parts are not separated from the generic ones

New design

UML Diagram

Description of the new classes (a slightly different design was implemented, see UML Diagram)

  • MemoryManager:
    • Organising the process of creating CSV files from the configuration files and the mapfiles
    • It executes each task through registered objects in order to be independent of their implementation details
    • It contains as little code as possible; it only serves the purpose of organisation and of providing its components with data
    • List of registered objects:
      • A specialized configuration object
      • A specialized mapfile processor object
      • MemoryMap
  • Configuration

    • A superclass of compiler specific configuration processors
    • Processes the generic parts of the configuration
    • Provides an interface to use the processed information
    • Tasks that are realized in the superclass:
      • Processing the global_config.json
      • Processing the addressSpaces_*.json
      • Processing the "mapfile" object in the patterns_*.json
      • Processing the categoriesObjects.json
      • Processing the categoriesObjectsKeywords.json
      • Processing the categoriesSections.json
      • Processing the categoriesSectionsKeywords.json
      • List of functions:
        readGlobalConfigJson()
        __addFilesPerConfigID()
        __addMapfilesToGlobalConfig()
        __validateConfigIDs()
        removeUnmatchedFromCategoriesJson()
        createCategoriesJson()
    • Tasks that are realized in the subclasses:
      • Processing the rest of the patterns_*.json (the "mapfile" object is processed in the superclass)
      • Processing of virtualSections_*.json (because we do not know how the VASes are handled with other compilers)
      • List of functions:
        checkMonolithSections()
        __addTabularisedMonoliths()
        __addMonolithsToGlobalConfig()
  • MapfileProcessor

    • Superclass of compiler specific mapfile processors
    • Tasks that are realized in the superclass:
      • Doing generic tasks during the mapfile content processing
      • Produces a list of ObjectEntry and a list of SectionEntry objects
      • List of functions:
        __evalMemRegion()
        __addMemEntry()
        __categoriseByKeyword()
        __searchCategoriesJson()
        __evalCategory()
        __evalRegexPattern()
    • Tasks that are realized in the subclasses:
      • Doing specific tasks during the mapfile content processing
      • List of functions:
        __translateAddress()
        importData()
  • MemoryMap

    • Receives a list of ObjectEntry objects and a list of SectionEntry objects
    • Resolves the duplication, containment and overlap of these lists
    • After the resolving it creates the ObjectsInSections MemEntry object list
    • From these three lists it creates the ObjectSummary, the ImageSummary and the ObjectsInSections CSV tables
    • Writes the CSV tables to the HDD
    • List of functions:
      writeSummary()
      createMemStatsFilepath()
      consumerCollectionToCSV()
      resolveDuplicateContainmentOverlap()
      calculateObjectsInSections()
      memoryMapToCSV()
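
A condensed sketch of the described split; class responsibilities and method names are taken from the lists above, while the GHS-specific subclass name and all bodies are placeholders:

```python
class Configuration:
    """Superclass: processes the generic configuration parts (globalConfig, address spaces, categories)."""
    def readGlobalConfigJson(self, path):
        raise NotImplementedError                       # realised in the superclass in the real design

class GhsConfiguration(Configuration):
    """Hypothetical GHS subclass: rest of patterns_*.json, virtualSections_*.json, monolith handling."""
    def checkMonolithSections(self):
        raise NotImplementedError

class MapfileProcessor:
    """Superclass: generic mapfile processing; produces ObjectEntry and SectionEntry lists."""
    def importData(self):
        raise NotImplementedError                       # compiler specific, realised in subclasses

class MemoryMap:
    """Resolves duplicates, containments and overlaps and writes the CSV tables to disk."""
    def resolveDuplicateContainmentOverlap(self):
        raise NotImplementedError
    def writeSummary(self):
        raise NotImplementedError

class MemoryManager:
    """Organises the whole run and delegates every task to its registered objects."""
    def __init__(self, configuration, mapfileProcessor, memoryMap):
        self.configuration = configuration              # a specialised configuration object
        self.mapfileProcessor = mapfileProcessor        # a specialised mapfile processor object
        self.memoryMap = memoryMap
```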

Improve user experience

Feature Description

We want a stable and comfortable user interface which can quickly and easily compare Emma analyses.

The current user interface is quite rudimentary.

ToDo's

  • Test suitable libraries
  • Check licence
  • Write unit tests
  • Implement the change according to #14

See also this ancient issue for more details (note: not everything stated there is still up-to-date): https://cc-github.bmwgroup.net/marcelschmalzlpartner/MAdZ/issues/45

Consider also the thoughts in: https://cc-github.bmwgroup.net/marcelschmalzlpartner/MAdZ/issues/44

Some links to start with

Clean-up `genDocs` scripts

Description

Since we switched to GitHub pages there is no need for html documentation anymore.

It is to be discussed whether we should keep the html generation part for the docs. However, it will likely fail at some point since it won't be executed regularly anymore.

  • Tag (git) the last working state of genDocs
  • Then remove this part
  • Question: We could/should keep the call graph and UML part since this might still be useful (-> size would be higher in this case)

Section / address space / ... tester

Feature Description

  • Tests if...
    • something (section, object, ...) is within one section
    • something is contained, overlapped, ...
    • what is before and after the given "something" or address
    • what lies on a given address (hex or dec)
  • The search should be possible for one or more (previously selected) reports
  • Output should be a neat print on the command line (so, no files for now)
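
A minimal sketch of the core lookup such a tester would need; entries are simple (name, start, end) tuples here, and how reports are selected and results are printed is left open:

```python
def parseAddress(text):
    """Accept hex input with a leading 0x, otherwise decimal, as requested above."""
    text = text.strip()
    return int(text, 16) if text.lower().startswith("0x") else int(text)

def locate(address, entries):
    """Return entries containing the address plus the nearest neighbours before and after it."""
    containing = [entry for entry in entries if entry[1] <= address <= entry[2]]
    before = max((entry for entry in entries if entry[2] < address), key=lambda e: e[2], default=None)
    after = min((entry for entry in entries if entry[1] > address), key=lambda e: e[1], default=None)
    return containing, before, after

entries = [(".text", 0x100, 0x1FF), (".data", 0x300, 0x3FF)]        # made-up example data
print(locate(parseAddress("0x250"), entries))                        # ([], ('.text', ...), ('.data', ...))
```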

Add emma to pip database

Feature Description

For ease of use we should integrate Emma into the pip database. Second, it could increase the willingness to use Emma.

Tasks for this would be:

  • Bring the project into the required folder structure (see also issue #9 )
  • Create files for pip integration (e.g. setup.py, ...)
  • Extend the CI job to auto-deploy emma to the pip database in case of new versions

Emma is slow for many objects

Description

If you have many objects (~ >100 000) the runtime starts to get significantly high.

Expected behaviour

Acceptable runtime (within minutes).

Move all emma exec files to a subfolder called emma

Due to pypi, pylint and other dependencies it is necessary to move those files one level deeper.

Exceptions:

  • doc/
  • .idea
  • all git related files
  • CONTRIBUTORS
  • COPYING
  • LICENSE
  • .pylintrc
  • .travis.yml
  • Readme's
  • setup.py
  • .coveragerc

Graphviz `.dot` export

Feature Description

In order to give the user a better understanding about overlaps, containments and duplicates a graphical visualisation would help. Currently this is only possible by viewing the .csv file (-> textual representation).

In small projects it is still easy to keep an overview; however, when projects grow, these cases can occur hundreds of times.

Another alternative or complementary solution would be to reshape or add some information so that tools like Gephi could import the .csv and/or .dot file.

The great thing about the .csv import is that you can switch from the graphical to the "data laboratory" representation (and back) and get more information about a specific element in this view.

Note about Gephi DOT support:

Gephi currently doesn’t provide a complete support of the DOT format. Subgraphs are not supported, nor custom attributes or size. Only labels and colors are imported if present. Directed and undirected graphs are supported.

From: https://gephi.org/users/supported-graph-formats/graphviz-dot-format/

emma_libs/memoryManager.py:156 is a good place to get started with the implementation.

Dot viewers / extensions

References

TODOs

  • Decide if both or only one solution should be implemented
  • Document if external libraries are used (like graphviz (MIT) for dot files); decide whether we need an external library
  • Write the implementation for generation
  • Include a new command line argument to activate the output
  • Document the shape/colour syntax
  • Write tests
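
A minimal sketch of what such an export could emit (plain DOT text written directly, so no extra library is required); the node names and overlap pairs are made up:

```python
# Hypothetical (section, object) pairs for which an overlap was detected
overlaps = [(".text", "someObject.o"), (".bss", "anotherObject.o")]

with open("overlaps.dot", "w") as dotFile:
    dotFile.write("digraph overlaps {\n")
    dotFile.write("    node [shape=box];\n")
    for section, obj in overlaps:
        # one edge per detected overlap; a shape/colour syntax per relation type would be documented separately
        dotFile.write(f'    "{section}" -> "{obj}" [label="overlap", color=red];\n')
    dotFile.write("}\n")
```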

Emma-vis --dir parameter handling

The handling of the --dir parameter is quite strange in the Emma-vis:

  • This is not a required parameter, but the memStats input files will be searched for in this folder (See helper.py::getLastModFileOrPrompt())
  • If the path given in the --dir parameter does not exist, then it will be created, which does not make sense, because then it surely does not contain the memStats input files
  • If no --dir is given, then its value defaults to None. This leads to the os.path.join() call failing in emma_vis_libs.helper.getLastModFileOrPrompt()

TODO

  • Please review this part of the code and make a decision how it should be implemented
  • Implement the new design
  • Update documentation

Split Travis installs dependant on stage

Description

The build process could be sped up by splitting installations to the stages where those packages are actually needed.

This could be handled with YAML aliases, similar to what already exists for the Linux distributions.

Categorisation keywords as Emma dry run

This should be like a "dry run". We need to run Emma in order to get all sections/objects from the map files. Then we auto-generate the categories config file based on the keyword file(s).

Currently .csv files are saved for such a run (which is the default behaviour for a normal run) which is not desired.

For a "keywords-run" no results should be stored.

TODO in: https://github.com/bmwcarit/Emma/blob/master/emma.py#L136 (invalid link: check commit from Jul 2019)

Add GCC compiler support

Feature Description

With the memory manager redesign (issue #5) we paved the way for implementing a parser (memory manager) for GCC.

See also this issue: https://cc-github.bmwgroup.net/marcelschmalzlpartner/MAdZ/issues/74

TODOs

  • Implement a GCC parser (analogous to the preexisting one)
  • Add a new test project for GCC
  • Document licences (see also below)
  • Write unit tests and functional tests
  • Test new implementation

==

Infineon XMC 2Go blinky project licence:

| test_project_gcc (v1.3+) | Map files based on XMC 2Go Initial Start project   | No       | BSD-3-Clause  | [Infineon -> Documents -> XMC 2Go Initial Start](https://www.infineon.com/cms/en/product/evaluation-boards/kit_xmc_2go_xmc1100_v1/#!documents)                                                                                                                          |

The supplement folder should be created when not present

Description

Expected behaviour

If no supplement folder is detected, a weak warning should inform the user that this folder does not exist in the project path and that no supplement files will be attached to the report.

Observed behaviour

An error message is shown if the folder is not present.

Add sanity checks for JSON config files

Description

At least a few sanity checks for mandatory keys are missing (e.g. "end" in addressSpaces*.json).

Expected behaviour

Sanity checks for all mandatory JSON keys.
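
A minimal sketch of such a check for the address spaces config; only "end" is named in this issue, so the full set of mandatory keys below is an assumption:

```python
MANDATORY_MEMORY_KEYS = {"start", "end"}          # "end" is named above; the complete set is an assumption

def checkAddressSpaces(addressSpaces):
    """Return human readable findings for memory regions that miss mandatory keys."""
    findings = []
    for regionName, region in addressSpaces.get("memory", {}).items():
        missing = MANDATORY_MEMORY_KEYS - region.keys()
        if missing:
            findings.append(f"memory region '{regionName}' is missing: {', '.join(sorted(missing))}")
    return findings

malformedConfig = {"memory": {"INT_FLASH": {"start": "0x0"}}}        # made-up, malformed example
print(checkAddressSpaces(malformedConfig))                           # ["memory region 'INT_FLASH' is missing: end"]
```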

Out of memory error during unit tests

Description

Failure
Traceback (most recent call last):
  File "C:\0-repos\Emma\tests\functional_tests\test__cmd_line.py", line 212, in test_normalRun
    Emma.emma_vis.main(argsEmmaVis)
  File "C:\0-repos\Emma\tests\functional_tests\..\..\Emma\emma_vis.py", line 110, in main
    consumptionModule.appendModuleConsumptionToMarkdownOverview(markdownFilePath)
  File "C:\0-repos\Emma\tests\functional_tests\..\..\Emma\emma_vis_libs\dataVisualiserObjects.py", line 151, in appendModuleConsumptionToMarkdownOverview
    self.plotByCategorisedModules(plotShow=False)  # Re-write .png to ensure up-to-date overview
  File "C:\0-repos\Emma\tests\functional_tests\..\..\Emma\emma_vis_libs\dataVisualiserObjects.py", line 52, in plotByCategorisedModules
    Emma.shared_libs.emma_helper.saveMatplotlibPicture(figure, os.path.join(self.resultsPath, filename), MEMORY_ESTIMATION_PICTURE_FILE_EXTENSION, MEMORY_ESTIMATION_PICTURE_DPI, False)
  File "C:\0-repos\Emma\tests\functional_tests\..\..\Emma\shared_libs\emma_helper.py", line 299, in saveMatplotlibPicture
    pictureData.savefig(fileObject, format=savefigFormat, dpi=savefigDpi, transparent=savefigTransparent)
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\matplotlib\figure.py", line 2180, in savefig
    self.canvas.print_figure(fname, **kwargs)
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\matplotlib\backend_bases.py", line 2082, in print_figure
    **kwargs)
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\matplotlib\backends\backend_agg.py", line 527, in print_png
    FigureCanvasAgg.draw(self)
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\matplotlib\backends\backend_agg.py", line 386, in draw
    self.renderer = self.get_renderer(cleared=True)
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\matplotlib\backends\backend_agg.py", line 399, in get_renderer
    self.renderer = RendererAgg(w, h, self.figure.dpi)
  File "C:\Program Files (x86)\Python37-32\lib\site-packages\matplotlib\backends\backend_agg.py", line 86, in __init__
    self._renderer = _RendererAgg(int(width), int(height), dpi)
MemoryError: In RendererAgg: Out of memory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files (x86)\Python37-32\lib\unittest\case.py", line 59, in testPartExecutor
    yield
  File "C:\Program Files (x86)\Python37-32\lib\unittest\case.py", line 615, in run
    testMethod()
  File "C:\0-repos\Emma\tests\functional_tests\test__cmd_line.py", line 215, in test_normalRun
    self.fail("Unexpected exception: " + str(e))
  File "C:\Program Files (x86)\Python37-32\lib\unittest\case.py", line 680, in fail
    raise self.failureException(msg)
AssertionError: Unexpected exception: In RendererAgg: Out of memory

Steps to reproduce

Interestingly that does not always happen. I could not reproduce a scenario that always produces this error - even on the same machine.

Mostly this happens during a unit test run.

Python version: 3.7.0
OS: Win10

Installed packages:

pip3 list
Package              Version
-------------------- ----------
-yqt5-sip            4.19.13
Arpeggio             1.9.0
astroid              2.1.0
atomicwrites         1.3.0
attrs                19.1.0
backcall             0.1.0
bleach               3.0.2
certifi              2018.10.15
chardet              3.0.4
Click                7.0
codacy-coverage      1.3.11
colorama             0.4.0
coverage             4.5.3
coveralls            1.8.1
csv2md               1.0.1
csvtomd              0.3.0
cycler               0.10.0
decorator            4.3.0
defusedxml           0.5.0
docopt               0.6.2
docutils             0.14
entrypoints          0.2.3
gprof2dot            2017.9.19
graphviz             0.11.1
idna                 2.7
ipykernel            5.1.0
ipython              7.1.1
ipython-genutils     0.2.0
isort                4.3.17
jedi                 0.13.1
Jinja2               2.10
jsonschema           2.6.0
jupyter-client       5.2.3
jupyter-core         4.4.0
jupyterlab           0.35.4
jupyterlab-server    0.2.0
kiwisolver           1.0.1
lazy-object-proxy    1.3.1
lxml                 4.3.4
Markdown             3.1.1
Markups              3.0.0
MarkupSafe           1.1.0
matplotlib           3.1.0
mccabe               0.6.1
mistune              0.8.4
more-itertools       7.0.0
nbconvert            5.4.0
nbformat             4.4.0
nose                 1.3.7
notebook             5.7.0
numpy                1.15.2
ordered-set          3.1.1
pandas               0.24.2
pandocfilters        1.4.2
parso                0.3.1
pathspec             0.5.9
pickleshare          0.7.5
pip                  19.2.1
pluggy               0.9.0
ply                  3.11
prometheus-client    0.4.2
prompt-toolkit       2.0.7
py                   1.8.0
pydeps               1.7.1
pyecore              0.10.2
pyenchant            2.0.0
Pygments             2.3.1
pylint               2.2.2
pyparsing            2.2.2
pypiscout            2.0
pypiwin32            223
PyQt5                5.12.1
PyQt5-sip            4.19.15
PySide2              5.11.2
pytest               4.4.1
pytest-cov           2.7.1
python-dateutil      2.7.3
python-gitlab        1.6.0
python-markdown-math 0.6
pytz                 2018.5
pywin32              224
pywinpty             0.5.4
PyYAML               5.1.1
pyzmq                17.1.2
requests             2.21.0
RestrictedPython     4.0
scikit-learn         0.20.1
scipy                1.1.0
seaborn              0.9.0
Send2Trash           1.5.0
setuptools           39.0.1
six                  1.11.0
sklearn              0.0
stdlib-list          0.5.0
terminado            0.8.1
testpath             0.4.2
textX                1.8.0
tornado              5.1.1
traitlets            4.3.2
typed-ast            1.3.4
urllib3              1.24.1
wcwidth              0.1.7
webencodings         0.5.1
wrapt                1.11.1
yamllint             1.16.0
WARNING: You are using pip version 19.2.1, however version 19.2.3 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.

StartAddress + Length in ObjectsInSections should be empty in case of a fully filled section

Description

When a section is fully filled by objects/sections only a <Section_Entry> will exist. It is confusing that it has a start address and a length (of 0 bytes) but no end address (because that would lead to a size of at least 1 byte).

Expected behaviour

No content shown at all for start and end address + length.

TODOs

  • Fix the condition under which a start + end address + length are written
  • Only show the original address in address orig entries

Missing pictures in the .md documents

The genDoc folder is currently ignored by git. It contains some picture files that are therefore missing from the .md documents. They can be found in the .html docs because the pictures are embedded there. These files need to not be ignored anymore.

Todo:

  • Add the missing .png files to the repo
  • Test if all the .md documents are rendered correctly

Unfiltered callgraph pictures will not be created

During the creation of the .png files from the unfiltered call graph dot files a segmentation fault happens. It seems that the current DPI value of 300 causes the problem. With a 200 DPI, the conversion succeeds.

TODO:

  • Find out the max possible DPI value with which we can still create pictures
  • Change the DPI setting in the constants
  • Test the changes

Add paths for map files in configID

After recent discussions it makes sense to state different map file directories per configID.

  • The command line argument for map files will remain -> root folder
  • There will be a new optional parameter for the globalConfig named mapfiles
    • This will be a relative path starting from the root folder
    • If given, map files will be searched for within this relative path
    • If a configID entry contains this parameter, map files will be searched for in this folder, otherwise within the root folder (see the sketch below)
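
A minimal sketch of the intended resolution logic; the key name mapfiles and the fallback to the root folder follow the description above:

```python
import os

def resolveMapfilePath(mapfileRootFolder, configIdEntry):
    """Return the map file search path for one configID: optional relative "mapfiles" key or the root folder."""
    relativePath = configIdEntry.get("mapfiles")
    return os.path.join(mapfileRootFolder, relativePath) if relativePath else mapfileRootFolder

globalConfig = {
    "configID1": {"mapfiles": "soc"},        # searched in <root>/soc
    "configID2": {},                         # searched in <root>
}
for configId, entry in globalConfig.items():
    print(configId, "->", resolveMapfilePath("doc/test_project/mapfiles", entry))
```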

Change triple colons code blocks to fenced code (doc)

Since GitHub does not understand code blocks with indentation and triple colons like

:::bash
echo "blah"

::bash
echo "blah"

it is desirable to change all code blocks in the documentation to fenced markdown code:

echo "blah"

We have to check if the html generation is affected by this change.

`SECTIONS_TO_EXCLUDE` should be in a configuration file

Feature Description

SECTIONS_TO_EXCLUDE (find it here) contains rather specific data. It would be better placed in the globalConfig file.

Define a default behaviour (= current behaviour with one exception (see below)) and add a weak warning notifying that this setting is not overwritten and the default SECTIONS_TO_EXCLUDE is currently active.

Exception: remove:

".unused_ram",
".mr_rw_NandFlashDataBuffer"

but keep the variable SECTIONS_TO_EXCLUDE.

TeamScale JSON output

Feature Description

In order to enable seamless integration to quality tools (TeamScale will be the first step towards this direction) Emma should generate some JSON output.
