
aptk's Introduction

APTK - The Annotation Processor Toolkit


Why should you use this project?

Nowadays, no one could imagine Java development without annotations. They allow you to provide metadata in your source code that can be processed either at runtime via reflection or at compile time by annotation processors.

Annotation processors allow you

  • to validate whether your annotations are used correctly
  • to generate source and resource files or even classes

at compilation time.

Validation by annotation processors can become quite handy if there are constraints related to the usage of an annotation. Without such validation, misuse of the annotation can only be detected at runtime. In many cases, however, it can already be evaluated at compile time by an annotation processor, which can trigger a compilation error in case of a constraint violation. Additionally, annotation processor driven generation of code or resource files can be very useful.

Unfortunately, it's quite uncomfortable to develop and test annotation processors. The first problem is that you have to cope with both the Java compile-time and runtime models, which can be very tricky at the beginning. Another problem is that the tools offered by Java provide only basic support for development. This project supports you by providing utilities that allow you to develop annotation processors in a more comfortable way. It also reduces the complexity of handling the compile-time and runtime models by hiding common pitfalls behind its API.

Features

  • provides a processor for generating wrapper classes for accessing annotation attributes
  • provides wrappers for Elements and TypeMirrors that offer a lot of useful utility functions
  • provides support for Class conversion from the runtime to the compile-time model (Class / FQN to Element and TypeMirror)
  • provides support for accessing the compile-time element tree
  • provides generic Element based filters, validators and matchers
  • provides a fluent element validation and filtering API
  • provides support for template based creation of Java source and resource files
  • compatible with all Java versions >= 8 (Java 7 compatibility was dropped with version 0.20.0)
  • features of newer Java versions like modules, records and sealed classes are accessible (internally handled via reflection)

Getting started

The best way to start is to use the APTK Maven archetype to create a basic project based on the APTK stack. The generated code contains some example code demonstrating how the APTK framework can be used. It's generally a good starting point for your annotation processor project and can also be used as a sandbox to get familiar with the framework.

How does it work?

This project provides the abstract base class io.toolisticon.annotationprocessortoolkit.AbstractAnnotationProcessor, which extends the AbstractProcessor class provided by Java. Your annotation processor needs to extend this class to be able to use the utilities offered by this project when building your annotation processor.
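
A minimal sketch of such a processor is shown below, assuming the processAnnotations hook and the createSupportedAnnotationSet helper exposed by the base class (MyAnnotation is a hypothetical annotation - the archetype-generated example code shows the exact signatures):

import java.util.Set;

import javax.annotation.processing.RoundEnvironment;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;

import io.toolisticon.annotationprocessortoolkit.AbstractAnnotationProcessor;

public class MyAnnotationProcessor extends AbstractAnnotationProcessor {

    @Override
    public Set<String> getSupportedAnnotationTypes() {
        // helper assumed from the base class: builds the set of supported annotation FQNs
        return createSupportedAnnotationSet(MyAnnotation.class);
    }

    @Override
    public boolean processAnnotations(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (Element element : roundEnv.getElementsAnnotatedWith(MyAnnotation.class)) {
            // validate the annotated element and generate code or resources here
        }
        return false;
    }
}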

Manually initialize the ToolingProvider if you don't use the AbstractAnnotationProcessor

Nevertheless, you can still use this library even if your processor doesn't extend the io.toolisticon.annotationprocessortoolkit.AbstractAnnotationProcessor. You just need to initialize the ToolingProvider manually in your processor - the best place to do this is either your processor's init or process method:

ToolingProvider.setTooling(processingEnv);
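
A minimal sketch of where this call typically lives when your processor extends the plain javax.annotation.processing.AbstractProcessor (the ToolingProvider import path is an assumption based on the aptk-tools artifact):

import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.ProcessingEnvironment;
import javax.annotation.processing.RoundEnvironment;
import javax.lang.model.element.TypeElement;

import io.toolisticon.aptk.tools.ToolingProvider;

public class MyPlainProcessor extends AbstractProcessor {

    @Override
    public synchronized void init(ProcessingEnvironment processingEnv) {
        super.init(processingEnv);
        // makes the APTK utility classes usable without passing the ProcessingEnvironment around
        ToolingProvider.setTooling(processingEnv);
    }

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        // use the APTK utilities here
        return false;
    }
}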

Delivering your processor

In general, your processor should use as few external dependencies as possible. A good approach is to use the Maven shade plugin to repackage and embed the annotation-processor-toolkit and all other 3rd-party dependencies into your annotation processor artifact. This can be done by adding the following to your annotation processor's pom.xml:

<dependencies>

    <dependency>
        <groupId>io.toolisticon.aptk</groupId>
        <artifactId>aptk-tools</artifactId>
        <version>0.22.5</version>
    </dependency>

    <!-- recommended for testing your annotation processor -->
    <dependency>
        <groupId>io.toolisticon.cute</groupId>
        <artifactId>cute</artifactId>
        <version>0.12.1</version>
        <scope>test</scope>
    </dependency>

</dependencies>

<build>
<plugins>

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.4.3</version>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>shade</goal>
                </goals>
                <configuration>

                    <!-- remove shaded dependencies from pom.xml -->
                    <createDependencyReducedPom>true</createDependencyReducedPom>

                    <!-- need to relocate used 3rd party dependencies and their transitive dependencies -->
                    <relocations>
                        <relocation>
                            <pattern>io.toolisticon.aptk</pattern>
                            <shadedPattern>
                                your.projects.base.package._3rdparty.io.toolisticon.aptk
                            </shadedPattern>
                        </relocation>
                    </relocations>

                </configuration>
            </execution>
        </executions>
    </plugin>
</plugins>
</build>

Please check the example provided on GitHub.

Examples

Annotation Wrapper

Reading attribute values can be quite complicated when it comes to annotation type or Class based attributes. In these cases you are often forced to read the attribute values via the AnnotationMirror API. Additionally, you usually have to create some kind of class to store the annotation's configuration.

The APTK provides an annotation processor that generates wrapper classes that allow you to access the annotation attributes as if you were accessing the annotation directly. The only difference is that Class type based attributes will be accessible as FQN String, TypeMirror or TypeMirrorWrapper. Annotation type based attributes will also be wrapped to ease access.

A small example:

Annotation:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface PrettyExample {
    String aStringBasedValue();
    Class<?> typeBasedAttribute();
}

can be accessed the following way:

PrettyExampleWrapper wrapper = PrettyExampleWrapper.wrap(element);

// access annotated element
Element annotatedElement = wrapper._annotatedElement();

// access annotation mirror
AnnotationMirror annotationMirror = wrapper._annotationMirror();

// read type based attributes
TypeMirror typeMirror = wrapper.typeBasedAttributeAsTypeMirror();
TypeMirrorWrapper typeMirrorWrapper = wrapper.typeBasedAttributeAsTypeMirrorWrapper();
String fqn = wrapper.typeBasedAttributeAsFqn();

Annotation based attributes will be accessible via their AnnotationWrappers as well.
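
Simple attributes, like the String based one from the example annotation, are exposed directly under their original name - a small sketch following the rules described above:

// simple attribute types are returned as-is
String stringValue = wrapper.aStringBasedValue();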

Please check annotation wrapper processor for further information.

Element Wrappers

Element wrappers extend the Element API with enhanced utility functions, which for example help you navigate through the element tree, and they also provide Element validation support.

Some examples:

// validation - lambda style
Element element = null;
ElementWrapper.wrap(element).validate()
    .asError().withCustomMessage("Annotation must be placed on static inner class with public or protected modifier")
    .check(ElementWrapper::isClass)
    .and(e -> e.hasModifiers(Modifier.STATIC) && (e.hasModifiers(Modifier.PUBLIC) || e.hasModifiers(Modifier.PROTECTED)))
    .validate();

// same validation APTK style - with generic compiler messages
ElementWrapper.wrap(element).validateWithFluentElementValidator()
    .is(AptkCoreMatchers.IS_CLASS)
    .applyValidator(AptkCoreMatchers.BY_MODIFIER).hasAllOf(Modifier.STATIC)
    .applyValidator(AptkCoreMatchers.BY_MODIFIER).hasOneOf(Modifier.PUBLIC, Modifier.PROTECTED)
    .validateAndIssueMessages();

// Navigation / Filtering
List<TypeElementWrapper> allStaticInnerClasses = ElementWrapper.wrap(element).getAllEnclosingElements().stream()
    .filter(ElementWrapper::isClass)
    .filter(e -> e.hasModifiers(Modifier.STATIC) && (e.hasModifiers(Modifier.PUBLIC) || e.hasModifiers(Modifier.PROTECTED)))
    .map(ElementWrapper::toTypeElement)
    .collect(Collectors.toList());

// getting methods of TypeElement
TypeElement typeElement = null;
Optional<ExecutableElementWrapper> method = TypeElementWrapper.wrap(typeElement).getMethod("methodName", String.class, Long.class);

// ...

TypeMirror wrapper

There is a wrapper for TypeMirrors as well. It's called TypeMirrorWrapper and provides a lot of useful tools, such as checking assignability. Usage is similar to the Element wrapper:

TypeMirrorWrapper.wrap(typeMirror);
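
A hedged sketch of what working with the wrapper can look like - the method names used below (isAssignableTo, getQualifiedName) are assumptions, so please check the javadoc for the exact API:

TypeMirrorWrapper wrappedType = TypeMirrorWrapper.wrap(typeMirror);

// method names are assumptions - check the TypeMirrorWrapper javadoc
if (wrappedType.isAssignableTo(java.util.Collection.class)) {
    String fqn = wrappedType.getQualifiedName();
    // ...
}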

Enhanced utility support

Java itself provides some tools to support you in building annotation processors. This framework provides utility classes that add useful features not covered by those tools:

  • Elements : ElementUtils provides support for navigating through the Element tree
  • Types : TypeUtils provides support for coping with types in the Java compile-time model
  • Messager : MessagerUtils provides support for issuing messages during compilation
  • Filer : FilerUtils provides support for accessing or writing Java source or resource files

There are some more helpful utility classes:

  • AnnotationUtils : provides support for reading annotation attribute values
  • AnnotationValueUtils : provides support for handling AnnotationValues
  • InterfaceUtils : provides support for handling generic interfaces and superclasses, for example to determine the concrete types of type variables in superclasses or parent interfaces (still experimental)

Example:

// Check if TypeMirror is an array
boolean isArray = TypeUtils.CheckTypeKind.isArray(aTypeMirror);

// get TypeElements or TypeMirrors easily
TypeElement typeElement1 = TypeUtils.TypeRetrieval.getTypeElement("fqn.name.of.Clazz");
TypeElement typeElement2 = TypeUtils.TypeRetrieval.getTypeElement(Clazz.class);
TypeMirror typeMirror1 = TypeUtils.TypeRetrieval.getTypeMirror("fqn.name.of.Clazz");
TypeMirror typeMirror2 = TypeUtils.TypeRetrieval.getTypeMirror(Clazz.class);

boolean checkAssignability = TypeUtils.TypeComparison.isAssignableTo(typeMirror1, typeMirror2);

// get all enclosed elements annotated with the Deprecated annotation
List<? extends Element> enclosedElements = ElementUtils.AccessEnclosedElements.getEnclosedElementsWithAllAnnotationsOf(element, Deprecated.class);
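
MessagerUtils can be used in the same static fashion, for example to issue a compilation error bound to an element (the message placeholder style matches the resource file example later in this document):

// issues a compiler error at the given element; ${0}, ${1}, ... are replaced by the passed arguments
MessagerUtils.error(element, "Class '${0}' must not be abstract", element.getSimpleName());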

These are just a few examples of the provided tools. Please check the javadoc for more information.

Characteristic matching, validation and filtering of Elements with core matchers and fluent API

The framework provides a set of core matchers that can be used to check if an Element matches a specific characteristic.

Those core matchers can also be used for validation - validators allow you to check if an element matches none, one, at least one or all of the passed characteristics.

Additionally, the core matchers can be used to filter a List of Elements by specific characteristics.

The framework provides a FluentElementValidator and a FluentElementFilter class that allow you to combine multiple filters and validations via a simple and powerful fluent API.

Please check the following examples:

List<Element> elements = new ArrayList<Element>();

// the validator already prints output, so additional actions are not necessary
FluentElementValidator.createFluentElementValidator(ElementUtils.CastElement.castToTypeElement(element))
    .applyValidator(AptkCoreMatchers.IS_ASSIGNABLE_TO).hasOneOf(SpecificInterface.class)
    .validateAndIssueMessages();

// a Matcher checks for a single criterion
boolean isPublic = AptkCoreMatchers.BY_MODIFIER.getMatcher().checkForMatchingCharacteristic(element, Modifier.PUBLIC);

// a Validator checks for multiple criteria : none of, one of, at least one of or all of
boolean isPublicAndStatic = AptkCoreMatchers.BY_MODIFIER.getValidator().hasAllOf(element, Modifier.PUBLIC, Modifier.STATIC);

// a Filter checks for multiple criteria and returns a List that contains all matching elements
List<Element> isPublicAndStaticElements = AptkCoreMatchers.BY_MODIFIER.getFilter().filterByAllOf(elements, Modifier.PUBLIC, Modifier.STATIC);

// just validates without sending messages
boolean isPublicAndStatic2 = FluentElementValidator.createFluentElementValidator(element)
    .applyValidator(AptkCoreMatchers.BY_MODIFIER).hasAllOf(Modifier.PUBLIC, Modifier.STATIC)
    .justValidate();

// validates and sends messages in case of a failing validation
FluentElementValidator.createFluentElementValidator(element)
    .applyValidator(AptkCoreMatchers.BY_MODIFIER).hasAllOf(Modifier.PUBLIC, Modifier.STATIC)
    .validateAndIssueMessages();

// filters the list by criteria : returns all method Elements that are public and static
List<ExecutableElement> filteredElements = FluentElementFilter.createFluentElementFilter(elements)
    .applyFilter(AptkCoreMatchers.IS_METHOD)
    .applyFilter(AptkCoreMatchers.BY_MODIFIER).filterByAllOf(Modifier.PUBLIC, Modifier.STATIC)
    .getResult();

Template based Java source and resource file creation

Template based creation of Java source and resource files is very simple:

Sample template file

The framework provides a rudimentary templating mechanism which can be used to create resource and Java source files. It supports dynamic text replacement as well as for and if control blocks.

!{if textArray != null}
    !{for text:textArray}
        Dynamic text: ${text}<br />
    !{/for}
!{/if}

Sample code : Resource file creation

String[] textArray = {"A", "B", "C"};

// create the model
Map<String, Object> model = new HashMap<String, Object>();
model.put("textArray", textArray);

// note: 'package' is a reserved keyword and can't be used as a variable name
final String packageName = "io.toolisticon.example";
final String fileName = "generatedExample.txt";

try {
    // the template is loaded as a resource
    SimpleResourceWriter resourceWriter = FilerUtils.createResource(StandardLocation.CLASS_OUTPUT, packageName, fileName);
    resourceWriter.writeTemplate("example.tpl", model);
    resourceWriter.close();
} catch (IOException e) {
    MessagerUtils.error(null, "Example file creation failed for package '${0}' and filename '${1}'", packageName, fileName);
}

Please check template engine for further information.

Alternative way for creating source and resource files

You don't have to use the built-in template library. You can also use any other template library, or even JavaPoet or KotlinPoet, for source and resource file creation. The FilerUtils utility class also provides a way to write plain Strings to source and resource files.
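
As a hedged sketch, you can also fall back to the standard javax.annotation.processing.Filer API to write a String produced by any template engine - the helper class and its names below are hypothetical, while the Filer calls are standard Java API:

import java.io.IOException;
import java.io.Writer;

import javax.annotation.processing.ProcessingEnvironment;
import javax.tools.Diagnostic;
import javax.tools.JavaFileObject;

public class GeneratedSourceWriter {

    // writes the given content as a Java source file for the passed fully qualified class name
    public static void writeSource(ProcessingEnvironment processingEnv, String fqn, String content) {
        try {
            JavaFileObject sourceFile = processingEnv.getFiler().createSourceFile(fqn);
            try (Writer writer = sourceFile.openWriter()) {
                writer.write(content);
            }
        } catch (IOException e) {
            // report the failure via the Messager instead of failing silently
            processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR, "Failed to write " + fqn);
        }
    }
}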

Projects using this toolkit library

  • bean-builder : An annotation processor to generate fluent instance builder classes for bean classes
  • SPIAP : An annotation processor that helps you to generate SPI configuration files and service locator classes
  • FluApiGen : An annotation processor that generates fluent API implementations based on annotated interfaces
  • APTK itself provides some annotation processors based on the APTK for generating wrappers for annotations or compiler messages

Useful links

Compile time testing of annotation processors

  • Toolisticon CUTE : A simple compile testing framework that allows you to test annotation processors. It was extracted from this project and is a great help for unit testing your annotation processor code. It supports both unit and black box testing of processor code.
  • google compile-testing : Another compile testing framework, which was used by this framework in the past. It has some flaws: it lacks compatibility with some Java versions, binds a lot of common 3rd-party libraries, and has almost no documentation.

Contributing

We welcome any kind of suggestions and pull requests.

Building and developing annotation-processor-toolkit

The annotation-processor-toolkit is built using Maven (at least version 3.0.0).

To build the annotation-processor-toolkit on the command line, just run mvn or mvn clean install.

Requirements

The likelihood of a pull request being accepted rises with the following properties:

  • You have used a feature branch.
  • You have included tests that demonstrate the functionality added or fixed.
  • You adhered to the code conventions.

Contributions

  • (2017) Tobias Stamann (Holisticon AG)

License

This project is released under the revised MIT License.


aptk's Issues

Java >= 9 compatibility

The APTK internally uses the google compile test library.

But we are in a kind of dilemma. Only versions <= 0.9 are compatible with Java versions 6 and 7.
But these versions are using the tools.jar and are therefore incompatible with Java >= 9.

We need to replace the library with some custom code. We can use the existing fluent API and just replace the implementation.

Add support for changing classes

Annotation processors usually aren't allowed to alter or overwrite existing classes.

Nevertheless, it's possible to do that by using loopholes in the JDK.
Lombok is a prominent example of that.

So it would be nice to provide such support as well, because it drastically extends the range of things you can do with annotation processors.

Lombok compatibility

By default, execution order of annotation processors is random.

So in some cases we need to check for lombok annotations.

For example, the CoreMatcher.HAS_NOARG_CONSTRUCTOR might detect the implicit no-arg constructor on a class annotated with RequiredArgsConstructor or AllArgsConstructor, leading to a false positive result...

Fix FuentElementValidator

The FuentElementValidator class was accidentally committed and does not work at all.
This must be fixed.

Stop supporting Java 6

Extended support for Java 6 ended in December 2018.
With Java 12 the source and target compatibility with Java 6 was removed.

So I guess it's time to drop support for Java 6.

It also allows the use of the SafeVarargs annotation to get rid of some annoying warning messages when using varargs.

Add transitions to FluentElementFilter

It might be nice to have the possibility to do transitions on FilterElements.

For example you could access all child elements from within the fluent filter.

Transition results must replace the current Element filter base (the internal Element list).

Simplify API by removing FrameworkToolWrapper from API

We should remove the FrameworkToolWrapper from the API to be able to simplify the API drastically.

This can be done by using a ThreadLocal based ToolingProvider class. This must be initialized / cleaned up in the AbstractProcessor base class.

By doing this, most utility classes can then be accessed in a static way.

Enhanced Support for Kotlin

Some things are not working properly in Kotlin

  • The AnnotationWrapper annotation can only be placed on PACKAGES, which can't be annotated in Kotlin => workaround via Class

Better Support of incremental builds by CompilerMessagesProcessor

Some IDEs like IntelliJ only do incremental compilation of changed classes.

With the current compiler message processor implementation this could lead to IDE internal build issues because of missing enum values.
To fix this, the compiler should collect the package names of annotated messages and then scan for all annotations in the processingOver flagged phase. By doing this it would pick up even the precompiled classes from the classpath.

Use spiap in examples

Get rid of manually created META-INF/services files in examples for registering annotation processors to allow annotation processing in annotation processor projects.

This allows us to use annotation processor based tools like lombok.

Enhance documentation

The documentation is kind of outdated since it only describes the usage of the fluent type validator.

There are already a lot more features included in the toolkit which need to be described.

It would be a good opportunity to do that not in the README.md file but to use GitHub Pages with Jekyll instead.

Enable Checkstyle Checks

Base functionality is working right now.
So it's time to polish the code a bit by enabling checkstyle checks and by fixing existing checkstyle violations.

Add support for class creation

The toolkit should provide an API to create classes easily.

This could be done by using some kind of templating engine.

That templating engine should support the following features:

  • control blocks (if, loop)
  • templating processors (insert variable/text, insert block)
  • handle imports, packages and class names

MessagerUtils: Support ValidationMessage as parameter

We need to support ValidationMessage as a parameter to be able to handle message generation based on code and message in a more centralized way.

This has several benefits like

  • code output can be enabled by MessagerUtils.setPrintMessageCodes
  • reduces complexity of Message enums
  • ...

Create utility class for reading AnnotationValues

We need some kind of utility class to ease reading of annotation attribute values.
This can become quite handy if we want to read encapsulated annotations in annotation attributes.
It's also useful to read class (Array) values.

Enhance templating by include command

It would be good to support an include command to provide reusable templates without the need to copy and paste them. It may also help to make template creation more structured.

The command has to take two parameters:

  1. the template resource
  2. the access path that will be used as "model" for template

Need to add support to handle generic types

Handling generic types in classes and interfaces is really tricky. Usually you have to handle multiple TypeElements in the type hierarchy to find the correct type for return types and parameters. Otherwise it's impossible to create a valid implementation of a type.

Add enhanced support for Repeatable annotations

Java 8's repeatable annotations reduce boilerplate code on the user side.

But they complicate writing annotation processors, since you have to process both the annotation marked as repeatable and its wrapper annotation.

It would be great to provide a method in AbstractAnnotationProcessor which allows getting all elements marked with either of the two annotations.

Additionally, there should be an enhancement of the annotation wrappers to access all values.

Switch to Java 8 Source level

Extended support of Java 7 will end soon, so it's definitely time to get rid of it and switch to source level 8.

Additionally, further support of the Java 8 streaming API should be added.
This mainly affects the AptkCoreMatchers and the FluentFilter / FluentValidator APIs.

remove testhelper submodule

The compiletesting framework was created based on the testhelper code.

Therefore the testhelper submodule can be removed when all tests are migrated to use compiletesting framework.

This has also other benefits:

  • compiletesting framework configuration is simpler and far more readable
  • compiletesting framework is quite mature right now - extended support during failing tests,...
  • No more parameterized tests

Records not properly detectable

Although APT creates TypeElement instances for Java records, the wrapper APIs of APTK do not properly handle those as they do not consider the element kind RECORD. Thus, for example, AptkCoreMatchers.IS_TYPE_ELEMENT filters out TypeElement instances created for records.

It would be nice if TypeElementWrapper exposed a getRecordComponents() that would internally defensively inspect whether the APT runs on Java 16 or above (presence of getRecordComponents() on TypeElement).

Does it make sense to rather implement …isTypeElement() on ElementWrapper by doing a type check of the underlying Element rather than inspecting the element kind?

Fix issues related with generic types

The method matchers aren't working 100% correctly when generic parameters are used.

Additionally getting the FQN of a TypeMirror is broken in this case.

Plans for a new release?

I'm using and enjoying this toolkit, but I'd really like to pick up the new TypeMirror helpers, so I can completely drop auto-common. Do you plan on doing an official release soon? Thanks!

Increase variation in tests

Tests need some more variation since there are a lot of special cases for different kinds of types.

This especially affects all atomic types, arrays and classes that are using generics (mainly all types that do not extend Object).

Test coverage might be misleading here, because it doesn't show whether all special cases are handled correctly.
