
resilience4j-spring-boot2-demo's Introduction

Fault tolerance library designed for functional programming

1. Introduction

Resilience4j is a lightweight fault tolerance library designed for functional programming. Resilience4j provides higher-order functions (decorators) to enhance any functional interface, lambda expression or method reference with a Circuit Breaker, Rate Limiter, Retry or Bulkhead. You can stack more than one decorator on any functional interface, lambda expression or method reference. The advantage is that you have the choice to select the decorators you need and nothing else.

Resilience4j 2 requires Java 17.

// Create a CircuitBreaker with default configuration
CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("backendService");

// Create a Retry with default configuration
// 3 retry attempts and a fixed time interval between retries of 500ms
Retry retry = Retry.ofDefaults("backendService");

// Create a Bulkhead with default configuration
Bulkhead bulkhead = Bulkhead.ofDefaults("backendService");

Supplier<String> supplier = () -> backendService
  .doSomething(param1, param2);

// Decorate your call to backendService.doSomething()
// with a Bulkhead, CircuitBreaker and Retry
// note: you will need the resilience4j-all dependency for this
Supplier<String> decoratedSupplier = Decorators.ofSupplier(supplier)
  .withCircuitBreaker(circuitBreaker)
  .withBulkhead(bulkhead)
  .withRetry(retry)
  .decorate();

// Execute the decorated supplier and recover from any exception
String result = Try.ofSupplier(decoratedSupplier)
  .recover(throwable -> "Hello from Recovery").get();

// When you don't want to decorate your lambda expression,
// but just execute it and protect the call by a CircuitBreaker.
String result = circuitBreaker
  .executeSupplier(backendService::doSomething);

// You can also run the supplier asynchronously in a ThreadPoolBulkhead
ThreadPoolBulkhead threadPoolBulkhead = ThreadPoolBulkhead
  .ofDefaults("backendService");

// The Scheduler is needed to schedule a timeout on a non-blocking CompletableFuture
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(3);
TimeLimiter timeLimiter = TimeLimiter.of(Duration.ofSeconds(1));

CompletableFuture<String> future = Decorators.ofSupplier(supplier)
    .withThreadPoolBulkhead(threadPoolBulkhead)
    .withTimeLimiter(timeLimiter, scheduler)
    .withCircuitBreaker(circuitBreaker)
    .withFallback(asList(TimeoutException.class, CallNotPermittedException.class, BulkheadFullException.class),
      throwable -> "Hello from Recovery")
    .get().toCompletableFuture();
ℹ️
With Resilience4j you don’t have to go all-in, you can pick what you need.

3. Overview

Resilience4j provides several core modules:

  • resilience4j-circuitbreaker: Circuit breaking

  • resilience4j-ratelimiter: Rate limiting

  • resilience4j-bulkhead: Bulkheading

  • resilience4j-retry: Automatic retrying (sync and async)

  • resilience4j-timelimiter: Timeout handling

  • resilience4j-cache: Result caching

There are also add-on modules for metrics, Feign, Kotlin, Spring, Ratpack, Vertx, RxJava2 and more.

ℹ️
Find the full list of modules in our User Guide.
💡
For a bundle of all core modules and the Decorators builder, see resilience4j-all.

4. Resilience patterns

  • Retry: repeats failed executions. Many faults are transient and may self-correct after a short delay. (overview, documentation, Spring)

  • Circuit Breaker: temporarily blocks possible failures. When a system is seriously struggling, failing fast is better than making clients wait. (overview, documentation, Feign, Spring)

  • Rate Limiter: limits executions per period. Limit the rate of incoming requests. (overview, documentation, Feign, Spring)

  • Time Limiter: limits the duration of an execution. Beyond a certain wait interval, a successful result is unlikely. (documentation, Spring)

  • Bulkhead: limits concurrent executions. Resources are isolated into pools so that if one fails, the others continue working. (overview, documentation, Spring)

  • Cache: memoizes a successful result. Some proportion of requests may be similar. (documentation)

  • Fallback: provides an alternative result for failures. Things will still fail - plan what you will do when that happens. (Try::recover, Spring, Feign)

The list above is based on Polly's resilience policies.

ℹ️
To find more information about resilience patterns, check the Talks section. Find out more about the components in our User Guide.

5. Spring Boot

Setup and usage in Spring Boot 2 is demonstrated here.

6. Usage examples

6.1. CircuitBreaker, Retry and Fallback

The following example shows how to decorate a lambda expression (Supplier) with a CircuitBreaker and how to retry the call at most 3 times when an exception occurs. You can configure the wait interval between retries and also configure a custom backoff algorithm.

The example uses Vavr's Try monad to recover from an exception and invoke another lambda expression as a fallback when all retries have failed.

// Simulates a Backend Service
public interface BackendService {
    String doSomething();
}

// Create a CircuitBreaker (use default configuration)
CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("backendName");
// Create a Retry with at most 3 retries and a fixed time interval between retries of 500ms
Retry retry = Retry.ofDefaults("backendName");

// Decorate your call to BackendService.doSomething() with a CircuitBreaker
Supplier<String> decoratedSupplier = CircuitBreaker
    .decorateSupplier(circuitBreaker, backendService::doSomething);

// Decorate your call with automatic retry
decoratedSupplier = Retry
    .decorateSupplier(retry, decoratedSupplier);

// Use of Vavr's Try to
// execute the decorated supplier and recover from any exception
String result = Try.ofSupplier(decoratedSupplier)
    .recover(throwable -> "Hello from Recovery").get();

// When you don't want to decorate your lambda expression,
// but just execute it and protect the call by a CircuitBreaker.
String result = circuitBreaker.executeSupplier(backendService::doSomething);
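The custom backoff algorithm mentioned above simply changes the schedule of wait intervals between retries. As a plain-JDK illustration (this is not the Resilience4j IntervalFunction API; the class and method names here are invented for the sketch), an exponential backoff schedule can be computed like this:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: computes the wait intervals an exponential backoff
// policy would produce. Names are invented for this sketch; the real
// backoff support lives in resilience4j-retry (IntervalFunction).
public class BackoffSchedule {

    // Returns the delay (in ms) to wait before each retry attempt.
    static List<Long> delays(long initialMillis, double multiplier, int attempts) {
        List<Long> delays = new ArrayList<>();
        double current = initialMillis;
        for (int i = 0; i < attempts; i++) {
            delays.add((long) current);
            current *= multiplier;  // grow the interval each attempt
        }
        return delays;
    }

    public static void main(String[] args) {
        // 3 retries, 500ms initial interval, doubling each time
        System.out.println(delays(500, 2.0, 3)); // [500, 1000, 2000]
    }
}
```

With an initial interval of 500ms and a multiplier of 2, three retries wait 500ms, 1000ms and 2000ms respectively, instead of the fixed 500ms used by the default configuration.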

6.1.1. CircuitBreaker and RxJava2

The following example shows how to decorate an Observable by using the custom RxJava operator.

CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("testName");
Observable.fromCallable(backendService::doSomething)
    .compose(CircuitBreakerOperator.of(circuitBreaker))
ℹ️
Resilience4j also provides RxJava operators for RateLimiter, Bulkhead, TimeLimiter and Retry. Find out more in our User Guide.

6.1.2. CircuitBreaker and Spring Reactor

The following example shows how to decorate a Mono by using the custom Reactor operator.

CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("testName");
Mono.fromCallable(backendService::doSomething)
    .transformDeferred(CircuitBreakerOperator.of(circuitBreaker))
ℹ️
Resilience4j also provides Reactor operators for RateLimiter, Bulkhead, TimeLimiter and Retry. Find out more in our User Guide.

6.2. RateLimiter

The following example shows how to restrict the calling rate of a method to at most 1 request per second.

// Create a custom RateLimiter configuration
RateLimiterConfig config = RateLimiterConfig.custom()
    .timeoutDuration(Duration.ofMillis(100))
    .limitRefreshPeriod(Duration.ofSeconds(1))
    .limitForPeriod(1)
    .build();
// Create a RateLimiter
RateLimiter rateLimiter = RateLimiter.of("backendName", config);

// Decorate your call to BackendService.doSomething()
Supplier<String> restrictedSupplier = RateLimiter
    .decorateSupplier(rateLimiter, backendService::doSomething);

// First call is successful
Try<String> firstTry = Try.ofSupplier(restrictedSupplier);
assertThat(firstTry.isSuccess()).isTrue();

// Second call fails, because the call was not permitted
Try<String> secondTry = Try.ofSupplier(restrictedSupplier);
assertThat(secondTry.isFailure()).isTrue();
assertThat(secondTry.getCause()).isInstanceOf(RequestNotPermitted.class);
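The limitForPeriod/limitRefreshPeriod semantics can be sketched with plain JDK code: each refresh period starts with limitForPeriod fresh permits, and a call is permitted only while a permit is left in the current window. This is an invented illustration (class and method names are made up), not the Resilience4j implementation:

```java
// Illustration only: a fixed-window permit counter mirroring the
// limitForPeriod / limitRefreshPeriod idea configured above.
// Names are invented for this sketch.
public class SimpleRateLimiter {
    private final int limitForPeriod;
    private final long refreshPeriodNanos;
    private long windowStart;
    private int permitsLeft;

    SimpleRateLimiter(int limitForPeriod, long refreshPeriodNanos) {
        this.limitForPeriod = limitForPeriod;
        this.refreshPeriodNanos = refreshPeriodNanos;
        this.windowStart = System.nanoTime();
        this.permitsLeft = limitForPeriod;
    }

    // Returns true if the call is permitted in the current window.
    synchronized boolean acquirePermission() {
        long now = System.nanoTime();
        if (now - windowStart >= refreshPeriodNanos) {
            windowStart = now;             // a new window has begun:
            permitsLeft = limitForPeriod;  // refresh the permits
        }
        if (permitsLeft > 0) {
            permitsLeft--;
            return true;
        }
        return false;  // the real RateLimiter throws RequestNotPermitted
    }

    public static void main(String[] args) {
        // 1 permit per second, as in the configuration above
        SimpleRateLimiter limiter = new SimpleRateLimiter(1, 1_000_000_000L);
        System.out.println(limiter.acquirePermission()); // true
        System.out.println(limiter.acquirePermission()); // false
    }
}
```

This mirrors the test above: the first call within the window succeeds, the second is not permitted until the period refreshes.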

6.3. Bulkhead

There are two bulkhead implementations, each with its own isolation strategy.

6.3.1. SemaphoreBulkhead

The following example shows how to decorate a lambda expression with a Bulkhead. A Bulkhead can be used to limit the number of parallel executions. This bulkhead abstraction should work well across a variety of threading and I/O models. It is based on a semaphore and, unlike Hystrix, does not provide a "shadow" thread pool option.

// Create a custom Bulkhead configuration
BulkheadConfig config = BulkheadConfig.custom()
    .maxConcurrentCalls(150)
    .maxWaitDuration(Duration.ofMillis(100))
    .build();

Bulkhead bulkhead = Bulkhead.of("backendName", config);

Supplier<String> supplier = Bulkhead
    .decorateSupplier(bulkhead, backendService::doSomething);
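To make the semaphore-based isolation concrete, here is a minimal plain-JDK sketch of the idea. The class name, constructor parameters and the "Bulkhead is full" exception are all invented for this illustration; the real implementation lives in resilience4j-bulkhead:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Illustration only: the core idea behind a semaphore-based bulkhead.
// A fixed number of permits caps concurrent executions; callers wait
// up to maxWaitMillis for a permit, then are rejected.
public class SimpleBulkhead {
    private final Semaphore permits;
    private final long maxWaitMillis;

    SimpleBulkhead(int maxConcurrentCalls, long maxWaitMillis) {
        this.permits = new Semaphore(maxConcurrentCalls);
        this.maxWaitMillis = maxWaitMillis;
    }

    <T> T execute(Supplier<T> supplier) {
        boolean acquired;
        try {
            acquired = permits.tryAcquire(maxWaitMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("Interrupted while waiting for a permit", e);
        }
        if (!acquired) {
            throw new IllegalStateException("Bulkhead is full");
        }
        try {
            return supplier.get();
        } finally {
            permits.release();  // always free the permit
        }
    }

    public static void main(String[] args) {
        SimpleBulkhead bulkhead = new SimpleBulkhead(2, 100);
        System.out.println(bulkhead.execute(() -> "Hello")); // Hello
    }
}
```

Because permits are released in a finally block, a failing call never leaks capacity; the real Bulkhead additionally throws a dedicated BulkheadFullException and emits events.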

6.3.2. ThreadPoolBulkhead

The following example shows how to use a lambda expression with a ThreadPoolBulkhead which uses a bounded queue and a fixed thread pool.

// Create a custom ThreadPoolBulkhead configuration
ThreadPoolBulkheadConfig config = ThreadPoolBulkheadConfig.custom()
    .maxThreadPoolSize(10)
    .coreThreadPoolSize(2)
    .queueCapacity(20)
    .build();

ThreadPoolBulkhead bulkhead = ThreadPoolBulkhead.of("backendName", config);

// Decorate a lambda expression with a ThreadPoolBulkhead, or execute it immediately.
Supplier<CompletionStage<String>> supplier = ThreadPoolBulkhead
    .decorateSupplier(bulkhead, backendService::doSomething);

CompletionStage<String> execution = bulkhead
    .executeSupplier(backendService::doSomething);
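The "fixed thread pool plus bounded queue" isolation can be approximated with the JDK alone: a ThreadPoolExecutor with a bounded queue rejects work once both the pool and the queue are saturated. This is a rough sketch of the idea, not the ThreadPoolBulkhead implementation; the class name and helper method are invented:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustration only: a bounded pool mirroring the ThreadPoolBulkheadConfig
// above (core size 2, max size 10, queue capacity 20). Once all threads
// are busy and the queue is full, further submissions are rejected.
public class BoundedPoolSketch {

    static String runOnPool() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10,                                  // core and max pool size
                60, TimeUnit.SECONDS,                   // idle keep-alive
                new ArrayBlockingQueue<>(20),           // bounded queue
                new ThreadPoolExecutor.AbortPolicy());  // reject when saturated
        try {
            CompletableFuture<String> future =
                    CompletableFuture.supplyAsync(() -> "Hello", pool);
            return future.join();  // block for the async result
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runOnPool()); // prints Hello
    }
}
```

The AbortPolicy rejection plays the same role as BulkheadFullException in the real ThreadPoolBulkhead: it fails fast instead of queueing work without bound.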

7. Consume emitted events

CircuitBreaker, RateLimiter, Cache, Bulkhead, TimeLimiter and Retry components emit a stream of events. These events can be consumed for logging, assertions and any other purpose.

7.1. Examples

A CircuitBreakerEvent can be a state transition, a circuit breaker reset, a successful call, a recorded error or an ignored error. All events contain additional information like the event creation time and the processing duration of the call. If you want to consume events, you have to register an event consumer.

circuitBreaker.getEventPublisher()
    .onSuccess(event -> logger.info(...))
    .onError(event -> logger.info(...))
    .onIgnoredError(event -> logger.info(...))
    .onReset(event -> logger.info(...))
    .onStateTransition(event -> logger.info(...));
// Or if you want to register a consumer listening to all events, you can do:
circuitBreaker.getEventPublisher()
    .onEvent(event -> logger.info(...));

You can use RxJava or Spring Reactor Adapters to convert the EventPublisher into a Reactive Stream. The advantage of a Reactive Stream is that you can use RxJava’s observeOn operator to specify a different Scheduler that the CircuitBreaker will use to send notifications to its observers/consumers.

RxJava2Adapter.toFlowable(circuitBreaker.getEventPublisher())
    .filter(event -> event.getEventType() == Type.ERROR)
    .cast(CircuitBreakerOnErrorEvent.class)
    .subscribe(event -> logger.info(...))
ℹ️
You can also consume events from other components. Find out more in our User Guide.

8. Talks

0:34

Battle of the Circuit Breakers: Resilience4J vs Istio

Nicolas Frankel

GOTO Berlin

0:33

Battle of the Circuit Breakers: Istio vs. Hystrix/Resilience4J

Nicolas Frankel

JFuture

0:42

Resilience patterns in the post-Hystrix world

Tomasz Skowroński

Cloud Native Warsaw

0:52

Building Robust and Resilient Apps Using Spring Boot and Resilience4j

David Caron

SpringOne

0:22

Hystrix is dead, now what?

Tomasz Skowroński

DevoxxPL

9. Companies that use Resilience4j

  • Deutsche Telekom (In an application with over 400 million requests per day)

  • AOL (In an application with low latency requirements)

  • Netpulse (In a system with 40+ integrations)

  • wescale.de (In a B2B integration platform)

  • Topia (In an HR application built with microservices architecture)

  • Auto Trader Group plc (Britain's largest digital automotive marketplace)

  • PlayStation Network (A platform backend)

  • TUI InfoTec GmbH (Backend applications inside of reservation booking workflow streams for accommodations)

10. License

Copyright 2020 Robert Winkler, Bohdan Storozhuk, Mahmoud Romeh, Dan Maas and others

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

resilience4j-spring-boot2-demo's People

Contributors

ackintosh, csessa, dependabot[bot], dkowis, dlsrb6342, hexmind, mkurek, rlitto, robwin, rtnidumolu, silviobuss


resilience4j-spring-boot2-demo's Issues

Error in AbstractIntegrationTest

Getting an error on the line @RunWith(SpringRunner.class).
It says: Type mismatch: cannot convert from Class to Class<? extends Runner>

For JUnit, I am using the below dependency:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.4</version>
    <scope>test</scope>
</dependency>

How to run the demo?

docker-compose -f docker-compose.yml up

The above command doesn't give me access to anything. The localhost links mentioned in the README all lead to not-found pages.

CircuitBreaker metric question

Hello

I am trying to get the same information shown in the /health endpoint without enabling the health indicator. I guess this can be done through metrics.

In the doc at http://resilience4j.github.io/resilience4j/#_circuitbreaker_5 it says:

Metrics are automatically published on the Metrics endpoint.
For example:

{
    "names": [
        "resilience4j.circuitbreaker.backendA.successful",
        "resilience4j.circuitbreaker.backendA.failed",
        "resilience4j.circuitbreaker.backendA.buffered",
        "resilience4j.circuitbreaker.backendA.buffered_max",
        "resilience4j.circuitbreaker.backendA.not_permitted",
        "resilience4j.circuitbreaker.backendB.successful",
        "resilience4j.circuitbreaker.backendB.failed",
        "resilience4j.circuitbreaker.backendB.buffered",
        "resilience4j.circuitbreaker.backendB.buffered_max",
        "resilience4j.circuitbreaker.backendB.not_permitted"
        ]
}

If I run the code and hit http://localhost:9080/actuator/metrics, those names are not present. What am I missing?

Provide AspectJ definitions

It would be very useful to provide AspectJ definitions to support non-runtime weaving. For example, I would like to be able to do the following and do post-compile weaving of my classes. That would allow for much faster execution and better usage flexibility (e.g. being able to weave methods inside the same class).

<plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>aspectj-maven-plugin</artifactId>
                <version>${aspectj-maven-plugin.version}</version>
                <configuration>
                    <complianceLevel>17</complianceLevel>
                    <source>17</source>
                    <target>17</target>
                    <showWeaveInfo>true</showWeaveInfo>
                    <forceAjcCompile>true</forceAjcCompile>
                    <sources/>
                    <testSources/>
                    <weaveDirectories>
                        <weaveDirectory>${project.build.directory}/classes</weaveDirectory>
                    </weaveDirectories>
                    <verbose>true</verbose>
                    <Xlint>warning</Xlint>
                    <aspectLibraries>
                        <aspectLibrary>
                            <groupId>org.springframework</groupId>
                            <artifactId>spring-aspects</artifactId>
                        </aspectLibrary>
                        <aspectLibrary>
                            <groupId>io.github.resilience4j</groupId>
                            <artifactId>resilience4j-spring</artifactId>
                        </aspectLibrary>
                    </aspectLibraries>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>test-compile</goal>
                        </goals>
                    </execution>
                </executions>
                <dependencies>
                    <dependency>
                        <groupId>org.aspectj</groupId>
                        <artifactId>aspectjtools</artifactId>
                        <version>${aspectj.version}</version>
                    </dependency>
                </dependencies>
 </plugin>

Grafana (3000) port not accessible after docker compose.

This may not really be an issue; I love this demo, and thank you for integrating Resilience4j, Prometheus and Grafana. I ran docker-compose up and for some reason I am not able to access Grafana at localhost:3000. Any pointers on why? Here are my logs:

$ docker-compose -f docker-compose.yml up
prometheus is up-to-date
grafana is up-to-date
Attaching to prometheus, grafana
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Starting Grafana" logger=server version=6.4.2 commit=443a0ba branch=HEAD compiled=2019-10-08T09:10:35+0000
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.log.mode=console"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="App mode production" logger=settings
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Initializing SqlStore" logger=server
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Starting DB migration" logger=migrator
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create migration_log table"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create user table"
[... many similar "Executing migration" log lines trimmed ...]
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update api_key table charset"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add expires to api_key table"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v4"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_snapshot_v4 #1"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v5 #2"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_key - v5"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_snapshot to mediumtext v2"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard_snapshot table charset"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external_delete_url to dashboard_snapshots table"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create quota table v1"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_quota_org_id_user_id_target - v1"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update quota table charset"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create plugin_setting table"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column plugin_version to plugin_settings"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update plugin_setting table charset"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create session table"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist table"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist_item table"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist table v2"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist item table v2"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist table charset"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist_item table charset"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v2"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v3"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="create preferences table v3"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update preferences table charset"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column team_id in preferences"
grafana | t=2019-10-14T12:05:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update team_id column values in preferences"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create alert table v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert org_id & id "
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert state"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert dashboard_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Create alert_rule_tag table v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification table v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column is_default"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column frequency"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column send_reminder"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column disable_resolve_message"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification org_id & name"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert table charset"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert_notification table charset"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create notification_journal table v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index notification_journal org_id & alert_id & notifier_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_notification_journal"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification_state table v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add for to alert table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column uid in alert_notification"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid column values in alert_notification"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_notification_org_id_uid"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index org_id_name"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old annotation table v4"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create annotation table v5"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 0 v3"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 1 v3"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 2 v3"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 3 v3"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 4 v3"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Update annotation table charset"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column region_id to annotation table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Drop category_id index"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column tags to annotation table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Create annotation_tag table v2"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert annotations and set TEXT to empty"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add created time to annotation table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add updated time to annotation table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for created in annotation table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for updated in annotation table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Convert existing annotations from seconds to milliseconds"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add epoch_end column"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for epoch_end"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Make epoch_end the same as epoch"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Move region to single row"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create test_data table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_version table v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_version.dashboard_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Set dashboard version to 1 where 0"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="save existing dashboard data in dashboard_version table v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_version.data to mediumtext v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create team table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index team.org_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_org_id_name"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create team member table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_member.org_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_member_org_id_team_id_user_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column email to team table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external to team_member table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add column permission to team_member table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard acl table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_dashboard_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="save default acl rules in dashboard_acl table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create tag table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index tag.key_value"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create login attempt table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index login_attempt.username"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_login_attempt_username - v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create login_attempt v2"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_login_attempt_username - v2"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="copy login_attempt v1 to v2"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="drop login_attempt_tmp_qwerty"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="alter user_auth.auth_id to length 190"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth access token to user_auth"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth refresh token to user_auth"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth token type to user_auth"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth expiry to user_auth"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="Add index to user_id column in user_auth"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create server_lock table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add index server_lock.operation_uid"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth token table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.auth_token"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.prev_auth_token"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="create cache_data table"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index cache_data.cache_key"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Created default admin" logger=sqlstore user=admin
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing HTTPServer" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing InternalMetricsService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing RemoteCache" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing QuotaService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing ServerLockService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing UserAuthTokenService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing PluginManager" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Starting plugin search" logger=plugins
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing RenderingService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing AlertEngine" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing DatasourceCacheService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing HooksService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing LoginService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing SearchService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing TracingService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing UsageStatsService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing CleanUpService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing NotificationService" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing provisioningServiceImpl" logger=server
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Backend rendering via phantomJS" logger=rendering
grafana | t=2019-10-14T12:05:50+0000 lvl=warn msg="phantomJS is deprecated and will be removed in a future release. You should consider migrating from phantomJS to grafana-image-renderer plugin." logger=rendering
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="Initializing Stream Manager"
grafana | t=2019-10-14T12:05:50+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=0.0.0.0:3000 protocol=http subUrl= socket=
prometheus | level=info ts=2019-10-14T12:05:49.520Z caller=main.go:296 msg="no time or size retention was set so using the default time retention" duration=15d
prometheus | level=info ts=2019-10-14T12:05:49.521Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.13.0, branch=HEAD, revision=6ea4252299f542669aca11860abc2192bdc7bede)"
prometheus | level=info ts=2019-10-14T12:05:49.521Z caller=main.go:333 build_context="(go=go1.13.1, user=root@f30bdad2c3fd, date=20191004-11:25:34)"
prometheus | level=info ts=2019-10-14T12:05:49.521Z caller=main.go:334 host_details="(Linux 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 docker-desktop (none))"
prometheus | level=info ts=2019-10-14T12:05:49.521Z caller=main.go:335 fd_limits="(soft=1048576, hard=1048576)"
prometheus | level=info ts=2019-10-14T12:05:49.521Z caller=main.go:336 vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | level=info ts=2019-10-14T12:05:49.525Z caller=main.go:657 msg="Starting TSDB ..."
prometheus | level=info ts=2019-10-14T12:05:49.525Z caller=web.go:450 component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus | level=info ts=2019-10-14T12:05:49.532Z caller=head.go:512 component=tsdb msg="replaying WAL, this may take awhile"
prometheus | level=info ts=2019-10-14T12:05:49.533Z caller=head.go:560 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
prometheus | level=info ts=2019-10-14T12:05:49.535Z caller=main.go:672 fs_type=EXT4_SUPER_MAGIC
prometheus | level=info ts=2019-10-14T12:05:49.535Z caller=main.go:673 msg="TSDB started"
prometheus | level=info ts=2019-10-14T12:05:49.535Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | level=info ts=2019-10-14T12:05:49.554Z caller=main.go:771 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | level=info ts=2019-10-14T12:05:49.554Z caller=main.go:626 msg="Server is ready to receive web requests."
prometheus | level=info ts=2019-10-14T12:12:45.589Z caller=main.go:296 msg="no time or size retention was set so using the default time retention" duration=15d
prometheus | level=info ts=2019-10-14T12:12:45.589Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.13.0, branch=HEAD, revision=6ea4252299f542669aca11860abc2192bdc7bede)"
prometheus | level=info ts=2019-10-14T12:12:45.589Z caller=main.go:333 build_context="(go=go1.13.1, user=root@f30bdad2c3fd, date=20191004-11:25:34)"
prometheus | level=info ts=2019-10-14T12:12:45.589Z caller=main.go:334 host_details="(Linux 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 docker-desktop (none))"
prometheus | level=info ts=2019-10-14T12:12:45.590Z caller=main.go:335 fd_limits="(soft=1048576, hard=1048576)"
prometheus | level=info ts=2019-10-14T12:12:45.590Z caller=main.go:336 vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | level=info ts=2019-10-14T12:12:45.593Z caller=main.go:657 msg="Starting TSDB ..."
prometheus | level=info ts=2019-10-14T12:12:45.593Z caller=web.go:450 component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus | level=info ts=2019-10-14T12:12:45.597Z caller=head.go:512 component=tsdb msg="replaying WAL, this may take awhile"
prometheus | level=info ts=2019-10-14T12:12:45.605Z caller=head.go:560 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1
prometheus | level=info ts=2019-10-14T12:12:45.605Z caller=head.go:560 component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1
prometheus | level=info ts=2019-10-14T12:12:45.607Z caller=main.go:672 fs_type=EXT4_SUPER_MAGIC
prometheus | level=info ts=2019-10-14T12:12:45.607Z caller=main.go:673 msg="TSDB started"
prometheus | level=info ts=2019-10-14T12:12:45.607Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | level=info ts=2019-10-14T12:12:45.706Z caller=main.go:771 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | level=info ts=2019-10-14T12:12:45.707Z caller=main.go:626 msg="Server is ready to receive web requests."
prometheus | level=info ts=2019-10-14T12:14:05.027Z caller=main.go:296 msg="no time or size retention was set so using the default time retention" duration=15d
prometheus | level=info ts=2019-10-14T12:14:05.027Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.13.0, branch=HEAD, revision=6ea4252299f542669aca11860abc2192bdc7bede)"
prometheus | level=info ts=2019-10-14T12:14:05.028Z caller=main.go:333 build_context="(go=go1.13.1, user=root@f30bdad2c3fd, date=20191004-11:25:34)"
prometheus | level=info ts=2019-10-14T12:14:05.028Z caller=main.go:334 host_details="(Linux 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 docker-desktop (none))"
prometheus | level=info ts=2019-10-14T12:14:05.028Z caller=main.go:335 fd_limits="(soft=1048576, hard=1048576)"
prometheus | level=info ts=2019-10-14T12:14:05.028Z caller=main.go:336 vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | level=info ts=2019-10-14T12:14:05.031Z caller=main.go:657 msg="Starting TSDB ..."
prometheus | level=info ts=2019-10-14T12:14:05.031Z caller=web.go:450 component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus | level=info ts=2019-10-14T12:14:05.035Z caller=head.go:512 component=tsdb msg="replaying WAL, this may take awhile"
prometheus | level=info ts=2019-10-14T12:14:05.043Z caller=head.go:560 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=2
prometheus | level=info ts=2019-10-14T12:14:05.044Z caller=head.go:560 component=tsdb msg="WAL segment loaded" segment=1 maxSegment=2
prometheus | level=info ts=2019-10-14T12:14:05.044Z caller=head.go:560 component=tsdb msg="WAL segment loaded" segment=2 maxSegment=2
prometheus | level=info ts=2019-10-14T12:14:05.046Z caller=main.go:672 fs_type=EXT4_SUPER_MAGIC
prometheus | level=info ts=2019-10-14T12:14:05.047Z caller=main.go:673 msg="TSDB started"
prometheus | level=info ts=2019-10-14T12:14:05.047Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | level=info ts=2019-10-14T12:14:05.169Z caller=main.go:771 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | level=info ts=2019-10-14T12:14:05.170Z caller=main.go:626 msg="Server is ready to receive web requests."
grafana | t=2019-10-14T12:14:04+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=0.0.0.0:3000 protocol=http subUrl= socket=

Query with instance in Prometheus

Hello Resilience4j :)

I got curious when I tried to build a monitoring system with Prometheus.

Is it possible to query by instance in Prometheus?

sum(resilience4j_circuitbreaker_state{instance="$instance", state="closed"})

I can see some queries using instance in the dashboard.json file, but I cannot find any data using instance in Prometheus.

# HELP resilience4j_circuitbreaker_state The states of the circuit breaker
# TYPE resilience4j_circuitbreaker_state gauge
resilience4j_circuitbreaker_state{application="resilience4j-jc",name="testCircuitBreaker",state="disabled",} 0.0
resilience4j_circuitbreaker_state{application="resilience4j-jc",name="testCircuitBreaker",state="closed",} 1.0
resilience4j_circuitbreaker_state{application="resilience4j-jc",name="testCircuitBreaker",state="open",} 0.0
resilience4j_circuitbreaker_state{application="resilience4j-jc",name="testCircuitBreaker",state="half_open",} 0.0
resilience4j_circuitbreaker_state{application="resilience4j-jc",name="testCircuitBreaker",state="forced_open",} 0.0
resilience4j_circuitbreaker_state{application="resilience4j-jc",name="testCircuitBreaker",state="metrics_only",} 0.0

Is there any option to use instance?
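For context, the `instance` label is not exported by the application at all; Prometheus attaches it automatically from each scrape target's host and port. A minimal sketch of a scrape configuration (the job name, metrics path, and target below are illustrative assumptions, not taken from this repo's config):

```yaml
scrape_configs:
  - job_name: resilience4j-demo         # becomes the `job` label on every series
    metrics_path: /actuator/prometheus  # Spring Boot Actuator Prometheus endpoint
    static_configs:
      - targets:
          - resilience4j-demo:9080      # becomes the `instance` label
```

With a config like this, a query such as sum(resilience4j_circuitbreaker_state{instance="resilience4j-demo:9080", state="closed"}) should match, provided the target is actually being scraped.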

Resilience4j metrics are not being added to the Prometheus metrics

@RobWin I tried the following example; the only difference is in the docker-compose configuration:

    ports:
      - 9090:9090
    #network_mode: "host"

I see the Resilience4j metrics in actuator/metrics; however, I could not find them in Prometheus. Could you please let me know what is missing?

(screenshot omitted)

Here is what I see in Prometheus metrics

(screenshot omitted)

Have an example using pom.xml?

I am using pom.xml

How can I execute the fallback method via annotation?

Do you have an example without using application.yml?

Is it possible?
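For what it's worth, the instance-level settings can also be built programmatically instead of via application.yml. Below is a minimal sketch (the name "backendA" and the threshold values are arbitrary examples; annotation-based fallbacks still require the Spring AOP setup from resilience4j-spring-boot2):

```java
import java.time.Duration;

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

public class ProgrammaticCircuitBreakerConfig {
    public static void main(String[] args) {
        // The same knobs application.yml would set, expressed in code
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .slidingWindowSize(10)
                .minimumNumberOfCalls(2)
                .failureRateThreshold(30)
                .waitDurationInOpenState(Duration.ofSeconds(10))
                .build();

        // In a Spring app, expose the registry as a @Bean instead of creating it inline
        CircuitBreakerRegistry registry = CircuitBreakerRegistry.of(config);
        CircuitBreaker circuitBreaker = registry.circuitBreaker("backendA");

        System.out.println(circuitBreaker.getState()); // a new breaker starts CLOSED
    }
}
```

In a Spring Boot application the registry bean replaces the YAML-driven auto-configuration for the instances you create this way.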

Getting the issue: java.lang.NoSuchFieldError: entryMap

I just downloaded this application and was able to run it locally.

But when I try to fetch the health of the application at http://localhost:9080/actuator/health:

2020-08-05 21:34:48.449 ERROR 2468 --- [nio-9080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Handler dispatch failed; nested exception is java.lang.NoSuchFieldError: entryMap] with root cause

java.lang.NoSuchFieldError: entryMap
at io.github.resilience4j.circuitbreaker.internal.InMemoryCircuitBreakerRegistry.getAllCircuitBreakers(InMemoryCircuitBreakerRegistry.java:127) ~[resilience4j-circuitbreaker-1.3.1.jar:1.3.1]
at io.github.resilience4j.circuitbreaker.monitoring.health.CircuitBreakersHealthIndicator.health(CircuitBreakersHealthIndicator.java:107) ~[resilience4j-spring-boot2-1.4.0.jar:1.4.0]
at org.springframework.boot.actuate.health.HealthIndicator.getHealth(HealthIndicator.java:37) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointWebExtension.getHealth(HealthEndpointWebExtension.java:95) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointWebExtension.getHealth(HealthEndpointWebExtension.java:43) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointSupport.getContribution(HealthEndpointSupport.java:108) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointSupport.getAggregateHealth(HealthEndpointSupport.java:119) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointSupport.getContribution(HealthEndpointSupport.java:105) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointSupport.getHealth(HealthEndpointSupport.java:83) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointSupport.getHealth(HealthEndpointSupport.java:70) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointWebExtension.health(HealthEndpointWebExtension.java:81) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointWebExtension.health(HealthEndpointWebExtension.java:70) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191]
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:282) ~[spring-core-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.boot.actuate.endpoint.invoke.reflect.ReflectiveOperationInvoker.invoke(ReflectiveOperationInvoker.java:77) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.endpoint.annotation.AbstractDiscoveredOperation.invoke(AbstractDiscoveredOperation.java:60) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping$ServletWebOperationAdapter.handle(AbstractWebMvcEndpointHandlerMapping.java:305) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping$OperationHandler.handle(AbstractWebMvcEndpointHandlerMapping.java:388) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) ~[spring-web-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) ~[spring-web-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:106) ~[spring-webmvc-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:888) ~[spring-webmvc-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:793) ~[spring-webmvc-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040) ~[spring-webmvc-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943) ~[spring-webmvc-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) ~[spring-webmvc-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:741) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.30.jar:9.0.30]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:108) ~[spring-boot-actuator-2.2.4.RELEASE.jar:2.2.4.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) ~[tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:367) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:860) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1598) [tomcat-embed-core-9.0.30.jar:9.0.30]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.30.jar:9.0.30]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_191]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.30.jar:9.0.30]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]
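For anyone hitting the same error: the stack trace above shows resilience4j-circuitbreaker-1.3.1 and resilience4j-spring-boot2-1.4.0 on the classpath at the same time, and mixing module versions like this is a classic cause of NoSuchFieldError. One way to keep the modules aligned in Maven is to import the Resilience4j BOM (a sketch; the version number below is only an example):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pins every io.github.resilience4j module to the same version -->
    <dependency>
      <groupId>io.github.resilience4j</groupId>
      <artifactId>resilience4j-bom</artifactId>
      <version>1.4.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```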

dashboard.json from /docker is not pulling data

I think that, with respect to recent changes,
dashboard.json from /docker is not updated. Because of that, most of the variables are not being pulled into the dashboard.

(screenshots omitted)

Error: Cannot read property 'result' of undefined

Hope this helps to update dashboard.json.
Thanks!

CircuitBreaker with Retry

Hi,

I want to define a circuit breaker with retry for a REST call.
My config:

resilience4j.circuitbreaker:
    configs:
        default:
            registerHealthIndicator: false
            slidingWindowSize: 10
            minimumNumberOfCalls: 2
            permittedNumberOfCallsInHalfOpenState: 3
            automaticTransitionFromOpenToHalfOpenEnabled: true
            waitDurationInOpenState: 10s
            slowCallDurationThreshold: 10
            slowCallRateThreshold: 30
    instances:
        restCallCB:
            baseConfig: default
            
resilience4j.retry:
    configs:
        default:
            maxRetryAttempts: 2
            waitDuration: 1000
    instances:
        restCallRE:
            baseConfig: default

I use this spring annotation:

@CircuitBreaker(name = "restCallCB")
@Retry(name = "restCallRE", fallbackMethod = "fallbackRetry")
public String callRest(...)

I use a wrong URL to test the REST call.
Problem: the retry is not executed and the circuit breaker is always closed.

If I define a fallback method for the circuit breaker, the first test calls this fallback method and the retry is not executed. Without the circuit breaker, the retry works fine.

Could you please help me find the right config?

Thanks,
Maalej
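A note on ordering that may be relevant here: when @CircuitBreaker and @Retry are stacked on the same method, which annotation ends up wrapping the other is controlled by aspect-order properties in the resilience4j-spring-boot2 module. The sketch below is an assumption to verify against the version in use (both the property names and which relative value means "outer"):

```yaml
# Sketch only - verify property names and ordering semantics for your version;
# the relative values of these two properties control the nesting of the aspects
resilience4j.circuitbreaker:
    circuitBreakerAspectOrder: 1
resilience4j.retry:
    retryAspectOrder: 2
```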

Broken tests (Java 11)

Hi,

Several tests are broken when running this demo with Java 11 (LTS) and Gradle 6.6.

(screenshot omitted)

Debugging the code, the problem is due to the test assertions on the generated metric:

Generated metric (application):
resilience4j_retry_calls_total{application="resilience4j-demo",kind="failed_with_retry",name="backendB",} 1.0

Assertion of a retry test (for example, ReactiveRetryTest#backendBshouldRetryThreeTimes#checkMetrics):
resilience4j_retry_calls_total{application="resilience4j-demo",kind="failed_with_retry",name="backendB",} 1,0

The comma inserted by String.format("%.1f", count) in line 36 of AbstractRetryTest to do the validations is the problem in this case; to fix it, I just replaced this String.format with the plain count value.

I saw that this was added by @dlsrb6342 for "Fix failed tests" on 23/04/2020. Should I do a PR?
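The failure described is consistent with String.format using the JVM's default locale, where the decimal separator can be a comma. A small self-contained illustration (not code from this repo):

```java
import java.util.Locale;

public class LocaleFormatDemo {
    public static void main(String[] args) {
        double count = 1.0;
        // Locale-sensitive: the decimal separator depends on the locale used
        String german = String.format(Locale.GERMANY, "%.1f", count);
        String root = String.format(Locale.ROOT, "%.1f", count);
        System.out.println(german); // 1,0
        System.out.println(root);   // 1.0
    }
}
```

Dropping the String.format (or passing an explicit Locale.ROOT) makes the assertion independent of the machine's locale.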

Defining state in application.yml

Hi

Is there a way to specify the state of a circuit breaker in application.yml? I wanted to deploy the circuit breakers in a monitoring-only state before enabling them after a couple of days.

Thanks,
Lawrence

registerHealthIndicator defined in CB default is not read by the backend cfg

When using the following config:

resilience4j.circuitbreaker:
  configs:
    default:
      registerHealthIndicator: false
      ringBufferSizeInClosedState: 10
      ringBufferSizeInHalfOpenState: 3
      automaticTransitionFromOpenToHalfOpenEnabled: true
      waitDurationInOpenState: 10
      failureRateThreshold: 30
      eventConsumerBufferSize: 10
  backends:
    backendA:
      baseConfig: default

backendACircuitBreaker is shown in the /health endpoint when it shouldn't be.

To make it work, I ended up moving registerHealthIndicator: false to the backend config:

backends:
    backendA:
      baseConfig: default
      registerHealthIndicator: false

and backendACircuitBreaker disappeared

Odd Gradle issue bringing in an old version of the spring-boot2 dependency.

I am trying to use io.github.resilience4j:resilience4j-spring-boot2:1.7.0 in my Gradle (Kotlin) based build to configure Resilience4j. However, I keep getting NoClassDefFoundError on things that appear to be classes internal to the library. When running a dependency tree, I noticed that the spring-boot2 library seems to be pulling in other Resilience4j dependencies at version 1.6.1 instead of 1.7.0.

Note that I do have the spring-cloud BOM configured, and I'm using spring-cloud-loadbalancer and spring-cloud-open-feign from there as well; not sure if that matters.

+--- io.github.resilience4j:resilience4j-spring-boot2:1.7.0
|    +--- io.vavr:vavr:0.10.2
|    |    \--- io.vavr:vavr-match:0.10.2
|    +--- org.slf4j:slf4j-api:1.7.30
|    +--- io.github.resilience4j:resilience4j-spring:1.7.0 -> 1.6.1
|    |    +--- io.vavr:vavr:0.10.2 (*)
|    |    +--- org.slf4j:slf4j-api:1.7.30
|    |    +--- io.github.resilience4j:resilience4j-annotations:1.6.1
|    |    |    +--- io.vavr:vavr:0.10.2 (*)
|    |    |    \--- org.slf4j:slf4j-api:1.7.30
|    |    +--- io.github.resilience4j:resilience4j-consumer:1.6.1
|    |    |    +--- io.vavr:vavr:0.10.2 (*)
|    |    |    +--- org.slf4j:slf4j-api:1.7.30
|    |    |    +--- io.github.resilience4j:resilience4j-core:1.6.1
|    |    |    |    +--- io.vavr:vavr:0.10.2 (*)
|    |    |    |    \--- org.slf4j:slf4j-api:1.7.30
|    |    |    \--- io.github.resilience4j:resilience4j-circularbuffer:1.6.1
|    |    |         +--- io.vavr:vavr:0.10.2 (*)
|    |    |         \--- org.slf4j:slf4j-api:1.7.30
|    |    \--- io.github.resilience4j:resilience4j-framework-common:1.6.1
|    |         +--- io.vavr:vavr:0.10.2 (*)
|    |         +--- org.slf4j:slf4j-api:1.7.30
|    |         +--- io.github.resilience4j:resilience4j-core:1.6.1 (*)
|    |         +--- io.github.resilience4j:resilience4j-circuitbreaker:1.6.1
|    |         |    +--- io.vavr:vavr:0.10.2 (*)
|    |         |    +--- org.slf4j:slf4j-api:1.7.30
|    |         |    \--- io.github.resilience4j:resilience4j-core:1.6.1 (*)
|    |         +--- io.github.resilience4j:resilience4j-ratelimiter:1.6.1
|    |         |    +--- io.vavr:vavr:0.10.2 (*)
|    |         |    +--- org.slf4j:slf4j-api:1.7.30
|    |         |    \--- io.github.resilience4j:resilience4j-core:1.6.1 (*)
|    |         +--- io.github.resilience4j:resilience4j-retry:1.6.1
|    |         |    +--- io.vavr:vavr:0.10.2 (*)
|    |         |    +--- org.slf4j:slf4j-api:1.7.30
|    |         |    \--- io.github.resilience4j:resilience4j-core:1.6.1 (*)
|    |         +--- io.github.resilience4j:resilience4j-bulkhead:1.6.1
|    |         |    +--- io.vavr:vavr:0.10.2 (*)
|    |         |    +--- org.slf4j:slf4j-api:1.7.30
|    |         |    \--- io.github.resilience4j:resilience4j-core:1.6.1 (*)
|    |         \--- io.github.resilience4j:resilience4j-timelimiter:1.6.1
|    |              +--- io.vavr:vavr:0.10.2 (*)
|    |              +--- org.slf4j:slf4j-api:1.7.30
|    |              \--- io.github.resilience4j:resilience4j-core:1.6.1 (*)
|    \--- io.github.resilience4j:resilience4j-micrometer:1.7.0 -> 1.6.1
|         +--- io.vavr:vavr:0.10.2 (*)
|         \--- org.slf4j:slf4j-api:1.7.30

Any help deciphering why this is happening is much appreciated!
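A plausible explanation (an assumption, since the full build script isn't shown) is that the Spring Cloud BOM's dependency management pins the transitive Resilience4j modules to 1.6.1, overriding the declared 1.7.0. A hedged workaround in the Gradle Kotlin DSL is to force the whole group to one version:

```kotlin
// build.gradle.kts - workaround sketch, assuming a BOM is downgrading transitives
configurations.all {
    resolutionStrategy.eachDependency {
        // Force every Resilience4j module to the explicitly declared version
        if (requested.group == "io.github.resilience4j") {
            useVersion("1.7.0")
        }
    }
}
```

Running the dependency-tree report again after this change should show all io.github.resilience4j modules at 1.7.0.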

Broken tests - CircuitBreakerTest and ReactiveCircuitBreakerTest

Hi @RobWin, all the circuit breaker tests have failed (CircuitBreakerTest and ReactiveCircuitBreakerTest)

(screenshot omitted)

It looks like the circuit is not respecting the waitDurationInOpenState property.

So when these tests try to access the success endpoint, the error "CircuitBreaker 'backendA' is OPEN ..." occurs.

Do you know what it can be?

Log: Reactor and RxJava2

Hi,

I see these two entries in the log:
RxJava2 related Aspect extensions are not activated, because RxJava2 is not on the classpath.
Reactor related Aspect extensions are not activated because Reactor is not on the classpath.

I don't know what I'm doing wrong!

I have a Maven project with this dependency:

    <dependency>
        <groupId>io.github.resilience4j</groupId>
        <artifactId>resilience4j-spring-boot2</artifactId>
        <version>1.3.1</version>
    </dependency>

I use Java 1.8 and Spring Boot version 2.2.6.

Thanks,
Maalej

NoClassDefFoundError - Caused by: java.lang.ClassNotFoundException: io.github.resilience4j.spelresolver.SpelResolver

Hi, I am on Spring Boot v2.3.4 and getting the below error while the app boots. Please advise.

resilience4jVersion = '1.6.1'

compile("io.github.resilience4j:resilience4j-spring-boot2:${resilience4jVersion}")
compile("io.github.resilience4j:resilience4j-all:${resilience4jVersion}") // Optional, only required when you want to use the Decorators class
compile("io.github.resilience4j:resilience4j-reactor:${resilience4jVersion}")
compile("io.micrometer:micrometer-registry-prometheus")


ERROR org.springframework.boot.SpringApplication java.lang.IllegalStateException: Error processing condition on io.github.resilience4j.spelresolver.autoconfigure.SpelResolverConfigurationOnMissingBean.spelExpressionParser
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:60)
Application run failed
java.lang.IllegalStateException: Error processing condition on io.github.resilience4j.spelresolver.autoconfigure.SpelResolverConfigurationOnMissingBean.spelExpressionParser
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:60)
at org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:108)
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForBeanMethod(ConfigurationClassBeanDefinitionReader.java:184)
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:144)
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:120)
at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:331)
at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:236)
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:280)
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:96)
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:707)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:533)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:143)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:758)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:750)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315)

Caused by: java.lang.IllegalStateException: Failed to introspect Class [io.github.resilience4j.spelresolver.autoconfigure.SpelResolverConfigurationOnMissingBean] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:481)
at org.springframework.util.ReflectionUtils.doWithMethods(ReflectionUtils.java:358)
at org.springframework.util.ReflectionUtils.getUniqueDeclaredMethods(ReflectionUtils.java:414)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.lambda$getTypeForFactoryMethod$2(AbstractAutowireCapableBeanFactory.java:742)
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1688)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryMethod(AbstractAutowireCapableBeanFactory.java:741)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.determineTargetType(AbstractAutowireCapableBeanFactory.java:680)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.predictBeanType(AbstractAutowireCapableBeanFactory.java:648)
at org.springframework.beans.factory.support.AbstractBeanFactory.isFactoryBean(AbstractBeanFactory.java:1614)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doGetBeanNamesForType(DefaultListableBeanFactory.java:523)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:495)
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.collectBeanNamesForType(OnBeanCondition.java:238)
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getBeanNamesForType(OnBeanCondition.java:231)
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getBeanNamesForType(OnBeanCondition.java:221)
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getMatchingBeans(OnBeanCondition.java:169)
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getMatchOutcome(OnBeanCondition.java:144)
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:47)
... 16 more
Caused by: java.lang.NoClassDefFoundError: io/github/resilience4j/spelresolver/SpelResolver
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.getDeclaredMethods(Class.java:1975)
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:463)
... 32 more
Caused by: java.lang.ClassNotFoundException: io.github.resilience4j.spelresolver.SpelResolver
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 36 more

how to check thread-pool-bulkhead health?

I have configured this in the YAML file, but it has no effect when I make 20 concurrent calls with JMeter.

How can I check this issue?

resilience4j.thread-pool-bulkhead:
    instances:
      default:
        max-thread-pool-size: 1
        core-thread-pool-size: 1
        queue-capacity: 1
      singleAnalysisThreadPoolBulkhead:
        max-thread-pool-size: 1
        core-thread-pool-size: 1
        queue-capacity: 1
        keep-alive-duration: 1ms
        writable-stack-trace-enabled: true
      batchAnalysisThreadPoolBulkhead:
        max-thread-pool-size: 5
        core-thread-pool-size: 5
        queue-capacity: 10
        keep-alive-duration: 20ms
        writable-stack-trace-enabled: true

The calling method:

    @Bulkhead(name = "singleAnalysisThreadPoolBulkhead", type = Bulkhead.Type.THREADPOOL, fallbackMethod = "fallbackForThreadPoolBulkhead")
    public VueElementAdminResponse singleAnalysis(HxConsistencyAnalysisParam param) {
        try {
            Stopwatch stopwatch = Stopwatch.createStarted();
            VueElementAdminResponse vueElementAdminResponse = analysis(param);
            stopwatch.stop();
            logger.warn("cost_time:{}", stopwatch);
            return vueElementAdminResponse;
        } catch (RejectedExecutionException e) {
            VueElementAdminResponse vueElementAdminResponse = new VueElementAdminResponse();
            vueElementAdminResponse.setCode(ResponseCodeEnum.THREAD_POOL_OF_CALL.getCode());
            vueElementAdminResponse.setMessage("thread pool is full");
            vueElementAdminResponse.setData(e.getMessage());
            return vueElementAdminResponse;
        } catch (Exception e) {
            VueElementAdminResponse vueElementAdminResponse = new VueElementAdminResponse();
            vueElementAdminResponse.setCode(ResponseCodeEnum.OTHER_SERVER_EXCEPTION.getCode());
            vueElementAdminResponse.setMessage("call exception");
            vueElementAdminResponse.setData(e);
            return vueElementAdminResponse;
        }
    }

    public VueElementAdminResponse rateLimiterFallback(HxConsistencyAnalysisParam param, Throwable throwable) {
        return VueElementAdminResponse.builder()
                .code(ResponseCodeEnum.REQUEST_FREQUENCY_OF_CALL.getCode())
                .message(ResponseCodeEnum.REQUEST_FREQUENCY_OF_CALL.getMessage())
                .data(param)
                .build();
    }

    public VueElementAdminResponse fallbackForThreadPoolBulkhead(HxConsistencyAnalysisParam param, Throwable throwable) {
        return VueElementAdminResponse.builder()
                .code(ResponseCodeEnum.THREAD_POOL_OF_CALL.getCode())
                .message(ResponseCodeEnum.THREAD_POOL_OF_CALL.getMessage())
                .data(param)
                .build();
    }
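One way to observe whether the thread-pool bulkhead settings were actually picked up is to read the live metrics from the registry at runtime. The sketch below assumes the `ThreadPoolBulkheadRegistry` bean that the Spring Boot starter auto-configures and the `Metrics` accessors as documented; the method name is illustrative, not part of the demo:

```java
import io.github.resilience4j.bulkhead.ThreadPoolBulkhead;
import io.github.resilience4j.bulkhead.ThreadPoolBulkheadRegistry;

public class BulkheadInspector {

    // Sketch: print the live metrics of a thread-pool bulkhead to verify
    // that the YAML configuration was applied to the running instance.
    public static void logBulkheadMetrics(ThreadPoolBulkheadRegistry registry) {
        ThreadPoolBulkhead bulkhead = registry.bulkhead("singleAnalysisThreadPoolBulkhead");
        ThreadPoolBulkhead.Metrics metrics = bulkhead.getMetrics();
        System.out.println("coreThreadPoolSize=" + metrics.getCoreThreadPoolSize());
        System.out.println("maxThreadPoolSize=" + metrics.getMaximumThreadPoolSize());
        System.out.println("queueCapacity=" + metrics.getQueueCapacity());
        System.out.println("queueDepth=" + metrics.getQueueDepth());
    }
}
```

Note also that, per the Resilience4j documentation, `Bulkhead.Type.THREADPOOL` is designed for methods that return a `CompletionStage`/`CompletableFuture`; a synchronous return type like the one above may be why the limit appears to have no effect.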

Getting an error: an upgrade to at least Gradle 6.8 is needed

Hi there,

I can only run your Docker Compose demo if I make some changes: the application port to 9081 in docker-compose-desktop.yml (the application is configured to run on 9081, not 9080 as written there) and the parent image to gradle:6.8.3-jdk-openj9. I tried to open a pull request for you, but I can't. Maybe I could fork the project, make the change, and then open a pull request that way, but I think it's probably better to leave this to you, to keep it simple.

This is what I was getting:

Welcome to Gradle 5.4!

Here are the highlights of this release:
 - Run builds with JDK12
 - New API for Incremental Tasks
 - Updates to native projects, including Swift 5 support

For more details see https://docs.gradle.org/5.4/release-notes.html

Starting a Gradle Daemon (subsequent builds will be faster)

FAILURE: Build failed with an exception.

* Where:
Build file '/code/build.gradle' line: 15

* What went wrong:
A problem occurred evaluating root project 'code'.
> Failed to apply plugin [id 'org.springframework.boot']
   > Spring Boot plugin requires Gradle 6.8.x, 6.9.x, or 7.x. The current version is Gradle 5.4

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 6s
ERROR: Service 'app' failed to build: The command '/bin/sh -c gradle clean build -x test' returned a non-zero code: 1

It would be better to make this update so that people don't run into this in the future. Btw, I think this is a great project! 👍

How to keep different CircuitBreakerConfiguration for multiple circuit breakers?

I have created a @Bean of type CircuitBreakerRegistry and have annotated four different methods with the @CircuitBreaker annotation. All have different names.

[Two screenshots of the annotated methods, taken 2022-04-21]

I want to keep four different CircuitBreakerConfigs in the registry, one for each circuit breaker.

How can I achieve this?

Also, I want to log every event on these four CircuitBreakers. I know we can achieve that using:
registry.circuitBreaker(name).getEventPublisher().onEvent(logger)

But how do I get hold of references to those four circuit breakers?

@RobWin
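For reference, one way to register several named configurations in a single registry and attach an event logger is sketched below. The config names and threshold values are illustrative, not taken from the demo:

```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

import java.time.Duration;

public class MultiConfigExample {
    public static void main(String[] args) {
        CircuitBreakerRegistry registry = CircuitBreakerRegistry.ofDefaults();

        // Register a named configuration per group of backends
        registry.addConfiguration("strict", CircuitBreakerConfig.custom()
                .failureRateThreshold(20)
                .waitDurationInOpenState(Duration.ofSeconds(30))
                .build());
        registry.addConfiguration("lenient", CircuitBreakerConfig.custom()
                .failureRateThreshold(70)
                .build());

        // Create (or look up) circuit breakers bound to a named configuration
        CircuitBreaker backendA = registry.circuitBreaker("backendA", "strict");
        CircuitBreaker backendB = registry.circuitBreaker("backendB", "lenient");

        // The registry returns the same instance for the same name, so event
        // listeners can be attached anywhere the registry is available
        backendA.getEventPublisher().onEvent(event -> System.out.println(event));
        backendB.getEventPublisher().onEvent(event -> System.out.println(event));
    }
}
```

With the Spring Boot starter, the @CircuitBreaker annotation resolves instances from the auto-configured CircuitBreakerRegistry bean, so injecting that bean and calling `registry.circuitBreaker(name)` should hand back the very instances used by the annotated methods.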

TimeLimiter Configuration wrong in application.yml

In the file src/main/resources/application.yml, the following TimeLimiter configuration is present:

resilience4j:
    configs:
        default:
            cancelRunningFuture: false
            timeoutDuration: 2s

According to the documentation, this config should be:

resilience4j.timelimiter:
    configs:
        default:
            cancelRunningFuture: false
            timeoutDuration: 2s

How can I get the current Circuit Breaker when using the Circuit Breaker annotation?

I tried this to get and reset the circuit breaker:
(https://resilience4j.readme.io/docs/examples -> Reset CircuitBreaker)

   CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults(BACKEND_A);
   circuitBreaker.reset();

And this:

  • This returns an empty list:
CircuitBreakerRegistry circuitBreakerRegistry = CircuitBreakerRegistry.ofDefaults();
circuitBreakerRegistry.getAllCircuitBreakers();

But I have an issue with one scenario:

  • When I have two tests in the same suite, I need to ensure that one test does not affect the others.

Example:

cfg -> ringBufferSizeInClosedState: 5

  @Test
    public void shouldStayClosedCircuitBreaker() {
        Stream.rangeClosed(1, 4).forEach((count) -> produceFailure(BACKEND_A));
        checkHealthStatus(BACKEND_A + "CircuitBreaker", Status.UP);
    }

    @Test
    public void shouldStayClosedCircuitBreaker2() throws InterruptedException {
        Stream.rangeClosed(1, 4).forEach((count) -> produceFailure(BACKEND_A));
        checkHealthStatus(BACKEND_A + "CircuitBreaker", Status.UP);
    }

Can someone help me?
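For what it's worth, `CircuitBreaker.ofDefaults(...)` and `CircuitBreakerRegistry.ofDefaults()` each create fresh, unrelated instances, while the annotated methods use Spring's auto-configured registry bean. A sketch of resetting state between tests by injecting that bean (JUnit 4 style, matching the tests above; the field and method names are illustrative):

```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;
import org.junit.Before;
import org.springframework.beans.factory.annotation.Autowired;

public abstract class CircuitBreakerTestSupport {

    // The Spring-managed registry, i.e. the one the annotations actually use
    @Autowired
    private CircuitBreakerRegistry circuitBreakerRegistry;

    @Before
    public void resetCircuitBreakers() {
        // Reset every circuit breaker so one test's recorded failures
        // do not leak into the next test
        circuitBreakerRegistry.getAllCircuitBreakers()
                .forEach(CircuitBreaker::reset);
    }
}
```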

Request: documentation describing the options found in the YML properties

Hello Resilience4j,

First of all, a big thanks for this demo project.
Having the ability to just run and see resilience4j in action really helps build confidence and interest in the whole fault-tolerant architecture.

I just have a small request for enhancement, if possible, and I hope it is not too much trouble.
Would it be possible to have documentation listing the options and configurations, please?

Currently, in the doc, we can find something such as:

resilience4j.circuitbreaker:
    instances:
        backendA:
            registerHealthIndicator: true
            slidingWindowSize: 100
        backendB:
            registerHealthIndicator: true
            slidingWindowSize: 10
            permittedNumberOfCallsInHalfOpenState: 3
            slidingWindowType: TIME_BASED
            minimumNumberOfCalls: 20
            waitDurationInOpenState: 50s
            failureRateThreshold: 50
            eventConsumerBufferSize: 10
            recordFailurePredicate: io.github.robwin.exception.RecordFailurePredicate

Agreed, some of the properties/configurations are very straightforward.
However, some are not, and tuning the values by trial and error might not be the best approach.

Documentation pages like those found in other Spring Boot projects can help explain this in a friendly way.
Human-readable explanations would definitely add value.

Many thanks

Retry annotation not working for monoFailure and fluxFailure

I am trying to validate the resilience4j.retry functionality for fluxFailure and monoFailure, but these methods are not retried. However, it works for the failure method.

@Override
//@CircuitBreaker(name = BACKEND_A)
//@Bulkhead(name = BACKEND_A)
@Retry(name = BACKEND_A)
public Flux<String> fluxFailure() {
    System.out.println("Trying one more time");
    return Flux.error(new IOException("BAM!"));
}

@Override
//@CircuitBreaker(name = BACKEND_A)
//@Bulkhead(name = BACKEND_A)
@Retry(name = BACKEND_A)
public Mono<String> monoFailure() {
    System.out.println("Trying one more time");
    return Mono.error(new IOException("BAM!"));
}

@Override
@CircuitBreaker(name = BACKEND_A)
@Bulkhead(name = BACKEND_A)
@Retry(name = BACKEND_A)
public String failure() {
    System.out.println("Trying one more time");
    throw new HttpServerErrorException(HttpStatus.INTERNAL_SERVER_ERROR, "This is a remote exception");
}

Also, another issue in application.yml: is maxRetryAttempts a valid property for retry, or should it be maxAttempts?

resilience4j.retry:
    configs:
        default:
            maxRetryAttempts: 3
            waitDuration: 100
            retryExceptions:
                - org.springframework.web.client.HttpServerErrorException
                - java.util.concurrent.TimeoutException
                - java.io.IOException
            ignoreExceptions:
                - io.github.robwin.exception.BusinessException
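For context, annotation-based retry on Mono/Flux return types relies on the resilience4j-reactor module being on the classpath. The same decoration can be sketched programmatically as below, assuming that module is present (the retry name and error are illustrative):

```java
import io.github.resilience4j.reactor.retry.RetryOperator;
import io.github.resilience4j.retry.Retry;
import reactor.core.publisher.Flux;

import java.io.IOException;

public class ReactiveRetryExample {
    public static void main(String[] args) {
        Retry retry = Retry.ofDefaults("backendA");

        // Apply the retry to each subscription of the publisher
        Flux<String> decorated = Flux.<String>error(new IOException("BAM!"))
                .transformDeferred(RetryOperator.of(retry));

        // After the attempts are exhausted, fall back to a default value
        decorated.onErrorReturn("recovered").blockLast();
    }
}
```

As for the property name: the current Resilience4j docs list `maxAttempts`, so an unknown `maxRetryAttempts` key would presumably be ignored and the default used instead.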

Spring Boot integration test is failing

Hello,
we are trying to write an integration/unit test, but it is failing and not working as we expected. Can you please help us?

dependency details:

<java.version>1.8</java.version>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.3.10.RELEASE</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-spring-boot2</artifactId>
    <version>1.7.0</version>
</dependency>

Basically, I tried to run the integration test, but it is failing. Can you please help me resolve this issue?

Do you have any sample that I can refer to in order to correct the issue?
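Without the failing test it is hard to say what goes wrong, but a minimal shape for a Spring Boot 2.3 / JUnit 4 integration test is sketched below. The class name and endpoint path are hypothetical; replace them with one of your controller paths:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.test.context.junit4.SpringRunner;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class BackendIntegrationTest {

    // Starts the full application context on a random port and calls it over HTTP,
    // so the resilience4j annotations are applied through the Spring proxies
    @Autowired
    private TestRestTemplate restTemplate;

    @Test
    public void successEndpointReturns2xx() {
        // Hypothetical endpoint path
        assertThat(restTemplate.getForEntity("/backendA/success", String.class)
                .getStatusCode().is2xxSuccessful()).isTrue();
    }
}
```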

Building from the command line fails

Building in IntelliJ IDEA 2020.3 (Ultimate Edition) works.

But building from the command line fails. I am not sure whether this is a Gradle or a build.gradle error.
Here is a transcript:
./gradlew

FAILURE: Build failed with an exception.

  • What went wrong:
    Could not initialize class org.codehaus.groovy.runtime.InvokerHelper

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

  • Get more help at https://help.gradle.org

BUILD FAILED in 419ms

Check for the retry attempts

Hi,

I am using the Resilience4j retry functionality for my project and it is working well: whenever the call fails, a retry is performed, and the call routes to the fallback method once the retries are exhausted. But I need to check the retry attempt count in order to implement some functionality. Could you please help me with this? I didn't find any documentation about it.

@RobWin

Thanks
Biju
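One way to observe the attempt count is through the retry event publisher: an event is fired for each retry attempt and carries the attempt number. A sketch, assuming the RetryRegistry bean from the Spring Boot starter (the retry name "backendA" is illustrative):

```java
import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryRegistry;

public class RetryAttemptListener {

    // Sketch: log the attempt number each time a retry is performed
    public static void register(RetryRegistry retryRegistry) {
        Retry retry = retryRegistry.retry("backendA");
        retry.getEventPublisher().onRetry(event ->
                System.out.println("attempt #" + event.getNumberOfRetryAttempts()
                        + " failed with " + event.getLastThrowable()));
    }
}
```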

Unit Tests for "BackendC" (Circuit Breaker Annotation)

All the unit tests are only for BackendB and BackendA, which do not use the Circuit Breaker annotation.

Example:

@Override
    @TimeLimiter(name = BACKEND_C)
    @CircuitBreaker(name = BACKEND_C, fallbackMethod = "fluxFallback")
    public Flux<String> fluxTimeout() {
        return Flux.
                just("Hello World from backend A")
                .delayElements(Duration.ofSeconds(10));
    }

How will I be able to test this in unit tests? I tried the same scenario with manual testing, and there I am able to verify it.
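One possible approach is reactor-test's StepVerifier inside a Spring-loaded test, since the annotations only take effect through Spring proxies (so the service must be injected, not instantiated directly). A sketch, with a hypothetical service type name; the 2s timeout matches the default TimeLimiter config in application.yml:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import reactor.test.StepVerifier;

import java.time.Duration;
import java.util.concurrent.TimeoutException;

@RunWith(SpringRunner.class)
@SpringBootTest
public class BackendCTest {

    // Hypothetical: the bean whose methods carry the @TimeLimiter/@CircuitBreaker
    // annotations; must be the Spring proxy, not a direct instantiation
    @Autowired
    private BackendCService service;

    @Test
    public void fluxTimeoutShouldFailWithTimeout() {
        // With a 2s TimeLimiter, the decorated Flux should error out
        // well before its 10s delay completes
        StepVerifier.create(service.fluxTimeout())
                .expectError(TimeoutException.class)
                .verify(Duration.ofSeconds(5));
    }
}
```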

What is the minimum Spring Boot 2 version required?

  1. Does it not work with Spring Boot version 2.0.6?
    In version 2.0.6, the HealthIndicatorRegistry class does not exist and an error occurs at runtime.

  2. Can I handle business logic by receiving a state-change event from a CircuitBreaker?
    How can I do that?
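Regarding question 2, state changes are observable through the circuit breaker's event publisher. A sketch, assuming the registry bean and with an illustrative handler body:

```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

public class StateChangeListener {

    // Sketch: react to every state transition (e.g. CLOSED -> OPEN,
    // OPEN -> HALF_OPEN) of a named circuit breaker
    public static void register(CircuitBreakerRegistry registry) {
        CircuitBreaker circuitBreaker = registry.circuitBreaker("backendA");
        circuitBreaker.getEventPublisher().onStateTransition(event -> {
            System.out.println(event.getCircuitBreakerName() + ": "
                    + event.getStateTransition().getFromState() + " -> "
                    + event.getStateTransition().getToState());
            // business logic reacting to the transition goes here
        });
    }
}
```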
