In this section, we describe how to customize various parts of Spring Cloud Sleuth. Please check the appendix for the list of spans, tags and events.
1. Apache Kafka
This feature is available for all tracer implementations.
We decorate the Kafka clients (KafkaProducer and KafkaConsumer) to create a span for each event that is produced or consumed. You can disable this feature by setting the value of spring.sleuth.kafka.enabled to false.
You have to register the Producer or Consumer as beans in order for Sleuth’s auto-configuration to decorate them. When you then inject the beans, the expected type must be Producer or Consumer (and NOT e.g. KafkaProducer).
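A minimal sketch of such a registration, with illustrative connection settings (the broker address and serializers are placeholders):
@Configuration(proxyBeanMethods = false)
class KafkaClientsConfig {

    @Bean(destroyMethod = "close")
    Producer<String, String> myKafkaProducer() {
        Map<String, Object> props = new HashMap<>();
        // illustrative settings - adjust to your environment
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // declare and inject the bean as Producer (not KafkaProducer) so that the traced decorator can be used
        return new KafkaProducer<>(props);
    }
}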
For use with Project Reactor, we decorate every declared bean of type KafkaReceiver<K,V> with TracingKafkaReceiver<K,V>. This creates a separate publisher for each element received, with its own tracing context propagated. When used with the Reactor instrumentation, you have access to the context of the spans. If there is no parent context, a span with a new trace id is created.
@Bean
KafkaReceiver<K, V> reactiveKafkaReceiver(ReceiverOptions<K,V> options) {
return KafkaReceiver.create(options);
}
Later you can simply start receiving your elements, and the tracing context is available while processing them.
@Bean
DisposableBean exampleRunningConsumer(KafkaReceiver<String, String> receiver){
reactor.core.Disposable disposable = receiver.receive()
//If you need to read context you can for example use deferContextual
.flatMap(record -> Mono.deferContextual(context -> Mono.just(record)))
.doOnNext(record -> log.info("I will be coorelated to the child span created with parent context from kafka record"))
.subscribe(record -> record.receiverOffset().acknowledge());
return disposable::dispose;
}
2. Asynchronous Communication
In this section, we describe how to customize asynchronous communication with Spring Cloud Sleuth.
2.1. @Async Annotated Methods
This feature is available for all tracer implementations.
In Spring Cloud Sleuth, we instrument async-related components so that the tracing information is passed between threads.
You can disable this behavior by setting the value of spring.sleuth.async.enabled to false.
If you annotate your method with @Async, we automatically modify the existing Span as follows:
- If the method is annotated with @SpanName, the value of the annotation is the Span’s name.
- If the method is not annotated with @SpanName, the Span name is the annotated method name.
- The span is tagged with the method’s class name and method name.
Since we’re modifying the existing span, if you want to maintain its original name (e.g. a span created by receiving an HTTP request), you should wrap your @Async annotated method with a @NewSpan annotation or create a new span manually.
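A minimal sketch of both options, using a hypothetical TaxService (class, method, and span names are illustrative):
@Service
class TaxService {

    // The existing span is renamed to "calculateTax" and tagged with the class and method names
    @Async
    @SpanName("calculateTax")
    public void calculateTax(String orderId) {
        // business logic
    }

    // A new span named "calculateDiscount" is created, so the original span keeps its name
    @Async
    @NewSpan("calculateDiscount")
    public void calculateDiscount(String orderId) {
        // business logic
    }
}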
2.2. @Scheduled Annotated Methods
This feature is available for all tracer implementations.
In Spring Cloud Sleuth, we instrument scheduled method execution so that the tracing information is passed between threads.
You can disable this behavior by setting the value of spring.sleuth.scheduled.enabled to false.
If you annotate your method with @Scheduled, we automatically create a new span with the following characteristics:
- The span name is the annotated method name.
- The span is tagged with the method’s class name and method name.
If you want to skip span creation for some @Scheduled annotated classes, you can set the spring.sleuth.scheduled.skipPattern with a regular expression that matches the fully qualified name of the @Scheduled annotated class.
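For example, to skip spans for all scheduled jobs in a hypothetical com.example.housekeeping package, you could set:
spring.sleuth.scheduled.skipPattern=com.example.housekeeping.*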
2.3. Executor, ExecutorService, and ScheduledExecutorService
This feature is available for all tracer implementations.
We provide LazyTraceExecutor, TraceableExecutorService, and TraceableScheduledExecutorService. Those implementations create spans each time a new task is submitted, invoked, or scheduled. The following example shows how to pass tracing information with TraceableExecutorService when working with CompletableFuture:
CompletableFuture<Long> completableFuture = CompletableFuture.supplyAsync(() -> {
// perform some logic
return 1_000_000L;
}, new TraceableExecutorService(beanFactory, executorService,
// 'calculateTax' explicitly names the span - this param is optional
"calculateTax"));
Sleuth does not work with parallelStream() out of the box.
If you want to have the tracing information propagated through the stream, you have to use the approach with supplyAsync(…), as shown earlier.
If there are beans that implement the Executor interface that you would like to exclude from span creation, you can use the spring.sleuth.async.ignored-beans property, where you can provide a list of bean names. You can disable this behavior by setting the value of spring.sleuth.async.enabled to false.
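For example, assuming illustrative bean names myExecutor and reportExecutor:
spring.sleuth.async.ignored-beans=myExecutor,reportExecutor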
2.3.1. Customization of Executors
Sometimes, you need to set up a custom instance of the AsyncExecutor. The following example shows how to set up such a custom Executor:
@Configuration(proxyBeanMethods = false)
@EnableAutoConfiguration
@EnableAsync
// add the infrastructure role to ensure that the bean gets auto-proxied
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
public static class CustomExecutorConfig extends AsyncConfigurerSupport {
@Autowired
BeanFactory beanFactory;
@Override
public Executor getAsyncExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
// CUSTOMIZE HERE
executor.setCorePoolSize(7);
executor.setMaxPoolSize(42);
executor.setQueueCapacity(11);
executor.setThreadNamePrefix("MyExecutor-");
// DON'T FORGET TO INITIALIZE
executor.initialize();
return new LazyTraceExecutor(this.beanFactory, executor);
}
}
To ensure that your configuration gets post-processed, remember to add the @Role(BeanDefinition.ROLE_INFRASTRUCTURE) annotation on your @Configuration class.
3. HTTP Client Integration
Features from this section can be disabled by setting the spring.sleuth.web.client.enabled property to false.
3.1. Synchronous Rest Template
This feature is available for all tracer implementations.
We inject a RestTemplate interceptor to ensure that all the tracing information is passed to the requests. Each time a call is made, a new Span is created. It gets closed upon receiving the response. To block the synchronous RestTemplate features, set spring.sleuth.web.client.enabled to false.
You have to register RestTemplate as a bean so that the interceptors get injected. If you create a RestTemplate instance with the new keyword, the instrumentation does NOT work.
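A minimal sketch of such a registration, using Spring Boot’s auto-configured RestTemplateBuilder:
@Configuration(proxyBeanMethods = false)
class RestTemplateConfig {

    @Bean
    RestTemplate restTemplate(RestTemplateBuilder builder) {
        // registered as a bean, so the tracing interceptor gets injected
        return builder.build();
    }
}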
3.2. Asynchronous Rest Template
This feature is available for all tracer implementations.
Starting with Sleuth 2.0.0, we no longer register a bean of AsyncRestTemplate type. It is up to you to create such a bean. Then we instrument it.
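A minimal sketch of creating such a bean (Sleuth then instruments it):
@Configuration(proxyBeanMethods = false)
class AsyncRestTemplateConfig {

    @Bean
    AsyncRestTemplate asyncRestTemplate() {
        return new AsyncRestTemplate();
    }
}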
To block the AsyncRestTemplate features, set spring.sleuth.web.async.client.enabled to false. To disable creation of the default TraceAsyncClientHttpRequestFactoryWrapper, set spring.sleuth.web.async.client.factory.enabled to false.
If you do not want to create the AsyncRestTemplate at all, set spring.sleuth.web.async.client.template.enabled to false.
3.2.1. Multiple Asynchronous Rest Templates
Sometimes you need to use multiple implementations of the Asynchronous Rest Template. In the following snippet, you can see an example of how to set up such a custom AsyncRestTemplate:
@Configuration(proxyBeanMethods = false)
public static class TestConfig {
@Bean(name = "customAsyncRestTemplate")
public AsyncRestTemplate traceAsyncRestTemplate() {
return new AsyncRestTemplate(asyncClientFactory(), clientHttpRequestFactory());
}
private ClientHttpRequestFactory clientHttpRequestFactory() {
ClientHttpRequestFactory clientHttpRequestFactory = new CustomClientHttpRequestFactory();
// CUSTOMIZE HERE
return clientHttpRequestFactory;
}
private AsyncClientHttpRequestFactory asyncClientFactory() {
AsyncClientHttpRequestFactory factory = new CustomAsyncClientHttpRequestFactory();
// CUSTOMIZE HERE
return factory;
}
}
3.2.2. Troubleshooting Async Configuration Issues
Sine Sleuth uses a Bean Post Processor (BPP) to modify executors the Executor
type that you provided will be modified and the return type will be the trace version of that executor. That can lead to exceptions simillar to this
12:12:49.606 [main] DEBUG org.springframework.context.annotation.AnnotationConfigApplicationContext - Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@2401f4c3, started on Wed Jan 19 12:12:49 GMT 2022
Exception in thread "main" org.springframework.beans.factory.BeanNotOfRequiredTypeException: Bean named 'exampleConfigurer' is expected to be of type 'com.example.demo.Gh29151Application$Example' but was actually of type 'org.springframework.cloud.sleuth.instrument.async.LazyTraceAsyncCustomizer'
If you see such exceptions, you must:
- disable Sleuth’s async instrumentation
- manually instrument the executors using their trace representations
For example, set the following property:
spring.sleuth.async.enabled=false
and provide a configuration similar to the following:
@Configuration
@EnableAsync
public class AsyncConfiguration implements AsyncConfigurer {
private final BeanFactory beanFactory;
public AsyncConfiguration(BeanFactory beanFactory) {
this.beanFactory = beanFactory;
}
@Override
@Bean("AsyncTaskExecutor")
public Executor getAsyncExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.initialize();
return new LazyTraceThreadPoolTaskExecutor(beanFactory, executor);
}
}
You can read more about this in this issue.
3.2.3. WebClient
This feature is available for all tracer implementations.
We inject an ExchangeFilterFunction implementation that creates a span and, through on-success and on-error callbacks, takes care of closing client-side spans. To block this feature, set spring.sleuth.web.client.enabled to false.
You have to register WebClient as a bean so that the tracing instrumentation gets applied. If you create a WebClient instance with the new keyword, the instrumentation does NOT work.
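A minimal sketch of such a registration, assuming Spring Boot’s auto-configured WebClient.Builder is available:
@Configuration(proxyBeanMethods = false)
class WebClientConfig {

    @Bean
    WebClient webClient(WebClient.Builder builder) {
        // registered as a bean, so the tracing instrumentation gets applied
        return builder.build();
    }
}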
Logbook with WebClient
In order to add support for Logbook with WebClient (org.zalando:logbook-spring-boot-webflux-autoconfigure), you need to add the following configuration. You can read more about this integration in this issue.
@Configuration
@Import(LogbookWebFluxAutoConfiguration.class)
public class LogbookConfiguration {
@Bean
public LogstashLogbackSink logbackSink(final HttpLogFormatter formatter) {
return new LogstashLogbackSink(formatter);
}
@Bean
public CorrelationId correlationId(final Tracer tracer) {
return request -> requireNonNull(requireNonNull(tracer.currentSpan())).context().traceId();
}
@Bean
ReactorNettyHttpTracing reactorNettyHttpTracing(final HttpTracing httpTracing) {
return ReactorNettyHttpTracing.create(httpTracing);
}
@Bean
NettyServerCustomizer nettyServerCustomizer(final Logbook logbook,
final ReactorNettyHttpTracing reactorNettyHttpTracing) {
return server -> reactorNettyHttpTracing.decorateHttpServer(
server.doOnConnection(conn -> conn.addHandlerFirst(new LogbookServerHandler(logbook))));
}
@Bean
WebClient webClient(final Logbook logbook, final ReactorNettyHttpTracing reactorNettyHttpTracing) {
return WebClient.builder()
.clientConnector(new ReactorClientHttpConnector(reactorNettyHttpTracing.decorateHttpClient(HttpClient
.create().doOnConnected(conn -> conn.addHandlerLast(new LogbookClientHandler(logbook))))))
.build();
}
}
3.2.4. Traverson
This feature is available for all tracer implementations.
If you use the Traverson library, you can inject a RestTemplate as a bean into your Traverson object. Since RestTemplate is already intercepted, you get full support for tracing in your client. The following pseudo code shows how to do that:
@Autowired RestTemplate restTemplate;
Traverson traverson = new Traverson(URI.create("https://some/address"),
MediaType.APPLICATION_JSON, MediaType.APPLICATION_JSON_UTF8).setRestOperations(restTemplate);
// use Traverson
3.2.5. Apache HttpClientBuilder and HttpAsyncClientBuilder
This feature is available for Brave tracer implementation.
We instrument the HttpClientBuilder and HttpAsyncClientBuilder so that tracing context gets injected into the sent requests. To block these features, set spring.sleuth.web.client.enabled to false.
3.2.6. Netty HttpClient
This feature is available for all tracer implementations.
We instrument Netty’s HttpClient. To block this feature, set spring.sleuth.web.client.enabled to false.
You have to register the HttpClient as a bean so that the instrumentation happens. If you create an HttpClient instance with the new keyword, the instrumentation does NOT work.
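A minimal sketch of registering Reactor Netty’s HttpClient as a bean:
@Configuration(proxyBeanMethods = false)
class NettyHttpClientConfig {

    @Bean
    HttpClient nettyHttpClient() {
        // reactor.netty.http.client.HttpClient, registered as a bean so that it gets instrumented
        return HttpClient.create();
    }
}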
3.2.7. UserInfoRestTemplateCustomizer
This feature is available for all tracer implementations.
We instrument Spring Security’s UserInfoRestTemplateCustomizer. To block this feature, set spring.sleuth.web.client.enabled to false.
4. HTTP Server Integration
Features from this section can be disabled by setting the spring.sleuth.web.enabled property to false.
4.1. HTTP Filter
This feature is available for all tracer implementations.
Through the TracingFilter, all sampled incoming requests result in creation of a Span. You can configure which URIs you would like to skip by setting the spring.sleuth.web.skipPattern property. If you have ManagementServerProperties on the classpath, its value of contextPath gets appended to the provided skip pattern. If you want to reuse Sleuth’s default skip patterns and just append your own, pass those patterns by using the spring.sleuth.web.additionalSkipPattern property. By default, all the Spring Boot Actuator endpoints are automatically added to the skip pattern. If you want to disable this behaviour, set spring.sleuth.web.ignore-auto-configured-skip-patterns to true.
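For example, to append a custom pattern (paths are illustrative) to Sleuth’s default skip patterns:
spring.sleuth.web.additionalSkipPattern=/swagger.*|/internal/health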
To change the order of tracing filter registration, please set the spring.sleuth.web.filter-order property. To disable the filter that logs uncaught exceptions, you can disable the spring.sleuth.web.exception-throwing-filter-enabled property.
4.2. Async Servlet support
This feature is available for all tracer implementations.
If your controller returns a Callable or a WebAsyncTask, Spring Cloud Sleuth continues the existing span instead of creating a new one.
4.3. WebFlux support
This feature is available for all tracer implementations.
Through TraceWebFilter, all sampled incoming requests result in creation of a Span. That Span’s name is http: + the path to which the request was sent. For example, if the request was sent to /this/that, the name is http:/this/that. You can configure which URIs you would like to skip by using the spring.sleuth.web.skipPattern property. If you have ManagementServerProperties on the classpath, its value of contextPath gets appended to the provided skip pattern. If you want to reuse Sleuth’s default skip patterns and append your own, pass those patterns by using the spring.sleuth.web.additionalSkipPattern property.
In order to achieve the best results in terms of performance and context propagation, we suggest that you switch spring.sleuth.reactor.instrumentation-type to MANUAL. In order to execute code with the span in scope, you can call WebFluxSleuthOperators.withSpanInScope. Example:
@GetMapping("/simpleManual")
public Mono<String> simpleManual() {
return Mono.just("hello").map(String::toUpperCase).doOnEach(WebFluxSleuthOperators
.withSpanInScope(SignalType.ON_NEXT, signal -> log.info("Hello from simple [{}]", signal.get())));
}
To change the order of tracing filter registration, please set the spring.sleuth.web.filter-order property.
4.4. Reactor Netty HttpServer
If you’re using Reactor Netty and would like to have your access logs instrumented, you need to add the io.projectreactor.netty:reactor-netty-http-brave dependency (this will work only with the Brave tracer). Also add the following configuration to your project.
@Configuration(proxyBeanMethods = false)
class TraceNettyConfig {
@Bean
NettyServerCustomizer traceNettyServerCustomizer(ObjectProvider<HttpTracing> tracing) {
return server -> ReactorNettyHttpTracing.create(tracing.getObject()).decorateHttpServer(server);
}
}
5. Messaging
Features from this section can be disabled by setting the spring.sleuth.messaging.enabled property to false.
5.1. Spring Integration
This feature is available for all tracer implementations.
Spring Cloud Sleuth integrates with Spring Integration.
It creates spans for publish and subscribe events.
To disable Spring Integration instrumentation, set spring.sleuth.integration.enabled to false. You can provide the spring.sleuth.integration.patterns pattern to explicitly provide the names of channels that you want to include for tracing. By default, all channels except the hystrixStreamOutput channel are included.
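For example, to trace only two specific channels (channel names are illustrative):
spring.sleuth.integration.patterns=orderChannel,paymentChannel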
When using the Executor to build a Spring Integration IntegrationFlow , you must use the untraced version of the Executor .
Decorating the Spring Integration Executor Channel with TraceableExecutorService causes the spans to be improperly closed.
If you want to customize the way tracing context is read from and written to message headers, it’s enough for you to register beans of the following types (see the sketch after this list):
- Propagator.Setter<MessageHeaderAccessor> - for writing headers to the message
- Propagator.Getter<MessageHeaderAccessor> - for reading headers from the message
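A minimal sketch of such beans, assuming the tracing headers are stored as plain message headers under their default names:
@Configuration(proxyBeanMethods = false)
class CustomMessagingPropagationConfig {

    @Bean
    Propagator.Setter<MessageHeaderAccessor> customPropagationSetter() {
        // writes a tracing header into the outgoing message
        return (accessor, key, value) -> accessor.setHeader(key, value);
    }

    @Bean
    Propagator.Getter<MessageHeaderAccessor> customPropagationGetter() {
        // reads a tracing header from the incoming message
        return (accessor, key) -> {
            Object header = accessor.getHeader(key);
            return header != null ? header.toString() : null;
        };
    }
}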
5.1.2. Customizing messaging spans
In order to change the default span names and tags, just register a bean of type MessageSpanCustomizer. You can also override the existing DefaultMessageSpanCustomizer to extend the existing behaviour.
@Component
class MyMessageSpanCustomizer extends DefaultMessageSpanCustomizer {
@Override
public Span customizeHandle(Span spanCustomizer,
Message<?> message, MessageChannel messageChannel) {
return super.customizeHandle(spanCustomizer, message, messageChannel)
.name("changedHandle")
.tag("handleKey", "handleValue")
.tag("channelName", channelName(messageChannel));
}
@Override
public Span.Builder customizeSend(Span.Builder builder,
Message<?> message, MessageChannel messageChannel) {
return super.customizeSend(builder, message, messageChannel)
.name("changedSend")
.tag("sendKey", "sendValue")
.tag("channelName", channelName(messageChannel));
}
}
5.2. Spring Cloud Function and Spring Cloud Stream
This feature is available for all tracer implementations.
Spring Cloud Sleuth can instrument Spring Cloud Function. Since Spring Cloud Stream uses Spring Cloud Function, you will get the messaging instrumentation out of the box. The way to achieve it is to provide a Function, Consumer, or Supplier that takes in a Message as a parameter, e.g. Function<Message<String>, Message<Integer>>. If the type is not Message, then instrumentation does not take place. For a reactive Consumer<Flux<Message<?>>>, remember to manually close the span and clear the context before you call .subscribe(). Example:
@Bean
Consumer<Flux<Message<String>>> channel(Tracer tracer) {
// For the reactive consumer remember to call "subscribe()" at the end, otherwise
// you'll get the "Dispatcher has no subscribers" error
return i -> i
.doOnNext(s -> log.info("HELLO"))
// You must finish the span yourself and clear the tracing context like presented below.
// Otherwise you will be missing out the span that wraps the function execution.
.doOnNext(s -> {
tracer.currentSpan().end();
tracer.withSpan(null);
})
.subscribe();
}
NOTE: For Sleuth to work with any Supplier (e.g. Supplier<Flux<Message<String>>>), you must fall back to Spring Integration based instrumentation by setting spring.sleuth.integration.enabled to true.
You can disable Spring Cloud Stream integration by setting the value of spring.sleuth.function.enabled to false.
If you want to fully control the life cycle of spans within the reactive messaging context of Spring Cloud Stream, remember to disable the Spring Cloud Stream integration and leverage the MessagingSleuthOperators utility class that allows you to manipulate the input and output messages in order to continue the tracing context and to execute custom code within the tracing context.
class SimpleReactiveManualFunction implements Function<Flux<Message<String>>, Flux<Message<String>>> {
private static final Logger log = LoggerFactory.getLogger(SimpleReactiveFunction.class);
private final BeanFactory beanFactory;
SimpleReactiveManualFunction(BeanFactory beanFactory) {
this.beanFactory = beanFactory;
}
@Override
public Flux<Message<String>> apply(Flux<Message<String>> input) {
return input.map(message -> (MessagingSleuthOperators.asFunction(this.beanFactory, message))
.andThen(msg -> MessagingSleuthOperators.withSpanInScope(this.beanFactory, msg, stringMessage -> {
log.info("Hello from simple manual [{}]", stringMessage.getPayload());
return stringMessage;
})).andThen(msg -> MessagingSleuthOperators.afterMessageHandled(this.beanFactory, msg, null))
.andThen(msg -> MessageBuilder.createMessage(msg.getPayload().toUpperCase(), msg.getHeaders()))
.andThen(msg -> MessagingSleuthOperators.handleOutputMessage(this.beanFactory, msg)).apply(message));
}
}
5.3. Spring RabbitMq
This feature is available for Brave tracer implementation.
We instrument the RabbitTemplate so that tracing headers get injected into the message. To block this feature, set spring.sleuth.messaging.rabbit.enabled to false.
5.4. Spring Kafka
This feature is available for Brave tracer implementation.
We instrument Spring Kafka’s ProducerFactory and ConsumerFactory so that tracing headers get injected into the created Spring Kafka Producer and Consumer. To block this feature, set spring.sleuth.messaging.kafka.enabled to false.
5.5. Spring Kafka Streams
This feature is available for Brave tracer implementation.
We instrument the KafkaStreams KafkaClientSupplier so that tracing headers get injected into the Producer and Consumer instances. A KafkaStreamsTracing bean allows for further instrumentation through additional TransformerSupplier and ProcessorSupplier methods. To block this feature, set spring.sleuth.messaging.kafka.streams.enabled to false.
5.6. Spring JMS
This feature is available for Brave tracer implementation.
We instrument the JmsTemplate so that tracing headers get injected into the message. We also support @JmsListener annotated methods on the consumer side. To block this feature, set spring.sleuth.messaging.jms.enabled to false.
We don’t support baggage propagation for JMS.
6. OpenFeign
This feature is available for all tracer implementations.
By default, Spring Cloud Sleuth provides integration with Feign through TraceFeignClientAutoConfiguration. You can disable it entirely by setting spring.sleuth.feign.enabled to false. If you do so, no Feign-related instrumentation takes place. Part of Feign instrumentation is done through a FeignBeanPostProcessor. You can disable it by setting spring.sleuth.feign.processor.enabled to false. If you set it to false, Spring Cloud Sleuth does not instrument any of your custom Feign components. However, all the default instrumentation is still there.
7. OpenTracing
This feature is available for all tracer implementations.
Spring Cloud Sleuth is compatible with OpenTracing. If you have OpenTracing on the classpath, we automatically register the OpenTracing Tracer bean. If you wish to disable this, set spring.sleuth.opentracing.enabled to false.
8. Quartz
This feature is available for all tracer implementations.
We instrument Quartz jobs by adding Job/Trigger listeners to the Quartz Scheduler. To turn off this feature, set the spring.sleuth.quartz.enabled property to false.
9. Reactor
This feature is available for all tracer implementations.
We have the following modes of instrumenting Reactor based applications, which can be set via the spring.sleuth.reactor.instrumentation-type property:
- DECORATE_QUEUES - With the new Reactor queue wrapping mechanism (Reactor 3.4.3), we instrument the way threads are switched by Reactor. This should lead to feature parity with ON_EACH with low performance impact.
- DECORATE_ON_EACH - Wraps every Reactor operator in a trace representation. Passes the tracing context in most cases. This mode might lead to drastic performance degradation.
- DECORATE_ON_LAST - Wraps the last Reactor operator in a trace representation. Passes the tracing context in some cases, thus accessing the MDC context might not work. This mode might lead to medium performance degradation.
- MANUAL - Wraps every Reactor in the least invasive way, without passing the tracing context. It is up to the user to do it.
The current default is ON_EACH for backward compatibility reasons; however, we encourage the users to migrate to the MANUAL instrumentation and profit from WebFluxSleuthOperators and MessagingSleuthOperators. The performance improvement can be substantial. Example:
@GetMapping("/simpleManual")
public Mono<String> simpleManual() {
return Mono.just("hello").map(String::toUpperCase).doOnEach(WebFluxSleuthOperators
.withSpanInScope(SignalType.ON_NEXT, signal -> log.info("Hello from simple [{}]", signal.get())));
}
To disable Reactor support, set the spring.sleuth.reactor.enabled property to false.
10. Redis
This feature is available for all tracer implementations.
We’re using the Tracing abstraction from Lettuce. If Brave is on the classpath, we configure Tracing to be BraveTracing. To disable Redis support, set the spring.sleuth.redis.enabled property to false.
10.1. Redis With Legacy Brave Only Support
To use the Brave-only supported feature, you need to set the value of spring.sleuth.redis.legacy.enabled to true. This is the default mechanism available up until version 3.1.0 of Spring Cloud Sleuth. We set the tracing property on the Lettuce ClientResources instance to enable the Brave tracing built into Lettuce. Spring Cloud Sleuth provides a traced version of the ClientResources bean. If you have your own implementation of that bean, remember to customize the ClientResources.Builder with the stream of ClientResourcesBuilderCustomizers, as presented below:
@Bean(destroyMethod = "shutdown")
DefaultClientResources myLettuceClientResources(ObjectProvider<ClientResourcesBuilderCustomizer> customizer) {
DefaultClientResources.Builder builder = DefaultClientResources.builder();
// setting up the builder manually
customizer.stream().forEach(c -> c.customize(builder));
return builder.build();
}
11. Runnable and Callable
This feature is available for all tracer implementations.
If you wrap your logic in Runnable or Callable, you can wrap those classes in their Sleuth representative, as shown in the following example for Runnable:
Runnable runnable = new Runnable() {
@Override
public void run() {
// do some work
}
@Override
public String toString() {
return "spanNameFromToStringMethod";
}
};
// Manual `TraceRunnable` creation with explicit "calculateTax" Span name
Runnable traceRunnable = new TraceRunnable(this.tracer, spanNamer, runnable, "calculateTax");
The following example shows how to do so for Callable:
Callable<String> callable = new Callable<String>() {
@Override
public String call() throws Exception {
return someLogic();
}
@Override
public String toString() {
return "spanNameFromToStringMethod";
}
};
// Manual `TraceCallable` creation with explicit "calculateTax" Span name
Callable<String> traceCallable = new TraceCallable<>(tracer, spanNamer, callable, "calculateTax");
That way, you ensure that a new span is created and closed for each execution.
12. RPC
This feature is available for Brave tracer implementation.
Sleuth automatically configures the RpcTracing bean, which serves as a foundation for RPC instrumentation such as gRPC or Dubbo. If a customization of client / server sampling of the RPC traces is required, just register a bean of type brave.sampler.SamplerFunction<RpcRequest> and name the bean sleuthRpcClientSampler for the client sampler and sleuthRpcServerSampler for the server sampler. For your convenience, the @RpcClientSampler and @RpcServerSampler annotations can be used to inject the proper beans or to reference the bean names via their static String NAME fields.
For example, here’s a sampler that traces 100 "GetUserToken" server requests per second. This doesn’t start new traces for requests to the health check service. Other requests use the global sampling configuration.
@Configuration(proxyBeanMethods = false)
class Config {
@Bean(name = RpcServerSampler.NAME)
SamplerFunction<RpcRequest> myRpcSampler() {
Matcher<RpcRequest> userAuth = and(serviceEquals("users.UserService"), methodEquals("GetUserToken"));
return RpcRuleSampler.newBuilder().putRule(serviceEquals("grpc.health.v1.Health"), Sampler.NEVER_SAMPLE)
.putRule(userAuth, RateLimitingSampler.create(100)).build();
}
}
12.1. Dubbo RPC support
Via the integration with Brave, Spring Cloud Sleuth supports Dubbo. It’s enough to add the brave-instrumentation-dubbo dependency:
<dependency>
<groupId>io.zipkin.brave</groupId>
<artifactId>brave-instrumentation-dubbo</artifactId>
</dependency>
You need to also set a dubbo.properties file with the following contents:
dubbo.provider.filter=tracing
dubbo.consumer.filter=tracing
12.2. gRPC
Spring Cloud Sleuth provides instrumentation for gRPC via the Brave tracer.
You can disable it entirely by setting spring.sleuth.grpc.enabled to false.
12.2.1. Variant 1
Dependencies
The gRPC integration relies on two external libraries to instrument clients and servers, and both of those libraries must be on the classpath to enable the instrumentation.
Maven:
<dependency>
<groupId>io.github.lognet</groupId>
<artifactId>grpc-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>io.zipkin.brave</groupId>
<artifactId>brave-instrumentation-grpc</artifactId>
</dependency>
Gradle:
compile("io.github.lognet:grpc-spring-boot-starter")
compile("io.zipkin.brave:brave-instrumentation-grpc")
Server Instrumentation
Spring Cloud Sleuth leverages grpc-spring-boot-starter to register Brave’s gRPC server interceptor with all services annotated with @GRpcService.
Client Instrumentation
gRPC clients leverage a ManagedChannelBuilder to construct a ManagedChannel used to communicate to the gRPC server. The native ManagedChannelBuilder provides static methods as entry points for construction of ManagedChannel instances; however, this mechanism is outside the influence of the Spring application context.
Spring Cloud Sleuth provides a SpringAwareManagedChannelBuilder that can be customized through the Spring application context and injected by gRPC clients.
This builder must be used when creating ManagedChannel instances.
Sleuth creates a TracingManagedChannelBuilderCustomizer which injects Brave’s client interceptor into the SpringAwareManagedChannelBuilder.
12.2.2. Variant 2
Grpc Spring Boot Starter automatically detects the presence of Spring Cloud Sleuth and Brave’s instrumentation for gRPC and registers the necessary client and/or server tooling.
13. RxJava
This feature is available for all tracer implementations.
We register a custom RxJavaSchedulersHook that wraps all Action0 instances in their Sleuth representative, which is called TraceAction. The hook either starts or continues a span, depending on whether tracing was already going on before the Action was scheduled. To disable the custom RxJavaSchedulersHook, set spring.sleuth.rxjava.schedulers.hook.enabled to false. You can define a list of regular expressions for thread names for which you do not want spans to be created. To do so, provide a comma-separated list of regular expressions in the spring.sleuth.rxjava.schedulers.ignoredthreads property.
The suggested approach to reactive programming and Sleuth is to use the Reactor support.
14. Spring Cloud CircuitBreaker
This feature is available for all tracer implementations.
If you have Spring Cloud CircuitBreaker on the classpath, we will wrap the passed command Supplier and the fallback Function in their trace representations. We will also instrument the reactive implementation of the CircuitBreaker. In order to disable this instrumentation, set spring.sleuth.circuitbreaker.enabled to false.
15. Spring Cloud Config Server
This feature is available for all tracer implementations.
If you have Spring Cloud Config Server running on the classpath, we will wrap the EnvironmentRepository in a span. In order to disable this instrumentation, set spring.sleuth.config.server.enabled to false.
16. Spring Cloud Deployer
This feature is available for all tracer implementations.
If you have Spring Cloud Deployer running on the classpath, we wrap the AppDeployer in a trace representation. We poll the application for its status at a default interval. You can change that default by setting the spring.sleuth.deployer.status-poll-delay property. In order to disable this instrumentation, set spring.sleuth.deployer.enabled to false.
17. Spring RSocket
This feature is available for all tracer implementations.
If you have Spring RSocket running on the classpath, we wrap the inbound and outbound communication to propagate the tracing context via the metadata. In order to disable this instrumentation, set spring.sleuth.rsocket.enabled to false.
18. Spring Batch
This feature is available for all tracer implementations.
If you have Spring Batch running on the classpath, we wrap the StepBuilderFactory and the JobBuilderFactory to propagate the tracing context. In order to disable this instrumentation, set spring.sleuth.batch.enabled to false.
19. Spring Cloud Task
This feature is available for all tracer implementations.
If you have Spring Cloud Task running on the classpath, we instrument TaskExecutionListener, CommandLineRunner, and ApplicationRunner. In order to disable this instrumentation, set spring.sleuth.task.enabled to false.
20. Spring Tx
This feature is available for all tracer implementations.
If you have Spring Tx on the classpath, we will instrument the PlatformTransactionManager and the ReactiveTransactionManager to create a span whenever a new transaction is created. Due to technical constraints, we will not instrument classes that extend Spring’s AbstractPlatformTransactionManager. In order to disable this instrumentation, set spring.sleuth.tx.enabled to false.
21. Spring Security
This feature is available for all tracer implementations.
If you have Spring Security on the classpath, we create an implementation of SecurityContextChangedListener that annotates the current span with an event when the context has changed. In order to disable this instrumentation, set spring.sleuth.security.enabled to false.
22. R2DBC
This feature is available for all tracer implementations.
If you have R2DBC Proxy on the classpath, we will instrument the ConnectionFactory so that it contains a custom ProxyExecutionListener. In order to disable this instrumentation, set spring.sleuth.r2dbc.enabled to false.
23. Spring Vault
This feature is available for all tracer implementations.
We’re instrumenting the RestTemplate or WebClient instances used by Spring Vault to communicate with Vault. In order to disable this instrumentation, set spring.sleuth.vault.enabled to false.
24. Spring Tomcat
This feature is available for all tracer implementations.
We add an instrumented Tomcat Valve that originates the span. In order to disable this instrumentation, set spring.sleuth.web.tomcat.enabled to false.
25. Spring Data Cassandra
This feature is available for all tracer implementations.
We’re instrumenting Cassandra’s CqlSession and ReactiveSession interfaces and we’re providing our own implementation of the RequestTracker. In order to disable this instrumentation, set spring.sleuth.cassandra.enabled to false.
26. Spring JDBC
This feature is available for all tracer implementations. It has been ported from the spring-boot-datasource-decorator project.
We’re decorating DataSources in a trace representation. We delegate the actual proxying to either p6spy or datasource-proxy. In order to use this feature, you need to have one of them on the classpath.
Maven (p6spy):
<dependency>
<groupId>p6spy</groupId>
<artifactId>p6spy</artifactId>
<version>${p6spy.version}</version>
<scope>runtime</scope>
</dependency>
Gradle (p6spy):
runtimeOnly "p6spy:p6spy:${p6spyVersion}"
Maven (datasource-proxy):
<dependency>
<groupId>net.ttddyy</groupId>
<artifactId>datasource-proxy</artifactId>
<version>${datasource-proxy.version}</version>
<scope>runtime</scope>
</dependency>
Gradle (datasource-proxy):
runtimeOnly "net.ttddyy:datasource-proxy:${datasourceProxyVersion}"
Please check the appendix page under spring.sleuth.jdbc.p6spy for all p6spy configuration options and spring.sleuth.jdbc.datasource-proxy for all datasource-proxy configuration options. For P6Spy, logging of parameter values is disabled by default; set spring.sleuth.jdbc.p6spy.tracing.include-parameter-values to true to enable it.
You can configure P6Spy manually using one of available configuration methods. For more information please refer to the P6Spy Configuration Guide.
For Datasource Proxy, query logging is disabled by default; set spring.sleuth.jdbc.datasource-proxy.slow-query.enable-logging to true to enable logging slow queries, and set spring.sleuth.jdbc.datasource-proxy.query.enable-logging to true to enable logging all queries. In order to disable this instrumentation, set spring.sleuth.jdbc.enabled to false.
27. MongoDB
This feature is available for all tracer implementations.
We’re adding command listeners that wrap all commands in a span. If you want to have additional socket-address-related tags on the span, set spring.sleuth.mongodb.socket-address-span-customizer.enabled to true. In order to disable this instrumentation, set spring.sleuth.mongodb.enabled to false.
28. Spring Session
This feature is available for all tracer implementations.
We’re instrumenting the Session repositories so that all operations are wrapped in a span. In order to disable this instrumentation, set spring.sleuth.session.enabled to false.
29. Kotlin Coroutines
This feature is available for all tracer implementations.
We’re adding Kotlin Coroutines support that allows you to retrieve the current span via the Tracer bean. You can either pass the bean to the Kotlin Coroutine context via the Tracer.asContextElement() method execution or, if you have the Reactor Kotlin Coroutine integration on the classpath, we will retrieve it from Reactor’s context. To retrieve the current span, you can call the currentSpan() method within the Kotlin Coroutine.
30. Prometheus Exemplars
This feature is available for all tracer implementations.
Prometheus Exemplars are supported through SpanContextSupplier. If you use Micrometer, this will be auto-configured for you, but you can register SpanContextSupplier directly to Prometheus if you want.
Please check the Prometheus Docs, since this feature needs to be explicitly enabled on Prometheus' side, and it is only supported using the OpenMetrics format.