The primary purpose of any software development framework is to help you be productive as quickly and as easily as possible, and to do so in a reliable manner.
As application developers, we want a framework to provide constructs that are both intuitive and familiar, so that their behaviors are boringly predictable. This provided convenience not only helps you hit the ground running in the right direction sooner, but also increases your focus on the application domain so you are better able to understand the problem you are trying to solve in the first place. Once the problem domain is well understood, you are more apt to make informed decisions about the design, which leads to better outcomes, faster.
This is exactly what Spring Boot's auto-configuration provides for you: enabling features, services and supporting infrastructure for Spring applications in a loosely integrated way, using conventions (e.g. the classpath) that ultimately help you keep your attention and focus on solving the problem at hand rather than on the plumbing.
For example, if you are building a Web application, simply include the org.springframework.boot:spring-boot-starter-web dependency on your application classpath. Not only will Spring Boot enable you to build Spring Web MVC Controllers appropriate to your application use case (your responsibility), but it will also bootstrap your Web application in an embedded Servlet Container on startup (Boot's responsibility).
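As an illustration, here is a minimal sketch of such an application; the controller and its mapping are hypothetical, but with the starter on the classpath this class boots in an embedded Servlet Container as described:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A minimal sketch: spring-boot-starter-web on the classpath bootstraps an
// embedded Servlet Container; the PingController below is hypothetical.
@SpringBootApplication
public class ExampleWebApplication {

    public static void main(String[] args) {
        SpringApplication.run(ExampleWebApplication.class, args);
    }

    @RestController
    static class PingController {

        @GetMapping("/ping")
        String ping() {
            return "PONG";
        }
    }
}
```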
This saves you from having to handle many low-level, repetitive and tedious development tasks that are highly error-prone when you are simply trying to solve problems. You don’t have to care how the plumbing works until you do. And, when you do, you will be better informed and prepared to do so.
It is equally essential that frameworks, like Spring Boot, get out of the way quickly when application requirements diverge from the provided defaults. This is the beautiful and powerful thing about Spring Boot and why it is second to none in its class.
Still, auto-configuration does not solve every problem all the time. Therefore, you will need to use declarative configuration in some cases, whether expressed as bean definitions, in properties or by some other means. This way, frameworks don't leave things to chance, especially in ambiguous cases. The framework simply gives you a choice.
Now that we have explained the motivation behind this chapter, let's outline what we will discuss.
> **Note:** SDG refers to Spring Data for Apache Geode & Pivotal GemFire, SSDG refers to Spring Session for Apache Geode & Pivotal GemFire, and SBDG refers to Spring Boot for Apache Geode & Pivotal GemFire (this project).
> **Tip:** The list of SDG Annotations covered by SBDG's auto-configuration is discussed in detail in the Appendix, in the section Auto-configuration vs. Annotation-based configuration.
To be absolutely clear about which SDG Annotations we are referring to, we mean the SDG Annotations in the package: org.springframework.data.gemfire.config.annotation.
Additionally, in subsequent sections, we will cover which Annotations are added by SBDG.
Auto-configuration was explained in complete detail in the chapter, "Auto-configuration".
The following SDG Annotations are not implicitly applied by SBDG’s Auto-configuration:
@EnableAutoRegionLookup
@EnableBeanFactoryLocator
@EnableCacheServer(s)
@EnableCachingDefinedRegions
@EnableClusterConfiguration
@EnableClusterDefinedRegions
@EnableCompression
@EnableDiskStore(s)
@EnableEntityDefinedRegions
@EnableEviction
@EnableExpiration
@EnableGatewayReceiver
@EnableGatewaySender(s)
@EnableGemFireAsLastResource
@EnableGemFireMockObjects
@EnableHttpService
@EnableIndexing
@EnableOffHeap
@EnableLocator
@EnableManager
@EnableMemcachedServer
@EnablePool(s)
@EnableRedisServer
@EnableStatistics
@UseGemFireProperties
> **Tip:** This was also covered here.
Part of the reason for this is that several of the Annotations are server-specific:
@EnableCacheServer(s)
@EnableGatewayReceiver
@EnableGatewaySender(s)
@EnableHttpService
@EnableLocator
@EnableManager
@EnableMemcachedServer
@EnableRedisServer
And, we already stated that SBDG is opinionated about providing a ClientCache instance out-of-the-box.
Other Annotations are driven by need, for example:

@EnableAutoRegionLookup & @EnableBeanFactoryLocator - really only useful when mixing configuration metadata formats, e.g. Spring config with GemFire cache.xml. This is usually only the case if you have legacy cache.xml config to begin with; otherwise, don't do this!

@EnableCompression - requires the Snappy compression library on your application classpath.

@EnableDiskStore(s) - only used for overflow and persistence.

@EnableOffHeap - enables data to be stored in main memory, which is only useful when your application data (i.e. objects stored in GemFire/Geode) are generally uniform in size.

@EnableGemFireAsLastResource - only needed in the context of JTA Transactions.

@EnableStatistics - useful if you need runtime metrics; however, enabling statistics gathering does consume considerable system resources (e.g. CPU & Memory).
While still other Annotations require more careful planning, for example:
@EnableEviction
@EnableExpiration
@EnableIndexing
One in particular is used exclusively for Unit Testing:
@EnableGemFireMockObjects
The bottom line is that a framework should not auto-configure every possible feature, especially when the features consume additional system resources or require more careful planning, as determined by the use case.
Still, all of these Annotations are available for the application developer to use when needed.
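Opting in is deliberately simple. For instance, here is a minimal sketch using @EnableStatistics, one of the annotations listed above (the class name is hypothetical):

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableStatistics;

// A minimal sketch: explicitly opting into statistics gathering, which SBDG
// deliberately does not auto-configure because of its resource overhead.
@SpringBootApplication
@EnableStatistics
class ExampleStatisticsApplication { }
```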
This section calls out the Annotations we believe to be most beneficial for your application development purposes when using either Apache Geode or Pivotal GemFire in Spring Boot applications.
The @EnableClusterAware annotation is arguably the most powerful and valuable annotation in the set! When you annotate your main @SpringBootApplication class with @EnableClusterAware:
Declaring @EnableClusterAware.

```java
@SpringBootApplication
@EnableClusterAware
class SpringBootApacheGeodeClientCacheApplication { ... }
```
Your Spring Boot, Apache Geode ClientCache application is able to seamlessly switch between client/server and local-only topologies with no code or configuration changes. When a cluster of Apache Geode or Pivotal GemFire servers is detected, the client application will send and receive data to and from the cluster. If a cluster is not available, then the client automatically switches to storing data locally on the client using LOCAL Regions.
Additionally, the @EnableClusterAware annotation is meta-annotated with SDG's @EnableClusterConfiguration annotation. The @EnableClusterConfiguration annotation enables configuration metadata defined on the client (e.g. Region and Index definitions), as needed by the application based on its requirements and use cases, to be sent to the cluster of servers. If those schema objects are not already present, they will be created by the servers in the cluster in such a way that the servers will remember the configuration on restart and will also provide the configuration to new servers joining the cluster when scaling out. This feature is careful not to stomp on any existing Region or Index objects already present on the servers, particularly since you may already have data stored in the Regions.
The primary motivation behind the @EnableClusterAware annotation is to allow you to switch environments with very little effort. It is a very common development practice to debug and test your application locally, in your IDE, and then push it up to a production-like environment for more rigorous integration testing.
By default, the configuration metadata is sent to the cluster using a non-secure HTTP connection. Using HTTPS, changing the host and port, and configuring the data management policy used by the servers when creating Regions are all configurable.
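For example, here is a hedged sketch; the host and port values are hypothetical, and the attribute names should be checked against your SDG version:

```java
import org.apache.geode.cache.RegionShortcut;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableClusterConfiguration;

// A minimal sketch: push configuration metadata over HTTP to a Manager running
// on a non-default host/port (hypothetical values) and ask the servers to
// create REPLICATE Regions instead of the default.
@SpringBootApplication
@EnableClusterConfiguration(useHttp = true, host = "mgmt.example.com", port = 8080,
    serverRegionShortcut = RegionShortcut.REPLICATE)
class ExampleClusterConfigurationApplication { }
```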
> **Tip:** Refer to the section in the SDG Reference Guide on Configuring Cluster Configuration Push for more details.
These Annotations are used to create Regions in the cache to manage your application data.
Of course, you can create Regions using Java configuration and the Spring API as follows:
Creating a Region with Spring JavaConfig.

```java
@Bean("Customers")
ClientRegionFactoryBean<Long, Customer> customersRegion(GemFireCache cache) {

    ClientRegionFactoryBean<Long, Customer> customers = new ClientRegionFactoryBean<>();

    customers.setCache(cache);
    customers.setShortcut(ClientRegionShortcut.PROXY);

    return customers;
}
```
Or XML:

Creating a client Region using Spring XML.

```xml
<gfe:client-region id="Customers" shortcut="PROXY"/>
```
However, using the provided Annotations is far easier, especially during development when the complete Region configuration may be unknown and you simply want to create a Region to persist your application data and move on.
The @EnableCachingDefinedRegions annotation is used when you have application components registered in the Spring Container that are annotated with Spring's or JSR-107 (JCache) caching annotations. Caches identified by name in the caching annotations are used to create Regions holding the data you want cached.
For example, given:
Defining Regions based on Spring or JSR-107 (JCache) annotations.

```java
@Service
class CustomerService {

    @Cacheable(cacheNames = "CustomersByAccountNumber", key = "#account.number")
    Customer findBy(Account account) {
        ...
    }
}
```
When your main @SpringBootApplication class is annotated with @EnableCachingDefinedRegions:

Using @EnableCachingDefinedRegions.

```java
@SpringBootApplication
@EnableCachingDefinedRegions
class SpringBootApacheGeodeClientCacheApplication { ... }
```
Then SBDG would create a client PROXY Region (or a PARTITION Region if your application were a peer member of the cluster) with the name "CustomersByAccountNumber", just as if you had created the Region using either the JavaConfig or XML approaches shown above.
You can use the clientRegionShortcut or serverRegionShortcut attribute to change the data management policy of the Regions created on the client or servers, respectively.

For client Regions, you can additionally assign a specific Pool of connections used by the client *PROXY Regions to send data to the cluster by setting the poolName attribute.
Like @EnableCachingDefinedRegions, @EnableEntityDefinedRegions allows you to create Regions based on the entity classes you have defined in your application domain model. For instance, if you have an entity class annotated with SDG's @Region mapping annotation:
Customer entity class annotated with @Region.

```java
@Region("Customers")
class Customer {

    @Id
    private Long id;

    @Indexed
    private String name;

    ...
}
```
Then SBDG will create Regions from the name specified in the @Region mapping annotation on the entity class. In this case, the Customer application-defined entity class will result in the creation of a Region named "Customers" when the main @SpringBootApplication class is annotated with @EnableEntityDefinedRegions:
Using @EnableEntityDefinedRegions.

```java
@SpringBootApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class,
    clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
class SpringBootApacheGeodeClientCacheApplication { ... }
```
Like the @EnableCachingDefinedRegions annotation, you can set the client and server Region data management policy using the clientRegionShortcut and serverRegionShortcut attributes, respectively, as well as set a dedicated Pool of connections used by client Regions with the poolName attribute.

However, unlike the @EnableCachingDefinedRegions annotation, users are required to specify either the basePackages attribute, or the type-safe alternative, the basePackageClasses attribute (recommended), when using the @EnableEntityDefinedRegions annotation.
Part of the reason for this is that @EnableEntityDefinedRegions performs a component scan for the entity classes defined by your application. The component scan loads each class to inspect its annotation metadata. This is not unlike the JPA entity scan when working with JPA providers, such as Hibernate. Therefore, it is customary to limit the scope of the scan; otherwise, you end up potentially loading many classes unnecessarily. After all, the JVM uses dynamic linking to load classes only when needed.
Both the basePackages and basePackageClasses attributes accept an array of values. With basePackageClasses, you only need to refer to a single class type in a package; every class in that package, as well as the classes in its sub-packages, will be scanned to determine if the class type represents an entity. A class type is an entity if it is annotated with the @Region mapping annotation; otherwise, it is not considered an entity.
For example, suppose you had the following structure:

Entity Scan.

```
- example.app.crm.model
  |- Customer.class
  |- NonEntity.class
  |- contact
     |- Address.class
     |- PhoneNumber.class
     |- AnotherNonEntity.class
- example.app.accounts.model
  |- Account.class
```
Then, you could configure @EnableEntityDefinedRegions as follows:

Targeting with @EnableEntityDefinedRegions.

```java
@SpringBootApplication
@EnableEntityDefinedRegions(basePackageClasses = { NonEntity.class, Account.class })
class SpringBootApacheGeodeClientCacheApplication { ... }
```
If Customer, Address, PhoneNumber and Account were all entity classes properly annotated with @Region, then the component scan would pick up all of these classes and create Regions for them. The NonEntity class only serves as a marker in this case, pointing to where (i.e. in which package) the scan should begin.
Additionally, the @EnableEntityDefinedRegions annotation provides include and exclude filters, the same as the core Spring Framework's @ComponentScan annotation.
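For instance, here is a hedged sketch excluding types from the scan (the filter pattern is hypothetical):

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;

// A minimal sketch: scan the package of Customer for entities, but exclude any
// class whose name matches the (hypothetical) regular expression below.
@SpringBootApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class,
    excludeFilters = @ComponentScan.Filter(type = FilterType.REGEX, pattern = ".*NonEntity.*"))
class ExampleFilteredScanApplication { }
```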
> **Tip:** Refer to the SDG Reference Guide on Configuring Regions for more details.
Sometimes it is ideal or even necessary to pull configuration from the cluster (rather than push to the cluster). That is, you want the Regions defined on the servers to be created on the client and used by your application.
This is as simple as annotating your main @SpringBootApplication class with @EnableClusterDefinedRegions:
Using @EnableClusterDefinedRegions.

```java
@SpringBootApplication
@EnableClusterDefinedRegions
class SpringBootApacheGeodeClientCacheApplication { ... }
```
Every Region that exists on the cluster of servers will have a corresponding PROXY Region defined and created on the client as a bean in your Spring Boot application. If the cluster of servers defines a Region called "ServerRegion", you can inject the client PROXY Region with the same name (i.e. "ServerRegion") into your Spring Boot application and use it:
Using a server-side Region on the client.

```java
@Component
class SomeApplicationComponent {

    @Resource(name = "ServerRegion")
    private Region<Integer, EntityType> serverRegion;

    public void someMethod() {

        EntityType entity = ...;

        this.serverRegion.put(1, entity);

        ...
    }
}
```
Of course, SBDG auto-configures a GemfireTemplate for the "ServerRegion" Region (as described here), so a better way to interact with the client PROXY Region corresponding to the "ServerRegion" Region on the server is to inject the template:
Using a server-side Region on the client with a template.

```java
@Component
class SomeApplicationComponent {

    @Autowired
    @Qualifier("serverRegionTemplate")
    private GemfireTemplate serverRegionTemplate;

    public void someMethod() {

        EntityType entity = ...;

        this.serverRegionTemplate.put(1, entity);

        ...
    }
}
```
> **Tip:** Refer to the SDG Reference Guide on Configuring Cluster-defined Regions for more details.
Only when using @EnableEntityDefinedRegions can you also use the @EnableIndexing annotation. This is because @EnableIndexing requires the entities to be scanned and analyzed for mapping metadata defined on the class type of the entity. This includes annotations like the Spring Data Commons @Id annotation, as well as the SDG-provided annotations, @Indexed and @LuceneIndexed.
The @Id annotation identifies the (primary) key of the entity. The @Indexed annotation defines OQL Indexes on object fields that are used in the predicates of your OQL Queries. The @LuceneIndexed annotation is used to define the Apache Lucene Indexes required for searches.
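To see all three annotations side by side, consider this hedged sketch (the Book entity is hypothetical):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.annotation.Indexed;
import org.springframework.data.gemfire.mapping.annotation.LuceneIndexed;
import org.springframework.data.gemfire.mapping.annotation.Region;

// A minimal sketch of a hypothetical entity: isbn is the key, author backs an
// OQL Index for query predicates, and title backs a Lucene Index for searches.
@Region("Books")
class Book {

    @Id
    private Long isbn;

    @Indexed
    private String author;

    @LuceneIndexed
    private String title;
}
```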
> **Note:** Lucene Indexes can only be created on PARTITION Regions, which are only defined on the server side.
You may have noticed that the Customer entity class's name field was annotated with @Indexed.
Customer entity class with @Indexed annotated name field.

```java
@Region("Customers")
class Customer {

    @Id
    private Long id;

    @Indexed
    private String name;

    ...
}
```
As a result, when our main @SpringBootApplication class is annotated with @EnableIndexing:
Using @EnableIndexing.

```java
@SpringBootApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableIndexing
class SpringBootApacheGeodeClientCacheApplication { ... }
```
An Apache Geode OQL Index for the Customer.name field will be created, thereby making OQL Queries on Customers by name use this Index.
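For instance, the following hedged sketch runs such a query through SBDG's auto-configured GemfireTemplate for the "Customers" Region (the CustomerQueries component and its method are hypothetical):

```java
import java.util.List;

import org.apache.geode.cache.query.SelectResults;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.data.gemfire.GemfireTemplate;
import org.springframework.stereotype.Component;

// A minimal sketch: the predicate on c.name can be serviced by the OQL Index
// created for the Customer.name field.
@Component
class CustomerQueries {

    @Autowired
    @Qualifier("customersTemplate")
    private GemfireTemplate customersTemplate;

    List<Customer> findByName(String name) {

        SelectResults<Customer> results =
            this.customersTemplate.find("SELECT * FROM /Customers c WHERE c.name = $1", name);

        return results.asList();
    }
}
```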
> **Note:** Keep in mind that OQL Indexes are not persisted between restarts (i.e. Apache Geode & Pivotal GemFire maintain Indexes in memory only). An OQL Index is always rebuilt when the node is restarted.
When you combine @EnableIndexing with either @EnableClusterConfiguration or @EnableClusterAware, the Index definitions will be pushed to the server-side Regions where OQL Queries are generally executed.
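A minimal sketch of this combination (the class name is hypothetical):

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;
import org.springframework.data.gemfire.config.annotation.EnableIndexing;
import org.springframework.geode.config.annotation.EnableClusterAware;

// A minimal sketch: when a cluster is detected, both the "Customers" Region
// and the OQL Index on Customer.name are pushed to the servers.
@SpringBootApplication
@EnableClusterAware
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableIndexing
class ExampleIndexingApplication { }
```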
> **Tip:** Refer to the SDG Reference Guide on Configuring Indexes for more details.
It is often useful to define both Eviction and Expiration policies, particularly with a system like Apache Geode or Pivotal GemFire, which primarily keeps data in-memory, on the JVM Heap. As you can imagine, your data volume may far exceed the amount of available JVM Heap memory, and keeping too much data on the JVM Heap can cause Garbage Collection (GC) issues.
> **Tip:** You can enable off-heap (or main memory usage) capabilities by declaring SDG's @EnableOffHeap annotation.
Defining Eviction and Expiration policies is useful for limiting what is kept in memory and for how long.
While configuring Eviction is easy with SDG, we particularly want to call out Expiration, since configuring Expiration has special support in SDG.
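For completeness, here is a hedged sketch of an Eviction policy (the entry count is hypothetical, and the attribute names should be checked against your SDG version):

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableEviction;
import org.springframework.data.gemfire.config.annotation.EnableEviction.EvictionPolicy;
import org.springframework.data.gemfire.eviction.EvictionActionType;
import org.springframework.data.gemfire.eviction.EvictionPolicyType;

// A minimal sketch: cap the "Customers" Region at a hypothetical 10,000 entries,
// overflowing the least recently used entries to disk once the limit is reached.
@SpringBootApplication
@EnableEviction(policies = @EvictionPolicy(regionNames = "Customers", maximum = 10000,
    type = EvictionPolicyType.ENTRY_COUNT, action = EvictionActionType.OVERFLOW_TO_DISK))
class ExampleEvictionApplication { }
```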
With SDG, it is possible to define the Expiration policies associated with a particular application class type on the class type itself, using the @Expiration, @IdleTimeoutExpiration and @TimeToLiveExpiration annotations.
> **Tip:** Refer to the Apache Geode User Guide for more details on the different Expiration Types (i.e. Idle Timeout (TTI) vs. Time-To-Live (TTL)).
For example, suppose we want to limit the number of Customers maintained in memory for a period of time (measured in seconds) based on the last time a Customer was accessed (e.g. read). We can then define an Idle Timeout Expiration policy on our Customer class type, like so:
Customer entity class with an Idle Timeout Expiration policy.

```java
@Region("Customers")
@IdleTimeoutExpiration(action = "INVALIDATE", timeout = "300")
class Customer {

    @Id
    private Long id;

    @Indexed
    private String name;

    ...
}
```
The Customer entry in the "Customers" Region will be invalidated after 300 seconds (i.e. 5 minutes).
All we need to do to enable annotation-based Expiration policies is annotate our main @SpringBootApplication class with @EnableExpiration:
Enabling Expiration.

```java
@SpringBootApplication
@EnableExpiration
class SpringBootApacheGeodeApplication { ... }
```
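Similarly, here is a hedged sketch of a Time-To-Live policy (the Order entity and timeout are hypothetical, and the import package should be checked against your SDG version):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.expiration.TimeToLiveExpiration;
import org.springframework.data.gemfire.mapping.annotation.Region;

// A minimal sketch of a hypothetical entity: entries in "Orders" are destroyed
// 900 seconds after creation or last update, regardless of access.
@Region("Orders")
@TimeToLiveExpiration(action = "DESTROY", timeout = "900")
class Order {

    @Id
    private Long id;
}
```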
> **Note:** Technically, this entity-class-specific, annotation-based Expiration policy is implemented using Apache Geode's CustomExpiry interface.
> **Tip:** Refer to the SDG Reference Guide for more details on configuring Expiration, along with Annotation-based Data Expiration in particular.
Software Testing in general, and Unit Testing in particular, are very important development tasks to ensure the quality of your Spring Boot applications.
Apache Geode and Pivotal GemFire can make testing difficult in some cases, especially when tests have to be written as Integration Tests in order to assert the correct behavior. This can be very costly and lengthens the feedback cycle. Fortunately, it is possible to write Unit Tests as well!
Spring has your back and once again provides a framework for testing Spring Boot applications using either Apache Geode or Pivotal GemFire. This is where the Spring Test for Apache Geode & Pivotal GemFire (STDG) project can help, particularly with Unit Testing.
For example, if you do not care what Apache Geode or Pivotal GemFire would actually do in certain cases and only care about the "contract", which is what mocking a collaborator is all about, then you could effectively mock Apache Geode or Pivotal GemFire’s objects in order to isolate the "Subject Under Test" (SUT) and focus on the interaction(s) or outcomes you expect to happen.
With STDG, you don’t have to change a bit of configuration to enable mocks in the Unit Tests for your Spring Boot
applications. You simply only need to annotate the test class with @EnableGemFireMockObjects
, like so:
Using Mock Apache Geode or Pivotal GemFire objects.

```java
@RunWith(SpringRunner.class)
@SpringBootTest
class MyApplicationTestClass {

    @Test
    public void someTestCase() {
        ...
    }

    @Configuration
    @EnableGemFireMockObjects
    static class GeodeConfiguration { }
}
```
Your Spring Boot configuration of Apache Geode will return mock objects for all Apache Geode objects, such as Regions.
Mocking Apache Geode or Pivotal GemFire objects even works for GemFire/Geode objects created from the productivity annotations discussed in the previous sections.
For example, given the following Spring Boot, Apache Geode ClientCache application class:

Main @SpringBootApplication class under test.

```java
@SpringBootApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
class SpringBootApacheGeodeClientCacheApplication { ... }
```
The "Customers" Region defined by the Customer
entity class and created by the @EnableEntityDefinedRegions
annotation would be a "mock" Region and not an actual Region. You can still inject the Region in your test as before
and assert interactions on the Region based on your application workflows:
Using Mock Apache Geode or Pivotal GemFire objects.

```java
@RunWith(SpringRunner.class)
@SpringBootTest
class MyApplicationTestClass {

    @Resource(name = "Customers")
    private Region<Long, Customer> customers;

    @Test
    public void someTestCase() {

        Customer jonDoe = ...;

        // Use the application in some way and test the interaction on the "Customers" Region

        assertThat(this.customers).containsValue(jonDoe);

        ...
    }

    ...
}
```
There are many more things that STDG can do for you in both Unit & Integration Testing.
Refer to the documentation on Unit Testing for more details.
It is possible to write Integration Tests using STDG as well. Writing Integration Tests is an essential concern when you need to assert, for instance, whether your application's OQL Queries are well-formed. There are many other valid cases where Integration Testing is applicable as well.
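As a taste, here is a hedged sketch of such an Integration Test; the Customer constructor is hypothetical, and the test assumes the application under test creates a real, local "Customers" Region (for example, via @EnableEntityDefinedRegions with @EnableClusterAware and no cluster running):

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.apache.geode.cache.query.SelectResults;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.gemfire.GemfireTemplate;
import org.springframework.test.context.junit4.SpringRunner;

// A minimal sketch: assert that an OQL query is well-formed by running it
// against a real, local "Customers" Region rather than a mock.
@RunWith(SpringRunner.class)
@SpringBootTest
public class CustomerQueryIntegrationTests {

    @Autowired
    @Qualifier("customersTemplate")
    private GemfireTemplate customersTemplate;

    @Test
    public void findsCustomerByName() {

        // The Customer constructor used here is hypothetical.
        this.customersTemplate.put(1L, new Customer(1L, "Jon Doe"));

        SelectResults<Customer> results =
            this.customersTemplate.find("SELECT * FROM /Customers c WHERE c.name = $1", "Jon Doe");

        assertThat(results.asList()).hasSize(1);
    }
}
```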