Outline of Scrum Guide 2020

I have been keeping personal notes on the new Scrum Guide since its release in November 2020, but I only decided to summarize and outline it in an article now, in early 2021.

At its core, Scrum is a lightweight framework designed to guide individuals and teams in generating value by addressing complex problems.

Scrum has a key role called the Scrum Master, who cultivates an environment conducive to the following iterative process during a Sprint:

  1. Product Owner’s Role: Ordering work into a backlog.
  2. Team’s Responsibility: Transforming selected work into an Increment of value during the Sprint.
  3. Joint Inspection: Team and Stakeholders collaboratively inspect results and make adjustments.

While the structure of Scrum is straightforward, each component is integral. It’s not a detailed set of instructions but rather a guiding framework for team relationships and interactions.

Scrum Theory

Scrum is built on an iterative, incremental approach with four formal events within a Sprint. The Scrum Theory hinges on key attributes:
Empiricism: Knowledge derived from experience, emphasizing observation, learning, and informed decision-making.
Lean Thinking: A focus on reducing waste and prioritizing essentials.

Scrum events implement the following empirical pillars:

  • Transparency – Visibility of processes and work, enabling inspection
    low: diminishes value, increases risk
    high: enables inspection
  • Inspection – Frequent scrutiny of artifacts and agreed-upon goals, enabling adaptation.
    artifacts and agreed goals must be inspected frequently
    enables adaptation
    Scrum events are designed to provoke change
  • Adaptation – Rapid adjustments in response to process deviations or changes in product requirements
    process deviation, or a product result requiring process or material changes, triggers adjustments
    difficult if people are not empowered or self-managing
    adapt fast

Scrum Values

commitment, focus, openness, respect, courage

Scrum Team

The Scrum Team comprises a Scrum Master, a Product Owner, and developers,
forming a cohesive, cross-functional, and self-managing unit. This unit focuses on a single objective: the Product Goal.

cross-functional: all skills to create value
self-managing: internally decides what, when, and how

Small enough to remain nimble, big enough to complete significant work in a Sprint (<10)
A smaller team communicates better and delivers better.
A too-large team should consider reorganizing into multiple teams focused on the same product.
All are accountable for creating a valuable, useful Increment every Sprint.


Developers, committed to creating a usable increment, are accountable for various tasks, including:

  • creating a plan for the Sprint: the Sprint Backlog
  • instilling quality by adhering to the DoD
  • adapting the plan each day toward the Sprint Goal
  • holding each other accountable

Product Owner

Maximizes the value of the product resulting from the work of the team; accountable for:

  • developing and communicating Product Goal
  • creating, communicating backlog items
  • ordering backlog
  • ensuring backlog transparency, visibility, understanding

Can represent multiple stakeholders

Scrum Master

Establishes Scrum as defined, within both the team and the organization.
Accountable for the team's effectiveness by enabling practices.

  • coaching team ( self-management, cross-functionality )
  • helps the team focus on creating high-value Increments that meet the DoD
  • removes impediments
  • Scrum Events -> positive, productive, time-boxed
  • serves PO: Product Goal, backlog items, Product Planning, stakeholder collaboration

Scrum Events

Each event is an opportunity to inspect and adapt.
These events should minimize the need for other meetings. They are held at the same time and place.


The Sprint

Ideas turned into value
Fixed length (1 month or less)
All the work necessary to achieve the Product Goal, including Sprint Planning, Daily Scrums, Sprint Review, and Sprint Retrospective


During the Sprint:

  • quality is kept
  • the Product Backlog is refined as needed
  • scope may be clarified and renegotiated with the PO

long vs. short Sprint:
with short: less complexity, more clarity (Sprint Goal), more learning cycles, limited risk

Predictability via burn-down, burn-up, and cumulative flow charts

Sprint can be canceled by PO if the Sprint Goal becomes obsolete

Sprint Planning

Initiates the Sprint by laying out the work to be performed.
PO ensures the team collaborates on the plan by mapping the backlog items to the Product Goal
Team can invite other people for advice

Planning addresses:

  • Why is this Sprint valuable?
    The PO proposes how to increase the value of the product, then the team defines the Sprint Goal
  • What can be done this Sprint?
    Through discussion with the PO, devs select Backlog items and may refine them to increase understanding and confidence
    Challenges: how many items? Knowing past performance, upcoming capacity, and the DoD helps the forecast
  • How will the chosen work get done?
    For each item, devs plan the work needed to create an Increment that meets the DoD. This is often done by decomposing items into smaller work items

Sprint Backlog

Sprint Goal and Product Backlog items selected for Sprint plus their delivery plan
Planning is time-boxed to a maximum of 8 hours for a one-month Sprint.

Daily Scrum

Purpose: to inspect progress toward the Sprint Goal and adjust the upcoming work
15-minute event for devs, held at the same time and place every working day
PO and SM participate as developers if they are actively working on Sprint Backlog items
Structure and techniques are open, as long as it focuses on progress toward the Sprint Goal and produces an actionable plan
Improves communication, identifies impediments, promotes quick decision-making, eliminates the need for other meetings

Sprint Review

Purpose: to inspect the outcome of the Sprint and determine adaptations.
The team presents the work to stakeholders, and progress toward the Product Goal is discussed.
Based on accomplishments and environment changes, attendees collaborate on what to do next.
This is a working session; avoid limiting it to a presentation.
Time-box: 4 hours for a one-month Sprint

Sprint Retrospective

Purpose: to plan ways to increase quality and effectiveness
Inspects how the last Sprint went with regard to individuals, interactions, processes, tools, and the DoD
Discusses what went well, what problems occurred, and how those were (or were not) solved
Identifies and addresses improvements
The Retrospective concludes a Sprint
Time-box: 3 hours for a one-month Sprint

Scrum Artifacts

Each artifact represents work or value and is designed to maximize the transparency of key information. Each contains a commitment:

  • Product Backlog -> Product Goal
  • Sprint Backlog -> Sprint Goal
  • Increment -> Definition of Done

Product Backlog

An emergent, ordered list of improvements for the product
Refined items go into the Sprint Planning selection
Refinement – performed until an item fits into one Sprint and satisfies the Definition of Ready – is the breakdown process of splitting and clarifying items to make them more precise
Devs are responsible for sizing
PO influences devs by helping them understand the items

Commitment: Product Goal

Describes a future state of the product, as a target to plan against
The product is a vehicle to deliver value. It has a clear boundary, known stakeholders, and well-defined users/customers.
A product could be a service, a physical product, or something more abstract.
It is the long-term goal of the Team.

Sprint Backlog

composed of:

  • (why) the Sprint Goal
  • (what) the set of Backlog items selected
  • (how) an actionable plan to deliver the Increment

A highly visible picture of the work that devs plan to accomplish to achieve the Sprint Goal

Commitment: Sprint Goal

The single objective of the Sprint
Creates coherence and focus, encouraging the team to work together
Created during the Sprint Planning event
If the work turns out to be different than expected, devs and PO collaborate on the scope of the Sprint Backlog


Increment

A concrete stepping stone toward the Product Goal
Increments are additive to prior ones and work together to provide value.
An Increment must be usable.
Multiple Increments may be created in one Sprint.
They may be delivered before the Sprint ends; the Sprint Review should never gate release.
Work cannot be considered part of an Increment unless it meets the DoD.

Commitment: Definition of Done

A formal description of the state of the Increment when it meets the quality measures of the Product
When a backlog item meets the DoD, an Increment is born
Creates transparency through shared understanding
An item that doesn't meet the DoD shouldn't be released

The Scrum framework is immutable. While implementing only parts of it is possible, the result won't be Scrum.

It exists only in its entirety and functions well as a container for other techniques, methodologies, and practices.

Correlation Identifier

A correlation identifier for requests and responses is an essential feature of microservice platforms for monitoring, reporting, debugging, and diagnostics.
It allows tracing a single request through the application flow, where it is often handled by multiple downstream services.

Problem statement

The biggest trend in software development is the distribution of processing and systems. In software architectural system design there are multiple service layouts: single monolithic systems, SOA distribution, and nowadays microservice-level grouping of services and applications.
With multiple services, we may even have to consider multiple running instances of one specific service.

When looking into the logs, we can have a hard time tracking down the chain of calls and locating the specific instances hit by the request while serving a user request.

A request passes through the GraphQL service and then the User, Order, Payment, and Shipping services on different instances

You will need to centralize the log messages from multiple services and multiple instances,
order these log entries chronologically, and then somehow group the relevant ones.
This is where the correlation identifier comes into the picture. It uniquely identifies the request and is generated, passed through, and persisted along with the log entries of the systems.

With propagated identifiers instead of various unrelated ones, you'll have a consistent log id throughout a specific request's life-cycle, across every single service.


You have multiple options for how to generate, transport, and store this identifier.

Generally you want to generate it at the first entry point and pass it down.
In the layout above, the GraphQL proxy would serve as the entry point, which makes it easy to handle generation and addition in one place.
In other layouts I would suggest the following:
if you don't have an id, generate one and add it to the request
if you received an id, use it and pass it along on outgoing requests

You have to define your HTTP interface and pass this data on requests. Similar options are applicable to non-HTTP requests.

You can extend the request/response body (POST requests, responses with a payload); however, this way your model interfaces get distorted by a non-product-specific attribute.
A better option is to use a custom header parameter or a cookie (Header: x-correlation-id).
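The generate-or-propagate rule above can be sketched in plain Java. This is a minimal illustration, not a framework integration: the servlet/WebFlux filter wiring is omitted, and the header name follows the x-correlation-id convention mentioned above.

```java
import java.util.Optional;
import java.util.UUID;

// Sketch: reuse an incoming correlation id if present, otherwise mint a new one.
final class CorrelationId {
    static final String HEADER = "x-correlation-id";

    // incomingHeader is the value read from the request's x-correlation-id header, if any
    static String resolve(Optional<String> incomingHeader) {
        return incomingHeader
                .filter(id -> !id.isBlank())              // ignore empty/whitespace values
                .orElseGet(() -> UUID.randomUUID().toString()); // first entry point: generate
    }
}
```

At the entry point `resolve(Optional.empty())` produces a fresh UUID; downstream services call it with the received header value and get the same id back, which they then attach to their outgoing requests and log entries.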

What value?
The correlation id value should be a unique identifier, so a UUID is a good candidate. (While there is a theoretical chance of collision, the probability is very low.)

The correlation id could also contain business-level identifiers. This way the id is more informative, and it even adds extra query-grouping ability. In simple systems it can make sense to use a basic product id: it's simpler and smaller.
You can also combine the two approaches, using one or two unique identifiers plus a timestamp or a simply generated value, e.g. UniqueId-productType-UUIDsmallHash.


Once we have made all these decisions, we should put the identifiers into the application logs.
From then on, you only have to use an aggregated log solution to capitalize on the correlation id.
Track the request along the services with simple querying:
either with a simple grep, if the logs are accessible from a central place, or with log aggregation applications such as Loki, Loggly, Splunk, etc.

In the second article, Correlation Identifier – In Practice, I'll present in more detail examples of how you can effectively generate and process the identifier, and how to add the information to the logs.


Amazon uses X-Amzn-Trace-Id https://aws.amazon.com/premiumsupport/knowledge-center/trace-elb-x-amzn-trace-id/

Spring Cloud Sleuth – uses a trace-id and provides span-ids as work units

correlation-id is used by JMS systems, e.g. ActiveMQ, RabbitMQ

Spring WebFlux + Server Sent Events (SSE)


What options do we have to gather long-running processing data from the backend and send it out to the UI?
Usually the following:

  • open frequent connections – polling
  • keep an open connection – websocket
  • keep an open connection, one-way – Server Sent Events

Server Sent Events

With Server Sent Events you're able to send continuous updates from the server.
Imagine it as a client subscribing to a server for a stream.
Sometimes we don’t need a full bi-directional protocol (sending data back from client side), just a non-blocking data flow with the ability to process data row by row, while still receiving from the connection.
These are updates, time-split data chunks from the server side, to provide a fluent UI experience.



SSE limitations:

  • unidirectional protocol
  • simultaneous request limit (browser default: 6 connections)


With reactive programming, the purpose of this type of data processing is fluency and a non-blocking nature.

There are further conceptual advantages, like backpressure and the split, delayed, and limited processing of data.

It supports the asynchronous and event-driven programming concept with a fluent, stream-like API.

The reactive API has been present in Spring since version 5.

It can be used, with the specific content types, between:

  • backend and backend (SOA/microservice calls)
  • backend and UI (server to browser calls)

Example Project

I picked some sample data from a movie database.
Our sample application is very simple: we stream the file line by line, convert the data into a data model, and produce the result as a stream to the endpoint.
The client side subscribes to the backend endpoint and processes the data line by line.
This way we have a streamlined flow from the data source all the way to the client-side visualization.

Backend stream logs

UI streaming processing and visualization

Besides the standard Flux stream, we only needed to use the text/event-stream media type. On the UI client side, the EventSource interface is required to capture the stream.
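Spring does the event framing for you, but it helps to see what actually goes over the wire. The sketch below is a plain-Java formatter for the text/event-stream format, simplified to cover only `data:` fields (no `id:` or `event:` lines); it is an illustration, not the project's code.

```java
// Sketch of the text/event-stream wire format produced by an SSE endpoint.
// Each event is one or more "data:" lines, terminated by a blank line;
// the HTTP response itself carries Content-Type: text/event-stream.
final class SseFrames {
    static String dataEvent(String payload) {
        StringBuilder frame = new StringBuilder();
        // a multi-line payload becomes several consecutive "data:" lines
        for (String line : payload.split("\n", -1)) {
            frame.append("data: ").append(line).append('\n');
        }
        return frame.append('\n').toString(); // the blank line ends the event
    }
}
```

So a payload of "hello" travels as `data: hello` followed by an empty line, which is exactly the chunk the browser's EventSource hands to your `onmessage` callback.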

For more details see the Webflux + SSE example project: https://github.com/PazsitZ/webflux_sse

Testing – Test Doubles

This topic tries to cover Test Doubles and mocks at a high level. There are many named concepts around: Fake, Mock, Spy, Stub, etc.

Testing with mocks and stubs primarily happens in unit and integration testing.
In the object-oriented world, bigger or smaller units of code are tested, and even with SRP-compliant objects we still have to deal with dependencies and other object structures.
To get a clean cut and provide a stable state, we should fix those parameters/dependencies that are not in question from the testing perspective.

Theory on the Test Doubles

Let’s see a black-box case on signature level.

  • Objects a and b with methods a.foo(..), b.bar()
  • a(b), a contains b as constructor dependency
  • the method in question to test: a.foo(x) : y

So while the output y depends on the input parameter x, there is the possibility that our result also depends on b / b.bar().
Here comes the idea to fix b.bar(), so we can make sure only x has effect on y.
By replacing b we’re able to isolate and focus on the code being tested and not on the behavior or state of external dependencies.
We should replicate the object b, so it returns a value just as a real b object would, but ideally a fixed, predefined one.

b is replaceable by satisfying the contract, which is ideally an interface, while the correctness should remain.

Proof of replaceability (correctness): see the Liskov Substitution Principle. There is criticism of LSP when considering super- and subtypes; however, at the interface–implementation level, the interface contract exactly defines the behavior, regardless of the small details of the implementation.
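The a/b setup above can be written out as a hand-rolled stub in plain Java. The names A, B, foo, and bar come from the text; the fixed return value of 10 is an arbitrary choice for the sketch.

```java
// The collaborator contract that b must satisfy.
interface B {
    int bar();
}

// The unit under test: a.foo(x) depends on both x and b.bar().
final class A {
    private final B b;
    A(B b) { this.b = b; }                // a(b): constructor dependency
    int foo(int x) { return x + b.bar(); }
}

// A stub: satisfies the B contract with a fixed, predefined value,
// so only x can influence the result of foo.
final class StubB implements B {
    @Override public int bar() { return 10; }
}
```

With `new A(new StubB())`, the result of foo is fully determined by x, which is exactly the isolation the section describes.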

Type of Test Doubles

I borrowed Martin Fowler's list of Test Doubles:

  • Dummy
    a simple, non-production object, used solely to fill unused inputs. Simplest example: DummyRestApiExample
  • Fake
    real objects, but simplified: either data-only, or objects with very limited, partial functionality
    examples: file-fake-java
  • Stub
    stubs are usually dumb, fixed with static values. A stub returns a specific result based on a specific set of inputs. It usually provides a bigger but more rigid set of results or structure compared to mocks.
    With it you can primarily verify state.
    example: during TDD you'll create an empty shell with hard-coded return statements to replicate a non-existing service.
  • Mock
    replaces the whole object, with the possibility of dynamic behavior (varying by call count, varying by parameters, and even the ability to verify calls)
    methods are overridden either with a static value or with logic
    on undefined calls it returns an exception, a default value, etc.
    With it you can verify not just state but also behavior.
    example: a mock object returns a fixed value or a small logical answer, then you verify the call and the input parameters
    example: InputPopulatorActionTests.java
  • Spy – partially replacing methods, half mocks
    methods are overridden with a static value or logic
    on undefined calls it uses/invokes the original object's method
    example: MockitoSpyUnitTest.java, AbstractPageObjectTests.java

Other Tools for Testing

You can always capitalize on builders; mock builders provide an easy way to create similar mocks in a dynamic manner and make the code more fluent: PizzaUnitTest.java
With a fluent-interface-driven builder you can build even more complex logic.
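A minimal sketch of such a fluent test-data builder, in the spirit of the linked PizzaUnitTest.java. The Pizza class, its fields, and the with* method names here are hypothetical, chosen for illustration only.

```java
import java.util.ArrayList;
import java.util.List;

// The object the tests need instances of.
final class Pizza {
    final String size;
    final List<String> toppings;
    Pizza(String size, List<String> toppings) {
        this.size = size;
        this.toppings = toppings;
    }
}

// Fluent builder: each with* method returns the builder itself,
// so test setup reads as one sentence and only overrides what the test cares about.
final class PizzaBuilder {
    private String size = "medium";                       // sensible default
    private final List<String> toppings = new ArrayList<>();

    PizzaBuilder withSize(String size) { this.size = size; return this; }
    PizzaBuilder withTopping(String topping) { this.toppings.add(topping); return this; }
    Pizza build() { return new Pizza(size, List.copyOf(toppings)); }
}
```

Usage: `new PizzaBuilder().withSize("large").withTopping("cheese").build()` — defaults cover everything the test doesn't mention, which keeps similar test objects cheap to vary.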

Object Mothers help to create objects pre-filled with example data. This helps greatly with code reuse:

// https://blog.codeleak.pl/2014/06/test-data-builders-and-object-mother.html
public class TestUsers {
    public static User aUser() {
        return new User("John Smith", "jsmith", "42xcc", "ROLE_USER");
    }
    // other factory methods
}

User user = TestUsers.aUser();

And finally: know your tools.
As an example from Mockito: if you need a long chain of object calls, maybe you don't need to build a whole structure of objects.

Foo mock = mock(Foo.class, RETURNS_DEEP_STUBS);
// note that we're stubbing a chain of methods here: getBar().getName()
when(mock.getBar().getName()).thenReturn("deep");




Multilevel Keysearch

Why? What’s this all about?

This is a library to handle (store and search) multi-level key attributes.
The implementation was driven by the need to:

  • handle multiple-level keys
  • search flexibly
  • rank/order the matches

Let’s consider data with multiple levels, for example: category / subcategory / group / item.
If you want to find a specific, let’s say, subcategory, that’s an easy task: you filter/map and you get it. But what if you have multiple attributes to match, or multiple key lengths? How will you sort the results? The task can still be achieved, but it requires more and more code and you lose fluency.

The details

The idea was that you can easily instantiate keys, then define a collection. This way you can pre-define the filter and sort algorithms on the collection itself, and then use it almost the regular way.
The key has the capability to define a wildcard, which is customizable.

As an example, we define a flexible matching key length and a comparison that uses null as a wildcard:

new CustomMultiKey(    

I chose a map to store the keys; this way a lookup can provide a value, or the key itself can serve as the value.

Let’s see some examples:

MultiKeyMap<String, String> map = getMap();
String aHuman = map.getIfAbsentFirst(map.newKey().addAny().add(HUMAN));

This way you will match all the human subcategory entries, and since your map defines a sort order, you will get the first one.

On the key and map you can define:

  • keylength rule
  • Comparison rule
  • Sorting rule

Implementations of all of the above are already provided, but through the interfaces you can easily supply further custom rules.
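The wildcard-based lookup described above can be illustrated with a small plain-Java sketch. This is the idea only, not the library's actual API: the class name MultiKey and its methods are hypothetical, and null plays the wildcard role, matching any value at that level.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

// Sketch of a multi-level key where null acts as the wildcard.
final class MultiKey {
    private final List<String> parts;

    MultiKey(String... parts) { this.parts = Arrays.asList(parts); }

    // A key matches a concrete path of the same length when every
    // non-null (non-wildcard) part equals the corresponding path element.
    boolean matches(List<String> path) {
        if (parts.size() != path.size()) {
            return false; // key-length rule: sizes must agree in this sketch
        }
        for (int i = 0; i < parts.size(); i++) {
            if (parts.get(i) != null && !Objects.equals(parts.get(i), path.get(i))) {
                return false;
            }
        }
        return true;
    }
}
```

A key like `new MultiKey(null, "human")` then plays the role of `newKey().addAny().add(HUMAN)` in the example above: it matches every two-level path whose second element is "human", whatever the first level is.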

What features are ahead?

My future ideas revolve around a more query-like syntax, parallel search, and multi-rule definitions.

The source is available: https://github.com/PazsitZ/multilevel-keysearch
The jars are available: http://pazsitz.hu/repo/hu/pazsitz/keysearch/

Class-Method Caching

Caching is quite a common need, and you can find many out-of-the-box solutions for it.

But what if you want to cache a whole class, with all its methods, maybe across your whole application?
My solution is based on a proxy class, and this way it applies caching to all method invocations. The cache configuration can of course be customized, but the basics can be seen in the example:

ClassInterface instance = new ClassImpl(); // ClassImpl implements ClassInterface
ClassInterface cachedClass = CacheClass.localCache(ClassInterface.class, instance, CacheClass.CacheType.ACCESS, 30);

So in the above example we need an interface, a class instance, and some configuration options for caching.
This way we get a proxy class that caches our method calls wherever we use the cachedClass instance.

This is all good, but what if we want caching across the whole application?

public ClassInterface getCachedClass() {
    ClassInterface instance = new ClassImpl(); // ClassImpl implements ClassInterface
    ClassInterface cachedGloballyClass = CacheClass.globalCache(ClassInterface.class, instance);
    return cachedGloballyClass;
}

This way, wherever we inject the cached instance, all invocations will point to the same instance and will thus share the cache pool.

My implementation uses the Guava caching library, hard-wired for now.
But any caching library could serve inside the proxy solution.
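As a self-contained illustration of the proxy idea (not the project's CacheClass API), here is a minimal memoizer built on the JDK's dynamic proxies. The Calculator interface is a stand-in example, and a plain HashMap replaces the Guava cache, so there is no eviction in this sketch.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Example interface standing in for ClassInterface.
interface Calculator {
    int square(int x);
}

// Wraps any implementation of an interface in a proxy that caches every method call.
final class CachingProxy {
    @SuppressWarnings("unchecked")
    static <T> T cache(Class<T> iface, T target) {
        Map<List<Object>, Object> results = new HashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            // cache key: method name plus argument values
            List<Object> key = Arrays.asList(method.getName(),
                    args == null ? List.of() : Arrays.asList(args));
            if (!results.containsKey(key)) {
                results.put(key, method.invoke(target, args)); // first call hits the target
            }
            return results.get(key); // later calls are served from the cache
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] {iface}, handler);
    }
}
```

Because every method invocation funnels through the single InvocationHandler, the whole class is cached at once, which is exactly what makes the proxy approach attractive compared to annotating individual methods.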

The source is available: https://github.com/PazsitZ/methodcaching
The jars are available: http://pazsitz.hu/repo/hu/pazsitz/methodcaching/