SoftwareMill Blog http://localhost:8000/blog.rss My Feed Description en-us SoftwareMill Blog http://localhost:8000/akka-durable-state

Actor persistence without event-sourcing complexity

Having persistent actors with event journaling can be an excellent choice for the right use case, e.g. if you know you’ll benefit from having a history of how the entity state changed over time. Unfortunately, it comes with a cost and complexity. Traditional event sourcing architectures are not easy by nature, and the same applies to Akka persistent actors built on top of them. You have commands, events, state changes, recovery, effects that have to be run in response to an event (but cannot be run from event handlers, to avoid side effects on recovery), etc. All that makes any reasonably big application inherently complex, but that’s one of the tradeoffs you need to consider when choosing your weapons.

A few months ago, the Akka team released a new flavour of actor persistence called Durable State that aims to simplify how actor state is managed and saved. Obviously, not all is gold, so don’t expect it to be a drop-in replacement for event-based persistence. While it’s simpler and easier to use, it also trades off some of the core traits and benefits of the classic event sourcing approach. Nevertheless, it may be worth giving it a look and keeping it at the back of your head for potential use cases.

Entering durable state

This new actor persistence flavour seems to resemble the classic CRUD approach while maintaining the well-known actor model of handling messages or commands. The difference is the API, which conceptually is a function (State, Command) => State, while for event-based persistence it is (State, Event) => State.

At first glance, they look pretty similar. The difference is huge, though. In the event-based version, you need to have an event at hand in order to come up with a new state. That means you have to have a command handler that produces that event when executed. So it’s not the command handling that changes the state; it’s the event that, when persisted and applied to the current state, makes it transition to the new one.

Durable state shortens the cycle by eliminating events from the picture. In the durable state flavour, you just handle a command and, as a result of command handling, you produce the new state that gets persisted. No event handlers that modify state, no recovery by replaying a sequence of events to arrive at the current state.
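
To make the shape of that API concrete, here is a minimal sketch of a durable-state counter written with Akka Typed’s DurableStateBehavior (the Counter entity and the Increment command are made up for illustration, they don’t come from the post):

import akka.persistence.typed.PersistenceId
import akka.persistence.typed.state.scaladsl.{DurableStateBehavior, Effect}

object Counter {
  sealed trait Command
  case object Increment extends Command

  final case class State(value: Int)

  def apply(entityId: String): DurableStateBehavior[Command, State] =
    DurableStateBehavior[Command, State](
      persistenceId = PersistenceId.ofUniqueId(entityId),
      emptyState = State(0),
      commandHandler = (state, command) =>
        command match {
          // the command handler returns the complete new state to persist - no event in between
          case Increment => Effect.persist(State(state.value + 1))
        }
    )
}

With event sourcing, the same handler would instead emit an event, and a separate event handler would derive the new state from it.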

This lack of events already reveals the most important differences compared to classical persistence:

  • No events means no activity log, no history of what happened to the entity and how the state evolved over time.
  • No ability to replay and reprocess events, as only a state snapshot is stored.
  • It's the command that comes up with a new state that gets persisted, which means only the most recent state is saved and each command overwrites the previous one with an updated version. You can think of it as always having only one snapshot of state in classical persistence.

That means...

Scala,Akka,Software Engineering http://localhost:8000/akka-durable-state Thu, 21 Apr 2022 11:44:00 +0000
SoftwareMill Blog http://localhost:8000/the-software-development-process-steps-at-softwaremill-part-2

5 Steps of the software development process at SoftwareMill

The time has come - let me take you once again “behind the scenes” of SoftwareMill’s software development process! In the previous part, I described what happens from the moment you contact us, through determining your business requirements, to gathering the tech dream team to work on your software project.

Remember the analogy about learning to play an instrument from the last piece? This part has more of a “going on tour with your band to get the work done” vibe! I’ll tell you what the whole (remote) communication looks like inside our teams, what steps our engineers take to make your software fault-tolerant, and, last but not least, what kind of “extra” qualities you can expect from our developers. So it’s time to pack your bags, tune up the guitars, and go - the world is waiting for you to rock it!

What is the Software Development Life Cycle?

Let’s start with some theory that will help you understand why we work the way we do. Building software requires commitment. You need to make sure that you can adapt to any changes that occur, in a way that is as unnoticeable to your product users as possible.

This is where some kind of framework comes in handy - or, sticking to the music performance analogy, a rider. A framework like that can’t be too strict, but it should give you the feeling that you are prepared for 95% of the challenges that might happen along your development journey. In the IT world, it’s called the Software Development Lifecycle.

SoftwareMill’s Software Development Lifecycle

By definition, the Software Development Life Cycle (SDLC) is a framework that defines the steps involved in the development of software at each phase. It covers the detailed plan for building, deploying, and maintaining the software. SDLC defines the complete cycle of development, i.e. all the tasks involved in planning, creating, testing, and deploying a software product.

At SoftwareMill, we stick to these rules, leaving some room for “intentional flexibility” (kudos to Ward for that name, loving it!). That’s why the project scoping part of your software development is so important - it affects the way we’ll adjust our process to your needs! As a result, we maintain a steady and predictable work schedule that can be bent a little if anything unexpected happens.

Step 1 - Your idea: general tech analysis & gathering requirements

Let’s start with making sure that our engineering work meets your business and strategic goals! Many times, we perform workshops for our clients and their development team members. We discuss the general software architecture, the software solutions’ details, and, once again, dig into details of what our clients' needs are. One of the methodologies our engineers use during these workshops is Event Storming.

Choosing the right tech stack

Here we also determine the technology requirements. And, once again, the approach differs a bit depending on whether our engineers join an already...

Software Development,Distributed Teams,Agile Methodology,Continuous Delivery http://localhost:8000/the-software-development-process-steps-at-softwaremill-part-2 Thu, 14 Apr 2022 12:26:00 +0000
SoftwareMill Blog http://localhost:8000/big-data-vs-data-science

Let’s dive deep into the important concepts around data and technology. In this post, I will cover definitions, tools, and examples of possible applications, as well as how Big Data and Data Science relate to each other.

Why Big Data and Data Science are important

Big Data and Data Science are two concepts visible in all discussions about the potential benefits of enabling data-driven decision making. It’s been measured that 90% of the world's data has been created in the last two years alone, which gives us an incredible 2.5 quintillion bytes of data created every day. There are zettabytes of information available that we all leave behind when buying things, selling things, and leaving digital footprints of our modern daily life, so it’s only natural that the data-driven approach rules everything from business automation to social interaction.

Learning from digital data and getting a broader and more comprehensive perspective on processes and the future gives early adopters of data technologies a chance to seize strategic opportunities and drive full speed ahead. Advances in cloud computing and machine learning additionally allow the extraction of coherent, strategic insights from digital residues, while IT engineers and scientists come together to help businesses make sense out of complex data and drive profits.

What is Big Data?

Big Data refers to an ever-growing volume of information of various formats that belongs to the same context. But when we say Big Data, how big exactly are we talking? We usually refer to data sets larger than terabytes or petabytes, but Big Data is not only about large amounts of data.

Current definitions of Big Data are dependent on the methods and technologies used to collect, store, process, and analyse available data. The most popular definition of Big Data focuses on data itself by differentiating Big Data from “normal” data based on the 7 Vs characteristics:

  • Volume - is about the size and amounts of Big Data that is being created. It is usually greater than terabytes and petabytes.
  • Velocity - is about the speed at which data is being generated and processed. Big Data is often available in real-time.
  • Value - is about the importance of insights generated from Big Data. It may also refer to profitability of information that is retrieved from Big Data analysis.
  • Variety - is about the diversity and range of different data formats and types (images, text, video, xml, etc.). Big Data technologies evolved with the prime intention to leverage semi-structured and unstructured data, but Big Data can include all types: structured, unstructured, or combinations of structured and unstructured data.
  • Variability - is about how the meaning of different types of data is constantly changing when being processed. Big Data often operates on raw data and involves transformations of unstructured data to structured data.
  • Veracity - is about making sure that data is accurate and guaranteeing the truthfulness and reliability of data, which refers to data quality and data value.
  • Visualisation - is about using charts and...
Big Data,Data Science,Machine Learning http://localhost:8000/big-data-vs-data-science Tue, 12 Apr 2022 12:26:00 +0000
SoftwareMill Blog http://localhost:8000/bootzooka-2022-cats-effect-3-autowire-and-tapir

Bootzooka is our template project to quickly bootstrap a simple Scala+React-based microservice or webapp.

It contains the basic infrastructure such an application might need: relational database access, an HTTP API, fat-jar/docker/Heroku deployments. The basic user-management functionality is available as well, that is registering users, logging in and resetting passwords, serving both as a template for developing other services and as a jumpstart to focus on core business requirements, instead of reimplementing security.

By design, it’s an “opinionated” project, using a hand-picked set of libraries. While it’s fairly easy to replace a particular component, the Bootzooka project focuses on providing a good developer experience with the choices we’ve made.

However, we all know that our industry is in constant flux! That means that from time to time, we need to update the stack; 2022 is no different. There are a couple of important updates that we’d like to share.

cats-effect 3

The first major change is updating from Monix to cats-effect 3. While Monix served us well, it seems that most new development is currently focused on cats-effect, so this platform seems to be a more Future-proof (pun intended) solution.

Both cats-effect and Monix are functional effect systems, hence the philosophy of structuring code remains the same. On the surface, it might seem that the migration amounts to changing the datatype used for describing side-effecting computations to IO, instead of Monix’s Task. And indeed, that was one of the initial steps. But as always, the devil is in the details!
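
As an illustration only (findUser and User below are hypothetical, not Bootzooka’s actual code), the mechanical part of the migration looks roughly like this:

import cats.effect.IO

final case class User(id: Long, name: String)

object UserRepository {
  // before (Monix): def findUser(id: Long): Task[Option[User]] = Task { ... }
  // after (cats-effect 3): only the effect type changes, the structure stays the same
  def findUser(id: Long): IO[Option[User]] =
    IO(Some(User(id, "alice"))) // suspend the (here: stubbed) side-effecting lookup in IO
}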

Correlation ids

One of the features of Bootzooka is its support for correlation ids. A correlation id is an identifier that:

  • is read from the incoming HTTP request’s headers, or a new one is generated,
  • is associated with the request for the whole duration of request processing,
  • is added to any outgoing HTTP requests in a header,
  • is included in all log messages produced by the application during request processing.

Before, this was implemented using TaskLocals, transparently to the user; the signatures of the business logic methods were unaffected and unaware that a correlation id is being passed behind the scenes. Because TaskLocals were also available as a ThreadLocal, integration with logging (slf4j/logback in our case) was possible using MDC.

This changed in cats-effect 3. There is a similar construct, IOLocal; however, it has some limitations compared to its predecessor. Firstly, its value is not available as a ThreadLocal - you can only read it using IOLocal.get: IO[A]. Note that the result type is not the A value itself, but an effect yielding the local value, which means that this must be included in the overall effect returned by the method in question.
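
A minimal sketch of what this looks like in practice (the names and values below are made up, this is not Bootzooka’s actual code):

import cats.effect.{IO, IOLocal}

object CorrelationIdExample {
  def handleRequest(correlationId: IOLocal[Option[String]]): IO[String] =
    for {
      cid <- correlationId.get // an IO[Option[String]], not a plain value
      result <- IO.pure(s"handled request, correlation id: $cid")
    } yield result

  val program: IO[String] =
    for {
      local <- IOLocal(Option.empty[String]) // creating the local is itself an effect
      _ <- local.set(Some("abc-123"))        // e.g. set from an incoming request header
      result <- handleRequest(local)
    } yield result
}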

To include the correlation ids in log messages, we had to create a thin wrapper on top of slf4j’s loggers to read the current correlation id from the IOLocal, populate the MDC, and only then call the underlying logging. This also means that our logging calls now...

Scala,Cats Effect,Tapir,Bootzooka,Autowire http://localhost:8000/bootzooka-2022-cats-effect-3-autowire-and-tapir Fri, 08 Apr 2022 10:00:00 +0000
SoftwareMill Blog http://localhost:8000/overview-of-next-generation-java-frameworks

What constitutes a modern framework? What criteria should it fulfill to be considered a next-gen one? I think the following aspects are essential when looking for the answer:

  • support for writing microservices,
  • integration with cloud providers,
  • startup time and memory consumption,
  • possibility to write reactive applications,
  • support for the newest versions of the JDK,
  • various integrations with external systems.

In this post, I'd like to briefly describe frameworks that could be named modern ones. I will focus on open source options only.

Spring Boot

Spring Boot emerged during a natural evolution of the Spring Framework, the most popular Java framework. This should not be surprising since it has been on the market for a long time. It is backed by VMware.

By default, it runs on the Tomcat container (other servlet containers are available as well); however, we can also apply a reactive approach based on the Reactor project and run it on Netty. The Spring Boot actuator provides superb support in terms of management and monitoring, which is very useful when you think about microservices. In addition, you can find plenty of sub-projects that bring integrations with external systems.

Developers can use a rich set of cloud-specific extensions based on third-party libraries gathered under the Spring Cloud project. These cover topics like cloud provider integration, distributed configuration, service discovery, ensuring the resilience of services, and monitoring them. All of this is battle-tested across plenty of projects.

The framework uses annotations extensively, with no preprocessing during the compilation phase. That means doing reflection calls and creating proxies at runtime. As a result, we may get a bigger memory footprint and slower startup time.

Spring offers Spring Native to provide a native deployment option with GraalVM. However, the project is in the beta phase and is not as mature as the GraalVM support in the other projects below.

VMware prepared an event at the end of January 2022, where Spring Framework 6 and Spring Boot 3 were presented. The baseline for the new version is Java 17, although you can already use Java 17 with the recent releases.

Micronaut

Micronaut is one of the youngest kids on the street, provided by the Object Computing company. Its development started in 2017, and currently, the Micronaut Foundation manages it.

The framework was created from the ground up to support work with microservices and serverless functions. The creators advertise it as a natively cloud-native stack, meaning various aspects of cloud deployment (service discovery, distributed tracing, fast startup, and small memory footprint) have been considered while designing the framework. Although it is cloud-focused, we can create command-line applications as well.

Thanks to the ahead-of-time compilation and resolving DI during the build phase, the memory usage and startup times are low. Such features are crucial when working with serverless functions.

Micronaut is compatible with Java, Groovy, and Kotlin. In addition, it works with Java 17, uses Reactor to provide reactive streams, and includes plenty of integrations with external systems and cloud providers. We can create...

Java,Spring Boot,Micronaut,Quarkus,Helidon http://localhost:8000/overview-of-next-generation-java-frameworks Fri, 01 Apr 2022 09:00:00 +0000
SoftwareMill Blog http://localhost:8000/fancy-strings-in-scala-3

Let’s put some of the new Scala 3 features to work! While we try to avoid using Strings as much as possible, we still end up manipulating them in our codebases daily. However, quite often, these are not arbitrary strings but ones with some special properties. In these cases, Scala’s compiler might be able to offer some help.

Our goal will be to create dedicated types for non-empty and lowercase strings. We’ll use opaque types and inlines, which are new features of the Scala 3 compiler.

Non-empty strings

First, let’s create a zero-cost abstraction that will represent non-empty strings. For that, we’ll define an opaque type NonEmptyString:

opaque type NonEmptyString = String

At runtime, values of type NonEmptyString will be simply Strings (hence, there’s no additional cost to the abstraction that we are introducing). At compile-time, however, these two types are treated by the compiler as completely unrelated, except for the scope in which this type alias is defined. Here, if this is a top-level definition, the scope will be the entire file, but if we created the type alias in an object, the scope in which it’s known that a NonEmptyString is in fact a String would be that object.

A type alias is a good start, but we’ll also need some way to “lift” values from a String into our new type. First, given an arbitrary value at runtime, we can write a function that returns an optional NonEmptyString. Note that this definition needs to be placed next to the opaque type alias, as the compiler needs to know that these types are indeed equal:

opaque type NonEmptyString = String

object NonEmptyString:
  def apply(s: String): Option[NonEmptyString] = 
    if s.isEmpty then None else Some(s)

It’s worth noting that here we do have some additional runtime cost - allocating the option. However, we can do better with constants: we can check at compile time whether they are empty or not! For this, we’ll use an inline method that is guaranteed to be evaluated by the compiler as the compilation happens. In the inline method’s definition, we’ll use an inline if, which can be used to verify whether a constant expression is true at compile time:

import scala.compiletime.{error, requireConst}

inline def from(inline s: String): NonEmptyString =
  requireConst(s)
  inline if s == "" then error("got an empty string") else s

We’re also using scala.compiletime.requireConst to get a nice error message if the value passed as a parameter is not a constant (but e.g. a value reference). If the string is empty, we’re using scala.compiletime.error to report a custom error message.

Finally, we need a way to upcast a NonEmptyString into a String. This can be done using an implicit conversion. In order to avoid an additional runtime method call, we define the conversion as an inline method as well (evaluated at compile-time). This conversion will be added automatically by the compiler, given that we import it into scope.
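
A simplified, non-inline sketch of such a conversion (not necessarily the exact definition from the post) could look as follows; it has to be defined next to the opaque type, so that the compiler still knows the two types coincide:

given nonEmptyStringToString: Conversion[NonEmptyString, String] = s => s

Once this given is imported into scope, a NonEmptyString can be passed wherever a String is expected; the inline variant described above additionally avoids the extra method call at runtime.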

As Julien noted, a better way to...

Scala,Functional Programming,Type Safety,Scala 3 http://localhost:8000/fancy-strings-in-scala-3 Wed, 30 Mar 2022 12:47:00 +0000
SoftwareMill Blog http://localhost:8000/visualise-your-data

This post is a short story about how to use a JavaScript library to see what a backend developer cannot see from raw data.

Background

I’m currently working on a project where we are trying to solve the Vehicle Routing Problem (VRP in short) at scale, which is a more complicated variation of the Travelling Salesman Problem (TSP in short) that you are probably familiar with.

The project is about preparing the most optimal routes for couriers by taking into account a number of factors and different constraints, like working time windows, service times at depots, unique skill requirements for a courier, and so on. These movable variables impact the calculation of the optimal route the most and can heavily reorder the way the final route is computed. Yet the base factor for all the calculations is the time & distance cost from point to point.

You are probably already imagining a Google Map with waypoints and routes between them, but it isn’t that easy.

The problem

The output from the computation is a JSON payload with routes for each courier, each consisting of a number of stops with defined jobs to either pick up or deliver a package — you can even have multiple pickups and deliveries at the same location. Each location is identified by latitude and longitude coordinates. This makes it very hard to analyse the output by eye and to spot potential problems.

Also, as I’m working on the backend part of the project, I don’t have access to all the customer data — this is a SaaS platform with a plethora of clients. All I get is anonymous data with couriers’ IDs, orders’ IDs, and coordinates.

Any change to the logic, even a small adjustment of the service time behaviour, can impact the final result a lot. It can produce unsustainable routes with strange behaviour, like leaving a depot, picking up a package, and going back to the depot to pick up the rest of the orders.

Any manual analysis of the results is very time-consuming, not only because of decoding the coordinates but also because of the large volume of data — just imagine how big the DHL fleet of couriers is, for example.

Solution

To solve the presented problem, I needed a tool to quickly visualise the output on a map with additional info like the courier ID, the coordinates of each location, and the jobs to do at each stop. As I had never built such a tool, I was looking for something simple to use, and that’s how I found the Leaflet library.

Leaflet is a JavaScript library for creating interactive maps. It has a large number of features, but from my perspective, I just needed two things: draw a line representing the calculated route and put markers on each stop with detailed info.

Implementation

I’m familiar with the JavaScript and NodeJS environment — maybe not an expert but I’m not scared to use such tools. Also, by having a support group of...

JavaScript,Software Engineering,Data Visualization,Leaflet http://localhost:8000/visualise-your-data Thu, 24 Mar 2022 13:00:00 +0000
SoftwareMill Blog http://localhost:8000/three-tools-to-improve-your-scala-workflow

A programming language itself, while being definitely in the centre of interest, is only one of the components of a programming ecosystem. Another crucial component is tooling - and even given the best programming language out there, productivity might be poor if the tooling isn’t right.

Luckily enough, Scala is not only a great programming language, but a significant investment has also been made in the tooling area. There are a couple of tools that I find especially useful in my day-to-day work. And while I might be biased, as apart from commercial projects I also work a lot on open-source ones such as tapir, hopefully you’ll find the below useful in your everyday work.

Formatting

First and foremost comes scalafmt. Scala, being a flexible language, gives you a number of possibilities to pick from when creating abstractions, choosing either a more object-oriented or a more function-oriented approach, but it also allows for a great deal of flexibility when it comes to syntax.

While the discussions on which programming construct to choose to create an abstraction are good to have, as they directly impact future extensibility, maintainability, and readability of the code, code-formatting discussions are almost entirely bikeshedding. Scalafmt removes all that.

All of the open-source projects that I work on (and most closed-source ones) use scalafmt for formatting. Personally, I use format-on-save in my editor, but when you do the formatting is, of course, up to you. The important requirement is that you format before pushing to version control. Not only does it remove needless discussions, but it also causes diffs to contain less noise - no more “artificial” changes because someone added indentation or removed a newline.

Nicely formatted source code using scalafmt

Scalafmt itself can be configured, and its configuration might be a source of pointless debates as well. That’s why I simply use the default configuration; it’s good enough and, again, ends any discussions on the subject before they even start. Well, to be honest, I do have one configuration override - max line length. Here’s the .scalafmt.conf file that configures the formatting:

version = 3.4.3
maxColumn = 140

If there’s one thing I might wish for in scalafmt, it’s that it would always create the same “canonical” version of the source code on reformat, regardless of the newlines you add manually. That’s not always the case currently - scalafmt allows some of your syntax-related inventions to be preserved. Maybe in a future release? :)

Documentation

Docs aren’t the most exciting thing to work on in a project, but they are a necessity. One of the reasons we don’t like working on documentation is that it gets outdated so quickly. But there’s a tool for that: mdoc. It’s a markdown pre-processor that type-checks and optionally runs any code snippets that you embed in your documentation.
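
As a hypothetical example (the snippet below is made up), a markdown page processed by mdoc might contain a fenced block like:

```scala mdoc
val xs = List(1, 2, 3)
xs.map(_ * 2)
```

mdoc type-checks the snippet when the docs are built and can include the evaluated results in the generated markdown, so a broken example fails the build instead of silently shipping stale docs.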

Not only are you sure that any code examples in the documentation are up-to-date; it also forces you to put all the necessary imports next to the code....

Scala,Scala Steward,mdoc,scalafmt,Tooling http://localhost:8000/three-tools-to-improve-your-scala-workflow Mon, 21 Mar 2022 11:55:00 +0000
SoftwareMill Blog http://localhost:8000/autowire-overview

Recently, we released the first version of a new MacWire feature called autowire. It derives from the well-known wire function, but introduces a few really interesting changes, namely:

  • recursive wiring,
  • explicit list of instances that are used in the wire process,
  • integration with cats-effect library.

In this article, I’ll try to describe this feature in-depth and also walk you through some examples.

What is MacWire?

If you’re not yet familiar with MacWire: it’s a lightweight and non-intrusive Scala dependency injection library, and in many cases a replacement for DI containers.

In case you're interested in DI in Scala, I highly recommend this guide.

The goal of autowire

autowire is a macro-based functionality that builds an instance of the target type based on a list of available dependencies. When applying a creator of the target class, it’s not limited to the defined dependencies, but it’s also able to create intermediate dependencies. This makes it possible to get rid of a great part of the boilerplate that is required when we’re using wire. Moreover, autowire is able to inject instances that are wrapped into common cats containers: IO[_] and Resource[IO, _], which makes it quite useful in the cats stack.

A simple example

There are quite a few test cases defined in the integration tests module, but let’s start with a simple one to get a general overview of this feature.

class A()
class B()
class C(a: A, b: B)
case class D(c: C)

object Test {

  val ioA: IO[A] = IO { new A() }
  val resourceB: Resource[IO, B] = Resource.eval(IO { new B() })
  def makeC(a: A, b: B): Resource[IO, C] = Resource.eval(IO { new C(a, b) })

  val theD = autowire[D](ioA, resourceB, makeC _)

}

In this example, we use three ways to provide an instance to autowire:

  • an effect,
  • a resource,
  • a result of a factory method.

Since autowire always returns result instances wrapped in Resource[IO, *], ioA is lifted under the hood with Resource.eval. The resulting theD resource is a composition of the input resources, and the underlying instance is created with the given dependencies, so autowire generates something similar to:

val theD = for {
    fresh$macro$1 <- Resource.eval[IO, A](ioA)
    fresh$macro$2 <- resourceB
    fresh$macro$3 <- makeC(fresh$macro$1, fresh$macro$2)
    fresh$macro$4 <- new D(fresh$macro$3)
} yield fresh$macro$4

It’s not exactly the generated code, I simplified it for the sake of readability.

At compile time, MacWire performs an in-place sort of dependencies to make the wiring process possible. “In place” in this context means that only dependencies that are required by preceding creators are moved. It may sound a little bit mysterious, but let’s consider another simple example to make it clear.

case class A()
class B(i: Int)
class C(b: B, s: String)
case class D(c: C)

object Test {
  def makeB(a: A): Resource[IO, B] = Resource.eval(IO { new B(0) })
  def makeC(b: B): Resource[IO, C] = Resource.eval(IO { new C(b, "c") })

  val theD = autowire[D](makeC _, makeB _)
}

We need to swap results of...

Scala,Macwire,Autowire http://localhost:8000/autowire-overview Thu, 17 Mar 2022 13:50:00 +0000
SoftwareMill Blog http://localhost:8000/continuous-integration-with-hyperledger-fabric

Yes, it is possible. Seriously. And Fablo makes it easy. Have a look at these three samples for GitHub Actions.

Fablo for a chaincode developer

Olivia is a chaincode developer. She just wants to have a working network. She writes some chaincode tests against the test stub, but knows she cannot trust them. Unit tests won’t catch all the quirks of transaction execution in Hyperledger Fabric. Often there are some inconsistencies between the test stub and the actual network. Not to mention key collisions and read conflicts that cannot be tested with unit tests.

Olivia started to use Fablo in her CI process. She has the following GitHub Actions config file:

name: Test
on: [ push ]
jobs:
 Test:
   runs-on: ubuntu-latest
   steps:
     - name: Check out repository code
       uses: actions/checkout@v2
     - name: Start Hyperledger Fabric network
       run: ./fablo up
     - name: Run test script
       run: ./tests.sh

Apart from checking out the code, there are only two steps in the CI process:

  1. Start the Hyperledger Fabric network with a single ./fablo up command. This step requires only a fablo-config.json and fablo executable to be in the project repository. Fablo handles all the complexity associated with starting the network, configuring it, and installing chaincodes.
  2. Execute the ./tests.sh script with tests. Fortunately, Fablo supports a simple tool that serves the REST API for chaincodes. Thanks to that, Olivia doesn’t need to mess with Client SDKs for Hyperledger Fabric and can just use some CURLs and GREPs to call the network.

Now, Olivia can focus on chaincode development and be sure chaincodes are tested on a real Hyperledger Fabric network.

Fablo for an app developer

Liam develops a backend application in Node.js. He feels a bit uncomfortable mocking the whole module of the application that handles chaincode calls. After some short research, he managed to run the Hyperledger Fabric network locally with Fablo and created some end-to-end tests for his app. Now he is going to use the following YAML file to execute the tests in GitHub Actions:

name: Test
on: [ push ]
jobs:
 Test:
   runs-on: ubuntu-latest
   steps:
     - name: Check out repository code
       uses: actions/checkout@v2
     - name: Start Hyperledger Fabric network
       run: ./fablo up
     - name: Run E2E tests
       run: npm i && npm run test:e2e

It is almost the same as Olivia's configuration file. And it serves him well, but after a while, he finds out that setting up the initial network state takes a long time. Before testing some edge cases, he needs to create many users in the CA and call many transactions on the blockchain.

Liam finds out he can create the initial data on the local network and create a snapshot of the network. Then he can use this snapshot in GitHub Actions in the following way:

…
     - name: Start Hyperledger Fabric network from a snapshot
       run: ./fablo restore my-snapshot && ./fablo start
     - name: Run E2E tests
       run: npm i && npm run test:e2e

Now, when his network starts in GitHub Actions, all the data is there, ready...

Blockchain,Hyperledger Fabric,Private Blockchain,Fablo,Continuous Integration http://localhost:8000/continuous-integration-with-hyperledger-fabric Wed, 09 Mar 2022 13:35:00 +0000