How to create self-signed certificates

When we create a Web Server that should run over HTTPS, we need a server certificate. If the Web Server will be exposed to the internet, we should buy a certificate signed by a well-known authority, but if we are coding a Web Server for internal or private use, we can create our own Server Certificate and sign it ourselves.

Every certificate has to be signed by another certificate. This second certificate has to belong to a Certificate Authority (CA) that all the clients receiving the first certificate trust.

But as I said before, if the clients that will use our Web Server are also under our control, we can tell those clients to trust any CA Certificate we want.
So we can create our own CA Certificate and later use it to sign any Server Certificate we want to use in a Web Server.

We are going to use openssl for creating the certificates.

CA Certificate

The first step is to create our own CA Certificate. To do that we should run the following commands in a terminal.

  • For generating the Private Key for the CA Certificate.
> openssl genrsa -des3 -out myOwnCA.key 2048

Passphrase: Whatever you want. For example ‘myca’.

  • For generating the CA Certificate (which wraps the Public Key) that will later sign the Server Certificate.
> openssl req -x509 -new -nodes -key myOwnCA.key -sha256 -days 1024 -out myOwnCA.pem

Passphrase: What you put in the previous step.

When we create a certificate, openssl asks us for some information. We should complete at least the Common Name. For the rest of the fields we can accept the default values by just entering a dot ‘.’

I usually complete three fields, and for the sake of the example we can set:
1) Country Name as ‘US’
2) Organization Name as ‘My Organization’
3) Common Name with ‘MyOwnCA’

Server Certificate

Once we have created the CA Certificate, we have to create the Server Certificate. To do that we should run the following commands in a terminal.

  • For generating a Private Key for the Server Certificate. This step is similar to what we did for the CA Certificate.
> openssl genrsa -des3 -out server.key 2048

Passphrase: Whatever you want. For example: ‘server’.

  • For generating a Server Certificate Sign Request with the Server Private Key. Note that we don’t create the Server Certificate yet; we create a Server Certificate Sign Request. This request is the one we would send to a Certificate Authority for signing if we paid for that.
> openssl req -new -key server.key -out server.csr

Passphrase: What you put in the previous step.

Here again openssl asks us for some information. The most important field is ‘Common Name’. We have to enter the IP or host name there, for example ‘localhost’. Remember that we can complete the rest of the fields or just enter a dot to accept the default values.

Once we have the Server Certificate Sign Request (server.csr), we should sign it with the CA Certificate in order to get a Server Certificate signed by our own CA:

> openssl x509 -req -in server.csr -CA myOwnCA.pem -CAkey myOwnCA.key -CAcreateserial -out server.crt -days 500 -sha256

Passphrase: CA Certificate passphrase. Following the example it should be: ‘myca’.

Finally, we should create the PKCS12 file with the Server Private and Public Keys. To do that we should run the following commands in a terminal:

  • For joining the Server Private Key and the Server Certificate in the same file.
> cat server.key > server.pem
> cat server.crt >> server.pem
  • For creating the PKCS12 file.
> openssl pkcs12 -export -in server.pem -out server.pkcs12

Passphrase: Server passphrase. In our example: ‘server’.
Export Passphrase: Whatever you want. For example: ‘Server’ again.

Final comments:

Having reached this point, we should configure server.pkcs12 in the server as the certificate to use, and add the myOwnCA.pem certificate as a trusted one in the client we use, for example a Web Browser.
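If the server is a JVM application (like the Akka HTTP servers discussed elsewhere on this blog), the PKCS12 file can be loaded into an SSLContext with plain JDK APIs. This is a minimal sketch; sslContextFromPkcs12 is an illustrative helper name, and the file name and password are the ones from this example:

```scala
import java.io.FileInputStream
import java.security.KeyStore
import javax.net.ssl.{KeyManagerFactory, SSLContext}

// Load a PKCS12 keystore from disk and build an SSLContext
// whose key managers serve the certificate(s) it contains.
def sslContextFromPkcs12(path: String, password: Array[Char]): SSLContext = {
  val keyStore = KeyStore.getInstance("PKCS12")
  val in = new FileInputStream(path)
  try keyStore.load(in, password) finally in.close()

  val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
  kmf.init(keyStore, password)

  val ctx = SSLContext.getInstance("TLS")
  ctx.init(kmf.getKeyManagers, null, null)
  ctx
}

// Following the example: sslContextFromPkcs12("server.pkcs12", "Server".toCharArray)
```

The resulting SSLContext is what HTTPS-capable JVM servers typically accept when you configure TLS.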

Reference (List of commands):

1. openssl genrsa -des3 -out myOwnCA.key 2048
2. openssl req -x509 -new -nodes -key myOwnCA.key -sha256 -days 1024 -out myOwnCA.pem
3. openssl genrsa -des3 -out server.key 2048
4. openssl req -new -key server.key -out server.csr
5. openssl x509 -req -in server.csr -CA myOwnCA.pem -CAkey myOwnCA.key -CAcreateserial -out server.crt -days 500 -sha256
6. cat server.key > server.pem
7. cat server.crt >> server.pem
8. openssl pkcs12 -export -in server.pem -out server.pkcs12

About ${packaging.type} error when sbt resolves dependencies

I have been using Swagger pages in Scala Maven projects without any problem for a long time. Recently, I had to add a Swagger page to a Scala Sbt project and I ran into an unexpected problem. Even though I did the same as always, I found myself struggling with a weird problem in a library used by Swagger.

This is the Sbt console output when compiling:

[info] Updating {file:/home/null/code/me/packaging.type/packaging-type-workaround/}packaging-type-workaround...
[info] Resolving jline#jline;2.14.6 ...
[warn] [FAILED ];2.1.1!${packaging.type}: (0ms)
[warn] ==== local: tried
[warn] /home/null/.ivy2/local/${packaging.type}s/${packaging.type}
[warn] ==== activator-launcher-local: tried
[warn] /home/null/.activator/repository/${packaging.type}s/${packaging.type}
[warn] ==== activator-local: tried
[warn] /home/null/programs/activator-dist-1.3.12/repository/${packaging.type}s/${packaging.type}
[warn] ==== public: tried
[warn] ==== typesafe-releases: tried
[warn] ==== typesafe-ivy-releasez: tried
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: ^ see resolution messages for details ^ ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] ::;2.1.1!${packaging.type}
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[trace] Stack trace suppressed: run last *:update for the full output.
[error] (*:update) sbt.ResolveException: download failed:;2.1.1!${packaging.type}
[error] Total time: 4 s, completed Jan 14, 2019 4:08:00 PM

When Sbt tries to resolve the dependency, a weird ${packaging.type} shows up instead of the expected jar.

I tried to compile the same code using Maven and everything worked as expected. So I started searching the internet and found that this is a known problem with some dependency libraries in Sbt. Basically, Sbt doesn’t handle the ${packaging.type} variable. For more information see:

There are several workarounds but the one I like is to create an Sbt AutoPlugin for setting the jar package type in the build settings.

This is the very simple workaround code:

import sbt._

object PackagingTypeWorkaround extends AutoPlugin {
  override val buildSettings = {
    sys.props += "packaging.type" -> "jar"
    Nil
  }
}

You should place this code into a PackagingTypeWorkaround.scala file in the project folder.
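For reference, here is why this works: sbt runs in a single JVM, and buildSettings mutates that JVM’s system properties before dependency resolution, so the ${packaging.type} placeholder can be substituted with "jar" (I’m assuming here that the resolver reads JVM system properties, which is how I understand the workaround). A minimal demonstration of the sys.props mechanism itself:

```scala
// sys.props is Scala's mutable view over the JVM system properties.
// Setting a key here is visible to any code that later reads
// System.getProperty in the same JVM (the way Ivy reads properties).
sys.props += "packaging.type" -> "jar"

// The property is now visible JVM-wide.
assert(System.getProperty("packaging.type") == "jar")
```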

After that you can compile your Sbt project without any problem.

> compile
[info] Compiling 1 Scala source to /home/null/code/me/packaging.type/packaging-type-workaround/target/scala-2.12/classes...
[info] 'compiler-interface' not yet compiled for Scala 2.12.8. Compiling...
[info] Compilation completed in 15.15 s
[success] Total time: 17 s, completed Jan 14, 2019 4:15:55 PM

Final Comments:

When you build your project using Jenkins for the very first time after adding the workaround it will fail. You don’t have to do anything else, just run the Jenkins job again.


NOTE: You can find the code for this project at:

Using Akka Http to create a test server

In a recent post we explored akka http: how to define routes, handle requests/responses, and deal with JSON marshalling/unmarshalling using spray.
Once you know the syntax for defining routes and managing JSON, it is pretty straightforward to run a local server:
val server = Http().bindAndHandle(routes, host, port)
One interesting application I found for this is creating a local embedded server for component testing: when testing a service that makes http calls to other services (the most common situation when using microservices), you can mock all the REST calls, but that usually ignores many tricky details (connection, protocol, authentication, etc.); alternatively, you can do integration tests, but that normally involves either connecting to a vpn to access the test environment or starting a bunch of docker images with the external services.
As an intermediate solution, we can use a local akka http server that behaves like the services we depend on, avoiding both mocking the http calls and the complexity of running the full services.

The Code
Let’s create a small trait that we can add to our tests and manage the local server.
First, we want a nice syntax… something like:

withEmbeddedServer { test code }

But we need to pass the routes, so we’ll have:

withEmbeddedServer(routes){ test code }

The bindAndHandle method that starts the server needs the port and the host name. We can pick some defaults (“localhost” and 8080, maybe?) but we probably want the possibility of overriding them:

withEmbeddedServer("myserver", 8888, routes){ test code }

The block must be evaluated lazily, and we might want to capture the output of the execution, so we’re going to give the block the type “=> T” (which means we need T as a type parameter too).
Finally, to run the akka http server we need an implicit ActorSystem, an ActorMaterializer, and an ExecutionContext, so we’ll add those parameters too.
So, the signature of the method will look like:

def withEmbeddedServer[T](host: String = "localhost", port: Int = 8080, routes: Route)(block: => T)
   (implicit system: ActorSystem, mat: ActorMaterializer, ec: ExecutionContext): T  = ???

Now, the actual implementation is pretty straightforward: start the server, evaluate the block, stop the server, and return the value.
To keep things simple, we’re going to surround the block evaluation with a try-finally block:

def withEmbeddedServer[T](host: String = "localhost", port: Int = 8080, routes: Route)(block: => T)
  (implicit system: ActorSystem, mat: ActorMaterializer, ec: ExecutionContext): T = {
  val server = Http().bindAndHandle(routes, host, port)
  try {
    block
  } finally {
    server.flatMap(_.terminate(10.seconds))
  }
}

The terminate method requires a timeout parameter. For more flexibility, let’s define it as a member value so it can be overridden if you don’t like the default:

trait EmbeddedHttpServer {

  val shutdownDeadline: FiniteDuration = 10.seconds

  def withEmbeddedServer[T](host: String = "localhost", port: Int = 8080, routes: Route)(block: => T)
    (implicit system: ActorSystem, mat: ActorMaterializer, ec: ExecutionContext): T = {
    val server = Http().bindAndHandle(routes, host, port)
    try {
      block
    } finally {
      server.flatMap(_.terminate(shutdownDeadline))
    }
  }
}
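withEmbeddedServer is an instance of the classic loan pattern: acquire a resource, lend it to a block, and release it no matter what happens. Stripped of Akka, the shape looks like this (withResource is an illustrative name, not part of any library):

```scala
// Generic loan pattern: acquire a resource, run the block with it,
// and always release it, even when the block throws.
def withResource[R, T](acquire: => R)(release: R => Unit)(block: R => T): T = {
  val resource = acquire
  try block(resource)
  finally release(resource)
}

// Example: the release step (clear) runs after the block reads the value.
val result = withResource(new StringBuilder("server"))(_.clear())(_.toString)
// result == "server"
```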


Now you can mix the trait into your test class and get access to the withEmbeddedServer method. But we don’t even need to mix it in: we can create an object providing that method and import it anywhere we need it:

object EmbeddedHttpServer extends EmbeddedHttpServer

As an example, let’s simulate a service that responds with “Hello” when you do a GET to the /hi path, and write test code that verifies that.

Our code to test will look like:

def callHiService(): Future[Seq[ByteString]] = {
  val url = "http://localhost:8989/hi"
  val result = Http().singleRequest(HttpRequest(uri = Uri(url))).futureValue
  val HttpResponse(StatusCodes.OK, _, entity, _) = result
  entity.dataBytes.runWith(Sink.seq)
}

(In a real case you will probably call a Swagger-generated client API, overriding the host and base path properties in your test config.)

And our test (using ScalaTest) will be:

"The embedded server" should "respond with hello" in {
  //create a route that responds with "Hello" to a GET to /hi
  val hiRoute = path("hi") { get { complete("Hello") } }
  //start the embedded server and run the test
  withEmbeddedServer(port = 8989, routes = hiRoute) {
    val body = callHiService().futureValue
    body.head.utf8String shouldBe "Hello"
  }
}


And that’s it, thanks for listening.
NOTE: You can find the code for this project at:

Coursera’s Functional Programming in Scala specialization review

Today I’d like to share with you a little review of the Coursera’s Functional Programming in Scala specialization.

The specialization is composed of 4 lesson+assignment courses and a final capstone project to put in practice everything you have just learned.

Kind of free?

The first course is available for free. You can access lessons and assignments without paying and your solutions to the assignments will be graded. The only thing you won’t obtain for free is the ‘Online Diploma’ once you have completed the course.

The lessons of courses 2 to 4 are also available for free; you just have to subscribe as an ‘Auditor’. The option is a little bit hidden in the ‘Subscribe’ dialog, but it’s there. By accessing a course as an Auditor, you won’t have access to the course’s assignments. Course 5 is a six-week assignment, so you will need to pay to take it.

The full course has recently been updated to Coursera’s subscription model. This means that you will pay a monthly fee for as long as it takes you to complete the full specialization, or until you decide to stop paying for it. You can also take pauses, and each time you re-subscribe you will have one free-trial week.


Let’s review each course in more detail.

Course 1: Functional Programming Principles in Scala

This course has everything you want from a programming course and more. It is taught by Martin Odersky himself. You will learn how Scala evaluates code, what functional programming is, how to write recursive functions, what higher-order functions are, a little bit about OOP and how Scala implements it, pattern matching, Scala’s collections and why they are so awesome, and more.

OK, I mentioned Martin Odersky, but you may have no idea who he is. He is one of Scala’s authors, and you will notice it when he shares insights throughout the videos about why Scala works the way it works and the design decisions they took. That’s really valuable, and it lets you appreciate a little bit more some parts of Scala you might not have liked at first (I’m looking at you, implicits!).

You really don’t need previous knowledge about Scala, but I highly recommend having done the Scala tutorial first, or having read the first chapters of Programming in Scala.

I found the week 1 (e.g. recursion) and week 6 assignments (e.g. anagrams) harder than expected, and I’ve been working as a full-time coder for a while! =)

The course is planned to be completed in six weeks, but it really depends on how much time you allocate to it. I found this course to be more time-demanding than the following courses.

Course 2: Functional Program Design in Scala

I have mixed feelings about this course. The most important thing in it, for me, is the explanation of how for-comprehensions work in Scala and the intro to Monads. It shed some light on why Slick queries work the way they do, and it let me look at the whole Scala standard library with renewed eyes.

It also has an intro to Functional Reactive Programming (FRP), but I didn’t find it that great. The examples used in this course are taken from Haskell; take a look at the original source if you are interested.

I found the programming assignments a little bit lackluster, and less time-consuming than the ones from the previous course.
– The first one is about implementing a solver for the Bloxorz flash game.
– The second one is more about learning to use the ScalaCheck library on your own than anything related to that week’s material.
– The last one is about implementing a kind of web Excel using FRP. It’s a nice intro to what can be done with Scala.js and a reactive library on the front-end.

The course is divided into 4 weeks, like the next two. The graded assignments are not very time-consuming, but you also have a lot of in-lesson exercises that might take some time to complete.

Course 3: Parallel programming

This course is great! You will learn some cool Scala tricks to abstract yourself from threads and processors. You will learn how to use parallel collections, and you will implement some really well-known algorithms like K-Means and the Barnes-Hut simulation.

I don’t have a background in computer science, so I found some of the lessons with proof theory (i.e. proving whether certain properties hold) a little bit… tedious.

The assignments are really fun to complete, and you might find the algorithms you implement here really useful in your next job.
– Implement a blur filter for editing images
– Re-implement some of the recursive assignments from course 1, week 1 (i.e. recursion) using parallel programming
– Implement a K-Means algorithm
– Implement a Barnes-Hut simulation

If you are like me, course 3 might take you a little bit longer than course 2 to complete, because you might need to rest a little bit between videos when the proofs start =)

Course 4: Big Data Analysis with Scala and Spark

This was the course I really wanted to do, and I did all the previous ones as a really long intro to this one. You will learn to use Spark. You will be able to appreciate how it has evolved over time, starting with RDDs, then DataFrames, and finally typed Datasets. It’s a really complete course, and Heather Miller does a great job explaining everything.

The assignments are all about Spark.
– The first one is about processing a lot of Wikipedia articles in a distributed way.
– The second one is two weeks long, and takes quite some time to complete. You will go through a set of Stack Overflow questions and answers in search of the most popular programming language! You will implement a K-Means algorithm using Spark.
– In the last assignment you will finally discover what American people spend their time on, in case you have ever wondered. You will use Spark SQL, and after completing it you will have a good idea of which Spark data abstraction you should use depending on your use case.

The lessons in this course are really easy-going, but the second assignment might take some time to complete. All in all, it’s a very straightforward course to complete.

Course 5: Functional Programming in Scala Capstone

The moment you were waiting for. You will finally have the opportunity to apply all your newly acquired knowledge in a long project. I haven’t really finished this one yet, but so far I have found it really easy-going.

You will implement a way to visualize how temperature has varied in the world as the years have passed. Over the six weeks it is divided into, you will:
– Process a lot of temperature samples using your preferred method (Spark? parallel collections?)
– Implement spatial and linear interpolation algorithms.
– Integrate your code with a web app.
– More distributed programming for doing some data-intensive calculations
– And much more! (remember, I haven’t finished it yet =) )

This course is pure coding, so just take your favorite caffeinated beverage and you will finish it in no time!

Final thoughts

I really recommend this course if you are just starting with Scala and Functional Programming, or if you want to improve your knowledge of it.
Spark job offers are on the rise, and it’s never too late to learn a new skill.

Basic Category Theory for (Scala) Programmers (Part I)

“Aren’t you tired of just nodding along when your friends start talking about morphisms? Do you feel left out when your coworkers discuss a coproduct endofunctor?

From the dark corners of mathematics to a programming language near you, category theory offers a compact but powerful set of tools to build and reason about programs. If you ever wondered what a category or a functor is and why you should care, this series might be just what you are looking for.

But don’t wait! If you call now, you’ll get this explanation of dual categories!

Next time, you too can be the soul of the party and impress your friends with category theory!*”

*(results may vary)


Category theory is a branch of abstract math. Why does it get so much attention from (functional) programmers?

As it happens, modeling programs using category theory allows us to apply theoretical results directly to our code, explore new approaches to existing problems, and increase our confidence in the solutions. At first, category theory might seem impenetrable, but one can go far by learning the basic vocabulary.

But let’s start at the beginning.

What’s one of the most important techniques for programming? Abstraction.



Removing unnecessary detail and keeping the essence is an extremely powerful tool for programming.

What if we dial it to eleven?

Let’s abstract over all the characteristics of the things we want to model, and end up with just “things” (called objects) and the connections between them (called arrows or, if you want to get really fancy, morphisms).

Just things and the connections between them:



To make a category, we are going to require only two things: every object is connected with itself (identity), and if object A is connected with object B, which in turn is connected with object C, then object A is connected with object C (composition).

If we formalize the definition:

A category Cat is a structure consisting of:

Obj(Cat): collection of objects.

For each A,B ∈ Obj(Cat), there’s a set Cat(A,B) of morphisms from A to B

f:A→B means f ∈ Cat(A,B)

(In other words, for every pair of objects A and B, there’s a bunch of arrows connecting them… or not)

A composition operation between arrows:

if f:A→B and g:B→C, then g∘f:A→C

(I can make a “new” arrow A→C by joining the end of f to the beginning of g)

For each object X, there exists an identity arrow IdX:X→X


We’re going to have only two requirements (laws) for the identity and composition of a category:

Identity as unit

For any arrow f:A→B,

f∘IdA = f = IdB∘f

(f composed with identity on A is equal to f, and is equal to identity on B composed with f)

Composition is associative

f∘(g∘h) = (f∘g)∘h

(it doesn’t matter if I compose f and g first or g and h first, the resulting composition is the same)

Some mathematical examples of categories are:

Set:  the category where the objects are sets and the arrows are functions from one set to another

Pfn: the category of sets and partial functions

How is this related to programming?

Yes, most examples in category theory come from math, but what about programming? If we consider the types of a program and the functions between those types, we can form a category: function composition will be our arrow composition, and the identity function applied at each type will be our identity morphism (with the caveat that we have to consider all our functions total and ignore non-termination [infinite loops, exceptions, etc.], also known as bottom ‘_|_’).

This is a toy program that takes a String, parses it as an Int, divides it by two, and gets the byte value of the result.

In Scala:

def toInt(s: String) = s.toInt

def divByTwo(i: Int): Float = i/2f

val program: String => Byte = (toInt _) andThen (divByTwo _) andThen (_.byteValue)

Scala provides an identity function, so we know that identity[String], identity[Int], identity[Float], and identity[Byte] exist. Also, andThen acts as function composition in Scala.

Now, it is easy to see that (toInt _) andThen (divByTwo _) andThen (_.byteValue) == (toInt _) andThen ((divByTwo _) andThen (_.byteValue)) == ((toInt _) andThen (divByTwo _)) andThen (_.byteValue)

Since identity is defined as def identity[A](x: A): A = x, we can verify that identity andThen f == f == f andThen identity is true for any f.

So, if we squint and pretend the functions are total, we have:

Objects:  String, Int, Float, Byte

Arrows: toInt _, divByTwo _, _.byteValue

Id: identity

Composition: andThen

identity is neutral and andThen is associative.
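We can at least check these laws pointwise on a few sample inputs (function equality is extensional, so sampling is evidence, not proof). A self-contained sketch:

```scala
def toInt(s: String): Int = s.toInt
def divByTwo(i: Int): Float = i / 2f
val toByte: Float => Byte = _.byteValue

val program: String => Byte = (toInt _) andThen (divByTwo _) andThen toByte

// Same arrows, grouped differently: associativity says these agree.
val leftGrouped: String => Byte  = ((toInt _) andThen (divByTwo _)) andThen toByte
val rightGrouped: String => Byte = (toInt _) andThen ((divByTwo _) andThen toByte)

for (s <- Seq("7", "42", "100")) {
  assert(leftGrouped(s) == rightGrouped(s))                        // associativity
  assert(((identity[String] _) andThen program)(s) == program(s))  // left identity
  assert((program andThen (identity[Byte] _))(s) == program(s))    // right identity
}
```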

So we can model our program with category theory, and take advantage of it.

That’s where the applicability of concepts like Functor, Monad, natural transformations, etc. comes from.

In our next articles we’re going to expand on why that’s useful… stay tuned.

How to create an Akka HTTP Server

The Akka HTTP modules implement a full server-side and client-side HTTP stack on top of akka-actor and akka-stream. It offers two different APIs with different levels of abstraction: a high-level one and a low-level one.

The high-level routing API of Akka HTTP provides a DSL to describe HTTP “routes” and how they should be handled. Each route is composed of one or more levels of Directives that narrow down to handling one specific type of request.

The low-level Akka HTTP server API allows handling connections or individual requests by accepting HttpRequest objects and answering them by producing HttpResponse objects.

Adding the Akka-HTTP dependency

Akka HTTP is provided in a separate JAR file. To use it, you should include the following dependency in your build.sbt file:

"com.typesafe.akka" %% "akka-http" % "10.0.9"

Creating Server basic structure

First we add a Scala file to our project and create an object that extends App and Directives.

import akka.http.scaladsl.server.Directives

object Server extends App with Directives {}

Then we need to add an ActorSystem and an ActorMaterializer to that object.

import akka.actor.ActorSystem
import akka.http.scaladsl.server._
import akka.stream.ActorMaterializer

object Server extends App with Directives {
  implicit val system = ActorSystem("actor-system")
  implicit val materializer: ActorMaterializer = ActorMaterializer()
}

Now we can get the server up by adding the following lines after the implicits.

val routes: Route = path("test") { complete("test") }

The above line defines an example route /test which returns the text: test

Http().bindAndHandle(routes, "", 8002)

This final line allows us to bind an IP and port to a set of routes.

Basic server structure:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server._
import akka.stream.ActorMaterializer

object Server extends App with Directives {
  implicit val system = ActorSystem("actor-system")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  val routes: Route = path("test") { complete("test") }
  Http().bindAndHandle(routes, "", 8002)
}

Defining Routes

Now we are going to replace the example route /test with some more interesting ones.

All the routes we are going to define in our server are implemented using Akka HTTP directives and have to be assigned to a variable of type Route. This variable is the one that will be used as the first parameter of the bindAndHandle method.

The easiest way to add a new route is using the ‘path‘ directive along with a path, as we saw in the example route above.

Every branch of a route should eventually finish with a complete call.

val routes: Route =
  path("test") {
    complete("test")
  }

Inside the path we can use different directives depending on the HTTP method that we want to use. Here is an example using the POST verb.

val routes: Route =
  path("test") {
    post {
      complete("test")
    }
  }

If we want to add a new route we can concatenate two path directives using the ~ symbol.

val routes: Route =
  path("test") {
    complete("test")
  } ~
  path("test2") {
    complete("test2")
  }

Responding to requests

Until now we saw how to return text from an endpoint using the complete method. But what if we want to return JSON data? Well, certainly we can do something like this:

val routes: Route =
  path("test") {
    post {
      complete("{\"my_key\":\"my_value\"}")
    }
  }

However, this JSON will be returned with the wrong content type (text/plain).

In order to set the right content type we have to use the Akka HTTP low level objects HttpResponse and ResponseEntity

val routes: Route =
  path("test") {
    post {
      val resp: ResponseEntity = HttpEntity(ContentTypes.`application/json`, "{\"my_key\":\"my_value\"}")
      complete(HttpResponse(StatusCodes.OK, entity = resp))
    }
  }

Note: Although the HttpResponse object has a parameter for defining headers, the Content-Type cannot be set through that parameter. It has to be defined in the HttpEntity.

Dealing with input data in URLs

If we want to get a number from a URL, we should use the IntNumber path matcher.

val routes: Route =
  path("test" / IntNumber) { id =>
    get {
      complete(s"get test - $id")
    }
  }

But if we want to get the content from one / to the next (or to the URL’s end if there are no more), we should use the Segment matcher.

val routes: Route =
  path("test" / Segment) { data =>
    get {
      complete(s"get test - $data")
    }
  }

Of course there are more options, but IntNumber and Segment are the two most useful ones.

We can also combine more than one input data in the same route as follows:

val routes: Route =
  path("test3" / Segment / IntNumber) { (data, id) =>
    get {
      complete(s"get test3 - $data, $id")
    }
  }

Dealing with input data in request body

One common thing when working with HTTP requests is getting information from the request body. This information has to be converted into a Scala data type in order to work with it in our Akka HTTP Server. To do that we can use the entity(as(..)) structure.


path("test") {
  post {
    entity(as[String]) { param =>
      complete(param)
    }
  }
}

In this example, param contains the request body as a String.

This can be accomplished because Akka HTTP includes marshallers for some basic data types. Default marshallers are provided for simple types like String or ByteString.

Dealing with JSON

Most of the time the request body information comes as a JSON structure, and dealing with that is a little more complex. There is no default marshaller for your own JSON data, of course, so you have to define it.

There are some JSON libraries which can help, but the most common one for these cases is the spray-json library. Here we are going to use it to define our own marshaller.

First of all we have to define a case class with all the parameters that we are going to receive in the JSON request body.

So, for example, if our JSON data is something like:

{
  "names": ["jhon", "freddy", "kurt"],
  "id": 10
}

We should define a case class like this:

case class TestAPIParams(names: List[String], id: Int)

But after defining the right case class we have to define the marshaller.

To do that, we have to define a trait extending SprayJsonSupport and DefaultJsonProtocol. Inside this trait we have to define an implicit using jsonFormatX, where X is the number of parameters the case class has. In our case we should define the trait as follows:

trait JsonSupport extends SprayJsonSupport with DefaultJsonProtocol {
  implicit val testAPIJsonFormat = jsonFormat2(TestAPIParams)
}

Once we have a marshaller for our JSON data, we have to extend the object in which we are defining the routes.

object Server extends App with Directives with JsonSupport {

And now we can use the entity(as(...)) structure with our case class:

path("test") {
  post {
    entity(as[TestAPIParams]) { params =>
      // We can use params.names and params.id here
      complete(s"received ${params.names.size} names for id ${params.id}")
    }
  }
}

Final comments

It may seem a little difficult to develop a server on top of Akka HTTP from scratch, and certainly there are complex things in the library. However, following these steps it’s easy to start and have the basic server structure and functionality quite soon. Then it’s only hard work and reading the documentation. Luckily, Akka HTTP has very complete and clear official documentation, full of examples.

5 Tools for Managing Remote Teams

There are a lot of new tools available to help you effectively manage the work of remote team members. These tools will help you improve communication, project management, and development setups.

1 – Chat rooms: Slack/Gitter/Flowdock

Slack, Gitter and Flowdock are effective communication tools that allow you to create channels to talk about specific topics. These are fantastic tools when you have workers all around the globe. It’s a good way of making communication less overwhelming, and you can assign users to channels according to your needs. They also offer free voice/video calls.

2 – Cloud storage provider: Dropbox/Google Drive

Your team needs to access documents, and Dropbox and Google Drive are the best options for that. They are simple apps: the user just needs to drop in a new file and assign permissions to other users, who will be notified when changes are made. They also have multi-edit functionality that enables two or more users to edit the same document simultaneously.

3 – Issue Tracker: Jira

Issue trackers like Jira have a lot of plug-ins that help your team manage requirements and projects. They help teams organize work, document and prioritize tickets, and define releases and dates. Jira integrates well with code repositories like Git, so when a developer closes a task, the ticket is automatically updated. Managers can create their own dashboards to see the status of each ticket, and you can use Tempo plugins to log hours. This is a must in any remote development team.

4 – Software container platform: Docker

Docker puts it this way: “Developers use Docker to eliminate “it works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server apps.”
Basically, you can create Docker containers that include all the dependencies and products you need to build your software, and distribute them to your developers very easily. That makes developers' lives easier.

5 – Continuous Integration tools: Bamboo/Jenkins

These CI tools help you build your software and run all your tests to detect issues sooner. With distributed teams you don't want a broken repository. These apps help you keep your repository in a good state, and they can build and publish your app directly to test environments. If you integrate this with Jira and Git, a developer can push a fix for a bug to Git, the ticket in Jira is automatically moved to “Ready to Test”, and Bamboo builds the app, runs the tests, notifies you if something went wrong, and publishes the app to QA. This is a great feature when you manage remote teams.

When working with remote team members you want to automate all your processes to avoid blocking someone's work. These apps make that possible and keep everything in the cloud: each member knows where to find information and how processes work. I strongly recommend using these apps whether or not your team is remote, because they can greatly increase your team's performance. For a remote team, they should be a must.

New technologies: Big challenge for non-dynamic organizations?

Innovation should be thought of as a process, not a goal. The world is changing very quickly and organizations need to move fast, so it's not about how to innovate but about creating the space to let innovation happen.

Today's mid-level managers are Baby Boomers or Gen Xers who face a big challenge in managing new-generation developers.

There are very few old-school developers willing to learn new languages; most of them keep coding in the language they started with. So we can expect new technologies to be mastered by new developers, that is, Millennial developers.

Millennials understand the world in a different way, not better or worse, just different. They are not fazed by the technological changes around them; they actually enjoy learning new technologies. This is why Millennials will master new languages naturally, and why companies that want to innovate will need to know how to deal with them.

So innovation translates into our capability to give Millennials the space to learn and build new things. Current managers will need to adapt to working with them, and that means accepting remote working and informal communication, and setting goals instead of requiring eight hours a day, etc.

How is your company dealing with this? Is it dealing with it at all?

Some startups are starting to accept remote working because they know they can find better-skilled developers faster and for less money. They only care about what developers deliver, not the time they spend delivering it. There are plenty of issue-tracker tools that help organize and supervise work, and plenty of instant-messaging tools for real-time discussions by topic, which keep everyone in the organization up to date with the latest status. Millennials handle these tools naturally. Gen X developers are also adopting this way of working, as they understand it has a positive impact on their performance.

Nowadays we can know where our Uber is, how long it will take to arrive and how much the trip will cost. So who is going to waste time on a corner waiting for a cab? The same happens with the new ways of working: if you have the chance to hire developers remotely and get the job done easily for less money, why aren't you doing it?