[OUTDATED] Starting a microservice

When starting a new microservice, you will generally have a backend part, which serves the API, and a frontend part, which serves the .js, .html, .css and other files for the page. Here we will give a small introduction to what you should take care of when starting a new microservice. You can create a microservice with either a backend, a frontend, or both, but in most cases you will need both.

As of https://github.com/containous/traefik/pull/1257 it is currently not possible to have both the frontend and the backend in the same container, unless you make sure your API is reachable under containername/api. (Please be aware that Traefik uses the terms frontend and backend differently!)

Frontend

Starting with the visual (and easier) part. You will have to modify (or tell someone to modify) the docker-compose.yml so that it includes your frontend service. The docker-compose entry looks like this:

    omsapplications-frontend:
        build:
            context: ./omsapplications-frontend
        volumes:
            - "../oms-applications-frontend:/usr/app:ro"
        labels:
            - "traefik.frontend.rule=Host:localhost;PathPrefix:/services/omsapplications;PathPrefixStrip:/services/omsapplications"
            - "traefik.frontend.priority=100"
            - "traefik.port=8086"
            - "traefik.backend=omsapplications-frontend"
            - "traefik.enable=true"
            - "registry.modules=/getModules.json"

Please note the label "registry.modules". This tells the registry where to find the getModules.json. Your service should serve that file under that relative URL, so in this case the registry would query "http://omsapplications-frontend:8086/getModules.json" (resolved via swarm DNS). The getModules.json file provides information about the frontend entry:

{
  "name": "OMS Applications",
  "code": "omsapplications",
  "pages": [{
    "name": "Applications",
    "code": "applications",
    "module_link": "all/applicationsController.js",
    "icon": "fa fa-ticket"
  }]
}

The meaning of the parameters is the following:

name (required)
    Module friendly name. No limitations, can be changed in time. Example: "OMS Events module"

code (required)
    Module internal name. This will be the entry for the service in the baseUrlRepository (see below). No spaces, all lowercase, cannot be changed once registered. Example: "oms_events"

pages (required)
    JSON array containing the frontend pages data. Each entity inside the array should contain the following attributes:
        name - Frontend friendly name (example: "List events")
        code - Frontend internal name (example: "applications") - this should be the same as the AngularJS module name defined in the module
        module_link - relative (to your service) path to the module controller JS file
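How you serve these files is up to you; the registry only needs GET /getModules.json (and the module files it references) to be reachable on the port announced via traefik.port. As a minimal sketch - assuming a plain express static server, which is an assumption and not part of any prescribed setup - the frontend container could do something like this:

// minimal sketch of a static frontend server, assuming express is installed
// (the real frontend image is not prescribed in this article)
const express = require('express');
const app = express();

// /usr/app is the read-only volume from the docker-compose.yml entry above;
// it contains getModules.json, all/applicationsController.js, all/welcome.html, ...
app.use(express.static('/usr/app'));

// 8086 matches the traefik.port label, so the registry can reach
// http://omsapplications-frontend:8086/getModules.json
app.listen(8086);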


To better understand what is happening above, have a look at the actual controller, which in our example sits in /all/applicationsController.js:

(() => {
  'use strict';
  // baseUrl is injected by the registry and the core, based on the docker-compose.yml entry
  const baseUrl = baseUrlRepository['omsapplications'];
  const apiUrl = `${baseUrl}api/`;

  /** @ngInject */
  function config($stateProvider) {
    // Register this module's page (state) with the router
    $stateProvider
      .state('app.applications', {
        url: '/applications',
        data: { pageTitle: 'Applications' },
        views: {
          'pageContent@app': {
            templateUrl: `${baseUrl}all/welcome.html`,
            controller: 'WelcomeController as vm',
          },
        },
      });
  }

  /** @ngInject */
  function WelcomeController($scope, $http) {
    alert("our first controller");
  }

  // Register the module, its config and its controller with angular
  angular
    .module('app.applications', [])
    .config(config)
    .controller('WelcomeController', WelcomeController);
})();

In this example, const baseUrl would end up being "http://localhost/services/omsapplications/". This is injected automatically by the registry and the core, so you do not have to care about it - and actually you cannot care about it either, as the registry sets the URL for you based on what you put in the docker-compose.yml. You will also find the name of your module here, hint: .module('app.applications', []). If you mess up this configuration, activating your module will break the complete frontend (until CORE-22 is implemented), so you will know when you put something wrong in here.
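To make the connection between frontend and backend concrete, here is a small sketch of how the WelcomeController above could call its own API via the apiUrl constant; the /applications endpoint used here is made up for illustration, it is not part of any existing service:

/** @ngInject */
function WelcomeController($scope, $http) {
  // apiUrl is the constant defined at the top of the module,
  // e.g. http://localhost/services/omsapplications/api/
  $http.get(`${apiUrl}applications`)       // 'applications' is a made-up endpoint
    .then((response) => {
      $scope.applications = response.data; // whatever your backend returns
    })
    .catch(() => {
      alert('could not reach the backend');
    });
}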

Backend

This is what lives inside the cloud and only exposes an API to the client. You can write the backend in whatever language you want, but we recommend choosing node.js or php/laravel, as we have other services written in those languages and thus other people who can help you and maintain your code. Again, we first have to start off by adding a docker-compose.yml entry (or asking someone to do that for you):

    omsapplications:
        build:
            context: ./omsapplications
        volumes:
            - "../oms-applications:/usr/app"
            - shared:/usr/shared:ro
        links:
            - mongodb
        labels:
            - "traefik.backend=omsapplications"
            - "traefik.port=8085"
            - "traefik.frontend.rule=Host:localhost;PathPrefix:/services/omsapplications/api;PathPrefixStrip:/services/omsapplications/api"
            - "traefik.frontend.priority=110"
            - "traefik.enable=true"
            - "registry.categories=(applications, 10);(events_frontend, 10)"
            - "registry.servicename=omsapplications"

Depending on what kind of database you use, you should link it up here, but the interesting part is again the labels section. These tell traefik and the registry how to integrate your service. traefik.frontend.rule defines how the API will be exposed (please note again the different usage of the terms "frontend" and "backend" in traefik - this has nothing to do with what we call frontend). Also make sure the priority is higher than the one of your frontend service, if you have one, so this request is not accidentally shadowed by the frontend rule.

Then there are the categories. Each backend service can fulfill several categories, where a category is basically just a string name and a priority. Categories are searchable via registry/category/<catname> and can be used to implement some kind of service choice. E.g. in events we will have different services handling applications: oms-applications, oms-su and oms-statutory will all be present in the "applications" category. Right now there is no real listing of the categories, but that will surely pop up as soon as Derk Snijders reads this article.
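For illustration, a category lookup from another node.js service could look roughly like the sketch below. Only the /category/<catname> path is taken from this article; the registry hostname ("registry") and the response being JSON are assumptions.

// sketch: asking the registry which services fulfill the "applications" category
// (the hostname "registry" and the response format are assumptions)
const http = require('http');

http.get('http://registry/category/applications', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    // assumed to list the registered services with their priorities
    console.log('applications category:', body);
  });
}).on('error', (err) => console.error('registry not reachable:', err.message));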

If you managed to register your service like this, you are basically good to go. However, you will probably want to communicate with other services at some point in time. For this, there is a token authentication system - to be exact, there are two:

  • X-Auth-Token: this is a token uniquely identifying a user. If you are in possession of this token, this is the preferred way to query other services, as it prevents privilege escalation. This applies to all user-induced queries, meaning not cron and not startup but basically everything else: you are querying another service "on behalf" of that user, so you should also only have the rights of that user in the other service. You will get this token in the header of a user request, and if you don't, that means the user did not log in. If you want to check the authenticity of the X-Auth-Token, you will have to query the core (until CORE-80 is implemented). How to query the core? Well, that's undocumented, have a lot of fun finding out! Hint: /api/getUserByToken and then add some weird headers and stuff (see the sketch after this list).
  • X-Api-Key: this is a key issued by the registry for intra-service communication. For the exact issuing, also have a look at Micro service - communication, as it describes intra-service communication in more detail. To get an api-key, you will have to query the registry with an api-key (yes, another key). You will find that api-key (not the x-api-key) in the shared volume that you mounted in your docker-compose.yml entry; it is auto-generated by the registry, in this example as /usr/shared/new-api-key. Just send the file's contents to the registry and it will trust you to be a service actually running in the same cloud. That also means: don't leak that one! If you happen to leak an x-api-key that is not too bad, as they expire after a day (yes, they expire, your service will have to renew them), but please also try to avoid that.
    If you want to send a request authenticated by your x-api-key, simply add it to the headers field, and for the lulz also add the field 'X-Requested-With': 'XMLHttpRequest' (otherwise the core will not accept you).
    If you happen to receive a request which you want to authenticate by X-Api-Key, have a look at this api-call to the registry.
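To make the two mechanisms a bit more concrete, here is a rough sketch of checking an incoming X-Auth-Token against the core from a node.js service. Only the /api/getUserByToken path, the header names and the X-Requested-With quirk are taken from above; the core's hostname ("omscore") and its response format are assumptions. An outgoing request authenticated with an X-Api-Key works the same way, just with the X-Api-Key header instead of X-Auth-Token.

// sketch: verifying a user's X-Auth-Token by asking the core who the user is
// (the hostname "omscore" and the response format are assumptions)
const http = require('http');

function getUserByToken(xAuthToken, callback) {
  const options = {
    host: 'omscore',                        // assumed core service name
    path: '/api/getUserByToken',
    headers: {
      'X-Auth-Token': xAuthToken,           // the token from the incoming user request
      'X-Requested-With': 'XMLHttpRequest', // the core rejects requests without this
    },
  };

  http.get(options, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => callback(null, body)); // response format is not documented here
  }).on('error', callback);
}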

If you understood all that, you are ready to launch your own microservice. If you want an example of a backend in node.js, have a look at https://github.com/AEGEE/oms-node-ms-example. Currently there is also an example in php/laravel, but that one does not use the registry.
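If you just want to see the smallest moving part before digging into that repository, a backend matching the docker-compose entry above could be as small as the following sketch. It assumes express and a made-up /applications route; neither is prescribed by the setup described here. Because traefik strips the /services/omsapplications/api prefix, the service simply serves its routes from the root of the port given in traefik.port:

// minimal sketch of a node.js backend, assuming express is installed
const express = require('express');
const app = express();

// reachable from outside as http://localhost/services/omsapplications/api/applications,
// because traefik strips the PathPrefix before forwarding the request
app.get('/applications', (req, res) => {
  res.json({ success: true, data: [] }); // made-up payload, for illustration only
});

// 8085 matches the traefik.port label of the backend entry
app.listen(8085);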